Path lengths in tree-child time consistent hybridization networks

Gabriel Cardona¹, Mercè Llabrés¹,², Francesc Rosselló¹,², and Gabriel Valiente³

arXiv:0807.0087v1 [q-bio.PE] 1 Jul 2008

¹ Department of Mathematics and Computer Science, University of the Balearic Islands, E-07122 Palma de Mallorca, Spain
² Research Institute of Health Science (IUNICS), E-07122 Palma de Mallorca, Spain
³ Algorithms, Bioinformatics, Complexity and Formal Methods Research Group, Technical University of Catalonia, E-08034 Barcelona, Spain
Abstract. Hybridization networks are representations of evolutionary histories that allow for the inclusion of reticulate events like recombinations, hybridizations, or lateral gene
transfers. The recent growth in the number of hybridization network reconstruction algorithms has led to an increasing interest in the definition of metrics for their comparison
that can be used to assess the accuracy or robustness of these methods. In this paper
we establish some basic results that make it possible to generalize to tree-child time
consistent (TCTC) hybridization networks some of the oldest known metrics for phylogenetic trees: those based on the comparison of the vectors of path lengths between leaves.
More specifically, we associate to each hybridization network a suitably defined vector of
‘splitted’ path lengths between its leaves, and we prove that if two TCTC hybridization
networks have the same such vectors, then they must be isomorphic. Thus, comparing
these vectors by means of a metric for real-valued vectors defines a metric for TCTC hybridization networks. We also consider the case of fully resolved hybridization networks,
where we prove that simpler, ‘non-splitted’ vectors can be used.
1 Introduction
An evolutionary history is usually modelled by means of a rooted phylogenetic tree,
whose root represents a common ancestor of all species under study (or whatever other
taxonomic units are considered: genes, proteins, ...), the leaves, the extant species, and
the internal nodes, the ancestral species. But phylogenetic trees can only cope with speciation events due to mutations, where each species other than the universal common
ancestor has only one parent in the evolutionary history (its parent in the tree). It is
clearly understood now that other speciation events, which cannot be properly represented by means of single arcs in a tree, play an important role in evolution [10]. These
are reticulation events like genetic recombinations, hybridizations, or lateral gene transfers, where a species is the result of the interaction between two parent species. This
has led to the introduction of networks as models of phylogenetic histories that capture
these reticulation events side by side with the classical mutations.
Contrary to what happens in the phylogenetic trees literature, where the basic concepts are well established, there is still some lack of consensus about terminology in the
field of ‘phylogenetic networks’ [16]. Following [23], in this paper we use the term hybridization network to denote the most general model of reticulated evolutionary history:
a directed acyclic graph with only one root, which represents the last universal common
ancestor and which we thus assume to have out-degree greater than 1. In such a graph, nodes
represent species (or any other taxonomic unit) and arcs represent direct descendance.
A node with only one parent (a tree node) represents a species derived from its parent species through mutation, and a node with more than one parent (a hybrid node)
represents a species derived from its parent species through some reticulation event.
The interest in representing phylogenetic histories by means of networks has led
to many hybridization network reconstruction methods [13, 14, 17–19, 21, 25, 28]. These
reconstruction methods often search for hybridization networks satisfying some restriction, like for instance to have as few hybrid nodes as possible (in perfect phylogenies),
or to have their reticulation cycles satisfying some structural restrictions (in galled trees
and networks). Two popular and biologically meaningful such restrictions are the time
consistency [1, 18], the possibility of assigning times to the nodes in such a way that
tree children exist later than their parents and hybrid children coexist with their parents (and in particular, the parents of a hybrid species coexist in time), and the tree
child condition [8, 31], that imposes that every non-extant species has some descendant
through mutation alone. The tree-child time consistent (TCTC) hybridization networks
have been recently proposed as the class where meaningful phylogenetic networks should
be searched [30]. Recent simulations (reported in [27]) have shown that over 64% of
4132 hybridization networks obtained using the coalescent model [15] under various population and sample sizes, sequence lengths, and recombination rates, were TCTC: the
percentage of TCTC networks among the time consistent networks obtained in these
simulations increases to 92.8%.
The increase in the number of available hybridization network reconstruction algorithms has made it necessary to introduce methods for the comparison of hybridization networks, to be used in their assessment, for instance by comparing inferred
networks with either simulated or true phylogenetic histories, and by evaluating the robustness of reconstruction algorithms when adding new species [18, 32]. This has recently led
to the definition of several metrics on different classes of hybridization
networks [4–6, 8, 9, 18, 20]. All these metrics generalize in one way or another well-known
metrics for phylogenetic trees.
Some of the most popular metrics for phylogenetic trees are based on the comparison
of the vectors of path lengths between leaves [3, 11, 12, 22, 26, 29]. Introduced in the early
seventies, with different names depending on the author and the way these vectors are
compared, they are globally known as nodal distances. Actually, these vectors of path
lengths only separate (in the sense that equal vectors mean isomorphic trees), on the
one hand, unrooted phylogenetic trees, and, on the other hand, fully resolved rooted
phylogenetic trees, and therefore, as far as rooted phylogenetic trees go, the distances
defined through these vectors are only true metrics for fully resolved trees. These metrics
were recently generalized to arbitrary rooted phylogenetic trees [7]. In this generalization,
each path length between two leaves was replaced by the pair of distances from the
leaves to their least common ancestor, and the vector of path lengths between leaves
was replaced by the splitted path lengths matrix obtained in this way. These matrices
separate arbitrary rooted phylogenetic trees, and therefore the splitted nodal distances
defined through them are indeed metrics on the space of rooted phylogenetic trees.
In a recent paper [6] we have generalized these splitted nodal distances to TCTC
hybridization networks with all their hybrid nodes of out-degree 1. The goal of this
paper is to go one step further in two directions: to generalize to the TCTC hybridization
networks setting both the classical nodal distances for fully resolved rooted phylogenetic
trees and the new splitted nodal distances for rooted phylogenetic trees. Thus, on the one
hand, we introduce a suitable generalization of the vectors of path lengths between leaves
that separate fully resolved (where every non-extant species has exactly two children,
and every reticulation event involves exactly two parent species) TCTC hybridization
networks. On the other hand, we show that if we split these new path lengths in a suitable
way and we add a bit of extra information, the resulting vectors separate arbitrary
TCTC hybridization networks. Then, the vectors obtained in both cases can be used to
define metrics that generalize, respectively, the nodal distances for fully resolved rooted
phylogenetic trees and the splitted nodal distances for rooted phylogenetic trees.
The key ingredient in the proofs of our main results is the use of sets of suitable
reductions that applied to TCTC hybridization networks with n leaves and m internal
nodes produce TCTC hybridization networks with either n−1 leaves or with n leaves and
m − 1 internal nodes (in the fully resolved case, the reductions we use are specifically tailored to make them always remove one leaf). Similar sets of reductions have already been
introduced for TCTC hybridization networks with all their hybrid nodes of out-degree
1 [6] and for tree sibling (where every hybrid node has a tree sibling) time consistent
hybridization networks with all their hybrid nodes of in-degree 2 and out-degree 1 [4],
and they have been proved useful in those contexts not only to establish properties of the
corresponding networks by algebraic induction, but also to generate in a recursive way
all networks of the type under consideration. We hope that the reductions introduced in
this paper will find similar applications elsewhere.
2 Preliminaries

2.1 Notations on DAGs
Let N = (V, E) denote in this subsection a directed acyclic (non-empty, finite) graph;
a DAG, for short. A node v ∈ V is a child of u ∈ V if (u, v) ∈ E; we also say in this
case that u is a parent of v. All children of the same parent are said to be siblings of each
other.
Given a node v ∈ V, its in-degree deg_in(v) and its out-degree deg_out(v) are, respectively, the number of its parents and the number of its children. The type of v is the
ordered pair (deg_in(v), deg_out(v)). A node v is a root when deg_in(v) = 0, a tree node
when deg_in(v) ≤ 1, a hybrid node when deg_in(v) ≥ 2, a leaf when deg_out(v) = 0, internal
when deg_out(v) ≥ 1, and elementary when deg_in(v) ≤ 1 and deg_out(v) = 1. A tree arc
(respectively, a hybridization arc) is an arc with head a tree node (respectively, a hybrid
node). A DAG N is rooted when it has only one root.
A path on N is a sequence of nodes (v0, v1, ..., vk) such that (vi−1, vi) ∈ E for all
i = 1, ..., k. We call v0 the origin of the path, v1, ..., vk−1 its intermediate nodes, and
vk its end. The length of the path (v0, v1, ..., vk) is k, and it is non-trivial if k ≥ 1. The
acyclicity of N means that it does not contain cycles: non-trivial paths from a node to
itself.
We denote by u ⇝ v any path with origin u and end v. Whenever there exists a path
u ⇝ v, we shall say that v is a descendant of u and also that u is an ancestor of v. When
the path u ⇝ v is non-trivial, we say that v is a proper descendant of u and that u is a
proper ancestor of v. The distance from a node u to a descendant v is the length of a
shortest path from u to v.
The height h(v) of a node v in a DAG N is the largest length of a path from v to a
leaf. The absence of cycles implies that the nodes of a DAG can be stratified by means of
their heights: the nodes of height 0 are the leaves, the nodes of height 1 are those nodes
all of whose children are leaves, the nodes of height 2 are those nodes all of whose children are
leaves and nodes of height 1, and so on. If a node has height m > 0, then all its children
have height smaller than m, and at least one of them has height exactly m − 1.
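Although the paper is purely mathematical, this stratification is straightforward to compute. The following sketch (in Python, under the assumption that a DAG is encoded as a dictionary mapping every node, leaves included, to its list of children; the representation and the function name are ours, chosen only for illustration) computes the height of every node by exactly this recursion.

def heights(children):
    # children: dict mapping every node (leaves included) to its list of children.
    # The height of a leaf is 0; otherwise it is 1 + the maximum height of a child.
    memo = {}

    def h(v):
        if v not in memo:
            kids = children[v]
            memo[v] = 0 if not kids else 1 + max(h(w) for w in kids)
        return memo[v]

    for v in children:
        h(v)
    return memo

# A small rooted DAG with one hybrid node 'h' (two parents 'a' and 'b'):
dag = {'r': ['a', 'b'], 'a': ['1', 'h'], 'b': ['h', '2'], 'h': ['3'],
       '1': [], '2': [], '3': []}
print(heights(dag))  # r has height 3, a and b height 2, h height 1, the leaves height 0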
A node v of N is a strict descendant of a node u if it is a descendant of it, and every
path from a root of N to v contains the node u: in particular, we understand every node
as a strict descendant of itself. When v is a strict descendant of u, we also say that u is
a strict ancestor of v.
The following lemma will be used several times in this paper.
Lemma 1. Let u be a proper strict ancestor of a node v in a DAG N, and let w be an
intermediate node in a path u ⇝ v. Then, u is also a strict ancestor of w.
Proof. Let r ⇝ w be a path from a root of N to w, and concatenate to it the piece w ⇝ v
of the path u ⇝ v under consideration. This yields a path r ⇝ v that must contain u.
Since u does not appear in the piece w ⇝ v, we conclude that it is contained in the path
r ⇝ w. This proves that every path from a root of N to w contains the node u.
For every pair of nodes u, v of N :
– CSA(u, v) is the set of all common ancestors of u and v that are strict ancestors of
at least one of them;
– the least common semi-strict ancestor (LCSA) of u and v, in symbols [u, v], is the
node in CSA(u, v) of minimum height.
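Both CSA(u, v) and the LCSA [u, v] can be computed directly from these definitions. A minimal sketch, reusing the child-list encoding and the heights function of the previous sketch (strict ancestry is tested by checking that the target becomes unreachable from the root once the candidate ancestor is removed); the function names are ours:

def descendants(children, u):
    # All descendants of u (including u itself).
    seen, stack = set(), [u]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(children[v])
    return seen

def is_strict_ancestor(children, root, u, v):
    # u is a strict ancestor of v: every path from the root to v contains u.
    if u == v:
        return True
    seen, stack = set(), [root]
    while stack:
        w = stack.pop()
        if w == u or w in seen:
            continue
        seen.add(w)
        stack.extend(children[w])
    return v not in seen          # v unreachable once u is removed

def lcsa(children, root, u, v, height):
    # CSA(u, v): common ancestors of u and v that are strict ancestors of one of them.
    anc_u = {x for x in children if u in descendants(children, x)}
    anc_v = {x for x in children if v in descendants(children, x)}
    csa = {x for x in anc_u & anc_v
           if is_strict_ancestor(children, root, x, u)
           or is_strict_ancestor(children, root, x, v)}
    # csa is never empty (the root always belongs to it); its element of
    # minimum height is the LCSA [u, v].
    return min(csa, key=lambda x: height[x])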
The LCSA of two nodes u, v in a phylogenetic network is well defined and it is unique:
it is actually the unique element of CSA(u, v) that is a descendant of all elements of
this set [5]. The following result on LCSAs will be used often. It is the generalization to
DAGs of Lemma 6 in [6], and we include its easy proof for the sake of completeness.
Lemma 2. Let N be a DAG and let u, v be a pair of nodes of N such that v is not a
descendant of u. If u is a tree node with parent u′ , then [u, v] = [u′ , v].
4
Proof. We shall prove that CSA(u, v) = CSA(u′ , v).
Let x ∈ CSA(u, v). Since u is not an ancestor of v, x ≠ u and hence any path
x ⇝ u is non-trivial. Then, since u′ is the only parent of u, it appears in this path, and
therefore x is also an ancestor of u′. This shows that x is a common ancestor of u′ and
v. Now, if x is a strict ancestor of v, we already conclude that x ∈ CSA(u′, v), while if
x is a strict ancestor of u, it will also be a strict ancestor of u′ by Lemma 1, and hence
x ∈ CSA(u′, v), too. This proves that CSA(u, v) ⊆ CSA(u′, v).
Conversely, let x ∈ CSA(u′, v). Since u′ is the parent of u, it is clear that x is
a common ancestor of u and v, too. If x is a strict ancestor of v, this implies that
x ∈ CSA(u, v). If x is a strict ancestor of u′, then it is also a strict ancestor of u (every
path r ⇝ u must contain the only parent u′ of u, and then x will belong to the piece
r ⇝ u′ of the path r ⇝ u), and therefore x ∈ CSA(u, v), too. This finishes the proof of
the equality.
Let S be any non-empty finite set of labels. We say that the DAG N is labeled in S,
or that it is an S-DAG, for short, when its leaves are bijectively labeled by elements of S.
Although in real applications the set S would correspond to a given set of extant taxa,
for the sake of simplicity we shall assume henceforth that S = {1, . . . , n}, with n = |S|.
We shall always identify, usually without any further notice, each leaf of an S-DAG with
its label in S.
Two S-DAGs N, N′ are isomorphic, in symbols N ≅ N′, when they are isomorphic
as directed graphs and the isomorphism maps each leaf in N to the leaf with the same
label in N′.
2.2 Path lengths in phylogenetic trees
A phylogenetic tree on a set S of taxa is a rooted S-DAG without hybrid nodes and such
that its root is non-elementary. A phylogenetic tree is fully resolved, or binary, when
every internal node has out-degree 2. Since all ancestors of a node in a phylogenetic tree
are strict, the LCSA [u, v] of two nodes u, v in a phylogenetic tree is simply their least
common ancestor : the unique common ancestor of them that is a descendant of every
other common ancestor of them.
Let T be a phylogenetic tree on the set S = {1, . . . , n}. For every i, j ∈ S, we shall
denote by ℓT(i, j) and ℓT(j, i) the lengths of the paths [i, j] ⇝ i and [i, j] ⇝ j, respectively.
In particular, ℓT (i, i) = 0 for every i = 1, . . . , n.
Definition 1. Let T be a phylogenetic tree on the set S = {1, . . . , n}. The path length
between two leaves i and j is
LT (i, j) = ℓT (i, j) + ℓT (j, i).
The path lengths vector of T is the vector
L(T) = (L_T(i, j))_{1 ≤ i < j ≤ n} ∈ N^{n(n−1)/2}
with its entries ordered lexicographically in (i, j).
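Under the same toy encoding as in the previous sketches, the path lengths vector of a phylogenetic tree can be computed from root-to-leaf chains, since in a tree the LCSA of two leaves is just their least common ancestor. The sketch below is our own illustration; as input it uses the two trees that will appear in Fig. 1 below, which indeed produce the same vector.

from itertools import combinations

def path_lengths_vector(children, root, leaves):
    # L(T) = (L_T(i, j))_{1 <= i < j <= n}, entries in lexicographic order of (i, j).
    parent = {c: p for p, cs in children.items() for c in cs}

    def chain(v):                 # the path root ~> v, as a list of nodes
        path = [v]
        while path[-1] != root:
            path.append(parent[path[-1]])
        return path[::-1]

    L = []
    for i, j in combinations(leaves, 2):
        ci, cj = chain(i), chain(j)
        k = 0                     # length of the common prefix (= depth of the LCA + 1)
        while k < min(len(ci), len(cj)) and ci[k] == cj[k]:
            k += 1
        L.append((len(ci) - k) + (len(cj) - k))
    return L

# The trees (1,2,(3,4)); and ((1,2),3,4); of Fig. 1:
t1 = {'r': [1, 2, 'x'], 'x': [3, 4], 1: [], 2: [], 3: [], 4: []}
t2 = {'r': ['y', 3, 4], 'y': [1, 2], 1: [], 2: [], 3: [], 4: []}
print(path_lengths_vector(t1, 'r', [1, 2, 3, 4]))  # [2, 3, 3, 3, 3, 2]
print(path_lengths_vector(t2, 'r', [1, 2, 3, 4]))  # [2, 3, 3, 3, 3, 2], same vector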
The following result is a special case of Prop. 2 in [7].
Proposition 1. Two fully resolved phylogenetic trees on the same set S of taxa are
isomorphic if, and only if, they have the same path lengths vectors. ⊓⊔
The statement of the last result is false for arbitrary phylogenetic trees. Consider for
instance the phylogenetic trees with Newick strings (1,2,(3,4)); and ((1,2),3,4);
depicted1 in Fig. 1. It is straightforward to check that they have the same path lengths
vectors, but they are not isomorphic.
Fig. 1. Two non-isomorphic phylogenetic trees with the same path lengths vectors.
This problem was overcome in [7] by replacing the path lengths vectors by the following matrices of distances.
Definition 2. The splitted path lengths matrix of T is the n × n square matrix
ℓ(T) = (ℓ_T(i, j))_{i,j = 1,...,n} ∈ M_n(N).
Now, the following result is (again, a special case of) Theorem 11 in [7].
Proposition 2. Two phylogenetic trees on the same set S of taxa are isomorphic if, and
only if, they have the same splitted path lengths matrices. ⊓⊔
3 TCTC networks
While the basic notion of phylogenetic tree is well established, the notion of phylogenetic
network is much less well defined [16]. The networks we consider in this paper are the
(almost) most general possible ones: rooted S-DAGs with non-elementary root. Following
[23], we shall call them hybridization networks. In these hybridization networks, every
node represents a different species, and the arcs represent direct descendance, be it
through mutation (tree arcs) or through some reticulation event (hybridization arcs).
It is usual to forbid elementary nodes in hybridization networks [23], mainly because
they cannot be reconstructed. We allow them here for two reasons. On the one hand,
because allowing them simplifies considerably some proofs, as will hopefully become clear
in Section 5. On the other hand, because, as Moret et al. point out [18, §4.3], they can be
useful both from the biological point of view, to include auto-polyploidy in the model, and
from the formal point of view, to make a phylogeny satisfy other constraints, like
for instance time consistency (see below) or the impossibility of successive hybridizations.
Of course, our main results apply without any modification to hybridization networks
without elementary nodes as well.
1 Henceforth, in graphical representations of DAGs, hybrid nodes are represented by squares, tree nodes by circles, and indeterminate nodes, that is, nodes that can be of tree or hybrid type, by squares with rounded corners.
Following [5], by a phylogenetic network on a set S of taxa we understand a rooted S-DAG N with non-elementary root where every hybrid node has exactly one child, and it is
a tree node. Although, from the mathematical point of view, phylogenetic networks are a
special case of hybridization networks, from the point of view of modelling they represent
in a different way evolutionary histories with reticulation events: in a phylogenetic network,
every tree node represents a different species and every hybrid node, a reticulation event
that gives rise to the species represented by its only child.
A hybridization network N = (V, E) is time consistent when it allows a temporal
representation [1]: a mapping
τ : V → N
such that τ (u) < τ (v) for every tree arc (u, v) and τ (u) = τ (v) for every hybridization
arc (u, v). Such a temporal representation can be understood as an assignment of times
to nodes that strictly increases from parents to tree children and so that the parents of
each hybrid node coexist in time.
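Deciding whether a hybridization network admits a temporal representation reduces to a simple constraint check: hybridization arcs force equal times and tree arcs force strictly increasing times, so one can merge the endpoints of every hybridization arc and look for a topological layering of the resulting classes. The following sketch implements this standard argument under the child-list encoding of the earlier sketches; it is our own illustration, not a procedure described in the paper.

def temporal_representation(children):
    # Return a temporal representation tau of the network, or None if the
    # network is not time consistent. Arcs whose head has in-degree >= 2 are
    # hybridization arcs (equal times); all other arcs are tree arcs (strict increase).
    indeg = {v: 0 for v in children}
    for cs in children.values():
        for c in cs:
            indeg[c] += 1

    # Union-find over the nodes, merging the endpoints of hybridization arcs.
    rep = {v: v for v in children}
    def find(v):
        while rep[v] != v:
            rep[v] = rep[rep[v]]
            v = rep[v]
        return v
    for u, cs in children.items():
        for v in cs:
            if indeg[v] >= 2:
                rep[find(u)] = find(v)

    # Strict constraints between classes, coming from the tree arcs.
    succ = {find(v): set() for v in children}
    for u, cs in children.items():
        for v in cs:
            if indeg[v] <= 1:
                succ[find(u)].add(find(v))

    # Longest-path layering of the class digraph; any cycle (including a
    # self-loop) means that no temporal representation exists.
    level, visiting = {}, set()
    def depth(c):
        if c in visiting:
            return None
        if c not in level:
            visiting.add(c)
            ds = [depth(d) for d in succ[c]]
            visiting.discard(c)
            if any(d is None for d in ds):
                return None
            level[c] = 0 if not ds else 1 + max(ds)
        return level[c]

    if any(depth(find(v)) is None for v in children):
        return None
    top = max(level.values())
    return {v: top - level[find(v)] for v in children}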
Remark 1. Let N = (V, E) be a time consistent hybridization network, and let N1 =
(V1 , E1 ) be a hybridization network obtained by removing from N some nodes and all
their descendants (as well as all arcs pointing to any removed node). Then N1 is still
time consistent, because the restriction of any temporal representation τ : V → N of N
to V1 yields a temporal representation of N1 .
A hybridization network satisfies the tree-child condition, or it is tree-child, when
every internal node has at least one child that is a tree node (a tree child). So, tree-child
hybridization networks can be understood as general models of reticulate evolution where
every species other than the extant ones, represented by the leaves, has some descendant
through mutation. Tree-child hybridization networks include galled trees [13, 14] as a
particular case [8].
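The tree-child condition is immediate to test from the in-degrees, and combined with the previous sketch it yields a (naive) test of the tree-child time consistent property used below; again, a minimal illustration under our assumed encoding:

def is_tree_child(children):
    # Every internal node has at least one child that is a tree node (in-degree <= 1).
    indeg = {v: 0 for v in children}
    for cs in children.values():
        for c in cs:
            indeg[c] += 1
    return all(any(indeg[c] <= 1 for c in cs)
               for cs in children.values() if cs)

def is_tctc(children):
    # Tree-child and time consistent, reusing temporal_representation above.
    return is_tree_child(children) and temporal_representation(children) is not None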
A tree path in a tree-child hybridization network is a non-trivial path such that its
end and all its intermediate nodes are tree nodes. A node v is a tree descendant of a
node u when there exists a tree path from u to v. By [9, Lem. 2], every internal node u
of a tree-child hybridization network has some tree descendant leaf, and by [9, Cor. 4]
every tree descendant v of u is a strict descendant of u and the path u ⇝ v is unique.
To simplify the notations, we shall call TCTC-networks the tree-child time consistent hybridization networks: these include the tree-child time consistent phylogenetic
networks, which were the objects dubbed TCTC-networks in [5, 6]. Every phylogenetic
tree is also a TCTC-network. Let TCTCn denote the class of all TCTC-networks on
S = {1, . . . , n}.
We prove now some basic properties of TCTC-networks that will be used later.
Lemma 3. Let u be a node of a TCTC-network N , and let v be a child of u. The node
v is a tree node if, and only if, it is a strict descendant of u.
Proof. Assume first that v is a tree child of u. Since u is the only parent of v, every
non-trivial path ending in v must contain u. This shows that u is a strict ancestor of v.
Assume now that v is a hybrid child of u that is also a strict descendant of it, and
let us see that this leads to a contradiction. Indeed, in this case the set H(u) of hybrid
children of u that are strict descendants of it is non-empty, and we can choose a node
v0 in it of largest height. Let v1 be any parent of v0 other than u. Since u is a strict
ancestor of v0, it must be an ancestor of v1, and since u and v1 have the hybrid child v0
in common, they must be assigned the same time by any temporal representation, and therefore v1 as well
as all intermediate nodes in any path u ⇝ v1 must be hybrid. Moreover, since u is a strict
ancestor of v0, it is also a strict ancestor of v1 as well as of any intermediate node in any
path u ⇝ v1 (by Lemma 1). In particular, the child of u in a path u ⇝ v1 will belong to
H(u) and its height will be larger than the height of v0, which is impossible.
Corollary 1. All children of the root of a TCTC-network are tree nodes.
Proof. Every node in a hybridization network is a strict descendant of the root. Then,
Lemma 3 applies.
The following result is the key ingredient in the proofs of our main results; it generalizes to hybridization networks Lemma 3 in [6], which referred to phylogenetic networks.
A similar result was proved in [4] for tree-sibling (that is, where every hybrid node has
a sibling that is a tree node) time consistent phylogenetic networks with all their hybrid
nodes of in-degree 2.
Lemma 4. Every TCTC-network with more than one leaf contains at least one node v
satisfying one of the following properties:
(a) v is an internal tree node and all its children are tree leaves.
(b) v is a hybrid internal node, all its children are tree leaves, and all its siblings are
leaves or hybrid nodes.
(c) v is a hybrid leaf, and all its siblings are leaves or hybrid nodes.
Proof. Let N be a TCTC-network and τ a temporal representation of it. Let v0 be an
internal node of highest τ-value and, among such nodes, of smallest height. The tree
children of v0 have strictly higher τ-value than v0, and therefore they are leaves. And the
hybrid children of v0 have the same τ-value as v0 but smaller height, and therefore
they are also leaves.
Now:
– If v0 is a tree node all of whose children are tree nodes, taking v = v0 we are in case
(a).
– If v0 is a hybrid node all of whose children are tree nodes, then its parents have the same
τ-value as v0, which, we recall, is the highest one. This implies that their children (v0's
siblings) cannot be internal tree nodes, and hence they are leaves or hybrid nodes.
So, taking v = v0 , we are in case (b).
– If v0 has some hybrid child, take as the node v in the statement this hybrid child: it
is a leaf, and all its parents have the same τ -value as v0 , which implies, arguing as in
the previous case, that all siblings of v are leaves or hybrid nodes. Thus, v satisfies
(c).
We introduce now some reductions for TCTC-networks. Each of these reductions
applied to a TCTC-network with n leaves and m internal nodes produces a TCTC-network with either n − 1 leaves and m internal nodes or with n leaves and m − 1
internal nodes, and given any TCTC-network with more than two leaves, it will always
be possible to apply to it some of these reductions. This lies at the basis of the proofs
by algebraic induction of the main results in this paper.
Let N be a TCTC-network with n ≥ 3 leaves.
(U) Let i be one tree leaf of N and assume that its parent has only this child. The U (i)
reduction of N is the network NU (i) obtained by removing the leaf i, together with its
incoming arc, and labeling with i its former parent; cf. Fig. 2. This reduction removes
the only child of a node, and thus it is clear that NU (i) is still a TCTC-network, with
the same number of leaves but one internal node less than N .
Fig. 2. The U(i)-reduction.
(T) Let i, j be two sibling tree leaves of N (that may, or may not, have other siblings). The
T (i; j) reduction of N is the network NT (i;j) obtained by removing the leaf i, together
with its incoming arc; cf. Fig. 3. This reduction procedure removes one tree leaf, but
its parent u keeps at least another tree child, and if u was the root of N then it would
not become elementary after the reduction, because n ≥ 3 and therefore, since j is a
leaf, u should have at least another child. Therefore, NT(i;j) is a TCTC-network with
the same number of internal nodes as N and n − 1 leaves.
(H) Let i be a hybrid leaf of N, let v1, ..., vk, with k ≥ 2, be its parents, and assume
that each one of these parents has (at least) one tree leaf child: for every l = 1, ..., k,
let jl be a tree leaf child of vl . The H(i; j1 , . . . , jk ) reduction of N is the network
NH(i;j1 ,...,jk ) obtained by removing the hybrid leaf i and its incoming arcs; cf. Fig. 4.
This reduction procedure preserves the time consistency and the tree-child condition
(it removes a hybrid leaf), and the root does not become elementary: indeed, the
Fig. 3. The T(i; j)-reduction.
only possibility for the root to become elementary is to be one of the parents of i,
which is impossible by Corollary 1. Therefore, NH(i;j1 ,...,jk ) is a TCTC-network with
the same number of internal nodes as N and n − 1 leaves.
Fig. 4. The H(i; j1, ..., jk)-reduction.
We shall call the inverses of the U, T, and H reduction procedures, respectively, the
U−1, T−1, and H−1 expansions, and we shall denote them by U−1(i), T−1(i; j), and
H−1(i; j1, ..., jk). More specifically, for every TCTC-network N:
– If N has some leaf labeled i, the expansion U −1 (i) can be applied to N and the
resulting network NU −1 (i) is obtained by unlabeling the leaf i and adding to it a tree
leaf child labeled with i. NU −1 (i) is always a TCTC-network.
– If N has no leaf labeled with i and some tree leaf labeled with j, the expansion
T −1 (i; j) can be applied to N , and the resulting network NT −1 (i;j) is obtained by
adding to the parent of the leaf j a new tree leaf child labeled with i. NT−1(i;j) is
always a TCTC-network.
– If N has no leaf labeled with i and some tree leaves labeled with j1, ..., jk, k ≥ 2,
that are not siblings of each other, the expansion H−1(i; j1, ..., jk) can be applied to
N and the resulting network NH−1(i;j1,...,jk) is obtained by adding a new hybrid node
labeled with i and arcs from the parents of j1, ..., jk to i. NH−1(i;j1,...,jk) is always a
tree child hybridization network, but it need not be time consistent, as the parents
of j1, ..., jk may be forced to have different times in every temporal representation of N (for instance, one of
them could be a tree descendant of another one).
The following result is easily deduced from the explicit descriptions of the reduction
and expansion procedures, and the fact that isomorphisms preserve labels and parents.
Lemma 5. Let N and N′ be two TCTC-networks. If N ≅ N′, then the results of applying
to both N and N′ the same U reduction (respectively, T reduction, H reduction, U−1
expansion, T−1 expansion, or H−1 expansion) are again two isomorphic hybridization
networks.
Moreover, if we apply a U reduction (respectively, T reduction, or H reduction) to
a TCTC-network N, and then we apply to the resulting TCTC-network the inverse U−1
expansion (respectively, T−1 expansion, or H−1 expansion), we obtain a TCTC-network
isomorphic to N. ⊓⊔
As we said above, every TCTC-network with at least 3 leaves allows the application
of some reduction.
Proposition 3. Let N be a TCTC-network with more than two leaves. Then, at least
one U, T, or H reduction can be applied to N.
Proof. By Lemma 4, N contains either an internal (tree or hybrid) node v all of whose
children are tree leaves, or a hybrid leaf i all of whose siblings are leaves or hybrid nodes.
In the first case, we can apply to N either the reduction U(i) (if v has only one child,
and it is the tree leaf i) or T(i; j) (if v has at least two tree leaf children, i and j). In the
second case, let v1, ..., vk, with k ≥ 2, be the parents of i. By the tree child condition,
each vl, with l = 1, ..., k, has some tree child, and by the assumption on i, it will be a
leaf, say jl. Then, we can apply to N the reduction H(i; j1, ..., jk).
Therefore, every TCTC-network with n ≥ 3 leaves and m internal nodes is obtained
by the application of a U−1, T−1, or H−1 expansion to a TCTC-network with either
n − 1 leaves or n leaves and m − 1 internal nodes. This allows the recursive construction
of all TCTC-networks from TCTC-networks (actually, phylogenetic trees) with 2 leaves
and 1 internal node.
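Proposition 3 is effective: scanning the nodes as in its proof produces an applicable reduction whenever the network has more than two leaves. A minimal sketch of this scan, under the same assumed child-list encoding and with leaves identified with their labels (the function name and return convention are ours):

def applicable_reduction(children):
    # Return ('U', i), ('T', i, j), ('H', i, (j1, ..., jk)), or None.
    indeg = {v: 0 for v in children}
    for cs in children.values():
        for c in cs:
            indeg[c] += 1
    parents = {v: [u for u, cs in children.items() if v in cs] for v in children}

    def tree_leaf(v):
        return not children[v] and indeg[v] <= 1

    # Case 1: an internal node all of whose children are tree leaves.
    for v, cs in children.items():
        if cs and all(tree_leaf(c) for c in cs):
            return ('U', cs[0]) if len(cs) == 1 else ('T', cs[0], cs[1])

    # Case 2: a hybrid leaf each of whose parents has some tree leaf child.
    for v in children:
        if not children[v] and indeg[v] >= 2:
            js = []
            for p in parents[v]:
                tl = [s for s in children[p] if s != v and tree_leaf(s)]
                if not tl:
                    break
                js.append(tl[0])
            else:
                return ('H', v, tuple(js))
    return None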
Example 1. Fig. 5 shows how a sequence of reductions transforms a certain TCTC-network N with 4 leaves into a phylogenetic tree with 2 leaves. The sequence of inverse
expansions would then generate N from this phylogenetic tree. This sequence of expansions generating N is, of course, not unique.
4 Path lengths vectors for fully resolved networks
Let N be a hybridization network on S = {1, . . . , n}. For every pair of leaves i, j of N ,
let ℓN (i, j) and ℓN (j, i) be the distance from [i, j] to i and to j, respectively.
Definition 3. The LCSA-path length between two leaves i and j in N is
LN (i, j) = ℓN (i, j) + ℓN (j, i).
The LCSA-path lengths vector of N is
L(N) = (L_N(i, j))_{1 ≤ i < j ≤ n} ∈ N^{n(n−1)/2},
with its entries ordered lexicographically in (i, j).
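Definition 3 can be evaluated directly by combining the heights and LCSA sketches of Section 2 with a breadth-first computation of distances; the following illustration (with names of our own choosing) does exactly that.

from itertools import combinations

def dist(children, u, v):
    # Length of a shortest directed path from u to a descendant v.
    frontier, d = {u}, 0
    while frontier:
        if v in frontier:
            return d
        frontier = {c for w in frontier for c in children[w]}
        d += 1
    return None

def lcsa_path_lengths_vector(children, root, leaves):
    # L(N) = (L_N(i, j))_{1 <= i < j <= n}, reusing heights() and lcsa() from Section 2.
    h = heights(children)
    L = []
    for i, j in combinations(leaves, 2):
        a = lcsa(children, root, i, j, h)
        L.append(dist(children, a, i) + dist(children, a, j))
    return L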
Fig. 5. A sequence of reductions.
Notice that LN (i, j) = LN (j, i), for every pair of leaves i, j ∈ S.
If N is a phylogenetic tree, the LCSA-path length between two leaves is the path
length between them as defined in §2.2, and therefore the vectors L(N ) defined therein
and here are the same. But, contrary to what happens in phylogenetic trees, the LCSA-path length between two leaves i and j in a hybridization network need not be the
smallest sum of the distances from a common ancestor of i and j to these leaves (that
is, the distance between these leaves in the undirected graph associated to the network).
Example 2. Consider the TCTC-network N depicted in Fig. 6. Table 1 gives, in its
upper triangle, the LCSA of every pair of different leaves, and in its lower triangle, the
LCSA-path length between every pair of different leaves.
Notice that, in this network, [3, 5] = r, because the root is the only common ancestor
of 3 and 5 that is a strict ancestor of at least one of them, and hence LN(3, 5) = 8, but e is
a common ancestor of both leaves and the length of both paths e ⇝ 3 and e ⇝ 5 is 3.
Similarly, f is also a common ancestor of both leaves and the length of both paths f ⇝ 3
and f ⇝ 5 is 3. This is an example of an LCSA-path length between two leaves that is
larger than the smallest sum of the distances from a common ancestor of these leaves
to each one of them.
In a fully resolved phylogeny with reticulation events, every non-extant species should
have two direct descendants, and every reticulation event should involve two parent
species, as such an event corresponds always to the exchange of genetic information
between two parents: as Semple points out [23], hybrid nodes with in-degree greater than
2 actually represent “an uncertainty of the exact order of ‘hybridization’.” Depending
on whether we use hybridization or phylogenetic networks to model phylogenies, we
distinguish between:
Fig. 6. The network N in Example 2.
Table 1. For every 1 ≤ i < j ≤ 6, the entry (i, j) of this table is [i, j], and the entry (j, i) is LN(i, j),
with N the network in Fig. 6.

      1   2   3   4   5   6
  1   –   e   e   r   a   r
  2   4   –   b   r   e   r
  3   5   3   –   c   r   f
  4   6   6   3   –   f   f
  5   3   5   8   5   –   d
  6   6   6   5   4   3   –
– Fully resolved hybridization networks: hybridization networks with all their nodes of
types (0, 2), (1, 0), (1, 2), (2, 0), or (2, 2).
– Fully resolved phylogenetic networks: phylogenetic networks with all their nodes of
types (0, 2), (1, 0), (1, 2), or (2, 1).
To simplify the language, we shall say that a hybridization network is quasi-binary when
all its nodes are of types (0, 2), (1, 0), (1, 2), (2, 0), (2, 1), or (2, 2). These quasi-binary
networks include as special cases the fully resolved hybridization and phylogenetic networks.
Our main result in this section establishes that the LCSA-path lengths vectors separate fully resolved (hybridization or phylogenetic) TCTC-networks, thus generalizing
Proposition 1 from trees to networks. To prove this result, we shall use the same strategy as the one developed in [4] or [6] to prove that the metrics introduced therein were
indeed metrics: algebraic induction based on reductions. Now, we cannot use the reductions defined in the last section as they stand, because they may generate elementary
nodes that are forbidden in fully resolved networks. Instead, we shall use certain suitable
combinations of them that always reduce the number of leaves by one.
So, consider the following reduction procedures for quasi-binary TCTC networks N
with n leaves:
(R) Let i, j be two sibling tree leaves of N . The R(i; j) reduction of N is the quasi-binary
TCTC-network NR(i;j) obtained by applying first the T (i; j) reduction to N and then
the U (j) reduction to the resulting network. The final result is that the leaves i and
j are removed, together with their incoming arcs, and then their former common
parent, which now has become a leaf, is labeled with j; cf. Fig. 7.
Fig. 7. The R(i; j)-reduction.
(H0 ) Let i be a hybrid leaf, let v1 and v2 be its parents and assume that the other children
of these parents are tree leaves j1 and j2 , respectively. The H0 (i; j1 , j2 ) reduction
of N is the quasi-binary TCTC-network NH0 (i;j1 ,j2 ) obtained by applying first the
reduction H(i; j1 , j2 ) to N and then the reductions U (j1 ) and U (j2 ) to the resulting
network. The overall effect is that the hybrid leaf i and the tree leaves j1 , j2 are
removed, together with their incoming arcs, and then the former parents v1 , v2 of j1
and j2 are labeled with j1 and j2 , respectively; cf. Fig. 8.
Fig. 8. The H0(i; j1, j2)-reduction.
(H1) Let A be a hybrid node with only one child i, which is a tree node. Let v1 and v2
be the parents of A and assume that the other children of these parents are tree
leaves j1 and j2 , respectively. The H1 (i; j1 , j2 ) reduction of N is the TCTC-network
NH1 (i;j1 ,j2 ) obtained by applying first the reduction U (i) to N , followed by the reduction H0 (i; j1 , j2 ) to the resulting network. The overall effect is that the leaf i, its
parent A and the leaves j1 , j2 are removed, together with their incoming arcs, and
then the former parents v1 , v2 of j1 and j2 are labeled with j1 and j2 , respectively;
cf. Fig. 9.
We use H0 and H1 instead of H and U because, for our purposes in this section, it has
to be possible to decide whether or not we can apply a given reduction to a given fully
Fig. 9. The H1(i; j1, j2)-reduction.
resolved network N from the knowledge of L(N ), and this cannot be done for the U
reduction, while, as we shall see below, it is possible for H0 and H1 .
H0 reductions cannot be applied to fully resolved phylogenetic networks (they don’t
have hybrid leaves) and H1 reductions cannot be applied to fully resolved hybridization
networks (they don't have out-degree 1 hybrid nodes). The result of applying an R or an H0
reduction to a fully resolved TCTC hybridization network is again a fully resolved TCTC
hybridization network, and the result of applying an R or an H1 reduction to a fully resolved
TCTC phylogenetic network is again a fully resolved TCTC phylogenetic network.
We shall call the inverses of the R, H0 and H1 reduction procedures, respectively, the
R−1, H0−1 and H1−1 expansions, and we shall denote them by R−1(i; j), H0−1(i; j1, j2) and
H1−1(i; j1, j2). More specifically, for every quasi-binary TCTC-network N with no leaf
labeled i:
– the expansion R−1(i; j) can be applied to N if it has a leaf labeled j, and the resulting
network NR−1(i;j) is obtained by unlabeling the leaf j and adding to it two tree leaf
children labeled with i and j;
– the expansion H0−1(i; j1, j2) can be applied to N if it has a pair of leaves labeled
j1, j2, and the resulting network NH0−1(i;j1,j2) is obtained by adding a new hybrid leaf
labeled with i, and then, for each l = 1, 2, unlabeling the leaf jl and adding to it a
new tree leaf child labeled with jl and an arc to i.
– the expansion H1−1(i; j1, j2) can be applied to N if it has a pair of leaves labeled
j1, j2, and the resulting network NH1−1(i;j1,j2) is obtained by adding a new node A, a
tree leaf child i to it, and then, for each l = 1, 2, unlabeling the leaf jl and adding to
it a new tree leaf child labeled with jl and an arc to A.
An R−1(i; j) expansion of a quasi-binary TCTC-network is always a quasi-binary TCTC-network, but an H0−1(i; j1, j2) or an H1−1(i; j1, j2) expansion of a quasi-binary TCTC-network, while still being always quasi-binary and tree-child, need not be time consistent:
for instance, the leaves j1 and j2 could be a hybrid leaf and a tree sibling of it. Moreover,
we have the following result, which is a direct consequence of Lemma 5 and which we state
for further reference.
Lemma 6. Let N and N′ be two quasi-binary TCTC-networks. If N ≅ N′, then the
results of applying to both N and N′ the same R reduction (respectively, H0 reduction, H1
reduction, R−1 expansion, H0−1 expansion, or H1−1 expansion) are again two isomorphic
networks.
Moreover, if we apply an R reduction (respectively, H0 reduction or H1 reduction) to
a quasi-binary TCTC-network N, and then we apply to the resulting network the inverse
R−1 expansion (respectively, H0−1 expansion or H1−1 expansion), we obtain a quasi-binary
TCTC-network isomorphic to N. ⊓⊔
We have moreover the following result.
Proposition 4. Let N be a quasi-binary TCTC-network with more than one leaf. Then,
at least one R, H0 , or H1 reduction can be applied to N .
Proof. If N contains some internal node with two tree leaf children i and j, then the
reduction R(i; j) can be applied. If N does not contain any node with two tree leaf
children, then, by Lemma 4, it contains a hybrid node v that is either a leaf (say, labeled
with i) or it has only one child, which is a tree leaf (say, labeled with i), and such that
all siblings of v are leaves or hybrid nodes. Now, the quasi-binarity of N and the tree
child condition entail that v has two parents, that each one of them has exactly one child
other than v, and that this second child is a tree node. So, v has exactly two siblings,
and they are tree leaves, say j1 and j2 . Then, the reduction H0 (i; j1 , j2 ) (if v is a leaf)
or H1 (i; j1 , j2 ) (if v is not a leaf) can be applied.
Corollary 2. (a) If N is a fully resolved TCTC hybridization network with more than
one leaf, then at least one R or H0 reduction can be applied to it.
(b) If N is a fully resolved TCTC phylogenetic network with more than one leaf, then
at least one R or H1 reduction can be applied to it. ⊓⊔
We shall prove now that the application conditions for the reductions introduced
above can be read from the LCSA-path lengths vector of a fully resolved TCTC-network
and that they modify in a specific way the LCSA-path lengths of the network which they
are applied to. This will entail that if two fully resolved (hybridization or phylogenetic)
TCTC-networks have the same LCSA-path lengths vectors, then the same reductions can
be applied to both networks and the resulting fully resolved TCTC-networks still have
the same LCSA-path lengths vectors. This will be the basis of the proof by induction on
the number of leaves that two TCTC hybridization or phylogenetic networks with the
same LCSA-path lengths vectors are always isomorphic.
Lemma 7. Let i, j be two leaves of a quasi-binary TCTC-network N . Then, i and j are
siblings if, and only if, LN (i, j) = 2.
Proof. If LN(i, j) = 2, then the paths [i, j] ⇝ i and [i, j] ⇝ j have length 1, and therefore
[i, j] is a parent of i and j. Conversely, if i and j are siblings and u is a parent in common
of them, then, by the quasi-binarity of N , they are the only children of u, and by the
tree-child condition, one of them, say i, is a tree node. But then, u is a strict ancestor of
i, an ancestor of j, and no proper descendant of u is an ancestor of both i and j. This
implies that u = [i, j] and hence that LN (i, j) = 2.
Lemma 8. Let N be a quasi-binary TCTC-network on a set S of taxa.
(1) The reduction R(i; j) can be applied to N if, and only if, LN (i, j) = 2 and, for every
k ∈ S \ {i, j}, LN (i, k) = LN (j, k).
(2) If the reduction R(i; j) can be applied to N , then
LNR(i;j) (j, k) = LN (j, k) − 1 for every k ∈ S \ {i, j}
LNR(i;j) (k, l) = LN (k, l) for every k, l ∈ S \ {i, j}
Proof. As far as (1) goes, R(i; j) can be applied to N if, and only if, the leaves i and j
are siblings and of tree type. Now, if i and j are two tree sibling leaves and u is their
parent, then on the one hand, LN (i, j) = 2 by Lemma 7, and on the other hand, since,
by Lemma 2, [i, k] = [u, k] = [j, k] for every leaf k ≠ i, j, we have that
ℓN (i, k) = ℓN (j, k) = 1 + distance from [u, k] to u
ℓN (k, i) = ℓN (k, j) = distance from [u, k] to k
and therefore LN (i, k) = LN (j, k) for every k ∈ S \ {i, j}.
Conversely, assume that LN (i, j) = 2 and that LN (i, k) = LN (j, k) for every k ∈
S \ {i, j}. The fact that LN (i, j) = 2 implies that i and j share a parent u. If one of these
leaves, say i, is hybrid, then the tree child condition implies that the other, j, is of tree
type. Let now v be the other parent of i and k a tree descendant leaf of v, and let h be
the length of the unique path v ⇝ k. Then v is a strict ancestor of k and an ancestor of
i, and no proper tree descendant of v can possibly be an ancestor of i: otherwise, there
would exist a path from a proper tree descendant of v to u, and then the time consistency
property would forbid u and v to have a hybrid child in common. Therefore v = [i, k]
and LN (i, k) = h + 1. Now, the only possibility for the equality LN (j, k) = h + 1 to hold
is that some intermediate node in the path v ⇝ k is an ancestor of the only parent u of
j, which, as we have just seen, is impossible. This leads to a contradiction, which shows
that i and j are both tree sibling leaves. This finishes the proof of (1).
As far as (2) goes, in NR(i;j) we remove the leaf i and we replace the leaf j by its parent.
By Lemma 2, this does not modify the LCSA [j, k] of j and any other remaining leaf k,
and since we have shortened by 1 any path ending in j, we deduce that LNR(i;j)(j, k) =
LN(j, k) − 1 for every k ∈ S \ {i, j}. On the other hand, for every k, l ∈ S \ {i, j}, the
reduction R(i; j) has affected neither the LCSA [k, l] of k and l, nor the paths [k, l] ⇝ k
or [k, l] ⇝ l, which implies that LNR(i;j)(k, l) = LN(k, l).
Lemma 9. Let N be a fully resolved TCTC hybridization network on a set S of taxa.
(1) The reduction H0(i; j1, j2) can be applied to N if, and only if, LN(i, j1) = LN(i, j2) = 2.
(2) If the reduction H0 (i; j1 , j2 ) can be applied to N , then
LNH0 (i;j1 ,j2 ) (j1 , j2 ) = LN (j1 , j2 ) − 2
LNH0 (i;j1 ,j2 ) (j1 , k) = LN (j1 , k) − 1 for every k ∈ S \ {i, j1 , j2 }
LNH0 (i;j1 ,j2 ) (j2 , k) = LN (j2 , k) − 1 for every k ∈ S \ {i, j1 , j2 }
LNH0 (i;j1 ,j2 ) (k, l) = LN (k, l) for every k, l ∈ S \ {i, j1 , j2 }
Proof. As far as (1) goes, the reduction H0 (i; j1 , j2 ) can be applied to N if, and only
if, i is a hybrid sibling of the tree leaves j1 and j2 . If this last condition happens, then
LN (i, j1 ) = 2 and LN (i, j2 ) = 2 by Lemma 7. Conversely, LN (i, j1 ) = LN (i, j2 ) = 2
implies that i, j1 and i, j2 are pairs of sibling leaves. Since no node of N can have more
than 2 children, and at least one of its children must be of tree type, this implies that i
is a hybrid node (with two different parents), and j1 and j2 are tree nodes.
As far as (2) goes, the tree leaves j1 and j2 are replaced by their parents. By Lemma
2, this does not affect any LCSA and it only shortens by 1 the paths ending in j1 or j2.
Thus, the H0(i; j1, j2) reduction does not affect the LCSA-path length between any pair
of remaining leaves other than j1 and j2, it shortens by 1 the LCSA-path length between
j1 or j2 and any remaining leaf other than j1 or j2, and it shortens by 2 the LCSA-path
length between j1 and j2.
Lemma 10. Let N be a fully resolved TCTC phylogenetic network on a set S of taxa.
(1) The reduction H1 (i; j1 , j2 ) can be applied to N if, and only if,
– LN (i, j1 ) = LN (i, j2 ) = 3,
– LN(j1, j2) ≥ 4,
– if LN (j1 , j2 ) = 4, then LN (j1 , k) = LN (j2 , k) for every k ∈ S \ {j1 , j2 , i}.
(2) If the reduction H1 (i; j1 , j2 ) can be applied to N , then
LNH1 (i;j1 ,j2 ) (j1 , j2 ) = LN (j1 , j2 ) − 2
LNH1 (i;j1 ,j2 ) (j1 , k) = LN (j1 , k) − 1 for every k ∈ S \ {i, j1 , j2 }
LNH1 (i;j1 ,j2 ) (j2 , k) = LN (j2 , k) − 1 for every k ∈ S \ {i, j1 , j2 }
LNH1(i;j1,j2)(k, l) = LN(k, l) for every k, l ∈ S \ {i, j1, j2}
Proof. As far as (1) goes, the reduction H1 (i; j1 , j2 ) can be applied to N if, and only if,
j1 and j2 are tree leaves that are not siblings and they share a sibling hybrid node that
has the tree leaf i as its only child. Now, if this application condition for H1 (i; j1 , j2 ) is
satisfied, then LN (i, j1 ) = 3, because the parent of j1 is an ancestor of i, a strict ancestor
of j1 , and clearly no proper descendant of it is an ancestor of i and j1 ; by a similar
reason, LN(i, j2) = 3. Moreover, since j1 and j2 are not siblings, LN(j1, j2) ≥ 3. But if
LN (j1 , j2 ) = 3, then there would exist an arc from the parent of j1 to the parent of j2 ,
or vice versa, which would entail a node of out-degree 3 that cannot exist in the fully
resolved network N. Therefore, LN(j1, j2) ≥ 4. Finally, if LN(j1, j2) = 4, this means that
the parents x and y of j1 and j2 (that are tree nodes, because they have out-degree 2 and
N is a phylogenetic network) are siblings: let u be their parent in common. In this case,
no leaf other than j1 , j2 , i is a descendant of u, and therefore, for every k ∈ S \ {j1 , j2 , i},
[j1 , k] = [x, k] = [u, k] = [y, k] = [j2 , k]
by Lemma 2, and thus
ℓN (j1 , k) = ℓN (j2 , k) = 2 + distance from [u, k] to u
ℓN (k, j1 ) = ℓN (k, j2 ) = distance from [u, k] to k,
which implies that LN (j1 , k) = LN (j2 , k).
Conversely, assume that LN(i, j1) = LN(i, j2) = 3, that LN(j1, j2) ≥ 4, and that if
LN (j1 , j2 ) = 4, then LN (j1 , k) = LN (j2 , k) for every k ∈ S \ {j1 , j2 , i}. Let x, y and
z be the parents of j1 , j2 and i, respectively. Notice that these parents are pairwise
different (otherwise, the LCSA-path length between a pair among j1 , j2 , i would be 2).
Moreover, since N is a phylogenetic network, j1 , j2 and i are tree nodes. Then, LN (i, j1 ) =
LN (i, j2 ) = 3 implies that there must exist an arc between the nodes x and z and an arc
between the nodes y and z.
Now, if these arcs are (z, x) and (z, y), the node z would have out-degree 3, which
is impossible. Assume now that (x, z) and (z, y) are arcs of N . In this case, both z and
x have out-degree 2, which implies (recall that N is a phylogenetic network) that they
are tree nodes. Then, x = [j1 , j2 ] (it is an ancestor of j2 , a strict ancestor of j1 , and no
proper descendant of it is an ancestor of j1 and j2 ) and therefore LN (j1 , j2 ) = 4. In this
case, we assume that LN (j1 , k) = LN (j2 , k) for every k ∈ S \ {j1 , j2 , i}. Now we must
distinguish two cases, depending on the type of node y:
– If y is a tree node, let p be its child other than j2 , and let k be a tree descendant leaf
of p. In this case, [j1 , k] = x and [j2 , k] = y (by the same reason why x is [j1 , j2 ]),
and hence LN (j1 , k) = LN (j2 , k) + 2, against the assumption LN (j1 , k) = LN (j2 , k).
– If y is a hybrid node, let p be its parent other than z, and let k be a tree descendant
leaf of p (k ≠ j2, because j2 is not a tree descendant of p). In this case, [j2, k] = p
(because p is an ancestor of j2 and a strict ancestor of k, and the time consistency
property implies that no intermediate node in the path p ⇝ k can be an ancestor of
y). Now, if the length of the (only) path p ⇝ k is h, then LN(j2, k) = h + 2, and
for the equality LN(j1, k) = h + 2 to hold, either the arc (x, p) belongs to N, which
is impossible because x would have out-degree 3, or a node in the path p ⇝ k is an
ancestor of x, which is impossible because of the time consistency property.
In both cases we reach a contradiction that implies that the arcs (x, z), (z, y) do not exist
in N . By symmetry, the arcs (y, z), (z, x) do not exist in N , either. Therefore, the only
possibility is that N contains the arcs (x, z), (y, z), that is, that z is hybrid child of the
nodes x and y. This finishes the proof of (1).
As far as (2) goes, it is proved as in Lemma 9.
Now we can prove the main results in this section.
Proposition 5. Let N and N′ be two fully resolved TCTC hybridization networks on
the same set S of taxa. Then, L(N) = L(N′) if, and only if, N ≅ N′.
Proof. The ‘if’ implication is obvious. We prove the ‘only if’ implication by induction on
the number n of elements of S.
The cases n = 1 and n = 2 are straightforward, because there exists only one TCTC-network on S = {1} and only one TCTC-network on S = {1, 2}: the one-node graph and the
phylogenetic tree with leaves 1, 2, respectively.
Assume now that the statement is true for fully resolved TCTC hybridization networks
with n leaves, and let N and N′ be two fully resolved TCTC hybridization networks on
the same set S of n + 1 labels such that L(N) = L(N′). By Corollary 2.(a), an R(i; j) or
an H0(i; j1, j2) reduction can be applied to N. Moreover, since the possibility of applying one such
reduction depends on the LCSA-path lengths vector by Lemmas 8.(1) and 9.(1), and
L(N ) = L(N ′ ), it will be possible to apply the same reduction to N ′ . So, let N1 and N1′
be the fully resolved TCTC hybridization networks obtained by applying the same R or
H0 reduction to N and N ′ .
From Lemmas 8.(2) and 9.(2) we deduce that L(N1) = L(N1′) and hence, by the
induction hypothesis, N1 ≅ N1′. Finally, if we apply to N1 and N1′ the R−1 or H0−1
expansion that is inverse to the reduction applied to N and N′, then, by Lemma 6, we
obtain again N and N′, and they are isomorphic.
A similar argument, using Lemmas 8 and 10, proves the following result.
Proposition 6. Let N and N′ be two fully resolved TCTC phylogenetic networks on the
same set S of taxa. Then, L(N) = L(N′) if, and only if, N ≅ N′. ⊓⊔
Remark 2. The LCSA-path lengths vectors do not separate quasi-binary TCTC-networks.
Indeed, consider the TCTC-networks N, N ′ depicted in Fig. 10. They are quasi-binary
(but neither fully resolved phylogenetic networks nor fully resolved hybridization networks), and a simple computation shows that
L(N ) = L(N ′ ) = (3, 6, 3, 3, 6, 3).
The network N in Fig. 10 also shows that Lemma 10.(1) is false for quasi-binary hybridization networks.
Fig. 10. These two quasi-binary TCTC-networks have the same LCSA-path length vectors.
Let FRHn (respectively, FRPn) denote the class of fully resolved TCTC hybridization (respectively, phylogenetic) networks on S = {1, ..., n}. We have just proved that the mappings
L : FRHn → R^{n(n−1)/2},   L : FRPn → R^{n(n−1)/2}
are injective, and therefore they can be used to induce metrics on FRHn and FRPn from
metrics on R^{n(n−1)/2}.
Proposition 7. For every n ≥ 1, let D be any metric on R^{n(n−1)/2}. The mappings d :
FRHn × FRHn → R and d : FRPn × FRPn → R defined by d(N1, N2) = D(L(N1), L(N2))
satisfy the axioms of metrics up to isomorphisms:
(1) d(N1, N2) ≥ 0,
(2) d(N1, N2) = 0 if, and only if, N1 ≅ N2,
(3) d(N1, N2) = d(N2, N1),
(4) d(N1, N3) ≤ d(N1, N2) + d(N2, N3).
Proof. Properties (1), (3) and (4) are direct consequences of the corresponding properties
of D, while property (2) follows from the separation axiom for D (which says that
D(M1 , M2 ) = 0 if, and only if, M1 = M2 ) and Proposition 5 or 6, depending on the case.
For instance, using as D the Manhattan distance on R^{n(n−1)/2}, we obtain the metric
on FRHn or FRPn
d1(N1, N2) = Σ_{1 ≤ i < j ≤ n} |LN1(i, j) − LN2(i, j)|,
and using as D the Euclidean distance we obtain the metric
d2(N1, N2) = ( Σ_{1 ≤ i < j ≤ n} (LN1(i, j) − LN2(i, j))² )^{1/2}.
These metrics generalize to fully resolved TCTC (hybridization or phylogenetic) networks
the classical distances for fully resolved phylogenetic trees introduced by Farris [11] and
Clifford [29] around 1970.
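For two networks on the same set S of taxa the two vectors are aligned entry by entry, so both metrics reduce to elementary computations; a minimal illustration, assuming the vectors have been obtained with the lcsa_path_lengths_vector sketch above:

import math

def d1(L1, L2):
    # Manhattan-type nodal distance between two LCSA-path lengths vectors.
    return sum(abs(a - b) for a, b in zip(L1, L2))

def d2(L1, L2):
    # Euclidean-type nodal distance between two LCSA-path lengths vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(L1, L2)))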
5 Splitted path lengths vectors for arbitrary networks
As we have seen in §2.2 and Remark 2, the path lengths vectors do not separate arbitrary
TCTC-networks. Since to separate arbitrary phylogenetic trees we split the path
lengths (Definition 2), we shall use the same strategy in the networks setting. In this
connection, we already proved in [6] that the matrix
ℓ(N) = (ℓ_N(i, j))_{i,j = 1,...,n}
separates TCTC phylogenetic networks on S = {1, ..., n} with tree nodes of arbitrary
out-degree and hybrid nodes of arbitrary in-degree. But this is not true for TCTC hybridization networks, as the following example shows.
Fig. 11. These two hybridization TCTC-networks are such that ℓN(i, j) = ℓN′(i, j), for every pair of leaves i, j.
Example 3. Consider the pair of non-isomorphic TCTC-networks N and N ′ depicted in
Fig. 11. A simple computation shows that
ℓ(N) = ℓ(N′) =
  0 1 2 2 1 2
  1 0 1 1 2 2
  2 1 0 2 2 2
  2 1 2 0 1 2
  1 2 2 1 0 1
  2 2 2 2 1 0
So, in order to separate arbitrary TCTC-networks we need to add some extra information to the distances ℓN (i, j) from LCSAs to leaves. The extra information we shall
use is whether the LCSA of each pair of leaves is a strict ancestor of one leaf or the other
(or both). So, for every pair of different leaves i, j of N , let hN (i, j) be −1 if [i, j] is a
strict ancestor of i but not of j, 1 if [i, j] is a strict ancestor of j but not of i, and 0 if
[i, j] is a strict ancestor of both i and j. Notice that hN (j, i) = −hN (i, j).
Definition 4. Let N be a hybridization network on the set S = {1, . . . , n}.
For every i, j ∈ S, the splitted LCSA-path length from i to j is the ordered 3-tuple
LsN (i, j) = (ℓN (i, j), ℓN (j, i), hN (i, j)).
The splitted LCSA-path lengths vector of N is
Ls(N) = (LsN(i, j))_{1 ≤ i < j ≤ n} ∈ (N × N × {−1, 0, 1})^{n(n−1)/2}
with its entries ordered lexicographically in (i, j).
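Computationally, the splitted vector only adds, for each pair of leaves, the separate distances from the LCSA and the flag hN(i, j). Reusing the strict-ancestor test, the heights and the distance computation of the earlier sketches, it can be evaluated as follows (our own illustrative names, not the authors' code):

from itertools import combinations

def splitted_lcsa_vector(children, root, leaves):
    # Ls(N) = (l_N(i, j), l_N(j, i), h_N(i, j)) for 1 <= i < j <= n, in lexicographic order.
    h = heights(children)
    Ls = []
    for i, j in combinations(leaves, 2):
        a = lcsa(children, root, i, j, h)
        si = is_strict_ancestor(children, root, a, i)
        sj = is_strict_ancestor(children, root, a, j)
        # By definition of CSA, a is a strict ancestor of at least one of i, j.
        hij = 0 if (si and sj) else (-1 if si else 1)
        Ls.append((dist(children, a, i), dist(children, a, j), hij))
    return Ls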
Example 4. Consider the quasi-binary TCTC-networks N and N ′ depicted in Fig. 10.
Then
Ls (N )= (2, 1, −1), (3, 3, 0), (1, 2, −1), (1, 2, 1), (2, 4, 0), (1, 2, −1)
Ls (N ′ )= (1, 2, 1), (2, 4, 0), (1, 2, 1), (1, 2, −1), (3, 3, 0), (2, 1, 1)
Example 5. Consider the TCTC-networks N and N ′ depicted in Fig. 11. Then
Ls (N )= (1, 1, −1), (2, 2, 0), (2, 2, 0), (1, 1, −1), (2, 2, 0), (1, 1, 1), (1, 1, 1),
(2, 2, 0), (2, 2, 0), (2, 2, 0), (2, 2, 0), (2, 2, 0), (1, 1, −1), (2, 2, 0), (1, 1, 1)
Ls (N ′ )= (1, 1, 1), (2, 2, 0), (2, 2, 0), (1, 1, 1), (2, 2, 0), (1, 1, −1), (1, 1, −1),
(2, 2, 0), (2, 2, 0), (2, 2, 0), (2, 2, 0), (2, 2, 0), (1, 1, 1), (2, 2, 0), (1, 1, −1)
Remark 3. If N is a phylogenetic tree on S, then hN (i, j) = 0 for every i, j ∈ S.
We shall prove now that these splitted LCSA-path lengths vectors separate arbitrary
hybridization TCTC-networks. The master plan for proving it is similar to the one used
in the proof of Proposition 5: induction based on the fact that the application conditions
for the reductions introduced in Section 3 can be read in the splitted LCSA-path lengths
vectors of TCTC-networks and that these reductions modify in a controlled way these
vectors.
Lemma 11. Let N be a TCTC-network on a set S of taxa.
(1) The reduction U (i) can be applied to N if, and only if, ℓN (i, j) ≥ 2 for every j ∈
S \ {i}.
(2) If the reduction U (i) can be applied to N , then
LsNU (i) (i, j) = LsN (i, j) − (1, 0, 0) for every j ∈ S \ {i}
LsNU (i) (j, k) = LsN (j, k) for every j, k ∈ S \ {i}
Proof. As far as (1) goes, the reduction U (i) can be applied to N if, and only if, the leaf
i is a tree node and the only child of its parent. Let us check now that this last condition
is equivalent to ℓN (i, j) > 2 for every j ∈ S \ {i}. To do this, we distinguish three cases:
– Assume that i is a tree node and the only child of its parent x. Then, for every
j ∈ S \ {i}, the LCSA of i and j is a proper ancestor of x, and therefore ℓN (i, j) ≥ 2.
– Assume that i is a tree node and that it has a sibling y. Let x be the parent of i
and y and let j be a tree descendant leaf of y. Then [i, j] = x, because x is a strict
ancestor of i, an ancestor of j and clearly no descendant of x is an ancestor of both
i and j. Therefore, in this case, ℓN (i, j) = 1 for this leaf j.
– Assume that i is a hybrid node. Let x be any parent of i and let j be a tree descendant
of x. Then, [i, j] = x, because x is a strict ancestor of j, an ancestor of i, and no
intermediate node in the unique path from x to j is an ancestor of i (it would violate the
time consistency property). Therefore, in this case, ℓN (i, j) = 1 for this leaf j, too.
Since these three cases cover all possibilities, we conclude that i is a tree node without
siblings if, and only if, ℓN (i, j) ≥ 2 for every j ∈ S \ {i}. This finishes the proof of (1).
As far as (2) goes, in NU (i) we replace the tree leaf i by its parent. By Lemma 2, this
does not modify any LCSA, and it only shortens by 1 any path ending in i. Therefore
ℓNU (i) (i, j) = ℓN (i, j) − 1, ℓNU (i) (j, i) = ℓN (j, i) for every j ∈ S \ {i}
ℓNU (i) (j, k) = ℓN (j, k), ℓNU (i) (k, j) = ℓN (k, j) for every j, k ∈ S \ {i}
As far as the h component of the splitted LCSA-path lengths goes, notice that a node
u is a strict ancestor of a tree leaf i if, and only if, it is a strict ancestor of its parent
x (because every path ending in i contains x). Therefore, an internal node of NU (i) is a
strict ancestor of the leaf i in NU (i) if, and only if, it is a strict ancestor of the leaf i in
N . On the other hand, replacing a tree leaf without siblings by its only parent does not
affect any path ending in another leaf, and therefore an internal node of NU (i) is a strict
ancestor of a leaf j ≠ i in NU (i) if, and only if, it is a strict ancestor of the leaf j in N .
So, by Lemma 2, the LCSA of a pair of leaves in N and in NU (i) is the same, and we
have just proved that this LCSA is a strict ancestor of exactly the same leaves in both
networks: this implies that
hNU (i) (i, j) = hN (i, j) for every j ∈ S \ {i}
hNU (i) (j, k) = hN (j, k) for every j, k ∈ S \ {i}
Lemma 12. Let N be a TCTC-network on a set S of taxa.
(1) The reduction T (i; j) can be applied to N if, and only if, LsN (i, j) = (1, 1, 0).
(2) If the reduction T (i; j) can be applied to N , then
LsNT (i;j) (k, l) = LsN (k, l)
for every k, l ∈ S \ {i}
Proof. As far as (1) goes, T (i; j) can be applied to N if, and only if, the leaves i and j are
tree nodes and sibling. Let us prove that this last condition is equivalent to ℓN (i, j) =
ℓN (j, i) = 1 and hN (i, j) = 0. Indeed, if the leaves i and j are tree nodes and sibling,
then their parent is their LCSA and moreover it is a strict ancestor of both of them,
which implies that ℓN (i, j) = ℓN (j, i) = 1 and hN (i, j) = 0. Conversely, assume that
ℓN (i, j) = ℓN (j, i) = 1 and hN (i, j) = 0. The equalities ℓN (i, j) = ℓN (j, i) = 1 imply that
[i, j] is a parent of i and j, and hN (i, j) = 0 implies that this parent of i and j is a strict
ancestor of both of them, and therefore, by Lemma 3, that i and j are tree nodes. This
finishes the proof of (1).
As far as (2) goes, in NT (i;j) we simply remove the leaf i without removing anything
else. Therefore, no path ending in a remaining leaf is affected, and as a consequence no
Ls (k, l) with k, l ≠ i, is modified.
Lemma 13. Let N be a TCTC-network on a set S of taxa.
(1) The reduction H(i; j1 , . . . , jk ) can be applied to N if, and only if,
– LsN (i, jl ) = (1, 1, 1), for every l ∈ {1, . . . , k}.
– ℓN (ja , jb ) ≥ 2 or ℓN (jb , ja ) ≥ 2 for every a, b ∈ {1, . . . , k}.
– For every s ∉ {j1 , . . . , jk }, if ℓN (i, s) = 1 and hN (i, s) = 1, then ℓN (jl , s) = 1
and hN (jl , s) = 0 for some l ∈ {1, . . . , k}.
(2) If the reduction H(i; j1 , . . . , jk ) can be applied to N , then
LsNH(i;j1 ,...,jk ) (s, t) = LsN (s, t) for every s, t ∈ S \ {i}
Proof. As far as (1) goes, H(i; j1 , . . . , jk ) can be applied to N if, and only if, j1 , . . . , jk
are tree leaves that are not sibling of each other, the leaf i is a hybrid sibling of j1 , . . . , jk ,
and the only parents of i are those of j1 , . . . , jk . Now:
– For each l = 1, . . . , k, the condition LsN (i, jl ) = (1, 1, 1) says that i and jl are sibling,
and that their parent in common is a strict ancestor of jl but not of i. Using Lemma
3, we conclude that this condition is equivalent to the fact that i and jl are sibling,
jl is a tree node, and i a hybrid node.
– Assume that j1 , . . . , jk are tree leaves, with parents v1 , . . . , vk , respectively. In this
case, the condition ℓN (ja , jb ) ≥ 2 or ℓN (jb , ja ) ≥ 2 is equivalent to the fact that
ja , jb are not sibling. Indeed, if ja and jb are sibling, then ℓN (ja , jb ) = ℓN (jb , ja ) = 1.
Conversely, if ja and jb are not sibling, then there are two possibilities: either va is
an ancestor of jb , but not its parent, in which case va = [ja , jb ] and ℓN (jb , ja ) ≥ 2, or
va is not an ancestor of jb , in which case [ja , jb ] is a proper ancestor of va and hence
ℓN (ja , jb ) ≥ 2.
– Assume that j1 , . . . , jk are tree leaves, with parents v1 , . . . , vk , respectively, and that
i is a hybrid sibling of them. Let us see that the only parents of i are v1 , . . . , vk
if, and only if, for every s ∉ {j1 , . . . , jk }, ℓN (i, s) = 1 and hN (i, s) = 1 imply that
ℓN (jl , s) = 1 and hN (jl , s) = 0 for some l = 1, . . . , k.
Indeed, assume that the only parents of i are v1 , . . . , vk , and let s ∉ {j1 , . . . , jk } be a
leaf such that ℓN (i, s) = 1 and hN (i, s) = 1. Since ℓN (i, s) = 1, some parent of i, say
vl , is the LCSA of i and s, and hN (i, s) = 1 implies that vl is a strict ancestor of s.
But then vl will be the LCSA of its tree leaf jl and s and strict ancestor of both of
them, and thus ℓN (jl , s) = 1 and hN (jl , s) = 0.
Conversely, assume that, for every s ∉ {j1 , . . . , jk }, ℓN (i, s) = 1 and hN (i, s) = 1
imply that ℓN (jl , s) = 1 and hN (jl , s) = 0 for some l = 1, . . . , k. Let v be a parent of
i, and let s be a tree descendant leaf of v. Then, v = [i, s] (v is a strict ancestor of s,
an ancestor of i, and no intermediate node in the unique path from v to s is an ancestor
of i, by the time consistency property) and thus ℓN (i, s) = 1; moreover, hN (i, s) = 1
by Lemma 3. Now, if s = jl , for some l = 1, . . . , k, then v = vl . On the other hand, if
s ∉ {j1 , . . . , jk }, then by assumption, there will exist some jl such that ℓN (jl , s) = 1
and hN (jl , s) = 0, that is, such that vl is a strict ancestor of s. This implies that
v = vl . Indeed, if v ≠ vl , then either vl is an intermediate node in the path from v to s,
and in particular a tree descendant of v, which is forbidden by the time consistency
because v and vl have the hybrid child i in common, or v is a proper descendant
of vl through a path where vl and all the intermediate nodes are hybrid (if some of
these nodes were of tree type, the temporal representation of v would be greater than
that of vl , contradicting again the time consistency), in which case the child of vl in
this path would be a hybrid child of vl that is a strict descendant of it (because it
is intermediate in the path from vl through v to s, and s is a strict descendant of vl ), which is
impossible by Lemma 3.
This finishes the proof of (1).
As far as (2) goes, in NH(i;j1 ,...,jk ) we simply remove the hybrid leaf i without removing
anything else, and therefore no splitted LCSA-path length of a pair of remaining leaves
is affected.
Theorem 1. Let N and N ′ be two TCTC-networks on the same set S of taxa. Then,
Ls (N ) = Ls (N ′ ) if, and only if, N ≅ N ′.
Proof. The ‘if’ implication is obvious. We prove the ‘only if’ implication by double induction on the number n of elements of S and the number m of internal nodes of N .
As in Proposition 5, the cases n = 1 and n = 2 are straightforward, because both
TCTC1 and TCTC2 consist of a single network.
On the other hand, the case when m = 1, for every n, is also straightforward: assuming
S = {1, . . . , n}, the network N is in this case the phylogenetic tree with Newick string
(1,2,...,n);, consisting only of the root and the leaves, and in particular LsN (i, j) =
(1, 1, 0) for every 1 ≤ i < j ≤ n. If Ls (N ) = Ls (N ′ ), we have that LsN ′ (i, j) = (1, 1, 0)
for every 1 ≤ i < j ≤ n, and therefore all leaves in N ′ are tree nodes and sibling of each
other by Lemma 3. Since the root of a hybridization network cannot be elementary, this
says that N ′ is also a phylogenetic tree with Newick string (1,2,...,n); and hence it
is isomorphic to N .
Let now N and N ′ be two TCTC-networks with n ≥ 3 leaves such that Ls (N ) = Ls (N ′ )
and N has m ≥ 2 internal nodes. Assume as induction hypothesis that the thesis in the
theorem is true for pairs of TCTC-networks N1 , N1′ with n − 1 leaves or with n leaves
and such that N1 has m − 1 internal nodes.
By Proposition 3, a reduction U (i), T (i; j) or H(i; j1 , . . . , jk ) can be applied to N .
Since the application conditions for such a reduction depend only on the splitted LCSA-path lengths vectors by Lemmas 11.(1), 12.(1) and 13.(1), and Ls (N ) = Ls (N ′ ), we
conclude that we can apply the same reduction to N ′ .
Now, we apply the same reduction to N and N ′ to obtain new TCTC-networks N1
and N1′ , respectively. If the reduction was of the form U (i), N1 and N1′ have n leaves and
N1 has m − 1 internal nodes; if the reduction was of the forms T (i; j) or H(i; j1 , . . . , jk ),
N1 and N1′ have n − 1 leaves. In all cases, Ls (N1 ) = Ls (N1′ ) by Lemmas 11.(2), 12.(2)
and 13.(2), and therefore, by the induction hypothesis, N1 ≅ N1′ .
Finally, by Lemma 5, N and N ′ are obtained from N1 and N1′ by applying the same
expansion U−1 , T−1 , or H−1 , and they are isomorphic.
The vectors of splitted LCSA-path lengths do not separate hybridization
networks much more general than the TCTC ones, as the following examples show.
Remark 4. The vectors of splitted distances do not separate arbitrary (that is, possibly
time inconsistent) tree-child phylogenetic networks. Indeed, the non-isomorphic tree-child
binary phylogenetic networks N and N ′ depicted in Fig. 12 have the same Ls vectors:
Ls (N ) = Ls (N ′ ) = (2, 1, 1), (4, 1, 1), (3, 1, 1) .
Fig. 12. These two tree-child binary phylogenetic networks have the same splitted LCSA-path lengths vectors.
Remark 5. The splitted LCSA-path lengths vectors do not separate tree-sibling time
consistent phylogenetic networks, either. Consider for instance the tree-sibling time consistent fully resolved phylogenetic networks N and N ′ depicted in Figure 13. A simple
computation shows that they have the same Ls vectors, but they are not isomorphic.
As in the fully resolved case, the injectivity of the mapping
Ls : TCTCn → R3n(n−1)/2
makes it possible to induce metrics on TCTCn from metrics on R3n(n−1)/2 . The proof of
the following result is similar to that of Proposition 7.
Proposition 8. For every n ≥ 1, let D be any metric on R3n(n−1)/2 . The mapping
ds : TCTCn × TCTCn → R defined by ds (N1 , N2 ) = D(Ls (N1 ), Ls (N2 )) satisfies the
axioms of metrics up to isomorphisms. ⊓⊔
Fig. 13. These two tree-sibling time consistent binary phylogenetic networks have the same splitted LCSA-path lengths vectors.
For instance, using as D the Manhattan distance or the Euclidean distance, we obtain,
respectively, the metrics on TCTCn
ds1 (N1 , N2 ) = Σ_{1≤i<j≤n} ( |ℓN1 (i, j) − ℓN2 (i, j)| + |ℓN1 (j, i) − ℓN2 (j, i)| + |hN1 (i, j) − hN2 (i, j)| )
             = Σ_{1≤i≠j≤n} ( |ℓN1 (i, j) − ℓN2 (i, j)| + (1/2)|hN1 (i, j) − hN2 (i, j)| )

ds2 (N1 , N2 ) = sqrt( Σ_{1≤i<j≤n} ( (ℓN1 (i, j) − ℓN2 (i, j))² + (ℓN1 (j, i) − ℓN2 (j, i))² + (hN1 (i, j) − hN2 (i, j))² ) )
             = sqrt( Σ_{1≤i≠j≤n} ( (ℓN1 (i, j) − ℓN2 (i, j))² + (1/2)(hN1 (i, j) − hN2 (i, j))² ) )
These metrics generalize to TCTC-networks the splitted nodal metrics for arbitrary
phylogenetic trees defined in [7] and the nodal metric for TCTC phylogenetic networks
defined in [6].
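For illustration, a minimal Python sketch (ours, not part of the paper) of the Manhattan version ds1, assuming the splitted vectors are given as equally long lists of (ℓ(i, j), ℓ(j, i), h(i, j)) triples ordered lexicographically; the test data are the vectors of Example 4.

```python
def ds1(Ls1, Ls2):
    # Manhattan distance between two splitted LCSA-path lengths vectors.
    return sum(abs(l1 - l2) + abs(r1 - r2) + abs(h1 - h2)
               for (l1, r1, h1), (l2, r2, h2) in zip(Ls1, Ls2))

# The vectors of Example 4 (networks N and N' of Fig. 10):
Ls_N  = [(2, 1, -1), (3, 3, 0), (1, 2, -1), (1, 2, 1), (2, 4, 0), (1, 2, -1)]
Ls_Np = [(1, 2, 1), (2, 4, 0), (1, 2, 1), (1, 2, -1), (3, 3, 0), (2, 1, 1)]
print(ds1(Ls_N, Ls_Np))  # prints 16
```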
6 Conclusions
A classical result of Smolenskii [24] establishes that the vectors of distances between
pairs of leaves separate unrooted phylogenetic trees on a given set of taxa. This result
generalizes easily to fully resolved rooted phylogenetic trees [7], and it lies at the basis of
the classical definitions of nodal distances for unrooted as well as for fully resolved rooted
phylogenetic trees based on the comparison of these vectors [3,11,12,22,26,29]. But these
vectors do not separate arbitrary rooted phylogenetic trees, and therefore they cannot
be used to compare the latter in a sound way. This problem was overcome in [7] by
introducing the splitted path lengths matrices and showing that they separate arbitrary
rooted phylogenetic trees on a given set of taxa. It is possible then to define splitted nodal
metrics for arbitrary rooted phylogenetic trees by comparing these matrices.
In this paper we have generalized these results to the class TCTCn of tree-child time
consistent hybridization networks (TCTC-networks) with n leaves. For every pair i, j of
leaves in a TCTC-network N , we have defined the LCSA-path length LN (i, j) and the
splitted LCSA-path length LsN (i, j) between i and j and we have proved that the vectors
L(N ) = (LN (i, j))_{1≤i<j≤n} separate fully resolved networks in TCTCn and the vectors
Ls (N ) = (LsN (i, j))_{1≤i<j≤n} separate arbitrary TCTC-networks.
The vectors L(N ) and Ls (N ) can be computed in low polynomial time by means of
simple algorithms that do not require the use of sophisticated data structures. Indeed,
let n be the number of leaves and m the number of internal nodes in N . As we explained
in [5, §V.D], for each internal node v and for each leaf i, it can be decided whether v
is a strict or a non-strict ancestor of i, or not an ancestor of it at all, by computing
by breadth-first search the shortest paths from the root to each leaf before and after
removing each of the m nodes in turn, because a non-strict descendant of a node will
still be reachable from the root after removing that node, while a strict descendant will
not. All this information can be computed in O(m(n + m)) time, and once it has been
computed the least common semi-strict ancestor of two leaves can be computed in O(m)
time by selecting the node of least height among those which are ancestors of the two
leaves and strict ancestors of at least one of them. This allows the computation of L(N )
and Ls (N ) in O(m2 + n2 m) time.
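As an illustration of the strict/non-strict ancestor computation described above, the following self-contained Python sketch (ours, not the authors' code) represents a network as a dictionary mapping each node to its list of children and decides reachability from the root before and after removing a candidate node; the toy network at the end is hypothetical.

```python
from collections import deque

def reachable(children, start, removed=None):
    # Nodes reachable from `start` by breadth-first search, optionally
    # pretending that one node has been removed from the network.
    seen, queue = set(), deque([start])
    while queue:
        u = queue.popleft()
        if u == removed or u in seen:
            continue
        seen.add(u)
        queue.extend(children.get(u, []))
    return seen

def ancestor_kind(children, root, v, leaf):
    # Classify v with respect to a leaf: 'strict', 'non-strict' or 'none'.
    # v is an ancestor of the leaf if the leaf is reachable from v; it is a
    # strict ancestor if the leaf becomes unreachable from the root once v
    # is removed, as explained above.
    if leaf not in reachable(children, v):
        return 'none'
    if leaf not in reachable(children, root, removed=v):
        return 'strict'
    return 'non-strict'

# Toy (hypothetical) network: root r, hybrid node h with parents a and b.
children = {'r': ['a', 'b'], 'a': ['h', '1'], 'b': ['h', '2'], 'h': ['3']}
print(ancestor_kind(children, 'r', 'a', '3'))  # 'non-strict' (3 still reachable via b)
print(ancestor_kind(children, 'r', 'h', '3'))  # 'strict'
```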
These vectors L(N ) and Ls (N ) can be used then to define metrics for fully resolved
and arbitrary TCTC-networks, respectively, from metrics for real-valued vectors. The
metrics obtained in this way can be understood as generalizations to TCTCn of the
(non-splitted or splitted) nodal metrics for phylogenetic trees and they can be computed
in low polynomial time if the metric used to compare the vectors can be done so: this is
the case, for instance, when this metric is the Manhattan or the Euclidean metric (in the
last case, computing the square root with O(10m+n ) significant digits [2], which should
be more than enough).
It remains to study the main properties of the metrics defined in this way, like for
instance their diameter or the distribution of their values. It is important to recall here
that these are open problems even for the classical nodal distances for fully resolved
rooted phylogenetic trees.
Acknowledgment
The research reported in this paper has been partially supported by the Spanish DGI
projects MTM2006-07773 COMGRIO and MTM2006-15038-C02-01.
References
1. M. Baroni, C. Semple, M. Steel, Hybrids in real time, Syst. Biol. 55 (2006) 46–56.
2. P. Batra, Newton’s method and the computational complexity of the fundamental theorem of algebra,
Electron. Notes Theor. Comput. Sci. 202 (2008) 201–218.
3. J. Bluis, D.-G. Shin, Nodal distance algorithm: Calculating a phylogenetic tree comparison metric,
in: Proc. 3rd IEEE Symp. BioInformatics and BioEngineering, 2003.
4. G. Cardona, M. Llabrés, F. Rosselló, G. Valiente, A distance metric for a class of tree-sibling phylogenetic networks, Bioinformatics 24 (13) (2008) 1481–1488.
5. G. Cardona, M. Llabrés, F. Rosselló, G. Valiente, Metrics for phylogenetic networks I: Generalizations
of the Robinson-Foulds metric, submitted (2008).
6. G. Cardona, M. Llabrés, F. Rosselló, G. Valiente, Metrics for phylogenetic networks II: Nodal and
triplets metrics, submitted (2008).
7. G. Cardona, M. Llabrés, F. Rosselló, G. Valiente, Nodal metrics for rooted phylogenetic trees, submitted, available at arxiv.org/abs/0806.2035 (2008).
8. G. Cardona, F. Rosselló, G. Valiente, Comparison of tree-child phylogenetic networks, IEEE T.
Comput. Biol. preprint, 30 June 2008 , doi:10.1109/TCBB.2007.70270.
9. G. Cardona, F. Rosselló, G. Valiente, Tripartitions do not always discriminate phylogenetic networks,
Math. Biosci. 211 (2) (2008) 356–370.
10. W. F. Doolittle, Phylogenetic classification and the universal tree, Science 284 (5423) (1999) 2124–
2128.
11. J. S. Farris, A successive approximations approach to character weighting, Syst. Zool. 18 (1969)
374–385.
12. J. S. Farris, On comparing the shapes of taxonomic trees, Syst. Zool. 22 (1973) 50–54.
13. D. Gusfield, S. Eddhu, C. Langley, The fine structure of galls in phylogenetic networks, INFORMS
J. Comput, 16 (4) (2004) 459–469.
14. D. Gusfield, S. Eddhu, C. Langley, Optimal, efficient reconstruction of phylogenetic networks with
constrained recombination, J. Bioinformatics Comput. Biol. 2 (1) (2004) 173–213.
15. J. Hein, M. H. Schierup, C. Wiuf, Gene Genealogies, Variation and Evolution: A Primer in Coalescent
Theory, Oxford University Press, 2005.
16. D. H. Huson, D. Bryant, Application of Phylogenetic Networks in Evolutionary Studies, Mol. Biol.
Evol. 23 (2) (2006) 254–267.
17. D. H. Huson, T. H. Klöpper, Beyond galled trees - decomposition and computation of galled networks,
in: Proceedings RECOMB 2007, vol. 4453 of Lecture Notes in Computer Science, Springer-Verlag,
2007.
18. B. M. E. Moret, L. Nakhleh, T. Warnow, C. R. Linder, A. Tholse, A. Padolina, J. Sun, R. Timme,
Phylogenetic networks: Modeling, reconstructibility, and accuracy, IEEE T. Comput. Biol. 1 (1)
(2004) 13–23.
19. L. Nakhleh, J. Sun, T. Warnow, C. R. Linder, B. M. E. Moret, A. Tholse, Towards the development
of computational tools for evaluating phylogenetic network reconstruction methods, in: Proc. 8th
Pacific Symp. Biocomputing, 2003.
20. L. Nakhleh, J. Sun, T. Warnow, C. R. Linder, B. M. E. Moret, A. Tholse, Towards the development
of computational tools for evaluating phylogenetic network reconstruction methods, in: Proc. 8th
Pacific Symp. Biocomputing, 2003.
21. L. Nakhleh, T. Warnow, C. R. Linder, K. S. John, Reconstructing reticulate evolution in species:
Theory and practice, J. Comput. Biol. 12 (6) (2005) 796–811.
22. J. B. Phipps, Dendrogram topology, Syst. Zool. 20 (1971) 306–308.
23. C. Semple, Hybridization networks, in: O. Gascuel, M. Steel (eds.), Reconstructing evolution: New
mathematical and computational advances, Oxford University Press, 2008, pp. 277–314.
24. Y. A. Smolenskii, A method for the linear recording of graphs, USSR Computational Mathematics
and Mathematical Physics 2 (1963) 396–397.
25. Y. S. Song, J. Hein, Constructing minimal ancestral recombination graphs, J. Comput. Biol. 12 (2)
(2005) 147–169.
26. M. A. Steel, D. Penny, Distributions of tree comparison metrics—some new results, Syst. Biol. 42 (2)
(1993) 126–141.
27. G. Valiente, Phylogenetic networks, course at the Int. Summer School on Bioinformatics and Computational Biology Lipari (June 14–21, 2008).
28. L. Wang, K. Zhang, L. Zhang, Perfect phylogenetic networks with recombination, J. Comput. Biol.
8 (1) (2001) 69–78.
29. W. T. Williams, H. T. Clifford, On the comparison of two classifications of the same set of elements,
Taxon 20 (4) (1971) 519–522.
30. S. J. Willson, Restrictions on meaningful phylogenetic networks, contributed talk at the EMBO
Workshop on Current Challenges and Problems in Phylogenetics (Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, 3–7 September 2007).
31. S. J. Willson, Reconstruction of certain phylogenetic networks from the genomes at their leaves, J.
Theor. Biol. 252 (2008) 338–349.
32. S. M. Woolley, D. Posada, K. A. Crandall, A comparison of phylogenetic network methods using
computer simulation, Plos ONE 3 (4) (2008) e1913.
| 5 |
Technical report: CSVM dictionaries.
Extension of CSVM-1 specification: definition, usages, Python toolkit.
Frédéric Rodriguez a,b,*
a CNRS, Laboratoire de Synthèse et Physico-Chimie de Molécules d’Intérêt Biologique, LSPCMIB, UMR-5068, 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France.
b Université de Toulouse, UPS, Laboratoire de Synthèse et Physico-Chimie de Molécules d’Intérêt Biologique, LSPCMIB, 118 route de Narbonne, F-31062 Toulouse Cedex 9, France.
Abstract
CSVM (CSV with Metadata) is a simple file format for tabular data. The possible application domain is the
same as for typical spreadsheet files, but CSVM is well suited for long-term storage and the inter-conversion of
RAW data. CSVM embeds different levels for data, metadata and annotations in a human readable format and
flat ASCII files. As a proof of concept, Perl and Python toolkits were designed in order to handle CSVM data
and objects in workflows. These parsers can process CSVM files independently of data types, so it is
possible to use the same data format and parser for many scientific purposes.
CSVM-1 is the first version of the CSVM specification; an extension of CSVM-1 for implementing a translation
system between CSVM files is presented in this paper. The data needed to make the translation are
also coded in another CSVM file. This particular kind of CSVM file is called a CSVM dictionary; it is also
readable by the current CSVM parser and it is fully supported by the Python toolkit. This report presents a
proposal for CSVM dictionaries, a working example in chemistry, and some elements of the Python toolkit
usable to handle these files.
Keywords
CSVM; Tabular Data; Python; Specification; Data Conversion; Open Format; Open Data;
Dictionaries; Canonical Data Model.
Status
Like CSVM itself, the dictionary extension shown in this document must be considered as an Open format.
* Corresponding author.
CNRS, Laboratoire de Synthèse et Physico-Chimie de Molécules d’Intérêt Biologique, LSPCMIB, UMR-5068, 118 Route de
Narbonne, F-31062 Toulouse Cedex 9, France.
Tel.: +33 (0) 5 61556486; fax: +33 (0) 5 61556011.
E-mail address: [email protected] (F. Rodriguez).
1. Definition of a CSVM dictionary
A CSVM dictionary 1) must use the same basis as a CSVM file defined by the CSVM-1 specification; 2)
must be processed by the same parser as a common CSVM file; and 3) must embed all the information
needed to transform one CSVM file into another.
We show in this section how we can define a translation set, encode it in a CSVM dictionary, and how the
translation set is used to transform or normalize the data of a CSVM data file.
1.1. Data structure of a CSVM dictionary
The following CSVM file shown below is a chemical inventory table limited to 6 rows and 5 columns for
simplicity. This table codes for: a rank number (numero), a chemical structure (fichier), a molecular
weight (masse_exacte), a common molecule name (nom), an amount (i.e. g or mg of product) of chemical
in laboratory (vrac) :
Figure 1. - CSVM file for a chemical inventory.
The #HEADER, #TYPE, #WIDTH define a first system (SYS1) with particular types and column names,
and this file is used as a data file.
Now we want to transfer the data to another system with different naming conventions (i.e. to prepare an import
into a RDBMS); we call it SYS2 (second system). To do this task we have defined a new CSVM file called the
dictionary:
Figure 2. - CSVM dictionary for the table shown in Figure 1.
We see that obviously this is also a CSVM file. The only difference is that some keywords prefixed with the #
character are present in the last two columns of the data and metadata blocks. These two columns (order: 4, 5
from left to right) are used to store the data types of the translation sets, while the data columns (order: 1, 2, 3) are used to
store the translation set data itself.
This CSVM dictionary stores the column names used in SYS1 (orange column below) and expected in
SYS2 (pink) or another name space/set (green). The dictionary also stores the #TYPE and #WIDTH fields of
each name set (blue columns):
Table 1. – CSVM dictionary (Figure 2.) shown as colored table.
Data block (one row per translated keyword):

numero        ID              number       #NUMERIC    #10
nom           identificateur  name         #TEXT       #100
fichier_mol   MOLSTRUCTURE    molfile      #TEXT       #50
masse_exacte  -               mol.weight   #NUMERIC    #50
vrac          crac            qtity        #NUMERIC    #10

Metadata block:

#TITLE    CSVM dictionary for SYS1, SYS2 and SYS1_UK
#HEADER   SYS1    SYS2    SYS1_UK    #TYPE    #WIDTH
#TYPE     TEXT    TEXT    TEXT       #TEXT    #TEXT
#WIDTH    50      50      50         #50      #50
So we have here all the data needed to convert a CSVM file using SYS1 into another naming space, because the
#HEADER, #TYPE and #WIDTH values of SYS1 and SYS2 are available in this dictionary. The case of a column
(#HEADER value) named TYPE or WIDTH is possible, so we have chosen to mark the type and width metadata in the
data block (blue columns) with the # character to avoid confusion.
A CSVM dictionary could be defined as a way to store Metadata of n CSVM files in the data block of
another CSVM file.
1.2. Translation sets
If we read the first row of the dictionary file, we find the value ‘numero’, which is the content of the first
#HEADER column in the data file. The second and third values of the row (‘ID’ and ‘number’) are alternate values of
‘numero’. If we read the first column of the dictionary file, we find ‘numero’, ‘nom’, ‘fichier_mol’,
‘masse_exacte’, ‘vrac’. All are values used in the data file and corresponding to SYS1; we call this ensemble
a translation set. The previous dictionary defines 3 translation sets (a small sketch of their use in code is given after the list):
The SYS1 set : [numero, nom, fichier_mol, masse_exacte, vrac]
The SYS2 set : [ID, identificateur, MOLSTRUCTURE, , vrac]
The SYS1_UK set : English translation of SYS1 [number, name, molfile, mol.weight, qtity]
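As a simple illustration (ours, not part of the toolkit), a translation set can be viewed as a mapping from the #HEADER values of one name space to another; the variable and function names below are hypothetical, and None marks an empty cell.

```python
# Translation from the SYS1 name space to SYS2, built from the dictionary rows.
SYS1_TO_SYS2 = {
    "numero": "ID",
    "nom": "identificateur",
    "fichier_mol": "MOLSTRUCTURE",
    "masse_exacte": None,   # empty cell: no counterpart in SYS2
}

def translate_headers(headers, mapping):
    # Rename #HEADER values, keeping any name that has no translation.
    return [mapping.get(h) or h for h in headers]

print(translate_headers(["numero", "nom", "masse_exacte"], SYS1_TO_SYS2))
# ['ID', 'identificateur', 'masse_exacte']
```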
1.3. Guidelines for Standards and data transformation
So we can use one translation set defined in a dictionary to change the #HEADER values of a CSVM
file/object. But what about the data of the target CSVM file?
First, take into account that the perimeter of CSVM-1 covers only the syntax of a CSVM file, not the values of
keywords. The recommendation about values of #TYPE fields (i.e. TEXT, NUMERIC …) must be
considered only as good practice. So, it is possible to include many kinds of values for the #TYPE and #WIDTH
keywords, typically information needed to make data conversions.
The previous CSVM table shows that only one #TYPE column and one #WIDTH column are included in the dictionary for
all 3 translation sets. This is what we call a Standard: with this dictionary it is possible to convert the data
types of each processed CSVM file into the units defined by the standard.
Consider a common case in a laboratory: some columns of a CSVM file prepared by a first scientific team
(encoded by translation set TEAM1) must have their column names changed before data transfer to another team
(translation set TEAM2). But the second team also uses different units (i.e. a mass concentration unit rather
than molar units).
The transformation of data units into a standard is summarized by the following figure:
Figure 3. - Schematics of transformation of a CSVM file using a CSVM dictionary.
A software component knowing a rule to transform mol.l−1 into g/L (gram per liter) could make the data
transformation, but this operation must be left out of the CSVM parser's range (a small sketch of such a component is
given after Table 2). The corresponding coding part of the dictionary for this #HEADER is given in Table 2.
Table 2. – Data normalization using a standard as data type in a CSVM dictionary.
Data block:

concentration    CONC    #GR/L    #10

Metadata block:

#TITLE    Example
#HEADER   TEAM1    TEAM2    #TYPE    #WIDTH
#TYPE     TEXT     TEXT     #TEXT    #TEXT
#WIDTH    50       50       #50      #50
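A minimal sketch (ours, and outside the CSVM parser's range as noted above) of a component that normalizes a concentration column from mol/L to g/L; the column values and the molar mass are hypothetical examples.

```python
def mol_per_l_to_g_per_l(values, molar_mass):
    # Convert concentrations from mol/L to g/L: mass = moles x molar mass (g/mol).
    return [v * molar_mass for v in values]

# Hypothetical 'concentration' column read from a CSVM data block.
column = [float(x) for x in ["0.10", "0.25"]]
print(mol_per_l_to_g_per_l(column, 180.16))  # e.g. glucose: [18.016, 45.04]
```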
1.4 Guidelines for data transformation
What if a data conversion (beyond the Standard) is planned for a particular column of a CSVM table? In
CSVM's context, different approaches are available. For example, if a CSVM dictionary is used 1, it is
possible to add columns (TEAM1_UNITS, TEAM2_UNITS) coding for the local units used by TEAM1 and
TEAM2, as shown in the following table:
1 If a dictionary is not used, the programmer could make all transformations on the target CSVM using information
picked manually in another CSVM and organized in columns or rows (this approach is often used when the
transformations/rules defined in the dictionary must be applied sequentially). For more complex cases, it could be
interesting to split the information into more than one CSVM file (dictionaries of common files).
Table 3. – Adding specific columns for multi-standards.
Data block:

concentration    MOLAR    CONC    GR/L    #NUMERIC    #10

Metadata block:

#TITLE    Example
#HEADER   TEAM1    TEAM1_UNITS    TEAM2    TEAM2_UNITS    #TYPE    #WIDTH
#TYPE     TEXT     TEXT           TEXT     TEXT           #TEXT    #TEXT
#WIDTH    50       10             50       10             #50      #50
In the example shown in Table 3 no Standard is defined, but it is also possible to add one. It is also possible
to extend this approach to #WIDTH columns. In the dictionary file, the columns defined by the values
#TEAM1_UNITS and #TEAM2_UNITS are not used by the conversion functions defined in the toolkits. The
reason is simple: #TEAM1_UNITS and #TEAM2_UNITS are not defined as #HEADER keywords in the
data files. But this is not a problem for making conversions outside the CSVM parser's range: the dictionary
can be read as a CSVM object and its contents used as if it were a common CSVM file.
A CSVM dictionary could also be used to save knowledge about a data space of CSVM files, including
supplemental fields.
1.5 Lost words in translation sets
The first table (Table 1.) shows that the keyword defined by ‘masse_exacte’ in the first translation set (SYS1 set)
is not defined in the SYS2 set (the character ‘-’ is used here to mark an empty cell).
No particular processing is done about that, because the CSVM parser does not interpret data
or metadata values. Holes in tables are very often found in real data and can be taken into account: if a
#HEADER value is not defined in the translation set, the corresponding column can be suppressed (or not) in
the resulting CSVM file 2 after the CSVM dictionary is applied.
For advanced uses, it is also possible to make a union between two CSVM files in order to restore all missing
empty columns (a specific CSVM file can be forged and used as a data mask). Unions, intersections and
additions of tables are implemented in the CSVM Python toolkit; an example is given below:
print "\n*** Test CSVM files with equivalent, different, missing columns"
c1 = csvm_ptr()
c1 = csvm_ptr_read_extended_csvm(c1,"test/test1.csvm","\t")
c2 = csvm_ptr()
c2 = csvm_ptr_read_extended_csvm(c2,"test/test2.csvm","\t")
print "=> Compute INTERSECTION"
r = csvm_ptr_intersect(c1, c2)
if (r != None):
r.csvm_ptr_dump(0,0)
r.csvm_ptr_clear()
else:
print "None data found"
print "\n=> compute UNION"
r = csvm_ptr_union(c1, c2)
r.csvm_ptr_dump(0,0)
r.csvm_ptr_clear()
c1.csvm_ptr_clear()
c2.csvm_ptr_clear()
Using the CSVM paradigm, table unions or intersections are easy to implement, even if the two files do not
have exactly the same number of columns or the same column names (an intermediate step using CSVM dictionaries
could be added to the process).
2 A typical use is the generation of a subset of RAW data (one or n CSVM files) prior to a RDBMS import.
1.6 Annotation of dictionaries
Dictionaries are CSVM files and can be annotated in the same way as standard CSVM files. Remark lines
tagged by a # character in the data block (or metadata block) are used. The only rule is that a CSVM keyword
(#TITLE, #HEADER, #TYPE, #WIDTH, #META) cannot be used (but combinations such as # TITLE,
#_HEADER, # HEADER, ##HEADER … are allowed). The level of annotation of a CSVM file can be very
high because the corresponding rows are not taken into account by the CSVM parser.
The following figure shows the annotation of a CSVM file, in the case of a commented parameter set for doing
calculations using the AMBER 3 4 package.
Figure 4. – Example of CSVM annotations inside the data block. The image is truncated after the 80th column (vertical
grey line at right). Some lines (7, 19, 21, 31) are wrapped by the text editor 5.
3 The Amber Molecular Dynamics Package - http://ambermd.org/
4 Watson-Crick base-pairing properties of nucleic acid analogues with stereocontrolled alpha and beta torsion angles (alpha,beta-D-CNAs). C. Dupouy, N. Iché-Tarrat, M.P. Durrieu, F. Rodriguez, J.M. Escudier, A. Vigroux. Angewandte Chemie (2006) 45:22, 3623-3627.
5 Scintilla and SciTE - http://www.scintilla.org/
2. A working example in Chemistry
An SDF 6 file is a collection of n data blocks. In each block, one can find a molecule (coded using the MDL
Molfile format) and [0..m] descriptors stored as key/value pairs; in other terms, it is a serialized molecular
data table. Given such a molecular collection 7 8, we want to transform the descriptor space: removing or
renaming some of the descriptors in order to export the collection to a chemical RDBMS.
Figure 5. – Molecular collection (displayed using a SDF viewer 9). The molecular formulas for each compound are
shown at the top part of each record. The descriptors (key/value pairs) are shown at the bottom.
The corresponding CSVM table has one row for each compound and a column for each descriptor key
(number, name, plate, chemist, amount, ref_product, ref_labbook, id_lab, id_team, id_box,
rights, chr_row_box, num_col_box). Another column is used to store the 2D chemical structure at
SMILES 10 format.
6 Chemical table file [Wikipedia] - http://en.wikipedia.org/wiki/Chemical_table_file
7 L. Hammal, S. Bouzroura, C. André, B. Nedjar-Kolli, P. Hoffmann (2007) Versatile Cyclization of Aminoanilino Lactones: Access to Benzotriazoles and Condensed Benzodiazepin-2-thione. Synthetic Commun., 37:3, 501-511.
8 M. Fodili, M. Amari, B. Nedjar-Kolli, B. Garrigues, C. Lherbet, P. Hoffmann (2009) Synthesis of Imidazoles from Ketimines Using Tosylmethyl Isocyanide (TosMIC) Catalyzed by Bismuth Triflate. Lett. Org. Chem., 6, 354-358.
9 ChemAxon marvinView - http://www.chemaxon.com/products/marvin/marvinview/
10 Chemical file format [Wikipedia] - http://en.wikipedia.org/wiki/Chemical_file_format
The following image shows the end of the data block of the CSVM file and the metadata block:
Figure 6. – The molecular collection of Figure 5 encoded in a CSVM table, details only: last compounds, some rows [73
.. 80] are cut off at right. Tabs are used as field separator and are shown as red arrows.
A first CSVM dictionary (dict1, Figure 7) will be used to filter the CSVM columns of the molecular collection
(Figure 6). In dict1, columns 1-2-3 of the data block list the keywords of CSVM files allowed in the
corresponding translation sets LOCAL, LOCAL2, CN. If the translation occurs from LOCAL to LOCAL2,
all columns of the molecular table that are named num_col_box will be renamed to ccol.
Figure 7. – A CSVM dictionary (dict1).
Some values in dict1.CN are set to the __DEL__ string: if the translation occurs from LOCAL/LOCAL2 to CN, all
columns tagged with __DEL__ will be deleted in the resulting collection:
Figure 8. – Molecular collection after application of dict1.CN filter.
The corresponding SDF file is shown in Figure 9. The molecular formulas are regenerated and some
keywords are missing or are renamed (number -> ID, name -> Identificateur, plate -> plaque,
amount -> vrac) in the new SDF file.
Figure 9. – Molecular collection after filtering by dict1.CN set.
This example shows how to use CSVM as an intermediate format in order to perform operations (here a simple
filtering) on tables. This kind of operation is very often done in many scientific fields, and using CSVM lets
us integrate tables into scientific workflows and normalize data.
The CSVM paradigm permits real data management and storage of RAW data. If these RAW data
are eventually integrated into a RDBMS, and this job is done by another team, the CSVM files are documented well enough
that very little will be demanded of the producer's team. If the RAW data are not integrated into a RDBMS, all
that is needed (metadata, annotations) to use these files some years later is embedded inside them (even if a
CSVM parser is not available, all the information is human readable).
3. Python toolkit
The CSVM dictionaries are fully supported in the Python toolkit for CSVM; the following code was used for
the translation shown in the previous section:
Code 1. – Filtering process.
if __name__ == '__main__':
from build.file import file_file2str, file_str2file, file_cleanpath
print "*** CSVM dictionary test"
print
print "*** TEST1: using a dictionary in which __DEL__ are included."
print "*** strong mode IS NOT used"
print "\n=> A new blank CSVM object"
c = csvm_ptr()
print "\n=> Populates it with a CSVM file ... "
c = csvm_ptr_read_extended_csvm(c, file_cleanpath("test/hoffmann.csvm"), "\t")
c.csvm_ptr_dump(0,0)
print "\n=> Apply a filter using a dictionary"
dict_file = file_cleanpath("test/dictionary_test1.csvm")
print "\n=> dictionary file is [%s]\n" % (dict_file)
c = csvm_dict_file_filter(c, dict_file, 'CN', 0)
print "\n=> resulting output"
c.csvm_ptr_dump(0,0)
print "\n=> save new CSVM file"
s = csvm_ptr_make_csvm(c,"\n","\t")
file_str2file(file_cleanpath("test/hofmann_test1.csvm"), s)
print "\n=> Clear CSVM object"
c.csvm_ptr_clear()
The csvm_ptr_read_extended_csvm function makes a CSVM object (c) in memory, using a CSVM file.
The method csvm_dict_file_filter performs the filtering operation. It uses the name of the dict1 dictionary file
(“dictionary_test1.csvm”, stored in the argument dict_file) and the translation set ‘CN’. Then the CSVM
object is converted into a file and the memory is cleared.
csvm_dict_file_filter launches a simple function, csvm_dict_ptr_filter, to do the filtering:
Code 2. – Basic filtering.
def csvm_dict_ptr_filter(self, dict, set, delcol='__DEL__'):
"""
The subroutine filters a CSVM file (a csvm_ptr object self) using a
dictionary (another csvm_ptr object, dict). The HEADER values (columns names
of CSVM structure) are translated using the set of dictionnary given as
argument. If the HEADER values are explicitely translated to '__DEL__' value,
the corresponding columns of CSVM structure are deleted.
The argument set is a string used as the identifier (column name, element
of #HEADER list) of a translate set included in dict (a csvm_ptr object).
*** we call 'standard' this filtering mode.
"""
if (dict.HEADER_N <= 0): return self
dict.csvm_ptr_dump(0,0)
if (len(set) <= 0): return self
if ((set in dict.HEADER) == False): return self
self = csvm_ptr_colfilter(self, dict, set)
self = csvm_ptr_delcol(self, delcol)
return self
One function does the filtering (csvm_ptr_colfilter) and the other (csvm_ptr_delcol) does the column
removal.
3.1 Special cases: strong mode
Sometimes the user cannot decide whether columns must be removed explicitly in a translation set, or whether columns have
a corresponding value in another translation set.
Figure 10 illustrates this case for a CSVM dictionary (dict2) derived from the previous dict1. The __DEL__
values have disappeared from the table and are replaced by the character ’-‘ commonly used to mark empty
cells in the CSVM files.
Figure 10. – A CSVM dictionary (dict2) without explicit column deletion.
In this case another approach can be used. The calling code is similar to the Code 1 example, but
strong mode is used: the last argument of the csvm_dict_file_filter function is set to 1 rather than zero.
Code 3. – Advanced filtering.
print "*** TEST2: using a dictionary without __DEL__ or meta commands."
print "*** strong mode IS used"
print "\n=> A new blank CSVM object"
c = csvm_ptr()
print "\n=> Populates it with a CSVM file ... "
c = csvm_ptr_read_extended_csvm(c, file_cleanpath("test/hoffmann.csvm"), "\t")
c.csvm_ptr_dump(0,0)
print "\n=> Apply a filter using a dictionary"
dict_file = file_cleanpath("test/dictionary_test2.csvm")
print "=> dictionary file is [%s]" % (dict_file)
c = csvm_dict_file_filter(c, dict_file, 'CN', 1)
print "\n=> resulting output"
c.csvm_ptr_dump(0,0)
print "\n=> save new CSVM file"
s = csvm_ptr_make_csvm(c,"\n","\t")
file_str2file(file_cleanpath("test/hofmann_test2.csvm"), s)
print "\n=> Clear CSVM object"
c.csvm_ptr_clear()
The csvm_dict_ptr_filter_blank function is called and could be taken as a basis to make more advanced
filtering subroutines.
Code 4. – Evolution of csvm_dict_ptr_filter subroutine.
def csvm_dict_ptr_filter_blank(self, dict, set, blank_list=['', '-'], delcol='__DEL__'):
"""
The subroutine filters a CSVM file like csvm_dict_ptr_filter subroutine
but using another approach to destroy columns. A csvm_ptr object (self) is
given in input and filtered using a set (set) of a CSVM dictionary (dict).
The argument set is a string used as the identifier (column name, element
of #HEADER list) of a translate set included in dict (a csvm_ptr object).
If a column name of self, is not found in the list defined by set, the
corresponding column must be deleted in self. The argument blank_list
is used to store the values used in cells for no data (empty string or
a special char as '-').
If the argument delcol is set (and its length is > 1), the destruction of
corresponding columns (same as csvm_dict_ptr_filter) is applied before
destruction of columns with data of blank_list.
The values of blank_list are case sensitive and comparated in equal (not
include) mode for security reasons about data.
*** we call 'strong' or 'blank' this filtering mode.
"""
if (dict.HEADER_N <= 0): return self
if (len(set) <= 0): return self
if ((set in dict.HEADER) == False): return self
## apply filter
self = csvm_ptr_colfilter(self, dict, set)
if (len(delcol) > 1):
self = csvm_ptr_delcol(self, delcol)
## construct kwlist (only unique values in words list)
iset = query_row_eq(dict.HEADER,set,1,0)
if (iset < 0): return (self)
kwlist = []
for i in range (0, dict.DATA_R, 1):
kwlist.append(dict.DATA[i][iset[0]])
## remove blank cells in kwlist
for i in range (len(kwlist)-1, -1, -1):
for j in range (0, len(blank_list), 1):
if (kwlist[i] == blank_list[j]):
del(kwlist[i])
## query using kwlist and column(s) removal
dlist = query_row_not_eqsv(self.HEADER, kwlist, 'AND', 1, 0)
if ((len(dlist) <= 0) or (dlist == None)): return self
hlist = []
for i in range (0, len(dlist), 1):
hlist.append(self.HEADER[dlist[i]])
for i in range (0, len(hlist), 1):
self = csvm_ptr_delcol(self, hlist[i])
## done
return self
The first part (## apply filter) is the same as the code shown in the Code 2 example. The iset variable is used to store
the index of the translation set selected in dict2; in this case iset = [2] because it is the third element of
the #HEADER list.
The kwlist Python list (mono-dimensional array) is used to store the column names corresponding to the
translation set: kwlist = ['ID', 'identificateur', '-', 'vrac', 'plaque', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'smi', 'mdl', '-', '-'].
This list is filtered using the argument blank_list of the function. With default values the empty cells or cell
tagged by ‘-‘ are removed from kwlist, and now, kwlist = ['ID', 'identificateur', 'vrac', 'plaque', 'smi', 'mdl'] .
Knowing this information, it is possible to get the column names (in this case indexes are returned) of
molecular table that are not found in kwlist. And we have dlist = [3, 5, 6, 7, 8, 9, 10, 11, 12, 13] . The
corresponding column names are stored in hlist, in this case hlist = ['-', '-', '-', '-', '-', '-', '-', '-', '-', '-'] and all
columns with names included in hlist will be deleted.
Here all column names of the molecular collection are specified in the dictionary (using a value or a ‘-’). In the
general case, a dictionary may not have all the possible column names that can be found in the data files. To
illustrate this remark, we can comment out two rows in the dict2 dictionary, one that is included in the target translation
set (plate row) and one that is not (chemist). These two rows are tagged using the ‘#’ character
in the first position and will not be read by the CSVM parser (they are now considered as annotations in the
data block of the dictionary):
#plate      plaque        plaque    #TEXT    #10
#chemist    laboratoire   -         #TEXT    #50
The corresponding values of intermediate variables are:
iset = [2]
kwlist = ['ID', 'identificateur', '-', 'vrac', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'smi', 'mdl', '-', '-']
kwlist = ['ID', 'identificateur', 'vrac', 'smi', 'mdl']
dlist = [2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13]
hlist = ['plate', 'chemist', '-', '-', '-', '-', '-', '-', '-', '-', '-']
And the column ‘ plate’ is not added to the final molecular collection. The following lines are the dump of
the corresponding CSVM object (only the ten first lines of the data block are shown):
=> resulting output
DUMP: CSVM info {
SOURCE test\hoffmann.csvm
CSV
CSVM
META
[]
TITLE_N
1
TITLE
HEADER_N
4
TYPE_N 4
WIDTH_N
4
0
10
TEXT
{ID}
1
10
TEXT
{identificateur}
12
2
3
DATA_R
DATA_C
0
1
2
3
4
5
6
7
8
9
10
…
10
TEXT
{vrac}
10
TEXT
{smi}
80
4
80
4
[01][af01][114][C1C(OC(=O)C=C1Nc1ccccc1N)C]
[02][af02][85][c1(c(ccc(c1)[N+](=O)[O-])N)/N=C(/C1=C(O)CC(OC1=O)C)\C]
[03][af03][60][n12nc(sc1nc(C)c/c/2=C/1\C(=O)N(N=C1C)c1ccccc1)SCC]
[04][af04][50][c1(ccc(cc1)C)C1Nc2c(N=C(c3c(=O)oc(cc3O)C)C1)cccc2]
[05][af05][100][C1(=CC(=O)OC1)Nc1ccccc1N]
[06][af06][71][C1(=CC(=O)OC1)Nc1ccc(cc1N)C]
[07][af07][60][c1(ccc(cc1)OC)C1Nc2c(N=C(c3c(=O)oc(cc3O)C)C1)cccc2]
[08][af08][50][C1(=CC(=O)OC1)Nc1ccc(cc1N)[N+](=O)[O-]]
[09][af09][60][c12c(cccc2)n(cn1)C1=CC(=O)OC(C1)C]
[10][af10][45][c12c(ccc(c2)C)n(cn1)C1=CC(=O)OC(C1)C]
[11][af11][43][c12c(cc(cc2)Cl)n(cn1)C1=CC(=O)OC(C1)C]
5. Conclusion and perspectives
One typically tedious task is to develop software to convert different data flows. The combined use of
text components and CSVM dictionaries has helped us to greatly reduce the number of lines of code. The use
of dictionaries rather than UNIX filters has considerably helped all users to design, debug and share filters.
All the steps of data flows can be documented: the data itself, the dictionaries, the indexes of collections
(data, files), the intermediate files used in interchange flows … allowing information to be re-used at a high level,
even for RAW data.
CSVM files are easy to generate, not only for automatic processes but also for humans. The only need is
to add a metadata block at the bottom of a spreadsheet file and to save it using an ASCII/CSVM format and a
well-chosen field delimiter.
For these reasons CSVM seems a good candidate for being a canonical data model in many applications in
science and industry.
Supporting information
Please contact corresponding author for support on the Python (Pybuild) CSVM toolkit or data format.
The CSVM-1 specification details can be found in the following reference:
G. Beyries, F. Rodriguez (2012) Technical Report: CSVM format for scientific tabular data arXiv:1207.5711v1 [ http://fr.arxiv.org/abs/1207.5711v1 ].
Acknowledgments
I am grateful to Dr Michel Baltas and Dr Casimir Blonski (LSPCMIB, UMR 5068 CNRS-Toulouse
University) for supporting this work.
I would like to especially thank Dr Jean Louis Tichadou (“Université Paul Sabatier”, Toulouse University)
for helpful discussions and for the support of University Course (2006-2011) around RAW data questions in
experimental or environmental sciences.
I thank all collaborators in different laboratories who shared data and helped to develop the format's
usage, especially Dr Pascal Hoffmann (LSPCMIB, UMR 5068 CNRS-Toulouse University) who
provided the chemical data used as an example in section 2 of this manuscript.
I would like to thank the CNRS, the “Université Paul Sabatier” for their financial support.
| 5 |
Facial Emotion Recognition using Min-Max
Similarity Classifier
Olga Krestinskaya and Alex Pappachen James
arXiv:1801.00451v1 [] 1 Jan 2018
School of Engineering, Nazarbayev University, Astana
www.biomicrosystems.info/alex
Email: [email protected]
Abstract—Recognition of human emotions from the imaging
templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, the automatic
recognition of facial expressions using image template matching
techniques suffers from the natural variability of facial features
and recording conditions. In spite of the progress achieved
in facial emotion recognition in recent years, the effective
and computationally simple feature selection and classification
technique for emotion recognition is still an open problem. In
this paper, we propose an efficient and straightforward facial
emotion recognition algorithm to reduce the problem of inter-class pixel mismatch during classification. The proposed method
includes the application of pixel normalization to remove intensity
offsets, followed by a Min-Max metric in a nearest neighbor
classifier that is capable of suppressing feature outliers. The
results indicate an improvement of recognition performance
from 92.85% to 98.57% for the proposed Min-Max classification
method when tested on JAFFE database. The proposed emotion
recognition technique outperforms the existing template matching methods.
Index Terms—Face emotions, Classifier, Emotion recognition,
spatial filters, gradients
I. I NTRODUCTION
In recent years, the human-computer interaction challenge
has led to the demand to introduce efficient facial and speech
recognition systems [1]–[5]. Facial emotion recognition is
the identification of a human emotion based on facial
expressions and mimics [6]. Facial emotion recognition has
a wide range of application prospects in different areas, such as
medicine [7], robotics [8], [9], computer vision, surveillance
systems [1], machine learning [10], artificial intelligence,
communication [11], [12], psychological studies [4], smart
vehicles [9], security and embedded systems [13].
There are two main approaches for facial expression recognition: geometry-based and appearance-based methods [2].
The geometry-based methods extract main feature points and
their shapes from the face image and calculate the distances
between them. While, appearance-based methods focus on
the face texture using different classification and template
matching methods [14], [15]. In this paper, we focus on facial
emotion recognition based on template matching techniques
that remains a challenging task [16]–[18]. Since the orientation
of pixel features is sensitive to changes in illumination, pose, scale and other natural imaging variabilities, the
matching errors tend to be high [4], [19], [20]. Pixel matching
methods are known to be useful when the images have missing
features, because the imaging matrices become sparse and the feature
computation process is not trivial. As facial expressions cause
a mismatch of intra-class features due to their orientation
variability, it is difficult to map them between the imaging
templates.
Facial emotion recognition accuracy depends on the robustness of a feature extraction method to intra-class variations and
classifier performance under noisy conditions and with various
types of occlusions [10]. Even though a variety of approaches
for the automated recognition of human expressions from
the face images using template matching methods have been
investigated and proposed over the last few years [14], the
emotion recognition method with robust feature extraction
and effective classification techniques accompanied by low
computational complexity is still an open research problem
[21]. Therefore, in this paper, we address the issues of
matching templates through pixel normalization followed by
the removal of inter-image feature outliers using a Min-Max
similarity metric. We apply Gaussian normalization method
with local mean and standard deviation to normalize the
pixels and extract relevant face features, followed by a Min-Max classification method to determine an emotion class. The
simulation is performed in Matlab for the Japanese Female
Facial Expression (JAFFE) database [22] and the emotion
recognition accuracy is calculated using the leave-one-out cross-validation method.
The main contributions of this work are the following:
• We develop a simplified approach for facial emotion recognition with a template matching method using Gaussian normalization, mean and standard deviation based feature extraction and a Min-Max classification approach.
• We present a simple and effective facial emotion recognition algorithm having low computational complexity and the ability to suppress outliers and remove intensity offsets.
• We conduct experiments and simulations on the JAFFE database to demonstrate the efficiency of the proposed approach and highlight its advantages compared to the other existing methods.
The paper is organized as follows. Section II presents the
overview of the existing methods for facial emotion recognition, their drawbacks and reasons to propose a new method.
In Section III, we show normalization, feature extraction
and classification parts of the proposed method, present the
algorithm and describe the experiments. Section IV contains
the simulation results and comparison of the obtained results
with the existing methods. In Section V, we discuss the
benefits and drawbacks of the proposed method, in comparison
to the traditional methods. Section VI concludes the paper.
II. BACKGROUND AND RELATED WORKS
To address the problem of facial emotion recognition, several template matching methods have been proposed in the last
decades [1], [8], [23]–[25]. In most of the cases, the process of
emotion recognition from human face images is divided into
two main stages: feature extraction and classification [1], [8].
The main aim of feature extraction methods is to minimize
intra-class variations and maximize inter-class variations. The
most important facial elements for human emotion recognition
are eyes, eyebrows, nose, mouth and skin texture. Therefore, a
vast majority of feature extraction methods focus on these features [2], [26]. The selection of irrelevant face image features
or an insufficient number of them would lead to low emotion
recognition accuracy, even when applying effective classification
methods [21]. The main purpose of the classification part is
to differentiate the elements of different emotion classes to
enhance emotion recognition accuracy.
The commonly used feature extraction methods include two-dimensional Linear Discriminant Analysis (2D-LDA) [8], [25],
two-dimensional Principal Component Analysis (2D-PCA)
[27], Discrete Wavelet Transform (DWT) [6], [8], [28], Gabor
based methods [29], [30] and wavelets-based techniques [23],
[31]. In 2D-LDA method, the two-dimensional image matrix
is exploited to form scatter matrices between the classes and
within the class [8]. 2D-LDA method can be applied for facial
features extraction alone or accompanied with the other feature
extraction method, as DWT [8], [25]. In 2D-PCA feature
extraction method, the covariance matrix representation of the
image is derived directly from the original image [8], [27]. The
size of the derived principle component matrix is smaller than
the original image size that allows to decrease the amount
of processing data, and consequently, reduce the required
computational memory [32]. However, 2D-LDA and 2D-PCA
methods applied in template matching techniques require an
additional processing of the image, dimensionality reduction
techniques or application of another feature extraction method
to achieve higher recognition accuracy, which leads to the
increase in processing time.
Another feature extraction method is the DWT. This method is based on low-pass and high-pass filtering and is therefore appropriate for images with different resolution levels [8]. In the emotion recognition task, DWT is applied to extract useful features from face images and can be replaced with the Orthogonal Wavelet Transform (OWT) or the Biorthogonal Wavelet Transform (BWT), which have the advantages of orthogonality [6]. Another method for facial emotion recognition is the Gauss-Laguerre wavelet geometry based technique. This method represents the processed image in polar coordinates with the center at a particular pivot point. The degree of freedom is one of the advantages of the Gauss-Laguerre approach, which allows features of the desired frequency to be extracted from the images [23], [31]. However, the DWT and Gauss-Laguerre approaches are complex and require time and memory consuming calculations.
The classification of the extracted features can be implemented using the Support Vector Machine (SVM) algorithm [8], [28], the K-Nearest Neighbor (KNN) method [23], [33], the Random Forest classification method [7], [34] or a Gaussian process [24]. The SVM principle is based on non-linear mapping and the identification of a hyperplane for the separation of data classes. The SVM classifier is used with different kernel functions, such as linear, quadratic, polynomial and radial basis functions, to optimize the SVM performance [8], [28]. The KNN approach is based on the numerical comparison of a testing data sample with the training data samples of each class, followed by the determination of similarity scores. The data class is defined by the K most similar data samples based on the minimum difference between train and test data [23], [33]. KNN and SVM classifiers are simple and widely used for emotion recognition; however, these classifiers do not suppress outliers, which leads to lower recognition accuracy.
The Random Forest classification method is based on the decision tree approach with randomized parameters [7], [34]. To construct the decision tree, the Random Forest algorithm takes a random set of options and selects the most suitable among them [35]. The Random Forest classifier is robust and has a high recognition rate for images of large resolution [36]. The drawback of the Random Forest classifier is its computational complexity. Another classification method is the Gaussian process approach. The Gaussian process is based on predicted probabilities and can be used for facial emotion recognition without the application of feature selection algorithms. The Gaussian process allows a simplified computational approach; however, it has a lower emotion recognition rate compared to the other methods [24].
Even though a number of methods for feature extraction and classification have been proposed, there is a lack of template matching methods that achieve high recognition accuracy at minimum computational cost. Therefore, the method that we propose has the potential to reduce the computational complexity of the facial emotion recognition operation and to increase the recognition accuracy due to the simplicity of the algorithm, effective feature extraction and the ability to suppress outliers. The proposed algorithm can be implemented on devices with small computational capacity, keeping the facial emotion recognition operation fast and accurate.
III. METHODOLOGY
The main idea of the proposed method is to extract the spatial change of standardized pixels in a face image and detect the emotion class of the face using a Min-Max similarity Nearest Neighbor classifier. The images from the JAFFE database [22] are used for the experiments. This database contains 213 images of 10 female faces comprising 6 basic facial expressions and neutral faces. The original images from the database have a size of 256 × 256 pixels, and in our experiments they are cropped to a size of 101 × 114 pixels, retaining only the relevant information of the face area. A block diagram of the proposed method is shown in Fig. 1.
Fig. 1. Outline of the proposed emotion recognition system.

A. Pre-processing

Illumination variability introduces inter-class feature mismatch, resulting in inaccuracies in the detection of emotion discriminating features from the face images. Therefore, image normalization is essential to reduce the inter-class feature mismatch, which can be viewed as intensity offsets. Since the intensity offsets are uniform within a local region, we perform Gaussian normalization using the local mean and standard deviation. The input image is represented as x(i, j), and y(i, j) is the normalized output image, where i and j are the row and column numbers of the processed image. The normalized output image is calculated by Eq. 1 [37], where µ is the local mean and σ is the local standard deviation computed over a window of N × N size.

y(i, j) = (x(i, j) − µ(i, j)) / (6 σ(i, j))    (1)

The parameters µ and σ are calculated using Eq. 2 and Eq. 3 with a = (N − 1)/2.

µ(i, j) = (1/N²) Σ_{k=−a}^{a} Σ_{h=−a}^{a} x(k + i, h + j)    (2)

σ(i, j) = sqrt( (1/N²) Σ_{k=−a}^{a} Σ_{h=−a}^{a} [x(k + i, h + j) − µ(i, j)]² )    (3)

An example image from the JAFFE database with three different lighting conditions is shown in Fig. 2 (a). As the JAFFE database does not contain face images with different illumination conditions, the illumination change was created by adding and subtracting a value of 10 from the original image. Fig. 2 (b) illustrates the respective images after Gaussian local normalization. Irrespective of the illumination conditions, the three locally normalized images appear similar, with minimum pixel intensity variation.

Fig. 2. (a) Sample image from the JAFFE database with different lighting conditions obtained by adding and subtracting a value of 10 from the original image. (b) Normalized images of the above sample images obtained by performing Gaussian normalization using the local mean and local standard deviation taken over a window of size N = 11. (c) Feature detected images obtained from the normalized images by performing local standard deviation over a window of size M = 11.

B. Feature detection

The feature parts useful for facial emotion recognition are the eyes, eyebrows, cheeks and mouth regions. In this experiment, we perform feature detection by calculating the local standard deviation of the normalized image using a window of M × M size. Eq. 4 is applied for the feature detection with b = (M − 1)/2.

w(i, j) = sqrt( (1/M²) Σ_{k=−b}^{b} Σ_{h=−b}^{b} [y(k + i, h + j) − µ′(i, j)]² )    (4)

In Eq. 4 the parameter µ′ refers to the local mean of the normalized image y(i, j) and can be calculated by Eq. 5.

µ′(i, j) = (1/M²) Σ_{k=−b}^{b} Σ_{h=−b}^{b} y(k + i, h + j)    (5)

Fig. 2 (c) shows the results of feature detection corresponding to the normalized images.
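For concreteness, the pre-processing and feature detection steps of Eqs. 1–5 can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the reflection padding at the image borders and the small constant added to the denominator are our own assumptions, since the paper does not specify how borders and zero-variance windows are handled.

    import numpy as np

    def local_stats(img, win):
        # Local mean and standard deviation over a win x win neighbourhood.
        # Borders are handled by reflection padding (an assumption of this sketch).
        a = win // 2
        padded = np.pad(img.astype(float), a, mode="reflect")
        rows, cols = img.shape
        mean = np.empty((rows, cols))
        std = np.empty((rows, cols))
        for i in range(rows):
            for j in range(cols):
                block = padded[i:i + win, j:j + win]
                mean[i, j] = block.mean()
                std[i, j] = block.std()
        return mean, std

    def gaussian_normalize(img, n=11, eps=1e-6):
        # Eq. 1: y = (x - local mean) / (6 * local std) over an N x N window.
        mu, sigma = local_stats(img, n)
        return (img - mu) / (6.0 * sigma + eps)

    def detect_features(normalized_img, m=11):
        # Eqs. 4-5: local standard deviation of the normalized image, M x M window.
        _, w = local_stats(normalized_img, m)
        return w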
C. Emotion Classification
For the recognition stage, we propose a Min-Max similarity metric in a Nearest Neighbor classifier framework. This
method is based on the principle that the ratio of the minimum
difference to the maximum difference of two pixels will
produce a unity output for equal pixels and an output less than
unity for unequal pixels. The proposed method is described in
Algorithm 1. The algorithm parameter trainlen refers to the
number of train images, N corresponds to the normalization
window size, and M indicates the feature detection window
size. Each cropped image is of m × n pixel dimension. The
parameter train is a feature array of trainlen×(m × n) size,
where each row corresponds to the processed train images.
After normalization and feature detection, each test image is stored into a vector test of 1 × (m × n) size. A single test
image is compared pixel-wise with processed train images of
all the classes in the feature array using the proposed Min-Max
classifier:
s(i, j) = [ min[train(i, j), test(1, j)] / max[train(i, j), test(1, j)] ]^α ,    (6)
where the parameter α controls the power of the exponent used to suppress outlier similarities. Outliers are observations that are inconsistent with the remaining observations and are common in real-time image processing. The presence of outliers may cause misclassification of an emotion, since the sample maximum and sample minimum are maximally sensitive to them. In order to remove the effect of the outliers, α = 3 is selected to introduce the maximum difference between inter-class images and the minimum difference between intra-class images.
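A minimal sketch of the Min-Max similarity of Eq. 6, together with the per-train-image aggregation and decision formalized in Eqs. 7 and 8 below, is given here for illustration; it assumes the processed images are stored as rows of non-negative feature values, and the handling of all-zero pixels is our own choice.

    import numpy as np

    def min_max_similarity(train_rows, test_row, alpha=3):
        # Eq. 6: element-wise ratio of the smaller to the larger of the two feature
        # values, raised to the power alpha; equal values give a similarity of 1.
        lo = np.minimum(train_rows, test_row)
        hi = np.maximum(train_rows, test_row)
        ratio = np.where(hi > 0, lo / np.where(hi > 0, hi, 1.0), 1.0)
        return ratio ** alpha

    def classify(train_rows, train_labels, test_row, alpha=3):
        # Sum the similarities per train image and return the best-matching label
        # (this aggregation corresponds to Eqs. 7 and 8 below).
        scores = min_max_similarity(train_rows, test_row, alpha).sum(axis=1)
        return train_labels[int(np.argmax(scores))], scores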
After Min-Max classification, a column vector z of trainlen × 1 size, containing the weights obtained after comparing the test image with each of the trainlen train images, is calculated using Eq. 7.

z(i) = Σ_{j=1}^{m×n} s(i, j)    (7)

The classification output out shown in Eq. 8 is the maximum value of z, corresponding to the train image that shows the maximum match. The recognized emotion class is the class of the matched train image.

out = max(z(i))    (8)

Algorithm 1 Emotion Recognition using Min-Max classifier
Require: Test image Y, train images Xt, trainlen, window sizes N and M
1: Crop the images to a dimension of m × n
2: for t = 1 to trainlen do
3:    C(i, j) = (Xt(i, j) − µ(i, j)) / (6σ(i, j))
4:    W(i, j) = sqrt( (1/M²) Σ_{k=−b}^{b} Σ_{h=−b}^{b} [C(k + i, h + j) − µ(i, j)]² )
5:    Store the value of W to an array train of dimension trainlen × (m × n)
6: end for
7: for t = 1 to trainlen do
8:    V(i, j) = (Y(i, j) − µ(i, j)) / (6σ(i, j))
9:    test(i, j) = sqrt( (1/M²) Σ_{k=−b}^{b} Σ_{h=−b}^{b} [V(k + i, h + j) − µ(i, j)]² )
10:   s(t, j) = [ min[train(t, j), test(1, j)] / max[train(t, j), test(1, j)] ]³
11:   z(t) = Σ_{j=1}^{m×n} s(t, j)
12:   out = max(z(t))
13: end for

IV. RESULTS AND COMPARISON

To benchmark the performance of the proposed algorithm, the leave-one-out cross-validation method has been used. In this method, one image of each expression of each person is used for testing and the remaining images are used for training [23]. The cross-validation is repeated 30 times to obtain a statistically stable performance of the recognition system and to ensure that all the images in the JAFFE database are used for testing at least once. The overall emotion recognition accuracy of the system is obtained by averaging the results of the cross-validation trials. Fig. 3 shows the different accuracy rates obtained for each trial by varying the feature detection window size M from 3 to 21 while keeping the normalization window size at N = 11. It is shown that the maximum emotion recognition accuracy for this normalization window size can be achieved with a detection window size of 11.

Fig. 3. The accuracy rates obtained for four trials of the leave-one-out cross-validation method for different feature detection window sizes M ranging from 3 to 21.

Fig. 4 shows the recognition accuracy rates obtained for different normalization and feature detection window sizes ranging from 3 to 21 for a single trial.

Fig. 4. Graph shows the accuracy rates obtained for different normalization and feature detection window sizes ranging from M = N = 3 to 21 for one trial of the leave-one-out cross-validation method.
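The evaluation protocol described above can be reproduced with a simple harness such as the following sketch; the random selection of the held-out image per (subject, expression) pair in each trial is our assumption, as the paper does not state how the held-out images are chosen.

    import numpy as np

    def leave_one_out_accuracy(features, labels, subjects, classify_fn, trials=30, seed=0):
        # One image per (subject, expression) pair is held out for testing in each
        # trial; the remaining images form the training set.  The final accuracy is
        # the average over all trials, as in the protocol described above.
        rng = np.random.default_rng(seed)
        accuracies = []
        for _ in range(trials):
            test_idx = []
            for subj in np.unique(subjects):
                for lab in np.unique(labels):
                    candidates = np.flatnonzero((subjects == subj) & (labels == lab))
                    if candidates.size > 0:
                        test_idx.append(rng.choice(candidates))
            test_idx = np.array(test_idx)
            train_mask = np.ones(len(labels), dtype=bool)
            train_mask[test_idx] = False
            correct = sum(
                classify_fn(features[train_mask], labels[train_mask], features[i])[0] == labels[i]
                for i in test_idx
            )
            accuracies.append(correct / len(test_idx))
        return float(np.mean(accuracies))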
To evaluate the performance of the proposed Min-Max
classifier, it has been compared with the other classifiers,
such as Nearest Neighbor [38] and Random Forest [39], after
normalization and feature detection. The obtained accuracy
values are shown in Table I.
TABLE I
TESTING OF FEATURE DETECTED IMAGES ON OTHER CLASSIFIERS

Classifier                      Accuracy (%)
Nearest Neighbor                92.85
Random Forest                   88.57
Proposed Min-Max classifier     98.57
The proposed method achieves a maximum accuracy of 98.57% for a window size of M = N = 11, which outperforms the other existing methods in the literature for the leave-one-out cross-validation method. Table II shows the performance comparison of the proposed method with the other existing systems on the same database using the leave-one-out cross-validation method. Applying the proposed method, we achieved an emotion recognition accuracy of 98.57%, proving the significance of the emotion detection method with the Min-Max similarity classifier.
TABLE II
COMPARISON OF PROPOSED EMOTION RECOGNITION SYSTEM WITH OTHER EXISTING SYSTEMS BASED ON THE LEAVE-ONE-OUT CROSS-VALIDATION METHOD

Existing systems            Method used                            Accuracy (%)
Cheng et al. [24]           Gaussian Process                       93.43
Hamester et al. [40]        Convolutional Neural Network           95.80
Frank et al. [8]            DWT + 2D-LDA + SVM                     95.71
Poursaberi et al. [23]      Gauss-Laguerre wavelet + KNN           96.71
Hegde et al. [29]           Gabor and geometry based features      97.14
Proposed method             Min-Max classifier                     98.57

In addition, compared to the proposed emotion recognition system, the other existing methods require specialized feature extraction and dimensionality reduction techniques before the classification stage. The main advantages of the proposed emotion recognition system are its simplicity and straightforwardness.

V. DISCUSSION

The main advantages of the proposed facial emotion recognition approach are its high recognition accuracy and low computational complexity. To achieve high recognition accuracy, effective feature selection is required. In the existing methods, complex algorithms for feature selection are applied without normalizing the image. The normalization stage is important and has a considerable effect on the accuracy of the feature selection process. In the proposed algorithm, we apply simple pre-processing methods to normalize the images and eliminate intensity offsets, which improves the accuracy of the feature selection process and leads to an increase of emotion recognition accuracy in comparison to the existing methods. The effect of the proposed Min-Max classification method on recognition accuracy is also important. Table I shows the application of other classification methods within the same approach, where the proposed Min-Max classifier illustrates the performance improvement. In comparison to the existing methods, the proposed Min-Max classifier has the ability to suppress outliers, which significantly impacts the overall performance of this approach.

The simplicity and straightforwardness of the proposed approach are also important due to the resulting low computational complexity. Most of the existing methods use complicated feature extraction and classification approaches that double the complexity of the facial recognition process and require a device with large computational capacity to process the images. We address this problem by applying direct local mean and standard deviation based feature detection and a simple Min-Max classification method. In comparison to existing feature detection methods, such as PCA [27] and LDA [8], the proposed method is straightforward and does not require dimensionality reduction. Moreover, the simple Min-Max classification method also reduces the computational time compared to traditional classification approaches, such as SVM [28], KNN [33], Gaussian process [24] and Neural Network [41]. Therefore, the algorithm can be run on a device with low computational capacity.
VI. CONCLUSION
In this paper, we have presented an approach to improve the performance of the emotion recognition task using a template matching method. We have demonstrated that pixel normalization and feature extraction based on the local mean and standard deviation, followed by Min-Max similarity classification, can result in an improvement of the overall classification rates. We achieved an emotion recognition accuracy of 98.57%, which exceeds the performance of the existing methods on the JAFFE database for the leave-one-out cross-validation method. The capability of the algorithm to suppress feature outliers and remove intensity offsets results in the increase of emotion recognition accuracy. Moreover, the proposed method is simple and direct in comparison to the other existing methods, which require the application of dimensionality reduction techniques and complex classification methods for computation and analysis. Low computational complexity is a noticeable benefit of the proposed algorithm, implying a reduction of computational time and required memory space. This method can be extended to other template matching problems, such as face recognition and biometric matching. The drawback of the proposed method, as of any other template matching method, is that templates must be created for each class, which, in turn, consumes additional memory space to store the templates.
REFERENCES
[1] W.-L. Chao, J.-J. Ding, and J.-Z. Liu, “Facial expression recognition based on improved local binary pattern and class-regularized locality preserving projection,” Signal Processing, vol. 117, pp. 1–10, 2015. [Online]. Available: //www.sciencedirect.com/science/article/pii/S0165168415001425
[2] T. Danisman, I. M. Bilasco, J. Martinet, and C. Djeraba, “Intelligent pixels of interest selection with application to facial expression recognition using multilayer perceptron,” Signal Processing, vol. 93, no. 6, pp. 1547–1556, 2013, special issue on Machine Learning in Intelligent Image Processing. [Online]. Available: //www.sciencedirect.com/science/article/pii/S0165168412002745
[3] D. Ververidis and C. Kotropoulos, “Fast and accurate sequential floating forward feature selection with the bayes classifier applied to speech emotion recognition,” Signal Processing, vol. 88, no. 12, pp. 2956–2970, 2008. [Online]. Available: //www.sciencedirect.com/science/article/pii/S0165168408002120
[4] X. Li, Q. Ruan, Y. Jin, G. An, and R. Zhao, “Fully automatic 3d facial expression recognition using polytypic multi-block local binary patterns,” Signal Processing, vol. 108, pp. 297–308, 2015. [Online]. Available: //www.sciencedirect.com/science/article/pii/S0165168414004563
[5] S. Gupta, A. Mehra et al., “Speech emotion recognition using svm with thresholding fusion,” in Signal Processing and Integrated Networks (SPIN), 2015 2nd International Conference on. IEEE, 2015, pp. 570–574.
[6] Y. D. Zhang, Z. J. Yang, H. M. Lu, X. X. Zhou, P. Phillips, Q. M. Liu, and S. H. Wang, “Facial emotion recognition based on biorthogonal wavelet entropy, fuzzy support vector machine, and stratified cross validation,” IEEE Access, vol. 4, pp. 8375–8385, 2016.
[7] S. Zhao, F. Rudzicz, L. G. Carvalho, C. Marquez-Chin, and S. Livingstone, “Automatic detection of expressed emotion in parkinson’s disease,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2014, pp. 4813–4817.
[8] F. Y. Shih, C.-F. Chuang, and P. S. Wang, “Performance comparisons of facial expression recognition in jaffe database,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, no. 03, pp. 445–459, 2008.
[9] S. P., K. D., and S. Tripathi, “Pose invariant method for emotion recognition from 3d images,” in 2015 Annual IEEE India Conference (INDICON), Dec 2015, pp. 1–5.
[10] A. C. Cruz, B. Bhanu, and N. S. Thakoor, “Vision and attention theory based sampling for continuous facial emotion recognition,” IEEE Transactions on Affective Computing, vol. 5, no. 4, pp. 418–431, Oct 2014.
[11] M. H. A. Latif, H. M. Yusof, S. N. Sidek, and N. Rusli, “Thermal imaging based affective state recognition,” in 2015 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Oct 2015, pp. 214–219.
[12] V. Sudha, G. Viswanath, A. Balasubramanian, P. Chiranjeevi, K. Basant, and M. Pratibha, “A fast and robust emotion recognition system for real-world mobile phone data,” in 2015 IEEE International Conference on Multimedia Expo Workshops (ICMEW), June 2015, pp. 1–6.
[13] Y. Sun and Y. An, “Research on the embedded system of facial expression recognition based on hmm,” in 2010 2nd IEEE International Conference on Information Management and Engineering, April 2010, pp. 727–731.
[14] P. Chiranjeevi, V. Gopalakrishnan, and P. Moogi, “Neutral face classification using personalized appearance models for fast and robust emotion detection,” IEEE Transactions on Image Processing, vol. 24, no. 9, pp. 2701–2711, Sept 2015.
[15] D. Ghimire and J. Lee, “Geometric feature-based facial expression recognition in image sequences using multi-class adaboost and support vector machines,” Sensors, vol. 13, no. 6, pp. 7714–7734, 2013.
[16] R. Brunelli and T. Poggio, “Face recognition: features versus templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042–1052, Oct 1993.
[17] L. Zhang, D. Tjondronegoro, and V. Chandran, “Toward a more robust facial expression recognition in occluded images using randomly sampled gabor based templates,” in 2011 IEEE International Conference on Multimedia and Expo, July 2011, pp. 1–6.
[18] X. Wang, X. Liu, L. Lu, and Z. Shen, “A new facial expression recognition method based on geometric alignment and lbp features,” in 2014 IEEE 17th International Conference on Computational Science and Engineering, Dec 2014, pp. 1734–1737.
[19] H. Tang, B. Yin, Y. Sun, and Y. Hu, “3d face recognition using local binary patterns,” Signal Processing, vol. 93, no. 8, pp. 2190–2198, 2013, indexing of Large-Scale Multimedia Signals. [Online]. Available: //www.sciencedirect.com/science/article/pii/S0165168412001120
[20] X. Zhao, J. Zou, H. Li, E. Dellandra, I. A. Kakadiaris, and L. Chen, “Automatic 2.5-d facial landmarking and emotion annotation for social interaction assistance,” IEEE Transactions on Cybernetics, vol. 46, no. 9, pp. 2042–2055, Sept 2016.
[21] S. K. A. Kamarol, M. H. Jaward, J. Parkkinen, and R. Parthiban, “Spatiotemporal feature extraction for facial expression recognition,” IET Image Processing, vol. 10, no. 7, pp. 534–541, 2016.
[22] M. J. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, and J. Budynek, “The japanese female facial expression (jaffe) database,” in Proceedings of third international conference on automatic face and gesture recognition, 1998, pp. 14–16.
[23] A. Poursaberi, H. A. Noubari, M. Gavrilova, and S. N. Yanushkevich, “Gauss–laguerre wavelet textural feature fusion with geometrical information for facial expression identification,” EURASIP Journal on Image and Video Processing, vol. 2012, no. 1, pp. 1–13, 2012.
[24] F. Cheng, J. Yu, and H. Xiong, “Facial expression recognition in jaffe dataset based on gaussian process classification,” IEEE Transactions on Neural Networks, vol. 21, no. 10, pp. 1685–1690, Oct 2010.
[25] S. Kamal, F. Sayeed, and M. Rafeeq, “Facial emotion recognition for human-computer interactions using hybrid feature extraction technique,” in Data Mining and Advanced Computing (SAPIENCE), International Conference on. IEEE, 2016, pp. 180–184.
[26] B. Fasel and J. Luettin, “Automatic facial expression analysis: a survey,” Pattern Recognition, vol. 36, no. 1, pp. 259–275, 2003.
[27] S. Rajendran, A. Kaul, R. Nath, A. Arora, and S. Chauhan, “Comparison of pca and 2d-pca on indian faces,” in Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on. IEEE, 2014, pp. 561–566.
[28] A. Basu, A. Routray, S. Shit, and A. K. Deb, “Human emotion recognition from facial thermal image based on fused statistical feature and multi-class svm,” in 2015 Annual IEEE India Conference (INDICON), Dec 2015, pp. 1–5.
[29] G. Hegde, M. Seetha, and N. Hegde, “Kernel locality preserving symmetrical weighted fisher discriminant analysis based subspace approach for expression recognition,” Engineering Science and Technology, an International Journal, vol. 19, no. 3, pp. 1321–1333, 2016. [Online]. Available: //www.sciencedirect.com/science/article/pii/S2215098615300616
[30] L. Zhang, D. Tjondronegoro, and V. Chandran, “Random gabor based templates for facial expression recognition in images with facial occlusion,” Neurocomputing, vol. 145, pp. 451–464, 2014. [Online]. Available: //www.sciencedirect.com/science/article/pii/S0925231214005712
[31] A. Poursaberi, S. Yanushkevich, and M. Gavrilova, “An efficient facial expression recognition system in infrared images,” in Emerging Security Technologies (EST), 2013 Fourth International Conference on. IEEE, 2013, pp. 25–28.
[32] D. Marvadi, C. Paunwala, M. Joshi, and A. Vora, “Comparative analysis of 3d face recognition using 2d-pca and 2d-lda approaches,” in Engineering (NUiCONE), 2015 5th Nirma University International Conference on. IEEE, 2015, pp. 1–5.
[33] S. Kamal, F. Sayeed, M. Rafeeq, and M. Zakir, “Facial emotion recognition for human-machine interaction using hybrid dwt-sfet feature extraction technique,” in Cognitive Computing and Information Processing (CCIP), 2016 Second International Conference on. IEEE, 2016, pp. 1–5.
[34] W. Wei, Q. Jia, and G. Chen, “Real-time facial expression recognition for affective computing based on kinect,” in 2016 IEEE 11th Conference on Industrial Electronics and Applications (ICIEA), June 2016, pp. 161–165.
[35] A. Hariharan and M. T. P. Adam, “Blended emotion detection for decision support,” IEEE Transactions on Human-Machine Systems, vol. 45, no. 4, pp. 510–517, Aug 2015.
[36] J. Jia, Y. Xu, S. Zhang, and X. Xue, “The facial expression recognition method of random forest based on improved pca extracting feature,” in 2016 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Aug 2016, pp. 1–5.
[37] A. P. James and S. Dimitrijev, “Inter-image outliers and their application to image classification,” Pattern Recognition, vol. 43, no. 12, pp. 4101–4112, 2010.
[38] A. Saha and Q. J. Wu, “Curvelet entropy for facial expression recognition,” in Pacific-Rim Conference on Multimedia. Springer, 2010, pp.
617–628.
[39] A. Liaw and M. Wiener, “Classification and regression by randomforest,”
R news, vol. 2, no. 3, pp. 18–22, 2002.
[40] D. Hamester, P. Barros, and S. Wermter, “Face expression recognition
with a 2-channel convolutional neural network,” in 2015 International
Joint Conference on Neural Networks (IJCNN), July 2015, pp. 1–8.
[41] M. N. Dailey, G. W. Cottrell, C. Padgett, and R. Adolphs, “Empath: A
neural network that categorizes facial expressions,” Journal of cognitive
neuroscience, vol. 14, no. 8, pp. 1158–1173, 2002.
arXiv:1608.03868v3 [] 27 Mar 2018
Quotients of mapping class groups from Out(Fn)
Khalid Bou-Rabee∗
Christopher J. Leininger †
March 29, 2018
Abstract
We give a short proof of Masbaum and Reid’s result that mapping class groups involve any
finite group, appealing to free quotients of surface groups and a result of Gilman, following
Dunfield–Thurston.
keywords: mapping class groups, involve, finite groups
Let Σg be a closed oriented surface of genus g and Fn a nonabelian free group of rank n. The
fundamental group, π1 (Σg ), is residually free [Bau62] and Fn has a wealth of finite index subgroups
[MKS04, pp. 116]. In [DT06], N. Dunfield and W. Thurston consider the action of the mapping class
group Mod(Σg ) on the set of finite index normal subgroups of π1 (Σg ) with finite simple quotients,
and in particular those containing the kernel of an epimorphism π1 (Σg ) → Fg . Their observations
relating to work of R. Gilman [Gil77], give rise to finite index subgroups of Mod(Σg ) that surject onto
symmetric groups of arbitrarily large order; see the discussion following the proof of [Bau62, Theorem 7.4].
Theorem 1 (Dunfield–Thurston). For all g ≥ 3, r ≥ 1, there exists an epimorphism φ : π1 (Σg ) → Fg
and a prime q, so that
{ N ⊳ π1 (Σg ) | ker φ < N and π1 (Σg )/N ≅ PSL(2, q) }
has at least r elements, and its (finite index) stabilizer in Mod(Σg ) acts as the full symmetric group
on this set.
We explain the proof of this in Section 1.2. In this note, we observe that since every finite group
embeds in some finite symmetric group, Theorem 1 provides a new elementary proof of a result of
G. Masbaum and A. Reid [MR12]. Recall that a group G involves a group H if there exists a finite
index subgroup L ≤ G and a surjective map φ : L → H.
Corollary 2 (Masbaum–Reid). Let Σg,m be a surface of genus g with m punctures. If 3g − 3 + m ≥ 1
(or g = 1 and m = 0) then Mod(Σg,m ) involves any finite group.
The few mapping class groups not covered by the corollary are finite groups; see, e.g. [FM12].
Corollary 2 is also proved using arithmetic methods by F. Grunewald, M. Larsen, A. Lubotzky, and
J. Malestein [GLLM15].
Further applications of the quotients from Theorem 1 include new proofs of residual finiteness
and separability of handlebody groups; see §3 for theorem statements and proofs.
Acknowledgements. The authors would like to thank Alan Reid and Alex Lubotzky for helpful
conversations and Benson Farb for suggesting improvements on an earlier draft.
∗ K.B.
† C.L.
supported in part by NSF grant DMS-1405609
supported in part by NSF grant DMS-1510034.
1 Preliminaries
1.1 G-defining subgroups
Here we collect some results surrounding definitions and discussions in R. Gilman [Gil77]. Let
G and F be groups. A G-defining subgroup of F is a normal subgroup N of F such that F/N is
isomorphic to G. Let X(F, G) denote the set of all G–defining subgroups of F. The automorphism
group Aut(F) acts on normal subgroups of F while preserving their quotients, and hence on the set
X(F, G) of G-defining subgroups of F. Since inner automorphisms act trivially, the action descends
to an action of the outer automorphism group of F, Out(F), on X(F, G). If G is finite, and F is
finitely generated, one obtains a finite permutation representation of Out(F). Let Fn be the free
group of rank n. The following is Theorem 1 of [Gil77].
Theorem 3 (Gilman). For any n ≥ 3 and prime p ≥ 5, Out(Fn ) acts on the PSL(2, p)-defining
subgroups of Fn as the alternating or symmetric group, and both cases occur for infinitely many
primes.
From the proof, Gilman obtains the following strengthened form of residual finiteness for Out(Fn ).
Corollary 4 (Gilman). For any n ≥ 3, the group Out(Fn ) is residually finite alternating and residually finite symmetric via the quotients from Theorem 3.
This means that for any φ ∈ Out(Fn ) − {1}, there exist primes p so that the action of Out(Fn )
on X(Fn , PSL(2, p)) is alternating (and also primes p so that the action is symmetric), and φ acts
nontrivially.
We will also need the following well-known fact, obtained from the classical embedding of a
free group into PSL(2, Z) as a subgroup of finite index, (c.f. A. Peluso [Pel66]).
Lemma 5. For any n ≥ 2, any element α ∈ Fn − {1}, and all but finitely many primes p, there exists
a PSL(2, p)–defining subgroup of Fn not containing α .
Proof. Let Fn be a finite index, free subgroup of rank n in the free group F2 := ⟨a, b⟩. Identify Fn with its image in PSL(2, Z) under the injective homomorphism F2 → PSL(2, Z) given by
a ↦ \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, and b ↦ \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}.
Let α ∈ Fn − {1} be given and let A ∈ SL(2, Z) be a matrix representing α. Since α ≠ 1, we may assume that either A has a nonzero off-diagonal entry d ≠ 0, or else a diagonal entry d > 1. Then for any prime p not dividing d in the former case, or d ± 1 in the latter, we have that πp (α) is nontrivial in the quotient πp : PSL(2, Z) → PSL(2, p); that is, α ∉ ker πp .
Since Fn has finite index in F2 , there exists m ≥ 1 so that the matrices
\begin{pmatrix} 1 & m \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ m & 1 \end{pmatrix}
represent elements of Fn in PSL(2, Z). For any prime p not dividing m, the πp –images of these
elements generate PSL(2, p). Thus, for all but finitely many primes p, ker π p ∩ Fn is a PSL(2, p)–
defining subgroup not containing α .
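The reduction step in the proof can be illustrated by a direct computation; the following small script is only an illustration and not part of the argument, and the word α chosen in it is arbitrary.

    # Reduce the generators a, b modulo a prime p and check that a sample
    # word remains nontrivial in PSL(2, p).

    def mat_mul(A, B, p):
        return [[(A[i][0] * B[0][j] + A[i][1] * B[1][j]) % p for j in range(2)] for i in range(2)]

    def trivial_in_psl(M, p):
        # M is trivial in PSL(2, p) exactly when it equals +I or -I modulo p.
        return M == [[1, 0], [0, 1]] or M == [[(-1) % p, 0], [0, (-1) % p]]

    p = 7
    a = [[1, 2], [0, 1]]
    b = [[1, 0], [2, 1]]
    alpha = mat_mul(mat_mul(a, b, p), a, p)    # the word a*b*a, chosen arbitrarily
    print(alpha, trivial_in_psl(alpha, p))     # [[5, 5], [2, 5]] False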
1.2 Handlebody subgroups and maps to free groups
Let Σ = Σg be a closed surface of genus g ≥ 2 and let H = Hg be a handlebody of genus g. Given a
homeomorphism φ : Σ → ∂ H ⊂ H, the induced homomorphism is a surjection φ∗ : π1 (Σ) → π1 (H) ≅ Fg . As is well-known, every epimorphism π1 (Σ) → Fg arises in this way (see e.g. Lemma 2.2 in
[LR02]). The kernel, ∆φ = ker(φ∗ ) is an Fg –defining subgroup, and is the subgroup generated by the
simple closed curves on Σ whose φ –images bound disks in H. We write Hφ for the handlebody H,
equipped with the homeomorphism φ : Σ → ∂ H.
Let Mod(Hφ ) denote the subgroup of the mapping class group Mod(Σ) consisting of the isotopy
classes of homeomorphisms that extend over Hφ (via the identification φ : Σ → ∂ H). Equivalently,
Mod(Hφ ) consists of those mapping classes [ f ] such that f∗ (∆φ ) = ∆φ ; that is Mod(Hφ ) is the
stabilizer in Mod(Σ) of ∆φ . Any element [ f ] ∈ Mod(Hφ ) induces an automorphism we denote
Φ∗ ([ f ]) ∈ Out(Fg ), which defines a homomorphism Φ∗ : Mod(Hφ ) → Out(Fg ). The main result of
[Gri64] implies the next proposition.
Proposition 6. For any g ≥ 0, and homeomorphism φ : Σ → ∂ H, Φ∗ : Mod(Hφ ) → Out(Fg ) is
surjective.
The kernel of Φ∗ , the set of mapping classes in Mod(Hφ ) that act trivially on π1 (H) is also a
well-studied subgroup denoted Mod0 (Hφ ).
Recall that X(Fg , G) and X(π1 (Σ), G) are the sets of G–defining subgroups of Fg and π1 (Σ),
respectively. Define
X φ (π1 (Σ), G) := {φ∗−1 (N) | N ∈ X(Fg , G)} ⊂ X(π1 (Σ), G).
Alternatively, this is precisely the set of G–defining subgroups containing ∆φ :
X φ (π1 (Σ), G) = {N ∈ X(π1 (Σ), G) | ∆φ < N}.
Lemma 7. The handlebody subgroup is contained in the stabilizer
Mod(Hφ ) < stab X φ (π1 (Σ), G).
Moreover, if Out(Fg ) acts on X(Fg , G) as the full symmetric group, then Mod(Hφ ) (and hence
stab X φ (π1 (Σ), G)) acts on X φ (π1 (Σ), G) as the full symmetric group.
Proof. Let N ∈ X φ (π1 (Σ), G) and let [ f ] ∈ Mod(Hφ ), where f is a representative homeomorphism.
Since f∗ (∆φ ) = ∆φ , we have ∆φ < f∗ (N), and f∗ (N) ∈ X φ (π1 (Σ), G). Thus, f∗ preserves X φ (π1 (Σ), G),
as required.
The last statement follows immediately from Proposition 6 and the fact that the bijection from
the correspondence theorem X φ (π1 (Σ), G) → X(Fg , G) is Φ∗ –equivariant.
2 Mapping class groups involve any finite group: The proofs of
Theorem 1 and Corollary 2.
Here we give the proof of Theorem 1, following Dunfield–Thurston (see [DT06, pp. 505-506]).
Proof of Theorem 1. Fix g ≥ 3 and let Π be the infinitely many primes for which Out(Fg ) acts on the
PSL(2, p)-defining subgroups as the symmetric group, guaranteed by Theorem 3. As a consequence
of Corollary 4, the cardinality of X(Fg , PSL(2, p)) is unbounded over any infinite set of primes p,
and hence there exists a prime p ∈ Π where the number of PSL(2, p)-defining subgroups is R ≥ r.
By Lemma 7, stab X φ (π1 (Σ), PSL(2, p)) acts on X φ (π1 (Σ), PSL(2, p)) as the symmetric group,
defining a surjective homomorphism
stab X φ (π1 (Σ), PSL(2, p)) → Sym(X φ (π1 (Σ), PSL(2, p))) ≅ SR .
Since X(π1 (Σ), PSL(2, p)) is a finite set, stab X φ (π1 (Σ), PSL(2, p)) < Mod(Σ) has finite index, completing the proof.
Proof of Corollary 2. For g, m as in the statement and any r ∈ N, we show that there is a finite index
subgroup of Mod(Σg,m ) that surjects onto a symmetric group on at least r elements. Since any finite
group is isomorphic to a subgroup of some such symmetric group, this will prove the theorem.
First observe that for any m, g ≥ 0, the kernel of the action of Mod(Σg,m ) on the m punctures
of Σg,m is a finite index subgroup Mod′ (Σg,m ) < Mod(Σg,m ). Furthermore, if 0 ≤ m < m′ , there
is a surjective homomorphism Mod′ (Σg,m′ ) → Mod′ (Σg,m ) obtained by “filling in” m′ − m of the
punctures; see [FM12].
Now, because Mod′ (Σ0,4 ) ≅ F2 , and the symmetric group on r elements is 2–generated, it follows that Mod′ (Σ0,4 ) surjects onto Sr . From the previous paragraph, it follows that Mod′ (Σ0,m ) surjects onto Sr for all m ≥ 4. Similarly, Mod(Σ1,0 ) ≅ SL(2, Z), which has a finite index subgroup isomorphic to F2 , and so there is a finite index subgroup of Mod(Σ1,m ) that surjects onto Sr for all m ≥ 0. As shown
in [BH71], there is a surjective homomorphism Mod(Σ2,0 ) → Mod(Σ0,6 ), and consequently, we
may find surjective homomorphisms from finite index subgroups of Mod(Σ2,m ) to Sr for all m ≥
0. From this and the previous paragraph, it suffices to assume g ≥ 3 and m = 0. The required
surjective homomorphism to a symmetric group in this case follows from Theorem 1, completing
the proof.
3 Separating handlebody subgroups and residual finiteness
The finite quotients of Mod(Σ) coming from surjective homomorphisms π1 (Σg ) → Fg also allow us
to deduce the following result of [LM07]. Recall that a subgroup K < F is said to be separable in F
if for any α ∈ F − K, there exists a finite index subgroup G < F containing K and not containing α .
Theorem 8 (Leininger-McReynolds). For any g ≥ 2 and homeomorphism to the boundary of a
handlebody, φ : Σ → ∂ H, the groups Mod(Hφ ) and Mod0 (Hφ ) are separable in Mod(Σg ).
While the proof of separability of Mod(Hφ ) in Mod(Σg ) works for all g ≥ 2, separability of Mod0 (Hφ ) only follows from the discussion here when g ≥ 3.
Proof. Suppose Σ = Σg for g ≥ 2, and observe that for any p, any [h] ∈ stab X φ (π1 (Σ), PSL(2, p)),
and any α ∈ ∆φ , we have h∗ (α ) ∈ K for all K ∈ X φ (π1 (Σ), PSL(2, p)). By Lemma 7 this is true for
all [h] ∈ Mod(Hφ ).
Now let [ f ] ∈ Mod(Σ) − Mod(Hφ ), so that f∗ (∆φ ) 6< ∆φ . Let γ ∈ ∆φ be an element such that (the
conjugacy class of) f∗ (γ ) is not in ∆φ . (In fact, well-defining f∗ (γ ) requires a choice of basepoint
preserving representative homeomorphisms for the mapping class of f , which we make arbitrarily.)
Then φ∗ ( f∗ (γ )) ∈ Fg − {1}, and so by Lemma 5, we can find a prime p and a PSL(2, p)–defining
subgroup N ∈ X(Fg , PSL(2, p)) so that φ∗ ( f∗ (γ )) 6∈ N. Therefore,
f∗ (γ ) 6∈ φ∗−1 (N) ∈ X φ (π1 (Σ), PSL(2, p)),
and hence [ f ] 6∈ stab X φ (π1 (Σ), PSL(2, p)). Since stab X φ (π1 (Σ), PSL(2, p)) is a finite index subgroup containing Mod(Hφ ) (by Lemma 7) and not containing [ f ], and since [ f ] was arbitrary, it
follows that Mod(Hφ ) is separable.
Since Mod0 (Hφ ) < Mod(Hφ ) and since Mod(Hφ ) is separable, it suffices to consider an element
[ f ] ∈ Mod(Hφ ) \ Mod0 (Hφ ), and produce a finite index subgroup of Mod(Σ) containing Mod0 (Hφ )
and not containing [ f ]. For all p, Mod0 (Hφ ) is contained in the subgroup of stab X φ (π1 (Σ), PSL(2, p))
consisting of those mapping classes that act trivially on X φ (π1 (Σ), PSL(2, p)). Since [ f ] 6∈ Mod0 (Hφ ),
Φ∗ ([ f ]) 6= 1 in Out(Fg ). For g ≥ 3, Corollary 4 implies that for some p, Φ∗ ([ f ]) acts nontrivially
on X(Fg , PSL(2, p)). Therefore, [ f ] acts nontrivially on X φ (π1 (Σ), PSL(2, p)), and so the finite index
subgroup G < Mod(Σ) consisting of those mapping classes preserving the subset X φ (π1 (Σ), PSL(2, p))
and acting trivially on this does not contain [ f ], proving that Mod0 (Hφ ) is separable.
Mapping class groups were shown to be residually finite by Grossman as a consequence of the
fact that surface groups are conjugacy separable; see [Gro75].
Theorem 9 (Grossman). Mapping class groups are residually finite.
Residual finiteness of Mod(Σg ) follows immediately from separability of the handlebody subgroups Mod(Hφ ), and the following.
Lemma 10. The intersection of all handlebody subgroups Mod(Hφ ), over all homeomorphisms
φ : Σg → ∂ H is trivial if g ≥ 3, and isomorphic to Z/2Z if g = 2. The intersection of handlebody
subgroups Mod0 (Hφ ) is trivial for all g ≥ 2.
Proof. In [Mas86], Masur proved that the limit set of the handlebody subgroup in the Thurston
boundary of Teichmüller space is a nowhere dense subset. The intersection of all handlebody subgroups is a normal subgroup and so is either finite, or else has limit set equal to the entire Thurston
boundary. By Masur’s result, we must be in the former case, and hence the intersection of handlebody subgroups is finite. But Mod(Σg ) has no nontrivial finite, normal subgroups if g ≥ 3, while
for g = 2, the only nontrivial, finite normal subgroup is the order-two subgroup generated by the
hyperelliptic involution. This proves the first statement. The second follows from the first and the
fact that the hyperelliptic involution of Σ2 induces a nontrivial automorphism of F2 ≅ π1 (H), for any
homeomorphism φ : Σ2 → H.
Proof of Theorem 9 for Mod(Σg ), with g ≥ 2. An equivalent formulation of residual finiteness is that
the intersection of all finite index subgroups is trivial. Therefore Theorem 8 and Lemma 10 immediately imply the result.
References
[Bau62]
Gilbert Baumslag, On generalised free products, Math. Z. 78 (1962), 423–438. MR
0140562
[BH71]
Joan S. Birman and Hugh M. Hilden, On the mapping class groups of closed surfaces
as covering spaces, Advances in the theory of Riemann surfaces (Proc. Conf., Stony
Brook, N.Y., 1969), Princeton Univ. Press, Princeton, N.J., 1971, pp. 81–115. Ann. of
Math. Studies, No. 66. MR 0292082
[DT06]
Nathan M. Dunfield and William P. Thurston, Finite covers of random 3-manifolds,
Invent. Math. 166 (2006), no. 3, 457–521. MR 2257389
[FM12]
Benson Farb and Dan Margalit, A primer on mapping class groups, Princeton Mathematical Series, vol. 49, Princeton University Press, Princeton, NJ, 2012. MR 2850125
[Gil77]
Robert Gilman, Finite quotients of the automorphism group of a free group, Canad. J.
Math. 29 (1977), no. 3, 541–551. MR 0435226
[GLLM15] Fritz Grunewald, Michael Larsen, Alexander Lubotzky, and Justin Malestein, Arithmetic quotients of the mapping class group, Geom. Funct. Anal. 25 (2015), no. 5, 1493–
1542. MR 3426060
[Gri64]
H. B. Griffiths, Automorphisms of a 3-dimensional handlebody, Abh. Math. Sem. Univ.
Hamburg 26 (1963/1964), 191–210. MR 0159313
[Gro75]
Edna K. Grossman, On the residual finiteness of certain mapping class groups, J. London Math. Soc. (2) 9 (1974/75), 160–164. MR 0405423
[LM07]
Christopher J. Leininger and D. B. McReynolds, Separable subgroups of mapping class
groups, Topology Appl. 154 (2007), no. 1, 1–10. MR 2271769
[LR02]
Christopher J. Leininger and Alan W. Reid, The co-rank conjecture for 3-manifold
groups, Algebr. Geom. Topol. 2 (2002), 37–50 (electronic). MR 1885215
[Mas86]
Howard Masur, Measured foliations and handlebodies, Ergodic Theory Dynam. Systems 6 (1986), no. 1, 99–116. MR 837978
[MKS04]
Wilhelm Magnus, Abraham Karrass, and Donald Solitar, Combinatorial group theory,
second ed., Dover Publications, Inc., Mineola, NY, 2004, Presentations of groups in
terms of generators and relations. MR 2109550
[MR12]
Gregor Masbaum and Alan W. Reid, All finite groups are involved in the mapping class
group, Geom. Topol. 16 (2012), no. 3, 1393–1411. MR 2967055
[Pel66]
Ada Peluso, A residual property of free groups, Comm. Pure Appl. Math. 19 (1966),
435–437. MR 0199245
Khalid Bou-Rabee
Department of Mathematics, CCNY CUNY
E-mail: [email protected]
Christopher Leininger
Department of Mathematics, University of Illinois at Urbana-Champaign
E-mail: [email protected]
arXiv:1506.02361v1 [] 8 Jun 2015
Microscopic approach of a time elapsed neural model
Julien Chevallier
Laboratoire J. A. Dieudonné, UMR CNRS 6621, Université de Nice Sophia-Antipolis, Parc
Valrose
06108 Nice Cedex 2, France
[email protected]
Marı́a José Cáceres
Departamento de Matemática Aplicada , Universidad de Granada, Campus de Fuentenueva
E-18071 Granada, Spain
[email protected]
Marie Doumic
UPMC University of Paris 6, JL Lions Lab., 4 place Jussieu
75005 Paris, France
Patricia Reynaud-Bouret
Laboratoire J. A. Dieudonné, UMR CNRS 6621, Université de Nice Sophia-Antipolis, Parc
Valrose
06108 Nice Cedex 2, France
[email protected]
Spike trains are the main components of the information processing in the brain. To model spike trains, several point processes have been investigated in the literature, and more macroscopic approaches have also been studied, using partial differential equation models. The main aim of the present article is to build a bridge between several point
processes models (Poisson, Wold, Hawkes) that have been proved to statistically fit
real spike trains data and age-structured partial differential equations as introduced by
Pakdaman, Perthame and Salort.
Keywords: Hawkes process; Wold process; renewal equation; neural network
AMS Subject Classification:35F15, 35B10, 92B20, 60G57, 60K15
Introduction
In Neuroscience, the action potentials (spikes) are the main components of the realtime information processing in the brain. Indeed, thanks to the synaptic integration,
the membrane voltage of a neuron depends on the action potentials emitted by some
others, whereas if this membrane potential is sufficiently high, there is production
of action potentials.
To access those phenomena, schematically, one can proceed in two ways: extracellularly record in vivo several neurons at the same time, and have access to
simultaneous spike trains (only the list of events corresponding to action potentials) or intracellularly record the whole membrane voltage of only one neuron at a
time, being blind to the nearby neurons.
Many people focus on spike trains. Those data are fundamentally random and
can be modelled easily by time point processes, i.e. random countable sets of points
on R+ . Several point processes models have been investigated in the literature, each
of them reproducing different features of the neuronal reality. The easiest model is
the homogeneous Poisson process, which can only reproduce a constant firing rate
for the neuron, but which, in particular, fails to reproduce refractory periodsa . It
is commonly admitted that this model is too poor to be realistic. Indeed, in such a
model, two points or spikes can be arbitrary close as soon as their overall frequency
is respected in average. Another more realistic model is the renewal process 37 ,
where the occurrence of a point or spike depends on the previous occurrence. More
precisely, the distribution of delays between spikes (also called inter-spike intervals,
ISI) is given and a distribution, which provides small weights to small delays, is
able to mimic refractory periods. A deeper statistical analysis has shown that Wold
processes show good results with respect to goodness-of-fit tests on real data
sets 38 . Wold processes are point processes for which the next occurrence of a spike
depends on the previous occurrence but also on the previous ISI. From another
point of view, the fact that spike trains are usually non stationary can be easily
modelled by inhomogeneous Poisson processes 43 . All those models do not reflect
one of the main features of spike trains, which is the synaptic integration and there
have been various attempts to capture such a phenomenon. One of the main models is the
Hawkes model, which has been introduced in 13 and which has been recently shown
to fit several stationary data 40 . Several studies have been done in similar directions
(see for instance 5 ). More recently a vast interest has been shown to generalized
linear models 36 , with which one can infer functional connectivity and which are
just an exponential variant of Hawkes models.
There have also been several models of the full membrane voltage such as
Hodgkin-Huxley models. It is possible to fit some of those probabilistic stochastic differential equations (SDE) on real voltage data 22 and to use them to estimate
meaningful physiological parameters 18 . However, the lack of simultaneous data
(voltages of different neurons at the same time) prevents these models from being used as
statistical models that can be fitted on network data, to estimate network parameters. A simple SDE model taking synaptic integration into account is the well-known
Integrate-and-Fire (IF) model. Several variations have been proposed to describe
several features of real neural networks such as oscillations 7,8 . In particular, there
exists hybrid IF models including inhomogeneous voltage driven Poisson process 21
that are able to mimic real membrane potential data. However up to our knowledge
a Biologically,
a neuron cannot produce two spikes too closely in time.
and unlike point process models, no statistical test has been applied to show
that any of the previous variations of the IF model fit real network data.
Both, SDE and point processes, approaches are microscopic descriptions, where
random noise explains the intrinsic variability. Many authors have argued that there
must be some more macroscopic approach describing huge neural networks as a
whole, using PDE formalism 15,42 . Some authors have already been able to perform
link between PDE approaches as the macroscopic system and SDE approach (in
particular IF models) as the microscopic model 39,30,26 . Another macroscopic point
of view on spike trains is proposed by Pakdaman, Perthame and Salort in a series
of articles 31,32,33 . It uses a nonlinear age-structured equation to describe the spikes
density. Adopting a population view, they aim at studying relaxation to equilibrium or spontaneous periodic oscillations. Their model is justified by a qualitative,
heuristic approach. As many other models, their model shows several qualitative
features such as oscillations that make it quite plausible for real networks, but once
again there is no statistical proof of it, up to our knowledge.
In this context, the main purpose of the present article is to build a bridge between several point processes models that have been proved to statistically fit real
spike trains data and age structured PDE of the type of Pakdaman, Perthame and
Salort. The point processes are the microscopic models, the PDE being their mesomacroscopic counterpart. In this sense, it extends PDE approaches for IF models to
models that statistically fit true spike trains data. In the first section, we introduce
Pakdaman, Perthame and Salort PDE (PPS) via its heuristic informal and microscopic description, which is based on IF models. Then, in Section 2, we develop
the different point process models, quite informally, to draw the main heuristic
correspondences between both approaches. In particular, we introduce the conditional intensity of a point process and a fundamental construction, called Ogata’s
thinning 29 , which allows a microscopic understanding of the dynamics of a point
process. Thanks to Ogata’s thinning, in Section 3, we have been able to rigorously
derive a microscopic random weak version of (PPS) and to propose its expectation deterministic counterpart. An independent and identically distributed (i.i.d)
population version is also available. Several examples of applications are discussed
in Section 4. To facilitate reading, technical results and proofs are included in two
appendices. The present work is clearly just a first step to link point processes and PDE:
there are much more open questions than answered ones and this is discussed in the
final conclusion. However, we think that this can be fundamental to acquire a deeper
understanding of spike train models, their advantages as well as their limitations.
1. Synaptic integration and (PPS) equation
Based on the intuition that every neuron in the network should behave in the same
way, Pakdaman, Perthame and Salort proposed in 31 a deterministic PDE denoted
(PPS) in the sequel. The origin of this PDE is the classical (IF) model. In this section
we describe the link between the (IF) microscopic model and the mesoscopic (PPS)
model, the main aim being to show thereafter the relation between (PPS) model
and other natural microscopic models for spike trains: point processes.
1.1. Integrate-and-fire
Integrate-and-fire models describe the time evolution of the membrane potential,
V (t), by means of ordinary differential equations as follows
Cm dV (t)/dt = −gL (V (t) − VL ) + I(t),    (1.1)
where Cm is the capacitance of the membrane, gL is the leak conductance and VL
is the leak reversal potential. If V (t) exceeds a certain threshold θ, the neuron fires
/ emits an action potential (spike) and V (t) is reset to VL . The synaptic current
I(t) takes into account the fact that other presynaptic neurons fire and excite the
neuron of interest, whose potential is given by V (t).
As stated in 31 , the origin of (PPS) equation comes from 35 , where the explicit
solution of a classical IF model as (1.1) has been discussed. To be more precise the
membrane voltage of one neuron at time t is described by:
V (t) = Vr + (VL − Vr ) e^{−(t−T )/τm} + ∫_T^t h(t − u) Ninput (du),    (1.2)
where Vr is the resting potential satisfying VL < Vr < θ, T is the last spike emitted
by the considered neuron, τm is the time constant of the system (normally τm = Cm /gL ), h is the excitatory post synaptic potential (EPSP) and Ninput is the sum
of Dirac masses at each spike of the presynaptic neurons. Since after firing, V (t)
is reset to VL < Vr , there is a refractory period when the neuron is less excitable
than at rest. The time constant τm indicates whether the next spike can occur more or less rapidly. The other main quantity, ∫_T^t h(t − u) Ninput (du), is the synaptic
integration term.
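For intuition, the explicit solution (1.2) can be evaluated numerically between two spikes of the recorded neuron from a list of presynaptic spike times. The following sketch is only illustrative: the exponential shape of h and all parameter values are arbitrary choices, not taken from the references.

    import numpy as np

    def membrane_potential(t_grid, T, presyn_spikes, V_r=-65.0, V_L=-70.0, tau_m=20.0,
                           h=lambda s: 0.5 * np.exp(-s / 5.0)):
        # Eq. (1.2): V(t) = Vr + (VL - Vr) exp(-(t - T)/tau_m) + sum over presynaptic
        # spike times T <= u <= t of h(t - u); the sum realizes the integral against Ninput(du).
        V = V_r + (V_L - V_r) * np.exp(-(t_grid - T) / tau_m)
        for u in presyn_spikes:
            if u < T:
                continue
            elapsed = t_grid - u
            V = V + np.where(elapsed >= 0.0, h(np.maximum(elapsed, 0.0)), 0.0)
        return V

    t = np.linspace(0.0, 50.0, 501)   # time (arbitrary units) after the last spike T = 0
    V = membrane_potential(t, T=0.0, presyn_spikes=[5.0, 12.0, 12.5, 30.0])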
In 35 , they consider a whole random network of such IF neurons and look at the
behavior of this model, where the only randomness is in the network. In many other
studies 7,8,9,11,26,42,30 IF models as (1.1) are considered to finally obtain other systems of partial differential equations (different to (PPS)) describing neural networks
behavior. In these studies, each presynaptic neuron is assumed to fire as an independent Poisson process and via a diffusion approximation, the synaptic current is then
approximated by a continuous in time stochastic process of Ornstein-Uhlenbeck.
1.2. The (PPS) equation
The deterministic PDE proposed by Pakdaman, Perthame and Salort, whose origin
is also the microscopic
IF model (1.2), is the following:
∂n(s, t)/∂t + ∂n(s, t)/∂s + p(s, X(t)) n(s, t) = 0,
m(t) := n(0, t) = ∫_0^{+∞} p(s, X(t)) n(s, t) ds.        (PPS)
In this equation, n(s, t) represents a probability density of neurons at time t having
discharged at time t − s. Therefore, s represents the time elapsed since the last
discharge. The fact that the equation is an elapsed time structured equation is
natural, because the IF model (1.2) clearly only depends on the time since the last
spike. More informally, the variable s represents the ”age” of the neuron.
The first equation of the system (PPS) represents a pure transport process and
means that as time goes by, neurons of age s and past given by X(t) are either
aging linearly or reset to age 0 with rate p (s, X (t)).
The second equation of (PPS) describes the fact that when neurons spike, the
age (the elapsed time) returns to 0. Therefore, n(0, t) depicts the density of neurons
undergoing a discharge at time t and it is denoted by m(t). As a consequence of
this boundary condition, for n at s = 0, the following conservation law is obtained:
∫_0^{∞} n(s, t) ds = ∫_0^{∞} n(s, 0) ds.
This means that if n (·, 0) is a probabilistic density then n (·, t) can be interpreted
as a density at each time t. Denoting by dt the Lebesgue measure and since m(t) is
the density of firing neurons at time t in (PPS), m(t)dt can also be interpreted as
the limit of Ninput (dt) in (1.2) when the population of neurons becomes continuous.
The system (PPS) is nonlinear since the rate p (s, X(t)) depends on n(0, t) by
means of the quantity X(t):
X(t) = ∫_0^t h(u) m(t − u) du = ∫_0^t h(u) n(0, t − u) du.    (1.3)
The quantity X(t) represents the interactions between neurons. It ”takes into account the averaged propagation time for the ionic pulse in this network” 31 . More
precisely with respect to the IF models (1.2), this is the synaptic integration term,
once the population becomes continuous. The only difference is that in (1.2) the
memory is cancelled once the last spike has occurred and this is not the case here.
However informally, both quantities have the same interpretation. Note nevertheless, that in 31 , the function h can be much more general than the h of the IF models
which clearly corresponds to EPSP. From now on and in the rest of the paper, h is
just a general non negative function without forcing the connection with EPSP.
The larger p (s, X(t)) the more likely neurons of age s and past X(t) fire. Most of
the time (but it is not a requisite), p is assumed to be less than 1 and is interpreted
as the probability that neurons of age s fire. However, as shown in Section 3 and
as interpreted in many population structured equation 14,19,34 , p(s, X(t)) is closer
to a hazard rate, i.e. a positive quantity such that p (s, X(t)) dt is informally the
probability to fire given that the neuron has not fired yet. In particular, it need not be bounded by 1 and does not need to integrate to 1. A toy example is obtained
if p (s, X(t)) = λ > 0, where a steady state solution is n(s, t) = λe−λs 1s≥0 : this is
the density of an exponential variable with parameter λ.
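This toy example can be checked numerically: a first-order upwind discretization of (PPS) with constant rate p ≡ λ drives an arbitrary initial probability density towards the exponential profile λe^{−λs}. The following rough sketch makes several arbitrary choices (grid sizes, initial condition, boundary treatment) and is not part of the analysis of the paper.

    import numpy as np

    def evolve_pps_constant_rate(lam=1.0, s_max=15.0, ds=0.01, dt=0.005, t_end=30.0):
        # First-order upwind scheme for  dn/dt + dn/ds + lam * n = 0  with the
        # boundary condition n(0, t) = m(t) = lam * integral of n(s, t) ds.
        s = np.arange(0.0, s_max, ds)
        n = np.where(s < 1.0, 1.0, 0.0)
        n = n / (n.sum() * ds)                  # start from an arbitrary probability density
        for _ in range(int(t_end / dt)):
            m = lam * n.sum() * ds              # firing rate m(t), here p = lam
            new_n = n - dt / ds * np.diff(n, prepend=n[0]) - dt * lam * n
            new_n[0] = m                        # re-inject the firing mass at age 0
            n = new_n
        return s, n

    s, n = evolve_pps_constant_rate()
    # n is now close to the steady state lam * exp(-lam * s) (here lam = 1).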
However, based on the interpretation of p (s, X(t)) as a probability bounded
by 1, one of the main model that Pakdaman, Perthame and Salort consider is
p (s, X(t)) = 1s≥σ(X(t)) . This again can be easily interpreted by looking at (1.2).
Indeed, since in the IF models the spike happens when the threshold θ is reached, one can consider that p(s, X(t)) should be equal to 1 whenever
V(t) = V_r + (V_L − V_r) e^{−(t−T)/τ_m} + X(t) ≥ θ,
and 0 otherwise. Since V_L − V_r < 0, p(s, X(t)) = 1 is indeed equivalent to s = t − T being larger than some decreasing function of X(t). This has the double advantage of giving a formula for the refractory period (σ(X(t))) and of modelling excitatory systems: the refractory period decreases when the whole firing rate increases via X(t), and this makes the neurons fire even more. It is for this particular case that Pakdaman, Perthame and Salort have shown the existence of oscillatory behavior 32.
Another important parameter in the (PPS) model, introduced in 31, is J, which can be seen with our formalism as ∫ h and which describes the network connectivity or the strength of the interaction. In 31 it has been proved that, for highly or weakly connected networks, the (PPS) model exhibits relaxation to steady state, and periodic solutions have also been numerically observed for moderately connected networks. The authors in 32 have quantified the regime where relaxation to a stationary solution occurs in terms of J and described periodic solutions for intermediate values of J.
Recently, in 33, the (PPS) model has been extended by including a fragmentation term, which describes the adaptation and fatigue of the neurons. In this sense, this new term incorporates the past activity of the neurons. For this new model, in the linear case there is exponential convergence to the steady states, while in the weakly nonlinear case a total desynchronization in the network is proved. Moreover, for greater nonlinearities, synchronization can again be numerically observed.
2. Point processes and conditional intensities as models for spike
trains
We first start by quickly reviewing the main basic concepts and notations of point
processes, in particular, conditional intensities and Ogata’s thinning 29 . We refer
the interested reader to 3 for exhaustiveness and to 6 for a much more condensed
version, with the main useful notions.
2.1. Counting processes and conditional intensities
We focus on locally finite point processes on R, equipped with the borelians B(R).
Definition 2.1 (Locally finite point process). A locally finite point process N
on R is a random set of points such that it has almost surely (a.s.) a finite number
of points in finite intervals. Therefore, associated to N there is an ordered sequence
of extended real valued random times (Tz )z∈Z : · · · ≤ T−1 ≤ T0 ≤ 0 < T1 ≤ · · · .
For a measurable set A, NA denotes the number of points of N in A. This is a
random variable with values in N ∪ {∞}.
Definition 2.2 (Counting process associated to a point process). The process on R+ defined by t 7→ Nt := N(0,t] is called the counting process associated to
the point process N .
The natural and the predictable filtrations are fundamental for the present work.
Definition 2.3 (Natural filtration of a point process). The natural filtration of N is the family (F^N_t)_{t∈R} of σ-algebras defined by F^N_t = σ(N ∩ (−∞, t]).
Definition 2.4 (Predictable filtration of a point process). The predictable filtration of N is the family of σ-algebras (F^N_{t−})_{t∈R} defined by F^N_{t−} = σ(N ∩ (−∞, t)).
The intuition behind this concept is that F^N_t contains all the information given by the point process at time t. In particular, it contains the information whether t is a point of the process or not, while F^N_{t−} only contains the information given by the point process strictly before t. Therefore, it does not contain (in general) the information whether t is a point or not. In this sense, F^N_{t−} represents (the information contained in) the past.
Under some rather classical conditions 3, which are always assumed to be satisfied here, one can associate to (N_t)_{t≥0} a stochastic intensity λ(t, F^N_{t−}), which is a non-negative random quantity. The notation λ(t, F^N_{t−}) for the intensity refers to the predictable version of the intensity associated to the natural filtration, and (N_t − ∫_0^t λ(u, F^N_{u−}) du)_{t≥0} forms a local martingale 3. Informally, λ(t, F^N_{t−}) dt represents the probability to have a new point in the interval [t, t + dt) given the past. Note that λ(t, F^N_{t−}) should not be understood as a function, in the same way as a density is for random variables. It is a "recipe" explaining how the probability to find a new point at time t depends on the past configuration: since the past configuration depends on its own past, this is closer to a recursive formula. In this respect, the intensity should obviously depend on N ∩ (−∞, t) and not on N ∩ (−∞, t] to predict the occurrence at time t, since we cannot know whether t is already a point or not.
The distribution of the point process N on R is completely characterized by the knowledge of the intensity λ(t, F^N_{t−}) on R_+ and the distribution of N_− = N ∩ R_−, which is denoted by P_0 in the sequel. The information about P_0 is necessary since each point of N may depend on the occurrence of all the previous points: if for all t > 0 one knows the "recipe" λ(t, F^N_{t−}) that gives the probability of a new point at time t given the past configuration, one still needs to know the distribution of N_− to obtain the whole process.
Two main assumptions are used depending on the type of results we seek:
(A^{L¹,a.s.}_{λ,loc})  for any T ≥ 0, ∫_0^T λ(t, F^N_{t−}) dt is finite a.s.;
(A^{L¹,exp}_{λ,loc})  for any T ≥ 0, E[∫_0^T λ(t, F^N_{t−}) dt] is finite.
Clearly A^{L¹,exp}_{λ,loc} implies A^{L¹,a.s.}_{λ,loc}. Note that A^{L¹,a.s.}_{λ,loc} implies non-explosion in finite time for the counting processes (N_t).
Definition 2.5 (Point measure associated to a point process). The point measure associated to N is denoted by N(dt) and defined by N(dt) = ∑_{i∈Z} δ_{T_i}(dt), where δ_u is the Dirac mass at u.
By analogy with (PPS), and since points of point processes correspond to spikes
(or times of discharge) for the considered neuron in spike train analysis, N(dt) is the microscopic equivalent of the distribution of discharging neurons m(t) dt. Following this analogy, and since T_{N_t} is the last point less than or equal to t for every t ≥ 0, the age S_t at time t is defined by S_t = t − T_{N_t}. In particular, if t is a point of N, then S_t = 0. Note that S_t is F^N_t-measurable for every t ≥ 0 and therefore S_0 = −T_0 is F^N_0-measurable. To define an age at time t = 0, one assumes that
(A_{T_0}) There exists a first point before 0 for the process N_−, i.e. −∞ < T_0.
As we have remarked before, the conditional intensity should depend on N ∩ (−∞, t). Therefore, it cannot be a function of S_t, since S_t informs us whether t is a point or not. That is the main reason for considering the F^N_{t−}-measurable variable
S_{t−} = t − T_{N_{t−}},   (2.1)
where T_{N_{t−}} is the last point strictly before t (see Figure 1). Note also that knowing (S_{t−})_{t≥0} or (N_t)_{t≥0} is completely equivalent given F^N_0.
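To make these definitions concrete, here is a minimal Python sketch (ours, not part of the original construction) that computes S_t and S_{t−} on a grid of times from a finite list of spike times; the names `spikes` and `ages` are our own choices.

```python
import numpy as np

def ages(times, spikes):
    """Compute S_t = t - T_{N_t} and S_{t-} = t - T_{N_{t-}} on a grid of times.

    `spikes` must contain at least one point <= 0 (assumption (A_{T0})),
    so that both ages are well defined at every time of the grid."""
    spikes = np.sort(np.asarray(spikes, dtype=float))
    s_t, s_tminus = [], []
    for t in times:
        last_leq = spikes[spikes <= t].max()   # T_{N_t}: last point <= t
        last_lt = spikes[spikes < t].max()     # T_{N_{t-}}: last point < t
        s_t.append(t - last_leq)
        s_tminus.append(t - last_lt)
    return np.array(s_t), np.array(s_tminus)

# Example: T_0 = -0.3, then spikes at 0.5 and 1.2.
# At t = 0.5 the age S_t is 0 while S_{t-} is 0.8, illustrating the difference.
S, S_minus = ages(times=[0.0, 0.5, 1.0], spikes=[-0.3, 0.5, 1.2])
print(S, S_minus)
```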
The last and most crucial equivalence between (PPS) and the present point process set-up consists in noting that the quantities p(s, X(t)) and λ(t, F^N_{t−}) have informally the same meaning: they both represent a firing rate, i.e. both give the rate of discharge as a function of the past. This dependence is made more explicit in p(s, X(t)) than in λ(t, F^N_{t−}).
2.2. Examples
Let us review the basic point processes models of spike trains and see what kind of
analogy is likely to exist between both models ((PPS) equation and point processes).
These informal analogies are possibly exact mathematical results (see Section 4).
Homogeneous Poisson process This is the simplest case, where λ(t, F^N_{t−}) = λ, with λ a fixed positive constant representing the firing rate. There is no dependence on time t (it is homogeneous) and no dependence with respect to the past. This case should be equivalent to p(s, X(t)) = λ in (PPS). This can be made even more explicit. Indeed, in the case where the Poisson process exists on the whole real line (stationary case), it is easy to see that
P(S_{t−} > s) = P(N_{[t−s,t)} = 0) = exp(−λs),
meaning that the age S_{t−} obeys an exponential distribution with parameter λ, i.e. the steady state of the toy example developed for (PPS) when p(s, X(t)) = λ.
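As a quick numerical illustration (our own sketch, not taken from the paper), one can simulate a homogeneous Poisson process on a long window and check that the empirical law of the age at the final time is close to an exponential of parameter λ; the parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t_obs, n_rep = 2.0, 50.0, 10000

ages = []
for _ in range(n_rep):
    # Poisson process on [0, t_obs]: Poisson number of points, uniformly placed
    n_pts = rng.poisson(lam * t_obs)
    pts = rng.uniform(0.0, t_obs, size=n_pts)
    last = pts.max() if n_pts > 0 else 0.0  # a point at 0 plays the role of (A_{T0})
    ages.append(t_obs - last)
ages = np.array(ages)

# The age should be (approximately) Exp(lam): compare the mean and one survival value
print(ages.mean(), 1.0 / lam)
print((ages > 1.0).mean(), np.exp(-lam * 1.0))
```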
Inhomogeneous Poisson process To model non-stationarity, one can use λ(t, F^N_{t−}) = λ(t), which only depends on time. This case should be equivalent to the replacement of p(s, X(t)) in (PPS) by λ(t).
Renewal process This model is very useful to take refractory period into account.
It corresponds to the case where the ISIs (delays between spikes) are independent
and identically distributed (i.i.d.) with a certain given density ν on R_+.
The associated hazard rate is
f(s) = ν(s) / ∫_s^{+∞} ν(x) dx,
when ∫_s^{+∞} ν(x) dx > 0. Roughly speaking, f(s) ds is the probability that a neuron spikes with age s given that its age is larger than s. In this case, considering the set of spikes as the point process N, it is easy to show (see Appendix B.1) that its corresponding intensity is λ(t, F^N_{t−}) = f(S_{t−}), which only depends on the age. One can also show quite easily that the process (S_{t−})_{t>0}, which is equal to (S_t)_{t>0} almost everywhere (a.e.), is a Markovian process in time. This renewal setting should be equivalent in the (PPS) framework to p(s, X(t)) = f(s).
Note that many people consider IF models (1.2) with Poissonian inputs, with or without additive white noise. In both cases, the system erases all memory after each spike and therefore the ISIs are i.i.d. Hence, as long as we are only interested in the spike trains and their point process models, those IF models are just a particular case of renewal processes 8,10,17,35.
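A small sketch (ours) of the hazard-rate formula: for a given ISI density ν, compute f(s) = ν(s)/∫_s^∞ ν(x) dx. The Gamma density used below is only an illustrative choice producing a refractory-like shape, not a model from the paper.

```python
import numpy as np
from scipy.stats import gamma

# Illustrative ISI density: Gamma(shape=3, scale=0.1). Very short ISIs are unlikely,
# so the hazard f(s) is low for small ages, mimicking a refractory period.
dist = gamma(a=3.0, scale=0.1)

def hazard(s):
    """f(s) = nu(s) / int_s^infty nu(x) dx, defined where the survival is positive."""
    surv = dist.sf(s)  # survival function: int_s^infty nu(x) dx
    return np.where(surv > 0, dist.pdf(s) / np.maximum(surv, 1e-300), np.inf)

s_grid = np.linspace(0.0, 1.0, 6)
print(hazard(s_grid))
```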
Wold process and more general structures Let A¹_t be the delay (ISI) between the last point and the occurrence just before (see also Figure 1), A¹_t = T_{N_{t−}} − T_{N_{t−}−1}. A Wold process 24,16 is then characterized by λ(t, F^N_{t−}) = f(S_{t−}, A¹_t). This model has been matched to several real data sets thanks to goodness-of-fit tests 38 and is therefore one of our main examples, together with the Hawkes process case discussed next. One can show in this case that the successive ISIs form a Markov chain of order 1 and that the continuous-time process (S_{t−}, A¹_t) is also Markovian.
This case should be equivalent to the replacement of p(s, X(t)) in (PPS) by f(s, a_1), with a_1 denoting the delay between the two previous spikes. Naturally, in this case one should expect a PDE of higher dimension with third variable a_1.
More generally, one could define
A^k_t = T_{N_{t−}−(k−1)} − T_{N_{t−}−k},   (2.2)
and point processes with intensity λ(t, F^N_{t−}) = f(S_{t−}, A¹_t, ..., A^k_t). Those processes satisfy more generally that their ISIs form a Markov chain of order k and that the continuous-time process (S_{t−}, A¹_t, ..., A^k_t) is also Markovian (see Appendix B.2).
Remark 2.1. The dynamics of the successive ages is pretty simple. On the one hand, the dynamics of the vector of the successive ages (S_{t−}, A¹_t, ..., A^k_t)_{t>0} is deterministic between two jumping times: the first coordinate increases with rate 1. On the other hand, the dynamics at any jumping time T is given by the following shift (see the short sketch after this remark):
the age process goes to 0, i.e. S_T = 0,
the first delay becomes the age, i.e. A¹_{T+} = S_{T−},   (2.3)
the other delays are shifted, i.e. A^i_{T+} = A^{i−1}_T for all i ≤ k.
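The shift (2.3) can be written as a tiny update function; this is our own sketch, with illustrative numbers.

```python
def jump_update(age, delays):
    """Update of (S, A^1, ..., A^k) at a jump time T, following (2.3):
    the age is reset to 0, the old age becomes the first delay, and the
    other delays are shifted by one index (the oldest one is dropped)."""
    k = len(delays)
    return 0.0, [age] + delays[:k - 1]

# Example with k = 3: age 0.4 and delays (0.7, 1.1, 0.2) just before the jump
print(jump_update(0.4, [0.7, 1.1, 0.2]))  # -> (0.0, [0.4, 0.7, 1.1])
```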
Hawkes processes The most classical setting is the linear (univariate) Hawkes process, which corresponds to
λ(t, F^N_{t−}) = µ + ∫_{−∞}^{t−} h(t − u) N(du),
where the positive parameter µ is called the spontaneous rate and the non-negative function h, with support in R_+, is called the interaction function; it is generally assumed to satisfy ∫_{R_+} h < 1 to guarantee the existence of a stationary version 16. This model has also been matched to several real neuronal data sets thanks to goodness-of-fit tests 40. Since it can mimic synaptic integration, as explained below, it represents the main example of the present work.
In the case where T_0 tends to −∞, this is equivalent to saying that there is no point on the negative half-line and, in this case, one can rewrite
λ(t, F^N_{t−}) = µ + ∫_0^{t−} h(t − u) N(du).
By the analogy between N(dt) and m(t) dt, one sees that ∫_0^{t−} h(t − u) N(du) is indeed the analogue of X(t), the synaptic integration in (1.3). So one could expect that the PDE analogue is given by p(s, X(t)) = µ + X(t). In Section 4, we show that this does not hold stricto sensu, whereas the other analogues work well.
Note that this model also shares some links with IF models. Indeed, the formula
for the intensity is close to the formula for the voltage (1.2), with the same flavor for
the synaptic integration term. The main difference comes from the fact that when
the voltage reaches a certain threshold, it fires deterministically for the IF model,
whereas the higher the intensity, the more likely is the spike for the Hawkes model,
but without certainty. In this sense Hawkes models seem closer to (PPS) since as we
discussed before, the term p(s, X(t)) is closer to a hazard rate and never imposes
deterministically the presence of a spike.
To model inhibition (see 41 for instance), one can use functions h that may take negative values; in this case λ(t, F^N_{t−}) = (µ + ∫_{−∞}^{t−} h(t − u) N(du))_+, which should correspond to p(s, X(t)) = (µ + X(t))_+. Another possibility is λ(t, F^N_{t−}) = exp(µ + ∫_{−∞}^{t−} h(t − u) N(du)), which is inspired by the generalized linear model as used by 36 and which should correspond to p(s, X(t)) = exp(µ + X(t)).
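To fix ideas, here is a small sketch (ours) evaluating these three intensity forms on a toy spike history; the kernels and parameter values are arbitrary illustrations, not choices made in the paper.

```python
import numpy as np

def synaptic_integration(t, spikes, h):
    """int_{-infty}^{t-} h(t-u) N(du): sum of h over the spikes strictly before t."""
    spikes = np.asarray(spikes, dtype=float)
    return float(sum(h(t - u) for u in spikes[spikes < t]))

mu = 0.5
h_exc = lambda u: np.exp(-u)          # excitatory kernel (non negative)
h_inh = lambda u: -0.8 * np.exp(-u)   # purely inhibitory kernel (negative values)

t, spikes = 2.0, [0.1, 0.8, 1.5]
lin = mu + synaptic_integration(t, spikes, h_exc)             # linear Hawkes intensity
rect = max(mu + synaptic_integration(t, spikes, h_inh), 0.0)  # (mu + X)_+ with inhibition
glm = np.exp(mu + synaptic_integration(t, spikes, h_inh))     # exponential / GLM-like form
print(lin, rect, glm)
```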
Note finally that Hawkes models in Neuroscience (and their variants) are usually multivariate, meaning that they model the interaction between spike trains thanks to interaction functions between point processes, each process representing a neuron. To keep the present analogy as simple as possible, we do not deal with those multivariate models in the present article. Some open questions in this direction are presented in the conclusion.
2.3. Ogata’s thinning algorithm
To turn the analogy between p(s, X(t)) and λ(t, F^N_{t−}) into a rigorous result at the PDE level, we need to understand the intrinsic dynamics of the point process. This dynamics is often not explicitly described in the literature (see e.g. the reference book by Brémaud 3) because martingale theory provides a nice mathematical setting in which one can perform all the computations. However, when one wants to simulate point processes based on the knowledge of their intensity, there is indeed a dynamics that is required to obtain a practical algorithm. This method was first described by Lewis in the Poisson setting 25 and generalized by Ogata in 29. While there is a sketch of proof in 29, we have been unable to find any complete mathematical proof of this construction in the literature, and we propose a full and mathematically complete version of this proof, with minimal assumptions, in Appendix B.4. Let us just informally describe here how this construction works.
The principle consists in assuming that an external homogeneous Poisson process Π of intensity 1 in R²_+ is given, with associated point measure Π(dt, dx) = ∑_{(T,V)∈Π} δ_{(T,V)}(dt, dx). This means in particular that
E[Π(dt, dx)] = dt dx.   (2.4)
Once a realisation of N_− is fixed, which implies that F^N_0 is known and which can be seen as an initial condition for the dynamics, the construction of the process N on R_+ only depends on Π.
More precisely, if we know the intensity λ(t, F^N_{t−}) in the sense of the "recipe" that explicitly depends on t and N ∩ (−∞, t), then, once a realisation of Π and of N_− is fixed, the dynamics to build a point process N with intensity λ(t, F^N_{t−}) for t ∈ R_+ is purely deterministic. It consists (see also Figure 1) in successively projecting onto the abscissa axis the points that are below the graph of λ(t, F^N_{t−}). Note that a point projection may change the shape of λ(t, F^N_{t−}) just after the projection. Therefore the graph of λ(t, F^N_{t−}) evolves thanks to the realization of Π. For a more mathematical description, see Theorem B.11 in Appendix B.4. Note in particular that the construction ends on any finite interval [0, T] a.s. if A^{L¹,a.s.}_{λ,loc} holds.
Then the point process N, the result of Ogata's thinning, is given by the union of N_− on R_− and the projected points on R_+. It admits the desired intensity λ(t, F^N_{t−}) on R_+. Moreover, the point measure can be represented by
1_{t>0} N(dt) = ∑_{(T,X)∈Π / X ≤ λ(T, F^N_{T−})} δ_T(dt) = ( ∫_{x=0}^{λ(t, F^N_{t−})} Π(dt, dx) ).   (2.5)
NB: the last equality comes from the following convention. If δ_{(c,d)} is a Dirac mass at (c, d) ∈ R²_+, then ∫_{x=a}^{b} δ_{(c,d)}(dt, dx), as a distribution in t, is δ_c(dt) if d ∈ [a, b] and 0 otherwise.
Fig. 1. Example of Ogata's thinning algorithm on a linear Hawkes process with interaction function h(u) = e^{−u} and no point before 0 (i.e. N_− = ∅). The crosses represent a realization of Π, a Poisson process of intensity 1 on R²_+. The blue piecewise continuous line represents the intensity λ(t, F^N_{t−}), which starts at 0 with value µ and then jumps each time a point of Π is present underneath it. The resulting Hawkes process (with intensity λ(t, F^N_{t−})) is given by the blue circles. The age S_{t−} at time t and the quantity A¹_t are also represented.
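The following Python sketch (ours, with made-up parameter values) implements the setting of Fig. 1, a linear Hawkes process with h(u) = e^{−u}, spontaneous rate µ and no point before 0. It uses the standard sequential variant of the same thinning idea: because h is decreasing, the intensity computed just after the current time upper-bounds the intensity until the next accepted point, so candidate points can be drawn from a dominating Poisson process and accepted with the ratio of intensities.

```python
import numpy as np

rng = np.random.default_rng(1)

def thin_hawkes(mu, T, beta=1.0):
    """Thinning construction of a linear Hawkes process with h(u) = exp(-beta*u)
    and empty past (N_- = emptyset). Returns the accepted points on [0, T]."""
    points, t = [], 0.0
    while t < T:
        # Current intensity; since h is decreasing and no new point has been accepted,
        # it dominates the intensity until the next candidate.
        lam_bar = mu + sum(np.exp(-beta * (t - ti)) for ti in points)
        t += rng.exponential(1.0 / lam_bar)       # candidate time from Poisson(lam_bar)
        if t >= T:
            break
        lam_t = mu + sum(np.exp(-beta * (t - ti)) for ti in points)
        if rng.uniform() * lam_bar <= lam_t:      # accept with probability lam_t / lam_bar
            points.append(t)
    return np.array(points)

spikes = thin_hawkes(mu=1.0, T=10.0)
print(len(spikes), spikes[:5])
```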
3. From point processes to PDE
Let us now present our main results. Informally, we want to describe the evolution
of the distribution in s of the age St according to the time t. Note that at fixed time
t, S_{t−} = S_t a.s., and therefore its distribution is the same as that of S_{t−}. We prefer
to study S_{t−} since its predictability, i.e. its dependence on N ∩ (−∞, t), makes all
definitions proper from a microscopic/random point of view. Microscopically, the
interest lies in the evolution of δSt− (ds) as a random measure. But it should also be
seen as a distribution in time, for equations like (PPS) to make sense. Therefore,
we need to go from a distribution only in s to a distribution in both s and t. Then
one can either focus on the microscopic level, where the realisation of Π in Ogata’s
thinning construction is fixed or focus on the expectation of such a distribution.
3.1. A clean setting for bivariate distributions in age and time
In order to obtain the (PPS) system from a point process, we need to define bivariate distributions in s and t and marginals (at least in s), in such a way that weak solutions of (PPS) are correctly defined. Since we want to possibly consider more than two variables for generalized Wold processes, we consider the following definitions. In the following, ⟨ϕ, ν⟩ denotes the integral of the integrable function ϕ with respect to the measure ν.
Let k ∈ N. For every bounded measurable function ϕ of (t, s, a_1, ..., a_k) ∈ R^{k+2}_+, one can define
ϕ^{(1)}_t(s, a_1, ..., a_k) = ϕ(t, s, a_1, ..., a_k)   and   ϕ^{(2)}_s(t, a_1, ..., a_k) = ϕ(t, s, a_1, ..., a_k).
Let us now define two sets of regularities for ϕ.
The function ϕ belongs to M_{c,b}(R^{k+2}_+) if and only if
• ϕ is a measurable bounded function,
• there exists T > 0 such that for all t > T, ϕ^{(1)}_t = 0.
The function ϕ belongs to C^∞_{c,b}(R^{k+2}_+) if and only if
• ϕ is continuous, uniformly bounded,
• ϕ has uniformly bounded derivatives of every order,
• there exists T > 0 such that for all t > T, ϕ^{(1)}_t = 0.
Let (ν^t_1)_{t≥0} be a (measurable w.r.t. t) family of positive measures on R^{k+1}_+, and (ν^s_2)_{s≥0} be a (measurable w.r.t. s) family of positive measures on R^{k+1}_+. Those families satisfy the Fubini property if
(P_Fubini)  for any ϕ ∈ M_{c,b}(R^{k+2}_+),  ∫ ⟨ϕ^{(1)}_t, ν^t_1⟩ dt = ∫ ⟨ϕ^{(2)}_s, ν^s_2⟩ ds.
In this case, one can define ν, a measure on R^{k+2}_+, as the unique measure on R^{k+2}_+ such that for any test function ϕ in M_{c,b}(R^{k+2}_+),
⟨ϕ, ν⟩ = ∫ ⟨ϕ^{(1)}_t, ν^t_1⟩ dt = ∫ ⟨ϕ^{(2)}_s, ν^s_2⟩ ds.
To simplify notations, for any such measure ν(dt, ds, da_1, ..., da_k), we define
ν(t, ds, da_1, ..., da_k) = ν^t_1(ds, da_1, ..., da_k),   ν(dt, s, da_1, ..., da_k) = ν^s_2(dt, da_1, ..., da_k).
In the sequel, we need in particular a measure on R²_+, η_x, defined for any real x by its marginals, which satisfy (P_Fubini), as follows:
∀ t, s ≥ 0,   η_x(t, ds) = δ_{t−x}(ds) 1_{t−x>0}   and   η_x(dt, s) = δ_{s+x}(dt) 1_{s≥0}.   (3.1)
It represents a Dirac mass "travelling" on the positive diagonal originating at (x, 0).
3.2. The microscopic construction of a random PDE
For a fixed realization of Π, we therefore want to define a random distribution U(dt, ds) in terms of its marginals, thanks to (P_Fubini), such that U(t, ds) represents the distribution at time t > 0 of the age S_{t−}, i.e.
∀ t > 0,   U(t, ds) = δ_{S_{t−}}(ds),   (3.2)
and satisfies equations similar to (PPS). This is done in the following proposition.
Proposition 3.1. Let Π, F^N_0 and an intensity (λ(t, F^N_{t−}))_{t>0} be given as in Section 2.3 and satisfying (A_{T_0}) and A^{L¹,a.s.}_{λ,loc}. On the event Ω of probability 1 where Ogata's thinning is well defined, let N be the point process on R that is constructed thanks to Ogata's thinning, with associated predictable age process (S_{t−})_{t>0} and whose points are denoted (T_i)_{i∈Z}. Let the (random) measure U and its corresponding marginals be defined by
U(dt, ds) = ∑_{i=0}^{+∞} η_{T_i}(dt, ds) 1_{0≤t≤T_{i+1}}.   (3.3)
Then, on Ω, U satisfies (P_Fubini) and U(t, ds) = δ_{S_{t−}}(ds). Moreover, on Ω, U is a solution in the weak sense of the following system:
∂/∂t U(dt, ds) + ∂/∂s U(dt, ds) + ( ∫_{x=0}^{λ(t, F^N_{t−})} Π(dt, dx) ) U(t, ds) = 0,   (3.4)
U(dt, 0) = ∫_{s∈R_+} ( ∫_{x=0}^{λ(t, F^N_{t−})} Π(dt, dx) ) U(t, ds) + δ_0(dt) 1_{T_0=0},   (3.5)
U(0, ds) = δ_{−T_0}(ds) 1_{T_0<0} = U^{in}(ds) 1_{s>0},   (3.6)
where U^{in}(ds) = δ_{−T_0}(ds). The weak sense means that for any ϕ ∈ C^∞_{c,b}(R²_+),
∫_{R_+×R_+} ( ∂/∂t ϕ(t, s) + ∂/∂s ϕ(t, s) ) U(dt, ds)
+ ∫_{R_+×R_+} [ϕ(t, 0) − ϕ(t, s)] ( ∫_{x=0}^{λ(t, F^N_{t−})} Π(dt, dx) ) U(t, ds) + ϕ(0, −T_0) = 0.   (3.7)
The proof of Proposition 3.1 is included in Appendix A.1. Note also that, thanks to the Fubini property, the boundary condition (3.5) is also satisfied in a strong sense.
System (3.4)–(3.6) is a random microscopic version of (PPS) if T_0 < 0, where n(s, t), the density of the age at time t, is replaced by U(t, ·) = δ_{S_{t−}}, the Dirac mass at the age at time t. The assumption T_0 < 0 is satisfied a.s. if T_0 has a density, but this may not be the case, for instance, if the experimental device gives an impulse at time zero (e.g. 38 studied peristimulus time histograms (PSTH), where the spike trains are locked on a stimulus given at time 0).
This result may seem rather poor from a PDE point of view. However, since this equation is satisfied at a microscopic level, we are able to define correctly all the important quantities at a macroscopic level. Indeed, the analogy between p(s, X(t)) and λ(t, F^N_{t−}) is actually, on the random microscopic scale, a replacement of p(s, X(t)) by ∫_{x=0}^{λ(t, F^N_{t−})} Π(dt, dx), whose expectation given the past is, heuristically speaking, equal to λ(t, F^N_{t−}) dt because the mean behaviour of Π is given by the Lebesgue measure (see (2.4)). Thus, the main question at this stage is: can we make this argument valid by taking the expectation of U? This is addressed in the next section.
The property (P_Fubini) and the quantities η_{T_i} mainly allow to define U(dt, 0) as well as U(t, ds). As expected, with this definition, (3.2) holds as well as
U(dt, 0) = 1_{t≥0} N(dt),   (3.8)
i.e. the spiking measure (the measure in time with age 0) is the point measure.
Note also that the initial condition is given by F^N_0, since F^N_0 fixes in particular the value of T_0, and (A_{T_0}) is required to give sense to the age at time 0. To understand the initial condition, remark that if T_0 = 0, then U(0, ·) = 0 ≠ lim_{t→0+} U(t, ·) = δ_0 by definition of the η_{T_i}, but that if T_0 < 0, U(0, ·) = lim_{t→0+} U(t, ·) = δ_{−T_0}.
The conservativeness (i.e. for all t ≥ 0, ∫_0^∞ U(t, ds) = 1) is obtained by using (a sequence of test functions converging to) ϕ = 1_{t≤T}.
Proposition 3.1 shows that the (random) measure U, defined by (3.3) in terms of a given point process N, is a weak solution of System (3.4)–(3.6). The study of the well-posedness of this system could be addressed following, for instance, the ideas given in 12. In this case U should be the unique solution of System (3.4)–(3.6).
As a last comment about Proposition 3.1, we analyse the particular case of the linear Hawkes process in the following remark.
Remark 3.1. In the linear Hawkes process, λ(t, F^N_{t−}) = µ + ∫_{−∞}^{t−} h(t − z) N(dz). Thanks to (3.8), one decomposes the intensity into a term given by the initial condition plus a term given by the measure U: λ(t, F^N_{t−}) = µ + F_0(t) + ∫_0^{t−} h(t − z) U(dz, 0), where F_0(t) = ∫_{−∞}^0 h(t − z) N_−(dz) is F^N_0-measurable and considered as an initial condition. Hence, (3.4)–(3.6) becomes a closed system in the sense that λ(t, F^N_{t−}) is now an explicit function of the solution of the system. This is not true in general.
3.3. The PDE satisfied in expectation
In this section, we want to find the system satisfied by the expectation of the
random measure U . First, we need to give a proper definition of such an object. The
construction is based on the construction of U and is summarized in the following
proposition. (The proofs of all the results of this subsection are in Appendix A.1).
Proposition 3.2. Let Π, F^N_0 and an intensity (λ(t, F^N_{t−}))_{t>0} be given as in Section 2.3 and satisfying (A_{T_0}) and A^{L¹,exp}_{λ,loc}. Let N be the process resulting from Ogata's thinning and let U be the random measure defined by (3.3). Let E denote the expectation with respect to Π and F^N_0.
Then, for any test function ϕ in M_{c,b}(R²_+), E[∫ ϕ(t, s) U(t, ds)] and E[∫ ϕ(t, s) U(dt, s)] are finite, and one can define u(t, ds) and u(dt, s) by
∀ t ≥ 0,   ∫ ϕ(t, s) u(t, ds) = E[ ∫ ϕ(t, s) U(t, ds) ],
∀ s ≥ 0,   ∫ ϕ(t, s) u(dt, s) = E[ ∫ ϕ(t, s) U(dt, s) ].
Moreover, u(t, ds) and u(dt, s) satisfy (P_Fubini) and one can define u(dt, ds) = u(t, ds) dt = u(dt, s) ds on R²_+, such that for any test function ϕ in M_{c,b}(R²_+),
∫ ϕ(t, s) u(dt, ds) = E[ ∫ ϕ(t, s) U(dt, ds) ],
a quantity which is finite.
In particular, since ∫ ϕ(t, s) u(t, ds) = E[∫ ϕ(t, s) U(t, ds)] = E[ϕ(t, S_{t−})], u(t, ·) is therefore the distribution of S_{t−}, the (predictable version of the) age at time t.
Now let us show that, as expected, u satisfies a system similar to (PPS).
Theorem 3.3. Let Π, F^N_0 and an intensity (λ(t, F^N_{t−}))_{t>0} be given as in Section
2.3 and satisfying (A_{T_0}) and A^{L¹,exp}_{λ,loc}. If N is the process resulting from Ogata's thinning, (S_{t−})_{t>0} its associated predictable age process, U its associated random measure defined by (3.3), and u its associated mean measure defined in Proposition 3.2, then there exists a bivariate measurable function ρ_{λ,P_0} satisfying
∀ T ≥ 0,   ∫_0^T ∫_s ρ_{λ,P_0}(t, s) u(dt, ds) < ∞,   (3.9)
ρ_{λ,P_0}(t, s) = E[ λ(t, F^N_{t−}) | S_{t−} = s ],   u(dt, ds)-a.e.,
and such that u is a solution in the weak sense of the following system:
∂/∂t u(dt, ds) + ∂/∂s u(dt, ds) + ρ_{λ,P_0}(t, s) u(dt, ds) = 0,   (3.10)
u(dt, 0) = ( ∫_{s∈R_+} ρ_{λ,P_0}(t, s) u(t, ds) ) dt + δ_0(dt) u^{in}({0}),   (3.11)
u(0, ds) = u^{in}(ds) 1_{s>0},   (3.12)
where u^{in} is the law of −T_0. The weak sense means here that for any ϕ ∈ C^∞_{c,b}(R²_+),
∫_{R_+×R_+} ( ∂/∂t + ∂/∂s ) ϕ(t, s) u(dt, ds)
+ ∫_{R_+×R_+} [ϕ(t, 0) − ϕ(t, s)] ρ_{λ,P_0}(t, s) u(dt, ds) + ∫_{R_+} ϕ(0, s) u^{in}(ds) = 0.   (3.13)
Comparing this system to (PPS), one first sees that n(·, t), the density of the
age at time t, is replaced by the mean measure u(t, ·). If uin ∈ L1 (R+ ) we have
uin ({0}) = 0 so we get an equation which is exactly of renewal type, as (PPS).
In the general case where uin is only a probability measure, the difference with
(PPS) lies in the term δ0 (dt)uin ({0}) in the boundary condition for s = 0 and in
the term 1s>0 in the initial condition for t = 0. Both these extra terms are linked
to the possibility for the initial measure uin to charge zero. This possibility is not
considered in 31 - else, a similar extra term would be needed in the setting of 31 as
well. As said above in the comment of Proposition 3.1, we want to keep this term
here since it models the case where there is a specific stimulus at time zero 38 .
In general, and without more assumptions on λ, it is not clear whether u is not only a measure satisfying (P_Fubini) but also absolutely continuous with respect to dt ds, nor whether the equations can be satisfied in a strong sense.
Concerning p(s, X(t)), which has always been thought of as the equivalent of λ(t, F^N_{t−}), it is not replaced by λ(t, F^N_{t−}), which would have no meaning in general since this is a random quantity, nor by E[λ(t, F^N_{t−})], which would have been a first possible guess; it is replaced by E[λ(t, F^N_{t−}) | S_{t−} = s]. Indeed, intuitively, since
E[ ∫_{x=0}^{λ(t, F^N_{t−})} Π(dt, dx) | F^N_{t−} ] = λ(t, F^N_{t−}) dt,
the corresponding weak term can be interpreted as, for any test function ϕ,
E[ ∫ ϕ(t, s) ( ∫_{x=0}^{λ(t, F^N_{t−})} Π(dt, dx) ) U(t, ds) ] = E[ ∫ ϕ(t, s) λ(t, F^N_{t−}) δ_{S_{t−}}(ds) dt ]
 = ∫_t E[ ϕ(t, S_{t−}) λ(t, F^N_{t−}) ] dt
 = ∫_t E[ ϕ(t, S_{t−}) E[ λ(t, F^N_{t−}) | S_{t−} ] ] dt,
which is exactly ∫ ϕ(t, s) ρ_{λ,P_0}(t, s) u(dt, ds).
which is exactly ϕ(t, s)ρλ,P0 (t, s)u(dt, ds).
This conditional expectation makes dependencies particularly complex, but this
also enables to derive equations even in non-Markovian setting (as Hawkes processes
for instance, see Section 4). More explicitly, ρλ,P0 (t, s) is a function of the time t,
of the age s, but it also depends on λ, the shape of the intensity of the underlying
process and on the distribution of the initial condition N− , that is P0 . As explained
in Section 2, it is both the knowledge of P0 and λ that characterizes the distribution
of the process and in general the conditional expectation cannot be reduced to
something depending on less than that. In Section 4, we discuss several examples
of point processes where one can (or cannot) reduce the dependence.
Note that here again, we can prove that the equation is conservative by taking
(a sequence of functions converging to) ϕ = 1t≤T as a test function.
A direct corollary of Theorem 3.3 can be deduced thanks to the law of large numbers. It can be seen as the interpretation of the (PPS) equation at a macroscopic level, when the population of neurons is i.i.d.
Corollary 3.4. Let (N^i)_{i=1}^{∞} be some i.i.d. point processes with intensity given by λ(t, F^{N^i}_{t−}) on (0, +∞) satisfying A^{L¹,exp}_{λ,loc} and associated predictable age processes (S^i_{t−})_{t>0}. Suppose furthermore that the distribution of N¹ on (−∞, 0] is given by P_0, which is such that P_0(N¹_− = ∅) = 0.
Then there exists a measure u satisfying (P_Fubini), weak solution of Equations (3.10) and (3.11), with ρ_{λ,P_0} defined by
ρ_{λ,P_0}(t, s) = E[ λ(t, F^{N¹}_{t−}) | S¹_{t−} = s ],   u(dt, ds)-a.e.,
and with u^{in} the distribution of the age at time 0, such that for any ϕ ∈ C^∞_{c,b}(R²_+),
∀ t > 0,   ∫ ϕ(t, s) ( (1/n) ∑_{i=1}^n δ_{S^i_t}(ds) ) → ∫ ϕ(t, s) u(t, ds)   a.s. as n → ∞.   (3.14)
In particular, informally, the fraction of neurons at time t with age in [s, s + ds) in this i.i.d. population of neurons indeed tends to u(t, ds).
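A minimal Monte Carlo sketch (ours) of the convergence (3.14) for a case covered by Section 4.1: n i.i.d. renewal neurons with hazard f(s) = s (an arbitrary illustrative choice, whose ISI density is the Rayleigh density s e^{−s²/2}), all started with the same deterministic last point before 0. The histogram of the ages at time t approximates u(t, ds).

```python
import numpy as np

rng = np.random.default_rng(2)

def age_at(t, last_point_before_zero, isi_sampler):
    """Age S_t of one renewal neuron whose last point before 0 is given."""
    last = last_point_before_zero
    while True:
        nxt = last + isi_sampler()
        if nxt > t:
            return t - last
        last = nxt

n, t = 20000, 8.0
# Rayleigh ISIs (hazard f(s) = s), sampled by inverse transform; T_0 = -0.5 for all neurons
isi = lambda: np.sqrt(-2.0 * np.log(1.0 - rng.uniform()))
ages = np.array([age_at(t, -0.5, isi) for _ in range(n)])

# Empirical fraction of neurons with age in [s, s + ds): an approximation of u(t, ds)
hist, edges = np.histogram(ages, bins=20, range=(0.0, 4.0), density=True)
print(hist[:5])
```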
4. Application to the various examples
Let us now apply these results to the examples presented in Section 2.2.
4.1. When the intensity only depends on time and age
If λ(t, F^N_{t−}) = f(t, S_{t−}) (homogeneous and inhomogeneous Poisson processes and renewal processes are particular examples), then the intuition that p(s, X(t)) is analogous to λ(t, F^N_{t−}) works. Let us assume that f(t, s) ∈ L^∞(R²_+). We have E[λ(t, F^N_{t−}) | S_{t−} = s] = f(t, s). Under this assumption, we may apply Theorem 3.3, so that we know that the mean measure u associated to the random process is a solution of System (3.10)–(3.12). Therefore the mean measure u satisfies a completely explicit PDE of the type (PPS), with ρ_{λ,P_0}(t, s) = f(t, s) replacing p(s, X(t)). In particular, in this case ρ_{λ,P_0}(t, s) does not depend on the initial condition. As already underlined, in general the distribution of the process is characterized by λ(t, F^N_{t−}) = f(t, S_{t−}) and by the distribution of N_−. Therefore, in this special case, this dependence is actually reduced to the function f and the distribution of −T_0. Since f(·, ·) ∈ L^∞([0, T] × R_+), assuming also u^{in} ∈ L¹(R_+), it is well known that there exists a unique solution u such that (t ↦ u(t, ·)) ∈ C([0, T], L¹(R_+)); see for instance 34, Section 3.3, p. 60. Note that, following 12, uniqueness for measure solutions may also be established, hence the mean measure u associated to the random process is the unique solution of System (3.10)–(3.12), and it is in C([0, T], L¹(R_+)): the PDE formulation, together with existence and uniqueness, has provided a regularity result on u which is obtained under weaker assumptions than through Fokker-Planck / Kolmogorov equations. This is another possible field of application of our results: using the PDE formulation to gain regularity. Let us now develop the Fokker-Planck / Kolmogorov approach for renewal processes.
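As a complement (our own numerical sketch, not in the paper), the explicit system of this subsection can be discretized with a first-order upwind scheme along the characteristics: transport at speed one in s, loss at rate f(t, s), and reinjection of the lost mass at s = 0. Taking Δt = Δs makes the transport step an exact shift of the grid values; the rate and initial data below are arbitrary illustrations.

```python
import numpy as np

def solve_age_pde(f, u0, s_max=5.0, t_max=5.0, n_s=500):
    """Upwind scheme for d_t u + d_s u + f(t,s) u = 0 with boundary
    u(t,0) = int f(t,s) u(t,ds); dt = ds so the transport is an exact shift."""
    ds = s_max / n_s
    dt = ds
    s = np.arange(n_s) * ds
    u = u0(s).astype(float)
    u /= u.sum() * ds                  # normalize the initial density
    for k in range(int(t_max / dt)):
        t = k * dt
        loss = f(t, s) * u             # neurons firing, per unit age
        born = loss.sum() * ds         # total firing rate m(t)
        u = u - dt * loss              # death term
        u = np.roll(u, 1)              # transport: every age increases by ds
        u[0] = born                    # reinjection at age 0 (density value, since dt = ds)
    return s, u

# Example: constant rate f = 1; the density should relax towards exp(-s)
s, u = solve_age_pde(f=lambda t, s: np.ones_like(s),
                     u0=lambda s: np.exp(-((s - 2.0) ** 2) / 0.1))
print(u[:5])
```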
Renewal processes The renewal process, i.e. when λ(t, F^N_{t−}) = f(S_{t−}), with f a continuous function on R_+, has particular properties. As noted in Section 2.2, the renewal age process (S_{t−})_{t>0} is a homogeneous Markovian process. It has been known for a long time that it is easy to derive a PDE on the corresponding density through Fokker-Planck / Kolmogorov equations once the variable of interest (here the age) is Markovian (see for instance 1). Here we briefly follow this line to see what kind of PDE can be derived through the Markovian properties and to compare the equation with the (PPS)-type system derived in Theorem 3.3.
Since f is continuous, the infinitesimal generator^b of (S_t)_{t>0} is given by
(Gφ)(x) = φ′(x) + f(x) (φ(0) − φ(x)),   (4.1)
for all φ ∈ C¹(R_+) (see 2). Note that, since for every t > 0, S_{t−} = S_t a.s., the process (S_{t−})_{t>0} is also Markovian with the same infinitesimal generator.
^b The infinitesimal generator of a homogeneous Markov process (Z_t)_{t≥0} is the operator G which is defined to act on every function φ: R^n → R in a suitable space D by
Gφ(x) = lim_{t→0+} ( E[φ(Z_t) | Z_0 = x] − φ(x) ) / t.
Let us now define, for all t > 0 and all φ ∈ C¹(R_+),
P_t φ(x) = E[ φ(S_{t−}) | S_0 = x ] = ∫ φ(s) u_x(t, ds),
where x ∈ R_+ and u_x(t, ·) is the distribution of S_{t−} given that S_0 = x. Note that u_x(t, ds) corresponds to the marginal, in the sense of (P_Fubini), of the measure u_x given by Theorem 3.3 with ρ_{λ,P_0}(t, s) = f(s) and initial condition δ_x, i.e. T_0 = −x a.s.
In this homogeneous Markovian case, the forward Kolmogorov equation gives
∂/∂t P_t = P_t G.
Let ϕ ∈ C^∞_{c,b}(R²_+) and let t > 0. This implies that
∂/∂t (P_t ϕ(t, s)) = P_t Gϕ(t, s) + P_t ∂/∂t ϕ(t, s)
 = P_t [ ∂/∂s ϕ(t, s) + f(s) (ϕ(t, 0) − ϕ(t, s)) + ∂/∂t ϕ(t, s) ].
Since ϕ is compactly supported in time, an integration with respect to t yields
−P_0 ϕ(0, s) = ∫ P_t ( ∂/∂t + ∂/∂s ) ϕ(t, s) dt + ∫ P_t [ f(s) (ϕ(t, 0) − ϕ(t, s)) ] dt,
or equivalently, in terms of u_x,
−ϕ(0, x) = ∫ ( ∂/∂t + ∂/∂s ) ϕ(t, s) u_x(t, ds) dt − ∫ (ϕ(t, s) − ϕ(t, 0)) f(s) u_x(t, ds) dt.   (4.2)
This is exactly Equation (3.13) with u^{in} = δ_x.
The result of Theorem 3.3 is stronger than the application of the forward Kolmogorov equation on homogeneous Markovian systems since the result of Theorem
3.3 never used the Markov assumption and can be applied to non Markovian processes (see Section 4.3). So the present work is a general set-up where one can deduce
PDE even from non Markovian microscopic random dynamics. Note also that only
boundedness assumptions and not continuity ones are necessary to directly obtain
(4.2) via Theorem 3.3: to obtain the classical Kolmogorov theorem, one would have
assumed f ∈ C 0 (R2+ ) rather than f ∈ L∞ (R2+ ).
4.2. Generalized Wold process
In the case where λ(t, F^N_{t−}) = f(S_{t−}, A¹_t, ..., A^k_t), with f a non-negative function, one can define in a similar way u_k(t, s, a_1, . . . , a_k), which is informally the distribution at time t of the processes with age s and past given by a_1, ..., a_k for the last k ISIs. We want to investigate this case not for its Markovian properties, which are nevertheless presented in Proposition B.2 in the appendix for the sake of completeness, but because this is the first basic example where the initial condition indeed impacts ρ_{λ,P_0} in Theorem 3.3.
To do so, the whole machinery applied to u(dt, ds) is first extended in the next result to u_k(dt, ds, da_1, . . . , da_k), which represents the dynamics of the age and the last k ISIs. This could have been done in a very general way by an easy generalisation of Theorem 3.3. However, to avoid too cumbersome equations, we express it only for generalized Wold processes, to provide a clean setting illustrating the impact of the initial conditions on ρ_{λ,P_0}. Hence, we similarly define a random distribution U_k(dt, ds, da_1, . . . , da_k) such that its evaluation at any given time t exists and is
U_k(t, ds, da_1, . . . , da_k) = δ_{(S_{t−}, A¹_t, ..., A^k_t)}(ds, da_1, . . . , da_k).   (4.3)
The following result states the PDE satisfied by uk = E [Uk ].
Proposition 4.1. Let k be a positive integer and f be some non-negative function on R^{k+1}_+. Let N be a generalized Wold process with predictable age process (S_{t−})_{t>0}, associated points (T_i)_{i∈Z} and intensity λ(t, F^N_{t−}) = f(S_{t−}, A¹_t, ..., A^k_t) satisfying A^{L¹,exp}_{λ,loc}, where A¹_t, . . . , A^k_t are the successive ages defined by (2.2). Suppose that P_0 is such that P_0(T_{−k} > −∞) = 1. Let U_k be defined by
U_k(dt, ds, da_1, . . . , da_k) = ∑_{i=0}^{+∞} η_{T_i}(dt, ds) ∏_{j=1}^{k} δ_{A^j_{T_i}}(da_j) 1_{0≤t≤T_{i+1}}.   (4.4)
If N is the result of Ogata's thinning on the Poisson process Π, then U_k satisfies (4.3) and (P_Fubini) a.s. in Π and F^N_0. Assume that the initial condition u^{in}_k, defined as the distribution of (−T_0, A¹_0, . . . , A^k_0), which is a random vector in R^{k+1}, is such that u^{in}_k({0} × R^k_+) = 0. Then U_k admits a mean measure u_k which also satisfies (P_Fubini) and the following system in the weak sense: on R_+ × R^{k+1}_+,
( ∂/∂t + ∂/∂s ) u_k(dt, ds, da_1, ..., da_k) + f(s, a_1, ..., a_k) u_k(dt, ds, da_1, ..., da_k) = 0,   (4.5)
u_k(dt, 0, ds, da_1, ..., da_{k−1}) = ( ∫_{a_k=0}^{∞} f(s, a_1, ..., a_k) u_k(t, ds, da_1, ..., da_k) ) dt,   (4.6)
u_k(0, ds, da_1, . . . , da_k) = u^{in}_k(ds, da_1, . . . , da_k).   (4.7)
We have assumed u^{in}_k({0} × R^k_+) = 0 (i.e. T_0 ≠ 0 a.s.) for the sake of simplicity, but this assumption may of course be relaxed and Dirac masses at 0 should then be added in a similar way as in Theorem 3.3.
If f ∈ L^∞(R^{k+1}_+), we may apply Proposition 4.1, so that the mean measure u_k satisfies System (4.5)–(4.7). Assuming an initial condition u^{in}_k ∈ L¹(R^{k+1}_+), we can prove, exactly as for the renewal equation (with a Banach fixed point argument for instance 34), that there exists a unique solution u_k such that (t ↦ u_k(t, ·)) ∈ C(R_+, L¹(R^{k+1}_+)) in the generalized Wold case, the boundedness assumption on the kth penultimate point before time 0 being necessary to give sense to the successive ages at time 0. By uniqueness, this proves that the mean measure u_k is this solution, so that it belongs to C(R_+, L¹(R^{k+1}_+)): Proposition 4.1 leads to a regularity result on the mean measure.
Now that we have clarified the dynamics of the successive ages, one can look at this system from the point of view of Theorem 3.3, that is, when only the two variables s and t are considered. In this respect, let us note that U defined by (3.3) is such that U(dt, ds) = ∫_{a_1,...,a_k} U_k(dt, ds, da_1, . . . , da_k). Since the integrals and the expectations are exchangeable in the weak sense, the mean measure u defined in Proposition 3.2 is such that u(dt, ds) = ∫_{a_1,...,a_k} u_k(dt, ds, da_1, . . . , da_k). But (4.5) in the weak sense means, for all ϕ ∈ C^∞_{c,b}(R^{k+2}),
∫ ( ∂/∂t + ∂/∂s ) ϕ(t, s, a_1, ..., a_k) u_k(dt, ds, da_1, . . . , da_k)
+ ∫ [ϕ(t, 0, a_1, . . . , a_k) − ϕ(t, s, a_1, . . . , a_k)] f(s, a_1, . . . , a_k) u_k(dt, ds, da_1, . . . , da_k)
+ ∫ ϕ(0, s, a_1, . . . , a_k) u^{in}_k(ds, da_1, . . . , da_k) = 0.   (4.8)
Letting ψ ∈ C^∞_{c,b}(R²) and ϕ ∈ C^∞_{c,b}(R^{k+2}) be such that
∀ t, s, a_1, . . . , a_k,   ϕ(t, s, a_1, . . . , a_k) = ψ(t, s),
we end up proving that the function ρ_{λ,P_0} defined in Theorem 3.3 satisfies
ρ_{λ,P_0}(t, s) u(dt, ds) = ∫_{a_1,...,a_k} f(s, a_1, . . . , a_k) u_k(dt, ds, da_1, . . . , da_k),   (4.9)
u(dt, ds)-almost everywhere (a.e.). Equation (4.9) means exactly, from a probabilistic point of view, that
ρ_{λ,P_0}(t, s) = E[ f(S_{t−}, A¹_t, ..., A^k_t) | S_{t−} = s ],   u(dt, ds)-a.e.
Therefore, in the particular case of the generalized Wold process, the quantity ρ_{λ,P_0} depends on the shape of the intensity (here the function f) and also on u_k. But, by Proposition 4.1, u_k depends on its initial condition, given by the distribution of (−T_0, A¹_0, . . . , A^k_0), and not only on −T_0, as in the initial condition for u. That is, as announced in the remarks following Theorem 3.3, ρ_{λ,P_0} depends in particular on the whole distribution of the underlying process before time 0, namely P_0, and not only on the initial condition for u. Here, for generalized Wold processes, it only depends on the last k points before time 0. For more general non-Markovian settings, the integration cannot be simply described by a measure u_k in dimension (k + 2) being integrated with respect to da_1 ... da_k. In general, the integration has to be done over all the "randomness" hidden behind the dependence of λ(t, F^N_{t−}) with respect to the past once S_{t−} is fixed, and in this sense it depends on the whole distribution P_0 of N_−. This is made even clearer in the following non-Markovian example: the Hawkes process.
4.3. Hawkes process
As we have seen in Section 2.2, there are many different examples of Hawkes processes that can all be expressed as λ(t, F^N_{t−}) = φ( ∫_{−∞}^{t−} h(t − x) N(dx) ), where the main case is φ(θ) = µ + θ, for µ some positive constant, which is the linear case.
When there is no point before 0, λ(t, F^N_{t−}) = φ( ∫_0^{t−} h(t − x) N(dx) ). In this case, the interpretation is so close to (PPS) that the first guess, which is wrong, would be that the analogue in (PPS) is
p(s, X(t)) = φ(X(t)),   (4.10)
where X(t) = E[ ∫_0^{t−} h(t − x) N(dx) ] = ∫_0^t h(t − x) u(dx, 0). This is wrong, even in the linear case, since λ(t, F^N_{t−}) depends on all the previous points, whereas ρ_{λ,P_0} defined by (3.9) corresponds to a conditioning given only the last point.
By looking at this problem through the generalized Wold approach, one can hope that, for h decreasing fast enough,
λ(t, F^N_{t−}) ≃ φ( h(S_{t−}) + h(S_{t−} + A¹_t) + ... + h(S_{t−} + A¹_t + ... + A^k_t) ).
In this sense, and with respect to the generalized Wold processes described in the previous section, we are informally integrating over "all the previous points" except the last one, and not over all the previous points. This is informally why (4.10) is wrong even in the linear case. Actually, ρ_{λ,P_0} is computable for linear Hawkes processes: we show in the next section that ρ_{λ,P_0}(t, s) ≠ φ( ∫_{−∞}^{t} h(t − x) u(dx, 0) ) = µ + ∫_0^{∞} h(t − x) u(dx, 0) and that ρ_{λ,P_0} explicitly depends on P_0.
4.3.1. Linear Hawkes process
We are interested in Hawkes processes with a past before time 0 given by F^N_0, which is not necessarily the past given by a stationary Hawkes process. To illustrate the fact that the past impacts the value of ρ_{λ,P_0}, we focus on two particular cases:
(A¹_{N_−})  N_− = {T_0} a.s. and T_0 admits a bounded density f_0 on R_−;
(A²_{N_−})  N_− is a homogeneous Poisson process with intensity α on R_−.
Before stating the main result, we need some technical definitions. Indeed, the proof is based on the underlying branching structure of the linear Hawkes process described in Section B.3.1 of the appendix, and the following functions (L_s, G_s) are naturally linked to this branching decomposition (see Lemma B.7).
Lemma 4.2. Let h ∈ L¹(R_+) be such that ‖h‖_{L¹} < 1. For all s ≥ 0, there exists a unique solution (L_s, G_s) ∈ L¹(R_+) × L^∞(R_+) of the following system:
log(G_s(x)) = ∫_0^{(x−s)∨0} G_s(x − w) h(w) dw − ∫_0^{x} h(w) dw,   (4.11)
L_s(x) = ∫_{s∧x}^{x} (h(w) + L_s(w)) G_s(w) h(x − w) dw,   (4.12)
where a ∨ b (resp. a ∧ b) denotes the maximum (resp. minimum) of a and b. Moreover, L_s(x) ≡ 0 for x ≤ s, G_s : R_+ → [0, 1], and L_s is uniformly bounded in L¹.
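The system (4.11)–(4.12) lends itself to a simple fixed-point computation on a grid. The following sketch (ours, with an arbitrary kernel h(x) = 0.5 e^{−x}, so that ‖h‖_{L¹} = 0.5 < 1) iterates the two equations starting from G_s ≡ 1 and L_s ≡ 0; it is only meant to illustrate the structure of the system, not to be an optimized or proven-convergent solver.

```python
import numpy as np

def solve_Gs_Ls(h, s, x_max=8.0, n=400, n_iter=50):
    """Naive Picard iteration for (4.11)-(4.12) on a uniform grid (rectangle rule)."""
    x = np.linspace(0.0, x_max, n)
    dx = x[1] - x[0]
    hx = h(x)
    H = np.cumsum(hx) * dx                 # crude approximation of int_0^x h(w) dw
    G, L = np.ones(n), np.zeros(n)
    for _ in range(n_iter):
        G_new, L_new = np.empty(n), np.zeros(n)
        for i, xi in enumerate(x):
            # (4.11): log G_s(x) = int_0^{(x-s) v 0} G_s(x-w) h(w) dw - int_0^x h(w) dw
            m = x < max(xi - s, 0.0)
            G_new[i] = np.exp((np.interp(xi - x[m], x, G) * hx[m]).sum() * dx - H[i])
            # (4.12): L_s(x) = int_{s ^ x}^{x} (h(w)+L_s(w)) G_s(w) h(x-w) dw,
            # and L_s vanishes on [0, s] (Lemma 4.2)
            if xi > s:
                m2 = (x >= s) & (x <= xi)
                L_new[i] = ((hx[m2] + L[m2]) * G[m2] * h(xi - x[m2])).sum() * dx
        G, L = G_new, L_new
    return x, G, L

# Illustrative kernel with ||h||_1 = 0.5 < 1
x, G, L = solve_Gs_Ls(h=lambda u: 0.5 * np.exp(-u), s=1.0)
print(G[:3], L[:3])
```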
This result allows us to define two other important quantities, K_s and q, by, for all s, t ≥ 0, z ∈ R,
K_s(t, z) := ∫_0^{(t−s)∨0} [h(t − x) + L_s(t − x)] G_s(t − x) h(x − z) dx,
log(q(t, s, z)) := − ∫_{(t−s)∨0}^{t} h(x − z) dx − ∫_0^{(t−s)∨0} [1 − G_s(t − x)] h(x − z) dx.   (4.13)
Finally, the following result is just an obvious remark that helps to understand the
resulting system.
Remark 4.1. For a non-negative Φ ∈ L^∞(R_+) and v^{in} ∈ L^∞(R_+), there exists a unique solution v ∈ L^∞(R²_+) in the weak sense to the following system:
∂/∂t v(t, s) + ∂/∂s v(t, s) + Φ(t, s) v(t, s) = 0,   (4.14)
v(t, 0) = 1,   v(t = 0, s) = v^{in}(s).   (4.15)
Moreover, t ↦ v(t, ·) is in C(R_+, L¹_{loc}(R_+)).
If v^{in} is a survival function (i.e. non-increasing, with values between 0 and 1), then v(t, ·) is a survival function and −∂_s v is a probability measure for all t > 0.
Proposition 4.3. Using the notations of Theorem 3.3, let N be a Hawkes process with past before 0 given by N_− satisfying either A¹_{N_−} or A²_{N_−} and with intensity on R_+ given by
λ(t, F^N_{t−}) = µ + ∫_{−∞}^{t−} h(t − x) N(dx),
where µ is a positive real number and h ∈ L^∞(R_+) is a non-negative function with support in R_+ such that ∫ h < 1.
Then the mean measure u defined in Proposition 3.2 satisfies Theorem 3.3 and, moreover, its integral v(t, s) := ∫_s^{∞} u(t, dσ) is the unique solution of the system (4.14)–(4.15), where v^{in} is the survival function of −T_0 and where Φ = Φ^{µ,h}_{P_0} ∈ L^∞(R_+) is defined by
Φ^{µ,h}_{P_0} = Φ^{µ,h}_+ + Φ^h_{−,P_0},   (4.16)
where, for all non-negative s, t,
Φ^{µ,h}_+(t, s) = µ ( 1 + ∫_{s∧t}^{t} (h(x) + L_s(x)) G_s(x) dx ),   (4.17)
and where, under Assumption A¹_{N_−},
Φ^h_{−,P_0}(t, s) = ( ∫_{−∞}^{0∧(t−s)} (h(t − t_0) + K_s(t, t_0)) q(t, s, t_0) f_0(t_0) dt_0 ) / ( ∫_{−∞}^{0∧(t−s)} q(t, s, t_0) f_0(t_0) dt_0 ),   (4.18)
or, under Assumption A²_{N_−},
Φ^h_{−,P_0}(t, s) = α ∫_{−∞}^{0∧(t−s)} (h(t − z) + K_s(t, z)) q(t, s, z) dz.   (4.19)
In these formulae, L_s, G_s, K_s and q are given by Lemma 4.2 and (4.13). Moreover,
∀ s ≥ 0,   ∫_s^{+∞} ρ_{λ,P_0}(t, x) u(t, dx) = Φ^{µ,h}_{P_0}(t, s) ∫_s^{+∞} u(t, dx).   (4.20)
The proof is included in Appendix B.3. Proposition 4.3 gives a purely analytical definition for v, and thus for u, in two specific cases, namely A¹_{N_−} or A²_{N_−}. In the general case, treated in Appendix B (Proposition B.5), there remains a dependence with respect to the initial condition P_0, via the function Φ^h_{−,P_0}.
Remark 4.2. Contrary to the general result in Theorem 3.3, Proposition 4.3 focuses on the equation satisfied by v(dt, s) = ∫_s^{+∞} u(dt, dx) because, in Equation (4.14), the function parameter Φ = Φ^{µ,h}_{P_0} may be defined independently of the definitions of v or u, which is not the case for the rate ρ_{λ,P_0} appearing in Equation (3.10). Thus, it is possible to depart from the system of equations defining v, study it, prove existence, uniqueness and regularity for v under some assumptions on the initial distribution u^{in} as well as on the birth function h, and then deduce regularity or asymptotic properties for u without any previous knowledge of the underlying process.
In Sections 4.1 and 4.2, we were able to use the PDE formulation to prove that the distribution of the ages u has a density. Here, since we only obtain a closed formula for v and not for u, we would need to differentiate Equation (4.14) in s to obtain a similar result, so that we need to prove more regularity on Φ^{µ,h}_{P_0}. Such regularity for Φ^{µ,h}_{P_0} is not obvious since it depends strongly on the assumptions on N_−. This paves the way for future research, where the PDE formulation would provide regularity on the distribution of the ages, as done above for renewal and Wold processes.
Remark 4.3. These two cases A¹_{N_−} and A²_{N_−} highlight the dependence with respect to all the past before time 0 (i.e. P_0) and not only the initial condition (i.e. the age at time 0). In fact, they can give the same initial condition u^{in}: for instance, A¹_{N_−} with −T_0 exponentially distributed with parameter α > 0 gives the same law for −T_0 as A²_{N_−} with parameter α. However, if we fix some non-negative real number s, one can show that Φ^h_{−,P_0}(0, s) is different in those two cases. It is clear from the definitions that for every real number z, q(0, s, z) = 1 and K_s(0, z) = 0. Thus, in the first case,
Φ^h_{−,P_0}(0, s) = ( ∫_{−∞}^{−s} h(−t_0) α e^{α t_0} dt_0 ) / ( ∫_{−∞}^{−s} α e^{α t_0} dt_0 ) = ( ∫_s^{∞} h(z) α e^{−αz} dz ) / ( ∫_s^{∞} α e^{−αz} dz ),
while in the second case, Φ^h_{−,P_0}(0, s) = α ∫_{−∞}^{−s} h(−z) dz = α ∫_s^{∞} h(w) dw. Therefore Φ^h_{−,P_0} clearly depends on P_0 and not just on the distribution of the last point before 0, and so does ρ_{λ,P_0}.
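A short numerical check of this difference (our sketch, with the arbitrary choices h(z) = 0.5 e^{−z}, α = 1 and s = 1):

```python
import numpy as np
from scipy.integrate import quad

h = lambda z: 0.5 * np.exp(-z)  # satisfies int h = 0.5 < 1
alpha, s = 1.0, 1.0

# Case A1: single point T_0 with -T_0 ~ Exp(alpha)
num, _ = quad(lambda z: h(z) * alpha * np.exp(-alpha * z), s, np.inf)
den, _ = quad(lambda z: alpha * np.exp(-alpha * z), s, np.inf)
phi_case1 = num / den

# Case A2: homogeneous Poisson past with intensity alpha
phi_case2, _ = quad(lambda w: alpha * h(w), s, np.inf)

print(phi_case1, phi_case2)  # different values, although -T_0 has the same law in both cases
```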
Remark 4.4. If we followed our first guess, ρ_{λ,P_0} would be either µ + ∫_0^t h(t − x) u(dx, 0) or µ + ∫_{−∞}^t h(t − x) u(dx, 0). In particular, it would not depend on the age s, and therefore, by (4.20), neither would Φ^{µ,h}_{P_0}. But, for instance, at time t = 0, when N_− is a homogeneous Poisson process of parameter α, Φ^{µ,h}_{P_0}(0, s) = µ + α ∫_s^{+∞} h(w) dw, which obviously depends on s. Therefore the intuition linking Hawkes processes and (PPS) does not apply.
4.3.2. Linear Hawkes process with no past before time 0
A classical framework in point process theory is the case in A¹_{N_−} where T_0 → −∞ or, equivalently, when N has intensity λ(t, F^N_{t−}) = µ + ∫_0^{t−} h(t − x) N(dx). The problem in this case is that the age at time 0 is not finite. The age is only finite for times greater than the first spiking time T_1.
Here again, the quantity v(t, s) turns out to be more informative and easier to use: having the distribution of T_0 going to −∞ means that Supp(u^{in}) goes to +∞, so that the initial condition for v tends to be uniformly equal to 1 for any 0 ≤ s < +∞. If we can prove that the contribution of Φ^h_{−,P_0} vanishes, the following system is a good candidate to be the limit system:
∂/∂t v^∞(t, s) + ∂/∂s v^∞(t, s) + Φ^{µ,h}_+(t, s) v^∞(t, s) = 0,   (4.21)
v^∞(t, 0) = 1,   v^∞(0, s) = 1,   (4.22)
where Φ^{µ,h}_+ is defined in Proposition 4.3. This leads us to the following proposition.
Proposition 4.4. Under the assumptions and notations of Proposition 4.3, consider, for all M ≥ 0, v_M the unique solution of system (4.14)–(4.15) with Φ given by Proposition 4.3, case A¹_{N_−}, with T_0 uniformly distributed in [−M − 1, −M]. Then, as M goes to infinity, v_M converges uniformly on any set of the type (0, T) × (0, S) towards the unique solution v^∞ of System (4.21)–(4.22).
Conclusion
We present in this article a bridge between univariate point processes, which can model the behavior of one neuron through its spike train, and a deterministic age-structured PDE introduced by Pakdaman, Perthame and Salort, named (PPS). More precisely, Theorem 3.3 presents a PDE that is satisfied by the distribution u of the age s at time t, where the age represents the delay between time t and the last spike before t. This is done in a very weak sense, and some technical structure, namely (P_Fubini), is required.
The main point is that the "firing rate", which is a deterministic quantity written as p(s, X(t)) in (PPS), becomes the conditional expectation of the intensity given the age at time t in Theorem 3.3. This first makes clear that p(s, X(t)) should be interpreted as a hazard rate, which gives the probability that a neuron fires given that it has not fired yet. Next, it makes rigorous several "easy guess" bridges between both set-ups when the intensity only depends on the age. But it also explains why, when the intensity has a more complex shape (Wold, Hawkes), this term can keep in particular the memory of all that has happened before time 0.
One of the main points of the present study is the Hawkes process, for which what was clearly expected was a legitimation of the term X(t) in the firing rate p(s, X(t)) of (PPS), which models synaptic integration. This is not the case, and the interlinked equations that have been found for the cumulative distribution function v(t, ·) have neither a simple nor a direct deterministic interpretation. However, one should keep in mind that the present bridge, in particular in the population-wide approach, has been established for independent neurons. This has been done to keep the complexity of the present work reasonable as a first step. But it is also quite obvious that interacting neurons cannot be independent. So one of the main questions is: can we recover (PPS) as a limit, with precisely a term of the form X(t), if we consider multivariate Hawkes processes that really model interacting neurons?
Acknowledgment
This research was partly supported by the European Research Council (ERC Starting Grant SKIPPERAD number 306321), by the French Agence Nationale de la Recherche (ANR 2011 BS01 010 01 projet Calibration) and by the interdisciplinary axis MTC-NSC of the University of Nice Sophia-Antipolis. MJC acknowledges
support from the projects MTM2014-52056-P (Spain) and P08-FQM-04267 from
Junta de Andalucı́a (Spain). We warmly thank François Delarue for very fruitful
discussions and ideas.
A. Proofs linked with the PDE
A.1. Proof of Proposition 3.1
First, let us verify that U satisfies Equation (3.2). For any t > 0,
U(t, ds) = ∑_{i≥0} η_{T_i}(t, ds) 1_{0≤t≤T_{i+1}},
by definition of U. Yet η_{T_i}(t, ds) = δ_{t−T_i}(ds) 1_{t>T_i}, and the only i ∈ N such that T_i < t ≤ T_{i+1} is i = N_{t−}. So, for all t > 0, U(t, ds) = δ_{t−T_{N_{t−}}}(ds) = δ_{S_{t−}}(ds).
Secondly, let us verify that U satisfies (PF ubini ). Let ϕ ∈ Mc,b (R2+ ), and let T be
such that for all t > T, ϕ^{(1)}_t = 0. Then, since U(t, ds) = ∑_{i=0}^{+∞} η_{T_i}(t, ds) 1_{0≤t≤T_{i+1}},
∫_{R_+} ( ∫_{R_+} |ϕ(t, s)| U(t, ds) ) dt ≤ ∫_{R_+} ∫_{R_+} |ϕ(t, s)| ( ∑_{i≥0} η_{T_i}(t, ds) 1_{0≤t≤T_{i+1}} ) dt
 = ∑_{i≥0} ∫_{R_+} |ϕ(t, t − T_i)| 1_{t>T_i} 1_{0≤t≤T_{i+1}} dt = ∑_{i≥0} ∫_{max(0,T_i)}^{T_{i+1}} |ϕ(t, t − T_i)| dt
 = ∫_0^{T_1} |ϕ(t, t − T_0)| dt + ∑_{i / 0<T_i<T} ∫_{T_i}^{T_{i+1}} |ϕ(t, t − T_i)| dt.
Since there is a finite number of points of N between 0 and T, on Ω this quantity is finite and one can exchange ∑_{i≥0} and ∫_{t=0}^{+∞} ∫_{s=0}^{+∞}. Therefore, since all the η_{T_i} satisfy (P_Fubini) and ϕ(t, s) 1_{0≤t≤T_{i+1}} is in M_{c,b}(R²_+), so does U.
For the dynamics of U, similar computations lead, for every ϕ ∈ C^∞_{c,b}(R²_+), to
∫ ϕ(t, s) U(dt, ds) = ∑_{i≥0} ∫_{max(0,−T_i)}^{T_{i+1}−T_i} ϕ(s + T_i, s) ds.
We also have
∫ ( ∂/∂t + ∂/∂s ) ϕ(t, s) U(dt, ds) = ∑_{i≥0} ∫_{max(0,−T_i)}^{T_{i+1}−T_i} ( ∂/∂t + ∂/∂s ) ϕ(s + T_i, s) ds
 = ∑_{i≥1} [ϕ(T_{i+1}, T_{i+1} − T_i) − ϕ(T_i, 0)] + ϕ(T_1, T_1 − T_0) − ϕ(0, −T_0).   (A.1)
It remains to express the term with ∫_{x=0}^{λ(t, F^N_{t−})} Π(dt, dx) = ∑_{i≥0} δ_{T_{i+1}}(dt), that is
∫ ( ∫ ϕ(t, s) U(t, ds) ) ∑_{i≥0} δ_{T_{i+1}}(dt) = ∑_{i≥0} ∫ ( ∫ ϕ(t, s) U(t, ds) ) δ_{T_{i+1}}(dt)
 = ∫ ϕ(t, S_{t−}) ∑_{i≥0} δ_{T_{i+1}}(dt) = ∑_{i≥0} ϕ(T_{i+1}, T_{i+1} − T_i),   (A.2)
and, since ∫ U(t, ds) = 1 for all t > 0,
∫ ( ∫ ϕ(t, 0) U(t, ds) ) ∑_{i≥0} δ_{T_{i+1}}(dt) = ∑_{i≥0} ϕ(T_{i+1}, 0).   (A.3)
Identifying all the terms in the right-hand side of Equation (A.1), this leads to Equation (3.7), which is the weak formulation of System (3.4)–(3.6).
A.2. Proof of Proposition 3.2
Let ϕ ∈ M_{c,b}(R²_+), and let T be such that for all t > T, ϕ^{(1)}_t = 0. Then,
∫ |ϕ(t, s)| U(t, ds) ≤ ‖ϕ‖_{L^∞} 1_{0≤t≤T},   (A.4)
R
since
R at any fixed time t > 0, U (t, ds) = 1. Therefore, the expectation
E ϕ(t, s)U (t, ds) is well-defined and finite and so u(t, .) is well-defined.
On the other hand, at any fixed age s,
∫ |ϕ(t, s)| U(dt, s) = Σ_{i=0}^{∞} |ϕ(s + T_i, s)| 1_{0≤s≤T_{i+1}−T_i} = Σ_{i≥0} |ϕ(s + T_i, s)| 1_{0≤s+T_i≤T} 1_{0≤s≤T_{i+1}−T_i},
because ϕ_t^{(1)} = 0 for all t > T. Then, one can deduce the following bound
∫ |ϕ(t, s)| U(dt, s) ≤ |ϕ(s + T_0, s)| 1_{−T_0≤s≤T−T_0} 1_{0≤s≤T_1−T_0} + Σ_{i≥1} |ϕ(s + T_i, s)| 1_{0≤s≤T} 1_{T_i≤T}
≤ ‖ϕ‖_{L^∞} (1_{−T_0≤s≤T−T_0} + N_T 1_{0≤s≤T}).
Since the intensity is L¹_loc in expectation, E[N_T] = E[∫_0^T λ(t, F^N_{t−}) dt] < ∞ and
E[∫ |ϕ(t, s)| U(dt, s)] ≤ ‖ϕ‖_{L^∞} (E[1_{−T_0≤s≤T−T_0}] + E[N_T] 1_{0≤s≤T}),   (A.5)
so the expectation is well-defined and finite and so u(·, s) is well-defined.
Now, let us show (P_Fubini). First, Equation (A.4) implies
E[∫ ( ∫ |ϕ(t, s)| U(t, ds) ) dt] ≤ T ‖ϕ‖_{L^∞},
and Fubini's theorem implies that the following integrals are well-defined and that the following equality holds,
∫ E[∫ ϕ(t, s) U(t, ds)] dt = E[∫ ∫ ϕ(t, s) U(t, ds) dt].   (A.6)
Secondly, Equation (A.5) implies
E[∫ ( ∫ |ϕ(t, s)| U(dt, s) ) ds] ≤ ‖ϕ‖_{L^∞} (T + T E[N_T]),
by exchanging the integral with the expectation, and Fubini's theorem implies that the following integrals are well-defined and that the following equality holds,
∫ E[∫ ϕ(t, s) U(dt, s)] ds = E[∫ ∫ ϕ(t, s) U(dt, s) ds].   (A.7)
Now, it only remains to use (P_Fubini) for U to deduce that the right members of Equations (A.6) and (A.7) are equal. Moreover, (P_Fubini) for U tells that these two quantities are equal to E[∫ ϕ(t, s) U(dt, ds)]. This concludes the proof.
A.3. Proof of Theorem 3.3
Let ρ_{λ,P₀}(t, s) := liminf_{ε↓0} E[λ(t, F^N_{t−}) 1_{|S_{t−}−s|≤ε}] / P(|S_{t−} − s| ≤ ε), for every t > 0 and s ≥ 0. Since (λ(t, F^N_{t−}))_{t>0} and (S_{t−})_{t>0} are predictable processes, and a fortiori progressive processes (see page 9 in 3), ρ_{λ,P₀} is a measurable function of (t, s).
For every t > 0, let μ_t be the measure defined by μ_t(A) = E[λ(t, F^N_{t−}) 1_A(S_{t−})] for all measurable set A. Since Assumption (A^{L¹,exp}_{λ,loc}) implies that dt-a.e. E[λ(t, F^N_{t−})] < ∞ and since u(t, ds) is the distribution of S_{t−}, μ_t is absolutely continuous with respect to u(t, ds) for dt-almost every t.
Let f_t denote the Radon–Nikodym derivative of μ_t with respect to u(t, ds). For u(t, ds)-a.e. s, f_t(s) = E[λ(t, F^N_{t−}) | S_{t−} = s] by definition of the conditional expectation. Moreover, a theorem of Besicovitch 27 claims that for u(t, ds)-a.e. s, f_t(s) = ρ_{λ,P₀}(t, s). Hence, the equality ρ_{λ,P₀}(t, s) = E[λ(t, F^N_{t−}) | S_{t−} = s] holds u(t, ds)dt = u(dt, ds)-almost everywhere.
Next, in order to use (P_Fubini), let us note that for any T, K > 0,
ρ^{K,T}_{λ,P₀} : (t, s) ↦ (ρ_{λ,P₀}(t, s) ∧ K) 1_{0≤t≤T} ∈ M_{c,b}(R²₊).   (A.8)
Hence, ∫∫ ρ^{K,T}_{λ,P₀}(t, s) u(dt, ds) = ∫ ( ∫ ρ^{K,T}_{λ,P₀}(t, s) u(t, ds) ) dt, which is always upper bounded by ∫_0^T ( ∫ ρ_{λ,P₀}(t, s) u(t, ds) ) dt = ∫_0^T μ_t(R₊) dt = ∫_0^T E[λ(t, F^N_{t−})] dt < ∞. Letting K → ∞, one has that ∫_0^T ∫ ρ_{λ,P₀}(t, s) u(dt, ds) is finite for all T > 0.
Once ρ_{λ,P₀} is correctly defined, the proof of Theorem 3.3 is a direct consequence of Proposition 3.1.
More precisely, let us show that (3.7) implies (3.13). Taking the expectation of (3.7) gives that for all ϕ ∈ C^∞_{c,b}(R²₊),
E[ ∫ [ϕ(t, s) − ϕ(t, 0)] ( ∫_{x=0}^{λ(t,F^N_{t−})} Π(dt, dx) ) U(t, ds) ] − ∫ ϕ(0, s) u^in(ds) − ∫ (∂_t + ∂_s) ϕ(t, s) u(dt, ds) = 0.   (A.9)
Let us denote ψ(t, s) := ϕ(t, s) − ϕ(t, 0). Due to Ogata's thinning construction, ∫_{x=0}^{λ(t,F^N_{t−})} Π(dt, dx) = N(dt) 1_{t>0}, where N is the point process constructed by thinning, and so,
E[ ∫ ψ(t, s) ( ∫_{x=0}^{λ(t,F^N_{t−})} Π(dt, dx) ) U(t, ds) ] = E[ ∫_{t>0} ψ(t, S_{t−}) N(dt) ].   (A.10)
But ψ(t, S_{t−}) is an (F^N_t)-predictable process and
E[ ∫_{t>0} |ψ(t, S_{t−})| λ(t, F^N_{t−}) dt ] ≤ ‖ψ‖_{L^∞} E[ ∫_0^T λ(t, F^N_{t−}) dt ] < ∞,
hence, using the martingale property of the predictable intensity,
E[ ∫_{t>0} ψ(t, S_{t−}) N(dt) ] = E[ ∫_{t>0} ψ(t, S_{t−}) λ(t, F^N_{t−}) dt ].   (A.11)
Moreover, thanks to Fubini's theorem, the right-hand term is finite and equal to ∫ E[ψ(t, S_{t−}) λ(t, F^N_{t−})] dt, which can also be seen as
∫ E[ψ(t, S_{t−}) ρ_{λ,P₀}(t, S_{t−})] dt = ∫ ∫ ψ(t, s) ρ_{λ,P₀}(t, s) u(t, ds) dt.   (A.12)
For all K > 0, ((t, s) ↦ ψ(t, s)(ρ_{λ,P₀}(t, s) ∧ K)) ∈ M_{c,b}(R²₊) and, from (P_Fubini), it is clear that ∫ ψ(t, s)(ρ_{λ,P₀}(t, s) ∧ K) u(t, ds) dt = ∫ ψ(t, s)(ρ_{λ,P₀}(t, s) ∧ K) u(dt, ds). Since one can always upper-bound this quantity in absolute value by ‖ψ‖_{L^∞} ∫_0^T ∫_s ρ_{λ,P₀}(t, s) u(dt, ds), this is finite. Letting K → ∞ one can show that
∫ ψ(t, s) ρ_{λ,P₀}(t, s) u(t, ds) dt = ∫ ψ(t, s) ρ_{λ,P₀}(t, s) u(dt, ds).   (A.13)
Gathering (A.10)–(A.13) with (A.9) gives (3.13).
A.4. Proof of Corollary 3.4
For all i ∈ N*, let us denote N^i₊ = N^i ∩ (0, +∞) and N^i₋ = N^i ∩ R₋. Thanks to Proposition B.12, the processes N^i₊ can be seen as constructed via thinning of independent Poisson processes on R²₊. Let (Π^i)_{i∈N} be the sequence of point measures associated to independent Poisson processes of intensity 1 on R²₊ given by Proposition B.12. Let T^i₀ denote the closest point to 0 in N^i₋. In particular, (T^i₀)_{i∈N*} is a sequence of i.i.d. random variables.
For each i, let U^i denote the solution of the microscopic equation corresponding to Π^i and T^i₀ as defined in Proposition 3.1 by (3.3). Using (3.2), it is clear that Σ_{i=1}^n δ_{S^i_{t−}}(ds) = Σ_{i=1}^n U^i(t, ds) for all t > 0. Then, for every ϕ ∈ C^∞_{c,b}(R²₊),
∫ ϕ(t, s) ( (1/n) Σ_{i=1}^n δ_{S^i_{t−}}(ds) ) = (1/n) Σ_{i=1}^n ∫ ϕ(t, s) U^i(t, ds).
The right-hand side is a sum of n i.i.d. random variables with mean ∫ ϕ(t, s) u(t, ds), so (3.14) clearly follows from the law of large numbers.
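The law of large numbers step can be illustrated numerically. The Python sketch below is a toy check under our own choice of a homogeneous Poisson network of rate 1 (not the paper's general setting): it compares the empirical mean of ϕ(t, S^i_{t−}) over a small and a large number of independent neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

def age_sample(t, rate=1.0):
    """One draw of the age S_{t-} for a homogeneous Poisson process of given rate,
    with the convention T_0 = 0 (so the age is at most t)."""
    n_pts = rng.poisson(rate * t)
    last = np.max(rng.uniform(0.0, t, size=n_pts)) if n_pts > 0 else 0.0
    return t - last

t = 2.0
phi = lambda s: np.exp(-s)                      # bounded test function of the age
small = np.array([age_sample(t) for _ in range(50)])
large = np.array([age_sample(t) for _ in range(50000)])
# The empirical mean over many independent neurons stabilises around E[phi(S_{t-})].
print(phi(small).mean(), phi(large).mean())
```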
B. Proofs linked with the various examples
B.1. Renewal process
Proposition B.1. With the notations of Section 2, let N be a point process on R,
with predictable age process (St− )t>0 , such that T0 = 0 a.s. The following statements
are equivalent:
(i) N+ = N ∩ (0, +∞) is a renewal process with ISI’s distribution given by some
density ν : R+ → R+ .
(ii) N admits λ(t, F^N_{t−}) = f(S_{t−}) as an intensity on (0, +∞) and (λ(t, F^N_{t−}))_{t>0} satisfies (A^{L¹,a.s.}_{λ,loc}), for some f : R₊ → R₊.
In such a case, for all x ≥ 0, f and ν satisfy
• ν(x) = f(x) exp( − ∫_0^x f(y) dy ), with the convention exp(−∞) = 0,   (B.1)
• f(x) = ν(x) / ∫_x^∞ ν(y) dy if ∫_x^∞ ν(y) dy ≠ 0, else f(x) = 0.   (B.2)
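Relations (B.1) and (B.2) are easy to check numerically. The following Python sketch is our own illustrative code (the constant hazard and the step sizes are arbitrary choices): it converts a hazard rate f into the ISI density ν and back.

```python
import numpy as np

def density_from_hazard(f, x, dz=1e-3):
    """(B.1): nu(x) = f(x) * exp(-int_0^x f(y) dy), Riemann sum with step dz."""
    ys = np.arange(0.0, x, dz)
    return f(x) * np.exp(-np.sum(f(ys)) * dz)

def hazard_from_density(nu, x, dz=1e-3, y_max=50.0):
    """(B.2): f(x) = nu(x) / int_x^infty nu(y) dy, with f(x) = 0 when the tail vanishes."""
    ys = np.arange(x, y_max, dz)
    tail = np.sum(nu(ys)) * dz
    return nu(x) / tail if tail > 0 else 0.0

f = lambda y: 0.5 * np.ones_like(np.asarray(y, dtype=float))     # constant hazard
nu = lambda y: 0.5 * np.exp(-0.5 * np.asarray(y, dtype=float))   # exponential density
print(density_from_hazard(f, 1.0))   # ~ 0.5 * exp(-0.5), i.e. nu(1.0)
print(hazard_from_density(nu, 1.0))  # ~ 0.5, i.e. f(1.0)
```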
Proof. For (ii) ⇒ (i). Since T₀ = 0 a.s., Point (2) of Proposition B.2, given later on for the general Wold case, implies that the ISIs of N form a Markov chain of order 0, i.e. they are i.i.d. with density given by (B.1).
For (i) ⇒ (ii). Let x₀ = inf{x ≥ 0, ∫_x^∞ ν(y) dy = 0}. It may be infinite. Let us define f by (B.2) for every 0 ≤ x < x₀ and let Ñ be a point process on R such that Ñ₋ = N₋ and Ñ admits λ(t, F^Ñ_{t−}) = f(S^Ñ_{t−}) as an intensity on (0, +∞), where (S^Ñ_{t−})_{t>0} is the predictable age process associated to Ñ. Applying (ii) ⇒ (i) to Ñ gives that the ISIs of Ñ are i.i.d. with density given by
ν̃(x) = ( ν(x) / ∫_x^∞ ν(y) dy ) exp( − ∫_0^x ν(y) / ∫_y^∞ ν(z) dz dy ),
for every 0 ≤ x < x₀ and ν̃(x) = 0 for x ≥ x₀. It is clear that ν = ν̃ since the function x ↦ ( 1 / ∫_x^∞ ν(y) dy ) exp( − ∫_0^x ν(y) / ∫_y^∞ ν(z) dz dy ) is differentiable with derivative equal to 0.
Since N and Ñ are renewal processes with the same density ν and the same first point T₀ = 0, they have the same distribution. Since the intensity characterizes a point process, N also admits λ(t, F^N_{t−}) = f(S_{t−}) as an intensity on (0, +∞). Moreover, since N is a renewal process, it is non-explosive in finite time and so (λ(t, F^N_{t−}))_{t>0} satisfies (A^{L¹,a.s.}_{λ,loc}).
B.2. Generalized Wold processes
In this section, we suppose that there exists k ≥ 0 such that the underlying point process N has intensity
λ(t, F^N_{t−}) = f(S_{t−}, A¹_t, ..., A^k_t),   (B.3)
where f is a function and the A^i's are defined by Equation (2.2).
B.2.1. Markovian property and the resulting PDE
Let N be a point process of intensity given by (B.3). If T_{−k} > −∞, its associated age process (S_t)_t can be defined for t > T_{−k}. Then let, for any integer i ≥ −k,
A_i = T_{i+1} − T_i = S_{T_{i+1}−}.   (B.4)
and denote (FiA )i≥−k the natural filtration associated to (Ai )i≥−k .
For any t ≥ 0, and point process Π on R2+ , let us denote Π≥t (resp. Π>t ) the
restriction to R2+ (resp. (0, +∞) × R+ ) of the point process Π shifted t time units
to the left on the first coordinate. That is, Π≥t (C × D) = Π((t + C) × D) for all
C ∈ B(R+ ), D ∈ B(R+ ) (resp. C ∈ B((0, +∞))).
Proposition B.2. Consider k a non-negative integer, f some non-negative function on R₊^{k+1} and N a generalized Wold process of intensity given by (B.3). Suppose that P₀ is such that P₀(T_{−k} > −∞) = 1 and that (λ(t, F^N_{t−}))_{t>0} satisfies (A^{L¹,a.s.}_{λ,loc}). Then,
(1) If (X_t)_{t≥0} = (S_{t−}, A¹_t, ..., A^k_t)_{t≥0}, then for any finite non-negative stopping time τ, (X^τ_t)_{t≥0} = (X_{t+τ})_{t≥0} is independent of F^N_{τ−} given X_τ.
(2) The process (A_i)_{i≥1} given by (B.4) forms a Markov chain of order k with transition measure given by
ν(dx, y₁, ..., y_k) = f(x, y₁, ..., y_k) exp( − ∫_0^x f(z, y₁, ..., y_k) dz ) dx.   (B.5)
If T₀ = 0 a.s., this holds for (A_i)_{i≥0}.
If f is continuous, then G, the infinitesimal generator of (X_t)_{t≥0}, is given by, for all φ ∈ C¹(R₊^{k+1}),
(Gφ)(s, a₁, ..., a_k) = (∂/∂s) φ(s, a₁, ..., a_k) + f(s, a₁, ..., a_k) (φ(0, s, a₁, ..., a_{k−1}) − φ(s, a₁, ..., a_k)).   (B.6)
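For intuition, the transition measure (B.5) can be sampled directly: draw E ~ Exp(1) and invert the cumulative hazard z ↦ ∫_0^z f(·, y₁, ..., y_k), as in the time-rescaling argument. The Python sketch below uses an illustrative hazard and step sizes of our own choosing (not from the paper) to simulate the ISI chain (A_i)_i for k = 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def next_isi(f, prev, dz=1e-3, z_max=50.0):
    """Draw A_i from nu(dx, y_1, ..., y_k) in (B.5): the first x at which the
    cumulative hazard int_0^x f(z, y_1, ..., y_k) dz reaches an Exp(1) level."""
    target = rng.exponential(1.0)
    acc, z = 0.0, 0.0
    while z < z_max:
        acc += f(z, *prev) * dz
        if acc >= target:
            return z
        z += dz
    return z_max  # truncation; only matters for very light hazards

# Hypothetical Wold hazard of order k = 1: the previous ISI modulates the rate.
f = lambda s, a1: 1.0 + 0.5 * np.exp(-a1) * np.exp(-s)
isis = [1.0]                                  # A_0, taken as given here
for _ in range(5):
    isis.append(next_isi(f, isis[-1:]))
print(isis)
```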
Proof. First, let us show the first point of the Proposition. Let Π be such that N is the process resulting from Ogata's thinning with Poisson measure Π. The existence of such a measure is ensured by Proposition B.12. We show that for any finite stopping time τ, the process (X^τ_t)_{t≥0} can be expressed as a function of X_τ and Π_{≥τ}, which is the restriction to R²₊ of the Poisson process Π shifted τ time units to the left on the first coordinate. Let e₁ = (1, 0, ..., 0) ∈ R^{k+1}. For all t ≥ 0, let Y_t = X_τ + t e₁ and define
R₀ = inf{ t ≥ 0, ∫_{[0,t]} ∫_{x=0}^{f(Y_w)} Π_{≥τ}(dw, dx) = 1 }.
Note that R₀ may be null, in particular when τ is a jumping time of the underlying point process N. It is easy to check that R₀ can be expressed as a measurable function of X_τ and Π_{≥τ}. Moreover, it is clear that X^τ_{t∧R₀} = Y_{t∧R₀} for all t ≥ 0.
So, R₀ can be seen as the delay until the first point of the underlying process N after time τ. Suppose that R_p, the delay until the (p+1)th point, is constructed for some p ≥ 0 and let us show how R_{p+1} can be constructed. For t ≥ R_p, let Z_t = θ(X^τ_{R_p}) + t e₁, where θ : (x₁, ..., x_{k+1}) ↦ (0, x₁, ..., x_k) is a right-shift operator modelling the dynamics described by (2.3). Let us define
R_{p+1} = inf{ t > R_p, ∫_{(R_p, R_p+t]} ∫_{x=0}^{f(Z_w)} Π_{≥τ}(dw, dx) = 1 }.   (B.7)
Note that for any p ≥ 0, R_{p+1} cannot be null. This is coherent with the fact that the counting process (N_t)_{t>0} only admits jumps of height 1. It is easy to check that R_{p+1} can be expressed as a measurable function of θ(X^τ_{R_p}) and Π_{>τ+R_p}. It is also clear that X^τ_{t∧R_{p+1}} = Z_{t∧R_{p+1}} for all t ≥ R_p. So, R_{p+1} can be seen as the delay until the (p+2)th point of the process N after time τ. By induction, X^τ_{R_p} can be expressed as a function of X_τ and Π_{≥τ}, and this holds for R_{p+1} and X^τ_{R_{p+1}} too.
To conclude, remark that the process (X^τ_t)_{t≥0} is a measurable function of X_τ and all the R_p's for p ≥ 0. Thanks to the independence properties of the Poisson measure Π, F^N_{τ−} is independent of Π_{≥τ}. Then, since (X^τ_t)_{t≥0} is a function of X_τ and Π_{≥τ}, (X^τ_t)_{t≥0} is independent of F^N_{τ−} given X_τ, which concludes the first point.
For Point (2), fix i ≥ 1 and apply Point (1) with τ = T_i. It appears that in this case, R₀ = 0 and R₁ = A_i. Moreover, R₁ = A_i can be expressed as a function of θ(X_τ) and Π_{>τ}. However, θ(X_τ) = (0, A_{i−1}, ..., A_{i−k}) and F^A_{i−1} ⊂ F^N_{T_i}. Since τ = T_i, Π_{>τ} is independent of F^N_{T_i} and so A_i is independent of F^A_{i−1} given (A_{i−1}, ..., A_{i−k}). That is, (A_i)_{i≥1} forms a Markov chain of order k.
Note that if T₀ = 0 a.s. (in particular it is non-negative), then one can use the previous argumentation with τ = 0 and conclude that the Markov chain starts one time step earlier, i.e. (A_i)_{i≥0} forms a Markov chain of order k.
For (B.5), R₁ = A_i, defined by (B.7), has the same distribution as the first point of a Poisson process with intensity λ(t) = f(t, A_{i−1}, ..., A_{i−k}), thanks to the thinning theorem. Hence, the transition measure of (A_i)_{i≥1} is given by (B.5).
Now that (X_t)_{t≥0} is Markovian, one can compute its infinitesimal generator. Suppose that f is continuous and let φ ∈ C¹_b(R₊^{k+1}). The generator of (X_t)_{t≥0} is defined by Gφ(s, a₁, ..., a_k) = lim_{h→0⁺} (P_h − Id)φ / h, where
P_h φ(s, a₁, ..., a_k) = E[φ(X_h) | X₀ = (s, a₁, ..., a_k)]
= E[φ(X_h) 1_{N([0,h])=0} | X₀ = (s, a₁, ..., a_k)] + E[φ(X_h) 1_{N([0,h])>0} | X₀ = (s, a₁, ..., a_k)] = E₀ + E_{>0}.
The case with no jump is easy to compute,
E₀ = φ(s + h, a₁, ..., a_k)(1 − f(s, a₁, ..., a_k) h) + o(h),   (B.8)
thanks to the continuity of f. When h is small, the probability of at least two jumps in [0, h] is a o(h), so the second case can be reduced to the case with
exactly one random jump (namely T),
E_{>0} = E[φ(X_h) 1_{N([0,h])=1} | X₀ = (s, a₁, ..., a_k)] + o(h)
= E[φ(θ(X₀ + T) + (h − T) e₁) 1_{N∩[0,h]={T}} | X₀ = (s, a₁, ..., a_k)] + o(h)
= E[(φ(0, s, a₁, ..., a_{k−1}) + o(1)) 1_{N∩[0,h]={T}} | X₀ = (s, a₁, ..., a_k)] + o(h)
= φ(0, s, a₁, ..., a_{k−1}) f(s, a₁, ..., a_k) h + o(h),   (B.9)
thanks to the continuity of φ and f. Gathering (B.8) and (B.9) with the definition of the generator gives (B.6).
B.2.2. Sketch of proof of Proposition 4.1
Let N be the point process constructed by Ogata's thinning of the Poisson process Π and U_k be as defined in Proposition 4.1. By an easy generalisation of Proposition 3.1, one can prove that on the event Ω of probability 1, where Ogata's thinning is well defined and where T₀ < 0, U_k satisfies (P_Fubini), (4.3) and, on R₊ × R₊^{k+1}, the following system in the weak sense:
(∂/∂t + ∂/∂s) U_k(dt, ds, da) + ( ∫_{x=0}^{f(s,a₁,...,a_k)} Π(dt, dx) ) U_k(t, ds, da) = 0,
∫_{a_k∈R} U_k(dt, 0, ds, da₁, ..., da_{k−1}) = ( ∫_{x=0}^{f(s,a₁,...,a_k)} Π(dt, dx) ) U_k(t, ds, da),
with da = da₁ × ... × da_k and initial condition U^in = δ_{(−T₀, A¹₀, ..., A^k₀)}.
Similarly to Proposition 3.2, one can also prove that for any test function ϕ in M_{c,b}(R₊^{k+2}), E[∫ ϕ(t, s, a) U_k(t, ds, da)] and E[∫ ϕ(t, s, a) U_k(dt, s, da)] are finite and one can define u_k(t, ds, da) and u_k(dt, s, da) by, for all ϕ in M_{c,b}(R₊^{k+2}),
∫ ϕ(t, s, a) u_k(t, ds, da) = E[ ∫ ϕ(t, s, a) U_k(t, ds, da) ], for all t ≥ 0, and
∫ ϕ(t, s, a) u_k(dt, s, da) = E[ ∫ ϕ(t, s, a) U_k(dt, s, da) ], for all s ≥ 0.
Moreover, u_k(t, ds, da) and u_k(dt, s, da) satisfy (P_Fubini) and one can define u_k(dt, ds, da) = u_k(t, ds, da)dt = u_k(dt, s, da)ds on R²₊, such that for any test function ϕ in M_{c,b}(R₊^{k+2}),
∫ ϕ(t, s, a) u_k(dt, ds, da) = E[ ∫ ϕ(t, s, a) U_k(dt, ds, da) ],
a quantity which is finite. The end of the proof is completely analogous to the one of Theorem 3.3.
B.3. Linear Hawkes processes
B.3.1. Cluster decomposition
Proposition B.3. Let g be a non-negative L¹_loc(R₊) function and h a non-negative L¹(R₊) function such that ‖h‖₁ < 1. Then the branching point process N is defined as ∪_{k=0}^∞ N_k, the set of all the points in all generations, constructed as follows:
• Ancestral points are N_anc, distributed as a Poisson process of intensity g; N₀ := N_anc can be seen as the points of generation 0.
• Conditionally to N_anc, each ancestor a ∈ N_anc gives birth, independently of anything else, to children points N_{1,a} according to a Poisson process of intensity h(· − a); N₁ = ∪_{a∈N_anc} N_{1,a} forms the first generation points.
Then the construction is recursive in k, the number of generations:
• Denoting N_k the set of points in generation k, then conditionally to N_k, each point x ∈ N_k gives birth, independently of anything else, to children points N_{k+1,x} according to a Poisson process of intensity h(· − x); N_{k+1} = ∪_{x∈N_k} N_{k+1,x} forms the points of the (k+1)th generation.
This construction ends almost surely in every finite interval. Moreover the intensity of N exists and is given by
λ(t, F^N_{t−}) = g(t) + ∫_0^{t−} h(t − x) N(dx).
This is the cluster representation of the Hawkes process. When g ≡ ν, this has been proved in 20. However, to our knowledge, this has not been written for a general function g.
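The branching construction of Proposition B.3 translates directly into a simulation recipe. The Python sketch below is our own illustrative code (the exponential kernel and the constants are arbitrary choices with ∫h = α/β < 1): it generates the ancestors and then each generation of children until extinction.

```python
import numpy as np

rng = np.random.default_rng(2)

def poisson_pts(rate_fun, bound, t_max):
    """Points of an inhomogeneous Poisson process on (0, t_max] with intensity
    rate_fun <= bound, obtained by thinning a homogeneous process of rate bound."""
    n = rng.poisson(bound * t_max)
    cand = np.sort(rng.uniform(0.0, t_max, size=n))
    keep = rng.uniform(0.0, bound, size=n) < rate_fun(cand)
    return cand[keep]

def hawkes_by_clusters(mu, alpha, beta, t_max):
    """Linear Hawkes process with g = mu and h(u) = alpha*exp(-beta*u), alpha/beta < 1,
    simulated generation by generation as in the branching construction."""
    h = lambda u: alpha * np.exp(-beta * u)
    gen = poisson_pts(lambda t: mu * np.ones_like(t), mu, t_max)   # ancestors
    points = list(gen)
    while len(gen) > 0:
        children = []
        for a in gen:                       # each point reproduces independently
            offspring = a + poisson_pts(h, alpha, t_max - a)
            children.extend(offspring.tolist())
        points.extend(children)
        gen = np.array(children)
    return np.sort(np.array(points))

print(hawkes_by_clusters(mu=1.0, alpha=0.5, beta=1.0, t_max=10.0)[:10])
```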
Proof. First, let us fix some A > 0. The process ends up almost surely in [0, A] because there is a.s. a finite number of ancestors in [0, A]: if we consider the family of points attached to one particular ancestor, the number of points in each generation forms a sub-critical Galton–Watson process with reproduction distribution a Poisson variable with mean ∫ h < 1, whose extinction is consequently almost sure.
Next, to prove that N has intensity
H(t) = g(t) + ∫_0^{t−} h(t − x) N(dx),
we exhibit a particular thinning construction where, on one hand, N is indeed a branching process by construction as defined by the proposition and which, on the other hand, guarantees that Ogata's thinning projects the points below H(t). We can always assume that h(0) = 0, since changing the intensity of a Poisson process in the branching structure at one particular point has no impact. Hence H(t) = g(t) + ∫_0^t h(t − x) N(dx).
The construction is recursive in the same way. Fix some realisation Π of a Poisson
process on R2+ .
June 9, 2015
36
0:43
WSPC/INSTRUCTION FILE
PDE˙Hawkes˙Marie11
J. Chevallier, M. Cáceres, M. Doumic, P. Reynaud-Bouret
For Nanc , project the points below the curve t → g(t) on [0, A]. By construction,
Nanc is a Poisson process of intensity g(t) on [0, A]. Note that for the identification
(see Theorem B.11) we just need to do it on finite intervals and that the ancestors
that may be born after time A do not have any descendants in [0, A], so we can
discard them, since they do not appear in H(t), for t ≤ A.
Enumerate the points in Nanc ∩ [0, A] from T1 to TN0,∞ .
• The children of T₁, N_{1,T₁}, are given by the projection of the points of Π whose ordinates are in the strip t ↦ (g(t), g(t) + h(t − T₁)]. As before, by the property of spatial independence of Π, this is a Poisson process of intensity h(· − T₁) conditionally to N_anc.
• Repeat until T_{N_{0,∞}}, where N_{1,T_{N_{0,∞}}} are given by the projection of the points of Π whose ordinates are in the strip t ↦ (g(t) + Σ_{i=1}^{N_{0,∞}−1} h(t − T_i), g(t) + Σ_{i=1}^{N_{0,∞}} h(t − T_i)]. As before, by the property of independence of Π, this is a Poisson process of intensity h(· − T_{N_{0,∞}}) conditionally to N_anc and, because the consecutive strips do not overlap, this process is completely independent of the previous processes (N_{1,T_i})'s that have been constructed.
Note that at the end of this first generation, N₁ = ∪_{T∈N_anc} N_{1,T} consists of the projection of the points of Π in the strip t ↦ (g(t), g(t) + Σ_{i=1}^{N_{0,∞}} h(t − T_i)]. They therefore form a Poisson process of intensity Σ_{i=1}^{N_{0,∞}} h(t − T_i) = ∫ h(t − u) N_anc(du), conditionally to N_anc.
For generation k+1, replace in the previous construction N_anc by N_k and g(t) by g(t) + Σ_{j=0}^{k−1} ∫ h(t − u) dN_j(u). Once again we end up, for each point x in N_k, with a process of children N_{k+1,x} which is a Poisson process of intensity h(t − x) conditionally to N_k and which is totally independent of the other N_{k+1,y}'s. Note also that, as before, N_{k+1} = ∪_{x∈N_k} N_{k+1,x} is a Poisson process of intensity ∫ h(t − u) N_k(du), conditionally to N₀, ..., N_k.
Hence we are indeed constructing a branching process as defined by the proposition. Because the underlying Galton–Watson process ends almost surely, as shown before, there exists a.s. one generation N_{k*} which is completely empty and our recursive construction ends too.
The main point is to realize that at the end the points in N = ∪_{k=0}^∞ N_k are exactly the projection of the points in Π that are below
t ↦ g(t) + Σ_{k=0}^∞ ∫ h(t − u) N_k(du) = g(t) + Σ_{k=0}^∞ ∫_0^t h(t − u) N_k(du),
hence below
t ↦ g(t) + ∫_0^t h(t − u) N(du) = H(t).
Moreover H(t) is F^N_t-predictable. Therefore, by Theorem B.11, N has intensity H(t), which concludes the proof.
A cluster process N_c is a branching process, as defined before, which admits intensity λ(t, F^{N_c}_{t−}) = h(t) + ∫_0^{t−} h(t − z) N_c(dz). Its distribution only depends on the function h. It corresponds to the family generated by one ancestor at time 0 in Proposition B.3. Therefore, by Proposition B.3, a Hawkes process with empty past (N₋ = ∅) of intensity λ(t, F^N_{t−}) = g(t) + ∫_0^{t−} h(t − z) N(dz) can always be seen as the union of N_anc and of all the a + N^a_c for a ∈ N_anc, where the N^a_c are i.i.d. cluster processes.
For a Hawkes process N with non-empty past N₋, this is more technical. Let N_anc be a Poisson process of intensity g on R₊ and (N^V_c)_{V∈N_anc} be a sequence of i.i.d. cluster processes associated to h. Let also
N_{>0} = N_anc ∪ ( ∪_{V∈N_anc} V + N^V_c ).   (B.10)
As we prove below, this represents the points in N that do not depend on N₋. The points that depend on N₋ are constructed as follows, independently of N_{>0}. Given N₋, let (N₁^T)_{T∈N₋} denote a sequence of independent Poisson processes with respective intensities λ_T(v) = h(v − T) 1_{(0,∞)}(v). Then, given N₋ and (N₁^T)_{T∈N₋}, let (N^{T,V}_c)_{V∈N₁^T, T∈N₋} be a sequence of i.i.d. cluster processes associated to h. The points depending on the past N₋ are given by the following formula, as proved in the next Proposition:
N_{≤0} = N₋ ∪ ∪_{T∈N₋} ( N₁^T ∪ ∪_{V∈N₁^T} V + N^{T,V}_c ).   (B.11)
Proposition B.4. Let N = N_{≤0} ∪ N_{>0}, where N_{>0} and N_{≤0} are given by (B.10) and (B.11). Then N is a linear Hawkes process with past given by N₋ and intensity on (0, ∞) given by λ(t, F^N_{t−}) = g(t) + ∫_{−∞}^{t−} h(t − x) N(dx), where g and h are as in Proposition B.3.
Proof. Proposition B.3 yields that N_{>0} has intensity
λ_{N_{>0}}(t, F^{N_{>0}}_{t−}) = g(t) + ∫_0^{t−} h(t − x) N_{>0}(dx),   (B.12)
and that, given N₋, for any T ∈ N₋, N^T_H = N₁^T ∪ ∪_{V∈N₁^T} (V + N^{T,V}_c) has intensity
λ_{N^T_H}(t, F^{N^T_H}_{t−}) = h(t − T) + ∫_0^{t−} h(t − x) N^T_H(dx).   (B.13)
Moreover, all these processes are independent given N₋. For any t ≥ 0, one can note that F^{N_{≤0}}_t ⊂ G_t := F^{N₋}₀ ∨ ∨_{T∈N₋} F^{N^T_H}_t, and so N_{≤0} has intensity
λ_{N_{≤0}}(t, G_{t−}) = Σ_{T∈N₋} λ_{N^T_H}(t, F^{N^T_H}_{t−}) = ∫_{−∞}^{t−} h(t − x) N_{≤0}(dx)   (B.14)
on (0, +∞). Since this last expression is F^{N_{≤0}}_t-predictable, by page 27 in 3, this is also λ_{N_{≤0}}(t, F^{N_{≤0}}_{t−}). Moreover, N_{≤0} and N_{>0} are independent by construction and, for any t ≥ 0, F^N_t ⊂ F^{N_{≤0}}_t ∨ F^{N_{>0}}_t. Hence, as before, N has intensity on (0, +∞) given by
λ(t, F^N_{t−}) = λ(t, F^{N_{≤0}}_{t−}) + λ(t, F^{N_{>0}}_{t−}) = g(t) + ∫_{−∞}^{t−} h(t − x) N(dx).
B.3.2. A general result for linear Hawkes processes
The following proposition is a consequence of Theorem 3.3 applied to Hawkes processes with general past N− .
Proposition B.5. Using the notations of Theorem 3.3, let N be a Hawkes process with past before 0 given by N₋ of distribution P₀ and with intensity on R₊ given by
λ(t, F^N_{t−}) = μ + ∫_{−∞}^{t−} h(t − x) N(dx),
where μ is a positive real number and h is a non-negative function with support in R₊ such that ∫ h < 1. Suppose that P₀ is such that
sup_{t≥0} E[ ∫_{−∞}^0 h(t − x) N₋(dx) ] < ∞.   (B.15)
Then, the mean measure u defined in Proposition 3.2 satisfies Theorem 3.3 and moreover its integral v(t, s) := ∫_s^∞ u(t, dσ) is a solution of the system (4.14)–(4.15), where v^in is the survival function of −T₀, and where Φ = Φ^{μ,h}_{P₀} is given by Φ^{μ,h}_{P₀} = Φ^{μ,h}₊ + Φ^{μ,h}_{−,P₀}, with Φ^{μ,h}₊ given by (4.17) and Φ^{μ,h}_{−,P₀} given by, for all s, t ≥ 0,
Φ^{μ,h}_{−,P₀}(t, s) = E[ ∫_{−∞}^{t−} h(t − z) N_{≤0}(dz) | N_{≤0}([t − s, t)) = 0 ].   (B.16)
Moreover, (4.20) holds.
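For reference, the conditional intensity appearing in Proposition B.5 is a simple functional of the past configuration. A minimal Python sketch (illustrative kernel and points, not taken from the paper):

```python
import numpy as np

def hawkes_intensity(t, points, mu, h):
    """lambda(t, F^N_{t-}) = mu + sum over points x of N with x < t of h(t - x)."""
    past = np.array([x for x in points if x < t])
    return mu + (h(t - past).sum() if past.size else 0.0)

h = lambda u: 0.6 * np.exp(-2.0 * u)      # ||h||_{L^1} = 0.3 < 1
pts = [-1.5, -0.2, 0.7, 1.4]              # points of N_- and of N on (0, infinity)
print(hawkes_intensity(2.0, pts, mu=1.0, h=h))
```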
B.3.3. Proof of the general result of Proposition B.5
Before proving Proposition B.5, we need some technical preliminaries.
Events of the type {St− ≥ s} are equivalent to the fact that the underlying
process has no point between t − s and t. Therefore, for any point process N and
any real numbers t, s ≥ 0, let
Et,s (N ) = {N ∩ [t − s, t) = ∅}.
(B.17)
Various sets Et,s (N ) are used in the sequel and the following lemma, whose proof is
obvious and therefore omitted, is applied several times to those sets.
June 9, 2015
0:43
WSPC/INSTRUCTION FILE
PDE˙Hawkes˙Marie11
Microscopic approach of a time elapsed neural model
39
Lemma B.6. Let Y be some random variable and I(Y) some countable set of indices depending on Y. Suppose that (X_i)_{i∈I(Y)} is a sequence of random variables which are independent conditionally on Y. Let A(Y) be some event depending on Y and, for all j ∈ I(Y), let B_j = B_j(Y, X_j) be some event depending on Y and X_j. Then, for any i ∈ I(Y), and for all sequences of measurable functions (f_i)_{i∈I(Y)} such that the following quantities exist,
E[ Σ_{i∈I(Y)} f_i(Y, X_i) | A#B ] = E[ Σ_{i∈I(Y)} E[f_i(Y, X_i) | Y, B_i] | A#B ],
where E[f_i(Y, X_i) | Y, B_i] = E[f_i(Y, X_i) 1_{B_i} | Y] / P(B_i | Y) and A#B = A(Y) ∩ ∩_{j∈I(Y)} B_j.
The following lemma is linked to Lemma 4.2.
Lemma B.7. Let N be a linear Hawkes process with no past before time 0 (i.e. N₋ = ∅) and intensity on (0, ∞) given by λ(t, F^N_{t−}) = g(t) + ∫_0^{t−} h(t − x) N(dx), where g and h are as in Proposition B.3, and let, for any x, s ≥ 0,
L^{g,h}_s(x) = E[ ∫_0^x h(x − z) N(dz) | E_{x,s}(N) ],
G^{g,h}_s(x) = P(E_{x,s}(N)).
Then, for any x, s ≥ 0,
L^{g,h}_s(x) = ∫_{s∧x}^x (h(z) + L^{h,h}_s(z)) G^{h,h}_s(z) g(x − z) dz,   (B.18)
and
log(G^{g,h}_s(x)) = ∫_0^{(x−s)∨0} G^{h,h}_s(x − z) g(z) dz − ∫_0^x g(z) dz.   (B.19)
In particular, (L^{h,h}_s, G^{h,h}_s) is in L¹ × L^∞ and is a solution of (4.11)–(4.12).
Proof. The statement only depends on the distribution of N. Hence, thanks to Proposition B.4, it is sufficient to consider N = N_anc ∪ ( ∪_{V∈N_anc} V + N^V_c ).
Let us show (B.18). First, let us write L^{g,h}_s(x) = E[ Σ_{X∈N} h(x − X) | E_{x,s}(N) ], and note that L^{g,h}_s(x) = 0 if x ≤ s. The following decomposition holds:
L^{g,h}_s(x) = E[ Σ_{V∈N_anc} ( h(x − V) + Σ_{W∈N^V_c} h(x − V − W) ) | E_{x,s}(N) ].
According to Lemma B.6 and the following decomposition,
E_{x,s}(N) = E_{x,s}(N_anc) ∩ ( ∩_{V∈N_anc} E_{x−V,s}(N^V_c) ),   (B.20)
let us denote Y = N_anc, X_V = N^V_c and B_V = E_{x−V,s}(N^V_c) for all V ∈ N_anc. Let us fix V ∈ N_anc and compute the conditional expectation of the inner sum with respect to the filtration of N_anc, which is
E[ Σ_{W∈N^V_c} h(x − V − W) | Y, B_V ] = E[ Σ_{W∈N_c} h((x − V) − W) | E_{x−V,s}(N_c) ] = L^{h,h}_s(x − V),   (B.21)
since, conditionally on N_anc, N^V_c has the same distribution as N_c, which is a linear Hawkes process with conditional intensity λ(t, F^{N_c}_{t−}) = h(t) + ∫_0^{t−} h(t − z) N_c(dz). Using the conditional independence of the cluster processes with respect to N_anc, one can apply Lemma B.6 and deduce that
L^{g,h}_s(x) = E[ Σ_{V∈N_anc} ( h(x − V) + L^{h,h}_s(x − V) ) | E_{x,s}(N) ].
The following argument is inspired by Møller 28. For every V ∈ N_anc, we say that V has mark 0 if V has no descendant nor itself in [x − s, x) and mark 1 otherwise. Let us denote N⁰_anc the set of points with mark 0 and N¹_anc = N_anc \ N⁰_anc. For any V ∈ N_anc, we have P(V ∈ N⁰_anc | N_anc) = G^{h,h}_s(x − V) 1_{[x−s,x)^c}(V), and all the marks are chosen independently given N_anc. Hence, N⁰_anc and N¹_anc are independent Poisson processes and the intensity of N⁰_anc is given by λ(v) = g(v) G^{h,h}_s(x − v) 1_{[x−s,x)^c}(v). Moreover, the event {N¹_anc = ∅} can be identified with E_{x,s}(N) and
L^{g,h}_s(x) = E[ Σ_{V∈N⁰_anc} ( h(x − V) + L^{h,h}_s(x − V) ) | N¹_anc = ∅ ]
= ∫_{−∞}^{x−} (h(x − w) + L^{h,h}_s(x − w)) g(w) G^{h,h}_s(x − w) 1_{[x−s,x)^c}(w) dw
= ∫_0^{(x−s)∨0} (h(x − w) + L^{h,h}_s(x − w)) G^{h,h}_s(x − w) g(w) dw,
where we used the independence between the two Poisson processes. It suffices to substitute w by z = x − w in the integral to get the desired formula. Since G^{h,h}_s is bounded, it is obvious that L^{h,h}_s is L¹.
Then, let us show (B.19). First note that if x < 0, G^{g,h}_s(x) = 1. Next, following (B.20) one has G^{g,h}_s(x) = E[ 1_{E_{x,s}(N_anc)} Π_{X∈N_anc} 1_{E_{x−X,s}(N^X_c)} ]. This is also
G^{g,h}_s(x) = E[ 1_{N_anc∩[x−s,x)=∅} Π_{V∈N_anc∩[x−s,x)^c} 1_{E_{x−V,s}(N^V_c)} ] = E[ 1_{N_anc∩[x−s,x)=∅} Π_{V∈N_anc∩[x−s,x)^c} G^{h,h}_s(x − V) ],
by conditioning with respect to N_anc. Since N_anc ∩ [x − s, x) is independent of N_anc ∩ [x − s, x)^c, this gives
G^{g,h}_s(x) = exp( − ∫_{x−s}^x g(z) dz ) E[ exp( ∫_{[x−s,x)^c} log(G^{h,h}_s(x − z)) N_anc(dz) ) ].
This leads to log(G^{g,h}_s(x)) = − ∫_{x−s}^x g(z) dz + ∫_{[x−s,x)^c} (G^{h,h}_s(x − z) − 1) g(z) dz, thanks to Campbell's theorem 23. Then, (B.19) clearly follows from the facts that if z > x > 0 then G^{h,h}_s(x − z) = 1, and that g(z) = 0 as soon as z < 0.
Proof of Lemma 4.2. In turn, we use a Banach fixed point argument to prove that for all s ≥ 0 there exists a unique couple (L_s, G_s) ∈ L¹(R₊) × L^∞(R₊) solution to these equations. To do so, let us first study Equation (4.11) and define T_{G,s} : L^∞(R₊) → L^∞(R₊) by
T_{G,s}(f)(x) := exp( ∫_0^{(x−s)∨0} f(x − z) h(z) dz − ∫_0^x h(z) dz ).
The right-hand side is well-defined since h ∈ L¹ and f ∈ L^∞. Moreover we have
T_{G,s}(f)(x) ≤ e^{‖f‖_{L^∞} ∫_0^{(x−s)∨0} h(z)dz − ∫_0^x h(z)dz} ≤ e^{(‖f‖_{L^∞} − 1) ∫_0^{(x−s)∨0} h(z)dz}.
This shows that T_{G,s} maps the ball of radius 1 of L^∞ into itself, and more precisely into the intersection of the positive cone and the ball. We distinguish two cases:
− If x < s, then T_{G,s}(f)(x) = exp(− ∫_0^x h(z) dz) for any f; thus the unique fixed point is given by G_s : x ↦ exp(− ∫_0^x h(z) dz), which does not depend on s > x.
− If x > s, the functional T_{G,s} is a k-contraction in {f ∈ L^∞(R₊), ‖f‖_{L^∞} ≤ 1}, with k ≤ ∫_0^∞ h(z) dz < 1, by convexity of the exponential. More precisely, using that for all x, y, |e^x − e^y| ≤ e^{max(x,y)} |x − y|, we end up with, for ‖f‖_{L^∞}, ‖g‖_{L^∞} ≤ 1,
T_{G,s}(f)(x) − T_{G,s}(g)(x) ≤ e^{− ∫_0^x h(z)dz} e^{∫_0^{x−s} h(z)dz} ‖f − g‖_{L^∞} ∫_0^{x−s} h(z) dz ≤ ‖f − g‖_{L^∞} ∫_{R₊} h(z) dz.
Hence there exists only one fixed point G_s, which we can identify with the G^{h,h}_s given in Lemma B.7; G^{h,h}_s being a probability, G_s takes values in [0, 1].
Analogously, we define the functional T_{L,s} : L¹(R₊) → L¹(R₊) by
T_{L,s}(f)(x) := ∫_{s∧x}^x (h(z) + f(z)) G_s(z) h(x − z) dz,
and it is easy to check that T_{L,s} is well-defined as well. We similarly distinguish the two cases:
− If x < s, then the unique fixed point is given by L_s(x) = 0.
− If x > s, then T_{L,s} is a k-contraction with k ≤ ∫_0^∞ h(y) dy < 1 in L¹((s, ∞)), since ‖G_s‖_{L^∞} ≤ 1:
‖T_{L,s}(f) − T_{L,s}(g)‖_{L¹} = ∫_s^∞ | ∫_s^x (f(z) − g(z)) G_s(z) h(x − z) dz | dx
≤ ‖G_s‖_{L^∞} ∫_s^∞ ∫_z^∞ |f(z) − g(z)| h(x − z) dx dz = ‖G_s‖_{L^∞} ‖f − g‖_{L¹((s,∞))} ∫_0^∞ h(y) dy.
In the same way, there exists only one fixed point L_s = L^{h,h}_s given by Lemma B.7. In particular, L_s(x) ≡ 0 for x ≤ s.
Finally, as a consequence of Equation (4.12), we find that if L_s is the unique fixed point of T_{L,s}, then ‖L_s‖_{L¹(R₊)} ≤ (∫_0^∞ h(y) dy)² / (1 − ∫_0^∞ h(y) dy), and therefore L_s is uniformly bounded in L¹ with respect to s.
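Since both maps are contractions, the fixed points (G_s, L_s) can be approximated by straightforward Picard iteration on a grid. The Python sketch below is only a numerical illustration of this argument (grid sizes and the exponential kernel are our own choices), not the proof itself.

```python
import numpy as np

def fixed_points(h, s, x_max=10.0, dx=0.02, n_iter=40):
    """Picard iteration for the contractions T_{G,s} and T_{L,s} of Lemma 4.2,
    discretised on a regular grid (a numerical sketch, not the paper's code)."""
    xs = np.arange(0.0, x_max, dx)
    hv = h(xs)
    G = np.ones_like(xs)
    for _ in range(n_iter):
        newG = np.empty_like(G)
        for i, x in enumerate(xs):
            m = int(max(x - s, 0.0) / dx)                # grid index of (x-s) v 0
            inner = np.sum(G[i - np.arange(m)] * hv[:m]) * dx if m > 0 else 0.0
            newG[i] = np.exp(inner - np.sum(hv[:i]) * dx)
        G = newG
    L = np.zeros_like(xs)
    for _ in range(n_iter):
        newL = np.zeros_like(L)
        for i, x in enumerate(xs):
            j0 = int(min(s, x) / dx)
            z = np.arange(j0, i)
            if z.size:
                newL[i] = np.sum((hv[z] + L[z]) * G[z] * hv[i - z]) * dx
        L = newL
    return xs, G, L

xs, G, L = fixed_points(lambda x: 0.4 * np.exp(-x), s=1.0)   # ||h||_1 = 0.4 < 1
print(G[::100], L[::100])
```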
Lemma B.8. Let N be a linear Hawkes process with past before time 0 given by N₋ and intensity on (0, ∞) given by λ(t, F^N_{t−}) = μ + ∫_{−∞}^{t−} h(t − x) N(dx), where μ is a positive real number and h is a non-negative function with support in R₊, such that ‖h‖_{L¹} < 1. If the distribution of N₋ satisfies (B.15), then (A^{L¹,exp}_{λ,loc}) is satisfied.
)
. By iProposition B.4, λ(t) =
λ(t)
=
E
λ(t,
F
Proof.
For
all
t
>
0,
let
t−
h
i
hR
R t−
t−
E µ + 0 h(t − x)N>0 (dx) + E −∞ h(t − x)N≤0 (dx) which is possibly infinite.
Let us apply Proposition B.7 with g ≡ µ and s = 0, the choice s = 0 implying
that Et,0 (N>0 ) is of probability 1. Therefore
t−
Z
E µ+
Z t
h(t − x)N>0 (dx) = µ 1 +
(h(x) + L0 (x))dx ,
0
0
where (L0 , G0 = 1) is the solution
of Lemma 4.2 fori s = 0, by identification of
h
R t−
Proposition B.7. Hence E µ + 0 h(t − x)N>0 (dx) ≤ µ(1 + ||h||L1 + ||L0 ||L1 ).
On the other hand, thanks to Lemma B.9, we have
Z
t−
E
−∞
Z t
X
h(t − x)N≤0 (dx) = E
h(t − T ) +
[h(t − x) + L0 (t − x)] h(x − T )dx .
0
T ∈N−
Since all the quantities are non negative, one can exchange all the integrals and
deduce that
Z
t−
h(t − x)N≤0 (dx) ≤ M (1 + ||h||L1 + ||L0 ||L1 ),
E
−∞
with M = supt≥0 E
hR
0
−∞
i
h(t − x)N− (dx) which is finite by assumption. Hence,
1
,exp
λ(t) ≤ (µ + M )(1 + ||h||L1 + ||L0 ||L1 ), and therefore (ALλ,loc
) is satisfied.
Proof of Proposition B.5. First, by Proposition B.4,
E[λ(t, F^N_{t−}) | S_{t−} ≥ s] = μ + E[ ∫_0^{t−} h(t − z) N_{>0}(dz) | E_{t,s}(N) ] + E[ ∫_{−∞}^{t−} h(t − z) N_{≤0}(dz) | E_{t,s}(N) ]
= μ + E[ ∫_0^{t−} h(t − z) N_{>0}(dz) | E_{t,s}(N_{>0}) ] + E[ ∫_{−∞}^{t−} h(t − z) N_{≤0}(dz) | E_{t,s}(N_{≤0}) ].
By Lemma B.7, we obtain E[λ(t, F^N_{t−}) | S_{t−} ≥ s] = μ + L^{μ,h}_s(t) + Φ^h_{−,P₀}(t, s). Identifying by Lemma 4.2, L_s = L^{h,h}_s and G_s = G^{h,h}_s, we obtain
E[λ(t, F^N_{t−}) | S_{t−} ≥ s] = Φ^{μ,h}₊(t, s) + Φ^h_{−,P₀}(t, s).
Hence Φ^{μ,h}_{P₀}(t, s) = E[λ(t, F^N_{t−}) | S_{t−} ≥ s].
Lemma B.8 ensures that the assumptions of Theorem 3.3 are fulfilled. Let u and ρ^{μ,h}_{P₀} = ρ_{λ,P₀} be defined accordingly as in Theorem 3.3. With respect to the PDE system, there are two possibilities to express E[λ(t, F^N_{t−}) 1_{S_{t−}≥s}]. The first one involves ρ_{λ,P₀} and is E[ρ^{μ,h}_{P₀}(t, S_{t−}) 1_{S_{t−}≥s}], whereas the second one involves Φ^{μ,h}_{P₀} and is Φ^{μ,h}_{P₀}(t, s) P(S_{t−} ≥ s).
This leads to ∫_s^{+∞} ρ^{μ,h}_{P₀}(t, x) u(t, dx) = Φ^{μ,h}_{P₀}(t, s) ∫_s^{+∞} u(t, dx), since u(t, ds) is the distribution of S_{t−}. Let us denote v(t, s) = ∫_s^{+∞} u(t, dx): this relation, together with Equation (3.10) for u, immediately gives us that v satisfies Equation (4.14) with Φ = Φ^{μ,h}_{P₀}. Moreover, ∫_0^{+∞} u(t, dx) = 1, which gives us the boundary condition in (4.15).
B.3.4. Study of the general case for Φh−,P0 in Proposition B.5
Lemma B.9. Consider h a non-negative function with support in R₊ such that ∫ h < 1, N₋ a point process on R₋ with distribution P₀ and N_{≤0} defined by (B.11). If Φ^h_{−,P₀}(t, s) := E[ ∫_{−∞}^{t−} h(t − z) N_{≤0}(dz) | E_{t,s}(N_{≤0}) ], for all s, t ≥ 0, then
Φ^h_{−,P₀}(t, s) = E[ Σ_{T∈N₋} (h(t − T) + K_s(t, T)) | E_{t,s}(N_{≤0}) ],   (B.22)
where K_s(t, u) is given by (4.13).
Proof. Following the decomposition given in Proposition B.4, one has
Φ^h_{−,P₀}(t, s) = E[ Σ_{T∈N₋} ( h(t − T) + Σ_{V∈N₁^T} ( h(t − V) + Σ_{W∈N^{T,V}_c} h(t − V − W) ) ) | E_{t,s}(N_{≤0}) ],
where E_{t,s}(N_{≤0}) = E_{t,s}(N₋) ∩ ∩_{T'∈N₋} ( E_{t,s}(N₁^{T'}) ∩ ∩_{V'∈N₁^{T'}} E_{t−V',s}(N^{T',V'}_c) ). Let us fix T ∈ N₋, V ∈ N₁^T and compute the conditional expectation of the inner sum with respect to N₋ and N₁^T. In the same way as for (B.21), we end up with
E[ Σ_{W∈N^{T,V}_c} h(t − V − W) | N₋, N₁^T, E_{t−V,s}(N^{T,V}_c) ] = L^{h,h}_s(t − V),
since, conditionally on N₋ and N₁^T, N^{T,V}_c has the same distribution as N_c. Using the conditional independence of the cluster processes (N^{T,V}_c)_{V∈N₁^T} with respect to N₋, (N₁^T)_{T∈N₋}, one can apply Lemma B.6 with Y = (N₋, (N₁^T)_{T∈N₋}) and X_{(T,V)} = N^{T,V}_c and deduce that
Φ^h_{−,P₀}(t, s) = E[ Σ_{T∈N₋} ( h(t − T) + Σ_{V∈N₁^T} ( h(t − V) + L^{h,h}_s(t − V) ) ) | E_{t,s}(N_{≤0}) ].
Let us fix T ∈ N₋ and compute the conditional expectation of the inner sum with respect to N₋, which is
Γ := E[ Σ_{V∈N₁^T} ( h(t − V) + L^{h,h}_s(t − V) ) | N₋, A^T_{t,s} ],   (B.23)
where A^T_{t,s} = E_{t,s}(N₁^T) ∩ ∩_{V'∈N₁^T} E_{t−V',s}(N^{T,V'}_c). For every V ∈ N₁^T, we say that V has mark 0 if V has no descendant nor itself in [t − s, t) and mark 1 otherwise. Let us denote N₁^{T,0} the set of points with mark 0 and N₁^{T,1} = N₁^T \ N₁^{T,0}.
For any V ∈ N₁^T, P(V ∈ N₁^{T,0} | N₁^T) = G^{h,h}_s(t − V) 1_{[t−s,t)^c}(V) and all the marks are chosen independently given N₁^T. Hence, N₁^{T,0} and N₁^{T,1} are independent Poisson processes and the intensity of N₁^{T,0} is given by λ(v) = h(v − T) 1_{[0,∞)}(v) G^{h,h}_s(t − v) 1_{[t−s,t)^c}(v). Moreover, A^T_{t,s} is the event {N₁^{T,1} = ∅}, so
Γ = E[ Σ_{V∈N₁^{T,0}} ( h(t − V) + L^{h,h}_s(t − V) ) | N₋, N₁^{T,1} = ∅ ]
= ∫_{−∞}^{t−} (h(t − v) + L^{h,h}_s(t − v)) h(v − T) 1_{[0,∞)}(v) G^{h,h}_s(t − v) 1_{[t−s,t)^c}(v) dv = K_s(t, T).
Using the independence of the cluster processes, one can apply Lemma B.6 with Y = N₋ and X_T = (N₁^T, (N^{T,V}_c)_{V∈N₁^T}), and (B.22) clearly follows.
Lemma B.10. Under the assumptions and notations of Proposition B.5 and
Lemma 4.2, the function Φh−,P0 of Proposition B.5 can be identified with (4.18)
under (A1N− ) and with (4.19) under (A2N− ) and (B.15) is satisfied in those two
cases.
Proof. Using Lemma B.9, we have Φ^h_{−,P₀}(t, s) = E[ Σ_{T∈N₋} (h(t − T) + K_s(t, T)) | E_{t,s}(N_{≤0}) ].
Under (A¹_{N₋}). On the one hand, for every t ≥ 0,
E[ ∫_{−∞}^0 h(t − x) N₋(dx) ] = E[h(t − T₀)] = ∫_{−∞}^0 h(t − t₀) f₀(t₀) dt₀ ≤ ‖f₀‖_{L^∞} ∫_0^∞ h(y) dy,
hence P₀ satisfies (B.15). On the other hand, since N₋ is reduced to one point T₀,
Φ^h_{−,P₀}(t, s) = (1 / P(E_{t,s}(N_{≤0}))) E[ (h(t − T₀) + K_s(t, T₀)) 1_{E_{t,s}(N_{≤0})} ],
using the definition of the conditional expectation. First, we compute P(E_{t,s}(N_{≤0}) | T₀). To do so, we use the decomposition E_{t,s}(N_{≤0}) = {T₀ < t − s} ∩ E_{t,s}(N₁^{T₀}) ∩ ∩_{V∈N₁^{T₀}} E_{t−V,s}(N^{T₀,V}_c) and the fact that, conditionally on N₁^{T₀}, for all V ∈ N₁^{T₀}, N^{T₀,V}_c has the same distribution as N_c, to deduce that
E[ 1_{E_{t,s}(N_{≤0})} | T₀ ] = 1_{T₀<t−s} E[ 1_{E_{t,s}(N₁^{T₀})} | T₀ ] E[ Π_{V∈N₁^{T₀}∩[t−s,t)^c} G_s(t − V) | T₀ ],
because the event E_{t,s}(N₁^{T₀}) involves N₁^{T₀} ∩ [t − s, t) whereas the product involves N₁^{T₀} ∩ [t − s, t)^c, both of those processes being two independent Poisson processes. Their respective intensities are λ(x) = h(x − T₀) 1_{[(t−s)∨0,t)}(x) and λ(x) = h(x − T₀) 1_{[0,(t−s)∨0)}(x), so we end up with
E[ 1_{E_{t,s}(N₁^{T₀})} | T₀ ] = exp( − ∫_{t−s}^t h(x − T₀) 1_{[0,∞)}(x) dx ),
E[ Π_{V∈N₁^{T₀}∩[t−s,t)^c} G_s(t − V) | T₀ ] = exp( − ∫_0^{(t−s)∨0} [1 − G_s(t − x)] h(x − T₀) dx ).
The product of these two last quantities is exactly q(t, s, T₀) given by (4.13). Note that q(t, s, T₀) is exactly the probability that T₀ has no descendant in [t − s, t) given T₀. Hence, P(E_{t,s}(N_{≤0})) = ∫_{−∞}^{0∧(t−s)} q(t, s, t₀) f₀(t₀) dt₀ and (4.18) clearly follows.
Under AN− . On the one hand, for any t ≥ 0,
Z
0
E
−∞
Z
h(t − x)N− (dx) = E
0
−∞
Z
h(t − x)αdx ≤ α
∞
h(y)dy,
0
hence P0 satisfies (B.15). On the other hand, since we are dealing with a Poisson
process, we can use the same argumentation of marked Poisson processes as in the
proof of Lemma B.7. For every T ∈ N− , we say that T has mark 0 if T has no
0
descendant or himself in [t − s, t) and mark 1 otherwise. Let us denote N−
the set
1
0
of points with mark 0 and N− = N− \ N− . For any T ∈ N− , we have
0
P T ∈ N−
N− = q(t, s, T )1[t−s,t)c (T ),
and all the marks are chosen independently given N₋. Hence, N⁰₋ and N¹₋ are independent Poisson processes and the intensity of N⁰₋ is given by
λ(z) = α 1_{z≤0} q(t, s, z) 1_{[t−s,t)^c}(z).
Moreover, E_{t,s}(N_{≤0}) = {N¹₋ = ∅}. Hence,
Φ^h_{−,P₀}(t, s) = E[ Σ_{T∈N⁰₋} (h(t − T) + K_s(t, T)) | N¹₋ = ∅ ],
which gives (4.19) thanks to the independence of N⁰₋ and N¹₋.
B.3.5. Proof of Propositions 4.3 and 4.4
Since we already proved Proposition B.5 and Lemma B.10, to obtain Proposition 4.3 it only remains to prove that Φ^{μ,h}_{P₀} ∈ L^∞(R²₊), to ensure uniqueness of the solution by Remark 4.1. To do so, it is easy to see that the assumption h ∈ L^∞(R₊), combined with Lemma 4.2 giving that G_s ∈ [0, 1] and L_s ∈ L¹(R₊), ensures that Φ^{μ,h}₊, q and K_s are in L^∞(R₊). In turn, this implies that Φ^h_{−,P₀} in both (4.18) and (4.19) is in L^∞(R₊), which concludes the proof of Proposition 4.3.
Proof of Proposition 4.4. The method of characteristics leads us to rewrite the solution v of (4.14)–(4.15) by defining f^in ≡ v^in on R₊, f^in ≡ 1 on R₋, such that
v(t, s) = f^in(s − t) e^{− ∫_{(t−s)∨0}^{t} Φ(y, s−t+y) dy}, when s ≥ t,
v(t, s) = f^in(s − t) e^{− ∫_{(s−t)∨0}^{s} Φ(y+t−s, y) dy}, when t ≥ s.   (B.24)
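Formula (B.24) gives v pointwise once Φ is known, by integrating along characteristics. A small Python sketch evaluating it (the rate Φ and the initial profile f^in below are hypothetical placeholders chosen only to make the snippet runnable):

```python
import numpy as np

def v_characteristics(t, s, f_in, Phi, n=2000):
    """Evaluate (B.24) along the characteristic through (t, s) by a trapezoidal rule."""
    if s >= t:
        ys = np.linspace(max(t - s, 0.0), t, n)
        integrand = Phi(ys, s - t + ys)
    else:
        ys = np.linspace(max(s - t, 0.0), s, n)
        integrand = Phi(ys + t - s, ys)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ys))
    return f_in(s - t) * np.exp(-integral)

f_in = lambda x: 1.0 if x < 0 else np.exp(-max(x, 0.0))   # f_in = 1 on R-, survival-like on R+
Phi = lambda t, s: 1.0 + 0.2 * np.exp(-s)                  # a hypothetical bounded rate
print(v_characteristics(2.0, 0.5, f_in, Phi), v_characteristics(0.5, 2.0, f_in, Phi))
```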
Let P₀^M be the distribution of the past given by (A¹_{N₋}) and T₀ ∼ U([−M−1, −M]). By Proposition 4.3, let v_M be the solution of System (4.14)–(4.15) with Φ = Φ^{μ,h}_{P₀^M} and v^in = v^in_M (i.e. the survival function of a uniform variable on [−M−1, −M]). Let also v^∞_M be the solution of System (4.14)–(4.15) with Φ = Φ^{μ,h}_{P₀^M} and v^in ≡ 1, and v^∞ the solution of (4.21)–(4.22). Then
‖v_M − v^∞‖_{L^∞((0,T)×(0,S))} ≤ ‖v_M − v^∞_M‖_{L^∞((0,T)×(0,S))} + ‖v^∞_M − v^∞‖_{L^∞((0,T)×(0,S))}.
By definition of v^in_M, it is clear that v^in_M(s) = 1 for s ≤ M, so that Formula (B.24) implies that v_M(t, s) = v^∞_M(t, s) as soon as s − t ≤ M, and so ‖v_M − v^∞_M‖_{L^∞((0,T)×(0,S))} = 0 as soon as M ≥ S.
To evaluate the distance ‖v^∞_M − v^∞‖_{L^∞((0,T)×(0,S))}, it remains to prove that e^{− ∫_0^t Φ^h_{−,P₀^M}(y, s−t+y) dy} → 1 uniformly on (0, T) × (0, S) for any T > 0, S > 0. For this, it suffices to prove that Φ^h_{−,P₀^M}(t, s) → 0 uniformly on (0, T) × (0, S). Since q given by (4.13) takes values in [exp(−2‖h‖_{L¹}), 1], (4.18) implies
Φ^h_{−,P₀^M}(t, s) ≤ ( ∫_{−∞}^{0∧(t−s)} (h(t − t₀) + K_s(t, t₀)) 1_{[−M−1,−M]}(t₀) dt₀ ) / ( ∫_{−∞}^{0∧(t−s)} exp(−2‖h‖_{L¹}) 1_{[−M−1,−M]}(t₀) dt₀ ).
Since ‖G_s‖_{L^∞} ≤ 1 and L_s and h are non-negative, it is clear that
K_s(t, t₀) ≤ ∫_0^{+∞} [h(t − x) + L_s(t − x)] h(x − t₀) dx,
and so
∫_{−M−1}^{−M} K_s(t, t₀) dt₀ ≤ ∫_0^{+∞} [h(t − x) + L_s(t − x)] ( ∫_{−M−1}^{−M} h(x − t₀) dt₀ ) dx
≤ ( ∫_M^∞ h(y) dy ) ∫_0^{+∞} [h(t − x) + L_s(t − x)] dx ≤ ∫_M^∞ h(y) dy [‖h‖_{L¹} + ‖L_s‖_{L¹}].
Hence, for M large enough, Φ^h_{−,P₀^M}(t, s) ≤ ∫_M^∞ h(y) dy [‖h‖_{L¹} + ‖L_s‖_{L¹}] / exp(−2‖h‖_{L¹}) → 0, uniformly in (t, s), since L_s is uniformly bounded in L¹, which concludes the proof.
B.4. Thinning
The demonstration of Ogata’s thinning algorithm uses a generalization of point
processes, namely the marked point processes. However, only the basic properties
of simple and marked point processes are needed (see 3 for a good overview of point
processes theory). Here (Ft )t>0 denotes a general filtration such that FtN ⊂ Ft for
all t > 0, and not necessarily the natural one, i.e. (FtN )t>0 .
Theorem B.11. Let Π be an (F_t)-Poisson process with intensity 1 on R²₊. Let λ(t, F_{t−}) be a non-negative (F_t)-predictable process which is L¹_loc a.s. and define the point process N by N(C) = ∫_{C×R₊} 1_{[0,λ(t,F_{t−})]}(z) Π(dt × dz), for all C ∈ B(R₊). Then N admits λ(t, F_{t−}) as an (F_t)-predictable intensity. Moreover, if λ is in fact F^N_t-predictable, i.e. λ(t, F_{t−}) = λ(t, F^N_{t−}), then N admits λ(t, F^N_{t−}) as an F^N_t-predictable intensity.
Proof. The goal is to apply the martingale characterization of the intensity (Chapter II, Theorem 9 in 3). We cannot consider Π as a point process on R₊ marked in R₊ (in particular, the point with the smallest abscissa cannot be defined). However, for every k ∈ N, we can define Π^(k), the restriction of Π to the points with ordinate smaller than k, by Π^(k)(C) = ∫_C Π(dt × dz) for all C ∈ B(R₊ × [0, k]). Then Π^(k) can be seen as a point process on R₊ marked in E_k := [0, k] with intensity kernel 1·dz with respect to (F_t). In the same way, we define N^(k) by
N^(k)(C) = ∫_{C×R₊} 1_{z∈[0,λ(t,F_{t−})]} Π^(k)(dt × dz) for all C ∈ B(R₊).
Let P(F_t) be the predictable σ-algebra (see page 8 of 3). Let us denote ℰ_k = B([0, k]) and P̃_k(F_t) = P(F_t) ⊗ ℰ_k the associated marked predictable σ-algebra.
For any fixed z in E, {(u, ω) ∈ R₊ × Ω such that λ(u, F_{u−})(ω) ≥ z} ∈ P(F_t) since λ is predictable. If Γ_k = {(u, ω, z) ∈ R₊ × Ω × E_k, λ(u, F_{u−})(ω) ≥ z}, then
Γ_k = ∩_{n∈N*} ∪_{q∈Q₊} {(u, ω) ∈ R₊ × Ω, λ(u, F_{u−})(ω) ≥ q} × ( [0, q + 1/n) ∩ E_k ).
So, Γ_k ∈ P̃_k(F_t) and 1_{z∈[0,λ(u,F_{u−})]∩E_k} is P̃_k(F_t)-measurable. Hence, one can apply the Integration Theorem (Chapter VIII, Corollary 4 in 3). So,
(X_t)_{t≥0} := ( ∫_0^t ∫_{E_k} 1_{z∈[0,λ(u,F_{u−})]} M̄^(k)(du × dz) )_{t≥0} is an (F_t)-local martingale,
where M̄^(k)(du × dz) = Π^(k)(du × dz) − dz du. In fact,
X_t = N^(k)_t − ∫_0^t min(λ(u, F_{u−}), k) du.
Yet, N^(k)_t (respectively ∫_0^t min(λ(u, F_{u−}), k) du) is non-decreasingly converging towards N_t (resp. ∫_0^t λ(u, F_{u−}) du). Both of the limits are finite a.s. thanks to the local integrability of the intensity (see page 27 of 3). Thanks to monotone convergence we deduce that (N_t − ∫_0^t λ(u, F_{u−}) du)_{t≥0} is an (F_t)-local martingale. Then, thanks to the martingale characterization of the intensity, N admits λ(t, F_{t−}) as an (F_t)-intensity. The last point of the Theorem is a reduction of the filtration. Since λ(t, F_{t−}) = λ(t, F^N_{t−}), it is a fortiori F^N_t-progressive and the desired result follows (see page 27 in 3).
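Theorem B.11 is the rigorous backbone of Ogata's thinning algorithm: keep the points of the dominating Poisson measure that fall below the (predictable) intensity. A minimal Python sketch of the algorithm for a bounded intensity (the example intensity is ours and is capped so that the bound is valid):

```python
import numpy as np

rng = np.random.default_rng(3)

def ogata_thinning(intensity, bound, t_max):
    """Simulate a point process with conditional intensity lambda(t, past) <= bound on (0, t_max]
    by keeping the candidates of a rate-`bound` Poisson process that fall below the intensity."""
    points, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / bound)          # next candidate abscissa
        if t > t_max:
            return np.array(points)
        if rng.uniform(0.0, bound) <= intensity(t, points):
            points.append(t)                       # candidate lies below the intensity: keep it

# Hypothetical example: a self-exciting intensity capped at `bound`.
def lam(t, past):
    return min(1.0 + 0.5 * sum(np.exp(-(t - x)) for x in past), 5.0)

print(ogata_thinning(lam, bound=5.0, t_max=10.0))
```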
This final result can be found in 4.
Proposition B.12 (Inversion Theorem). Let N = {T_n}_{n>0} be a non-explosive point process on R₊ with F^N_t-predictable intensity λ_t = λ(t, F^N_{t−}). Let {U_n}_{n>0} be a sequence of i.i.d. random variables with uniform distribution on [0, 1]. Moreover, suppose that they are independent of F^N_∞. Denote G_t = σ(U_n, T_n ≤ t). Let N̂ be a homogeneous Poisson process with intensity 1 on R²₊, independent of F_∞ ∨ G_∞. Define a point process N̄ on R²₊ by
N̄((a, b] × A) = Σ_{n>0} 1_{(a,b]}(T_n) 1_A(U_n λ(T_n, F^N_{T_n−})) + ∫_{(a,b]} ∫_{A−[0,λ(t,F^N_{t−})]} N̂(dt × dz)
for every 0 ≤ a < b and A ⊂ R₊. Then, N̄ is a homogeneous Poisson process on R²₊ with intensity 1 with respect to the filtration (H_t)_{t≥0} = (F_t ∨ G_t ∨ F^{N̂}_t)_{t≥0}.
References
1. M. Bossy, N. Champagnat, et al. Markov processes and parabolic partial differential
equations. Encyclopedia of Quantitative Finance, pages 1142–1159, 2010.
2. O. Boxma, D. Perry, W. Stadje, and S. Zacks. A markovian growth-collapse model.
Advances in applied probability, pages 221–243, 2006.
3. P. Brémaud. Point processes and queues. Springer-Verlag, New York, 1981. Martingale
dynamics, Springer Series in Statistics.
4. P. Brémaud and L. Massoulié. Stability of nonlinear Hawkes processes. The Annals of
Probability, 24(3):1563–1588, 1996.
5. D.R. Brillinger, H.L. Bryant, and J.P. Segundo. Identification of synaptic interactions.
Biol. Cybernetics, 22:213–228, 1976.
6. E.N. Brown, R. Barbieri, V. Ventura, R.E. Kass, and L.M. Frank. The time rescaling theorem and its application to neural spike train analysis. Neural Computation,
14(2):325–346, 2002.
7. N. Brunel. Dynamics of sparsely connected networks of excitatory and inhibitory
spiking neurons. Journal of Computational Neuroscience, 8:183–208, 2000.
8. N. Brunel and V. Hakim. Fast global oscillations in networks of integrate-and-fire
neurons with low firing rates. Neural Computation, 11:1621–1671, 1999.
9. M. J. Cáceres, J. A. Carrillo, and B. Perthame. Analysis of nonlinear noisy integrate & fire neuron models: blow-up and steady states. The Journal of Mathematical
Neuroscience, 1(1):1–33, 2011.
10. M. J. Cáceres, J. A. Carrillo, and L. Tao. A numerical solver for a nonlinear
fokker-planck equation representation of neuronal network dynamics. J. Comp. Phys.,
230:1084–1099, 2011.
11. M. J Cáceres and B. Perthame. Beyond blow-up in excitatory integrate and fire neuronal networks: refractory period and spontaneous activity. Journal of theoretical biology, 350:81–89, 2014.
12. José A. Cañizo, José A. Carrillo, and Sílvia Cuadrado. Measure solutions for some
models in population dynamics. Acta Appl. Math., 123:141–156, 2013.
13. E.S. Chornoboy, L.P. Schramm, and A.F. Karr. Maximum likelihood identification of
neural point process systems. Biol. Cybernetics, 59:265–275, 1988.
14. B. Cloez. Limit theorems for some branching measure-valued processes.
arXiv:1106.0660v2, 2012.
15. A. Compte, N. Brunel, P. S. Goldman-Rakic, and X.-J. Wang. Synaptic mechanisms
and network dynamics underlying spatial working memory in a cortical network model.
Cerebral Cortex 10, 10:910–923, 2000.
16. D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes, volume 2. Springer, 1988.
17. F. Delarue, J. Inglis, S. Rubenthaler, and E Tanré. Global solvability of a networked
integrate-and-fire model of McKean-Vlasov type. Annals of Applied Probability, 2012.
to appear.
18. S. Ditlevsen and A. Samson. Estimation in the partially observed stochastic Morris-Lecar neuronal model with particle filter and stochastic approximation methods. Annals of Applied Statistics, to appear.
19. M. Doumic, M. Hoffmann, N. Krell, and L. Robert. Statistical estimation of a growth-fragmentation model observed on a genealogical tree. Bernoulli, in press, 2014.
20. A. G. Hawkes and D. Oakes. A cluster process representation of a self-exciting process.
Journal of Applied Probability, 11(3):493–503, 1974.
21. P. Jahn, R. W. Berg, J. Hounsgaard, and S. Ditlevsen. Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process. J. Comput. Neurosci.,
31:563–579, 2011.
22. P. Jahn, R.W. Berg, J. Hounsgaard, and S. Ditlevsen. Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process. Journal of Computational
Neuroscience, 31:563–579, 2011.
23. J.F.C. Kingman. Poisson processes. Oxford Science Publications, 1993.
24. C.D. Lai. An example of Wold’s Point Processes with Markov-Dependent Intervals.
Journal of Applied Probability, 15(4):748–758, 1978.
25. P. A. W. Lewis and G. S. Shedler. Simulation of nonhomogeneous Poisson processes
by thinning. Naval Research Logistics Quarterly, 26(3):403–413, 1979.
26. M. Mattia and P. Del Giudice. Population dynamics of interacting spiking neurons.
Phys. Rev. E, 66:051917, 2002.
27. P. Mattila. Geometry of sets and measures in Euclidean spaces: fractals and rectifiability. Number 44 in Cambridge studies in advanced mathematics. Cambridge University
Press, 1999.
28. J. Møller and J. G. Rasmussen. Perfect simulation of Hawkes processes. Advances in
Applied Probability, pages 629–646, 2005.
29. Y. Ogata. On Lewis’ simulation method for point processes. IEEE Transactions on
Information Theory, 27(1):23–31, 1981.
30. A. Omurtag, B. W. Knight, and L. Sirovich. On the simulation of large populations of
neurons. J. Comp. Neurosci., 8:51–63, 2000.
31. K. Pakdaman, B. Perthame, and D. Salort. Dynamics of a structured neuron population. Nonlinearity, 23(1):55–75, 2010.
32. K. Pakdaman, B. Perthame, and D. Salort. Relaxation and self-sustained oscillations
in the time elapsed neuron network model. SIAM Journal on Applied Mathematics,
73(3):1260–1279, 2013.
33. K. Pakdaman, B. Perthame, and D. Salort. Adaptation and fatigue model for neuron networks and large time asymptotics in a nonlinear fragmentation equation. The
Journal of Mathematical Neuroscience, 4(14):1–26, 2014.
34. B. Perthame. Transport equations in biology. Frontiers in Mathematics. Birkhäuser
Verlag, Basel, 2007.
35. J. Pham, K. Pakdaman, J. Champagnat, and J.-F. Vibert. Activity in sparsely connected excitatory neural networks: effect of connectivity. Neural Networks, 11(3):415–
434, 1998.
36. J.W. Pillow, J. Shlens, L. Paninski, A. Sher, A.M. Litke, E.J. Chichilnisky, and E.P.
Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal
population. Nature, 454:995–999, 2008.
37. G. Pipa, S. Grün, and C. van Vreeswijk. Impact of spike train autostructure on probability distribution of joint spike events. Neural Computation, 25:1123–1163, 2013.
38. C. Pouzat and A. Chaffiol. Automatic spike train analysis and report generation. an
implementation with R, R2HTML and STAR. Journal of Neuroscience Methods, pages
119–144, 2009.
39. A. Renart, N. Brunel, and X.-J. Wang. Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In Jianfeng
Feng, editor, Computational Neuroscience: A comprehensive approach. Chapman &
Hall/CRC Mathematical Biology and Medicine Series, 2004.
40. P. Reynaud-Bouret, V. Rivoirard, F. Grammont, and C. Tuleau-Malot. Goodness-offit tests and nonparametric adaptive estimation for spike train analysis. The Journal
of Mathematical Neuroscience, 4(3), 2014.
41. P. Reynaud-Bouret, V. Rivoirard, and C. Tuleau-Malot. Inference of functional connectivity in neurosciences via Hawkes processes. In 1st IEEE Global Conference on
Signal and Information Processing, 2013, Austin Texas.
42. L. Sirovich, A Omurtag, and K. Lubliner. Dynamics of neural populations: Stability
and synchrony. Network: Computation in Neural Systems, 17:3–29, 2006.
43. V. Ventura, R. Carta, R.E. Kass, S.N. Gettner, and C.R. Olson. Statistical analysis
of temporal evolution in single-neuron firing rates. Biostatistics, 3(1):1–20, 2002.
Stack and Queue Layouts via
Layered Separators?
arXiv:1608.06458v1 [cs.CG] 23 Aug 2016
Vida Dujmović1 , Fabrizio Frati2
1. School of Computer Science and Electrical Engineering, Univ. of Ottawa, Canada
[email protected]
2. Dipartimento di Ingegneria, Roma Tre Univ., Italy
[email protected]
Abstract. It is known that every proper minor-closed class of graphs
has bounded stack-number (a.k.a. book thickness and page number).
While this includes notable graph families such as planar graphs and
graphs of bounded genus, many other graph families are not closed under
taking minors. For fixed g and k, we show that every n-vertex graph that
can be embedded on a surface of genus g with at most k crossings per edge
has stack-number O(log n); this includes k-planar graphs. The previously
best known bound for the stack-number of these families was O(√n),
except in the case of 1-planar graphs. Analogous results are proved for
map graphs that can be embedded on a surface of fixed genus. None of
these families is closed under taking minors. The main ingredient in the
proof of these results is a construction proving that n-vertex graphs that
admit constant layered separators have O(log n) stack-number.
1
Introduction
A stack layout of a graph G consists of a total order σ of V (G) and a partition
of E(G) into sets (called stacks) such that no two edges in the same stack cross;
that is, there are no edges vw and xy in a single stack with v <σ x <σ w <σ y.
The minimum number of stacks in a stack layout of G is the stack-number of
G. Stack layouts, first defined by Ollmann [22], are ubiquitous structures with a
variety of applications (see [17] for a survey). A stack layout is also called a book
embedding and stack-number is also called book thickness and page number. The
stack-number is known to be bounded for planar graphs [24], bounded genus
graphs [20] and, most generally, all proper minor-closed graph families [4,5].
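The non-crossing condition defining a stack is easy to test mechanically. The short Python sketch below (illustrative, with a hypothetical 2-stack layout of K4 as input) checks whether a vertex order and an edge partition form a valid stack layout.

```python
from itertools import combinations

def crossing(e, f, pos):
    """Two edges cross in a stack iff their endpoints interleave in the vertex order."""
    a, b = sorted((pos[e[0]], pos[e[1]]))
    c, d = sorted((pos[f[0]], pos[f[1]]))
    return a < c < b < d or c < a < d < b

def is_stack_layout(order, stacks):
    """Check that no two edges assigned to the same stack cross, for the given vertex order."""
    pos = {v: i for i, v in enumerate(order)}
    return all(not crossing(e, f, pos)
               for stack in stacks for e, f in combinations(stack, 2))

# K4 admits a 2-stack layout; this hypothetical assignment is one witness.
order = [0, 1, 2, 3]
stacks = [[(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)], [(0, 2)]]
print(is_stack_layout(order, stacks))   # True
```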
The purpose of this note is to bring the study of the stack-number beyond the
proper minor-closed graph families. Layered separators are a key tool for proving
our results. They have already led to progress on long-standing open problems
related to 3D graph drawings [11,15] and nonrepetitive graph colourings [13].
? The research of Vida Dujmović was partially supported by NSERC, and Ontario Ministry of Research and Innovation. The research of Fabrizio Frati was partially supported by MIUR Project “AMANDA” under PRIN 2012C4E3KT.
A layering {V0 , . . . , Vp } of a graph G is a partition of V (G) into layers Vi such
that, for each e ∈ E(G), there is an i such that the endpoints of e are both in
Vi or one in Vi and one in Vi+1 . A graph G has a layered `-separator for a fixed
layering {V0 , . . . , Vp } if, for every subgraph G0 of G, there exists a set S ⊆ V (G0 )
with at most ` vertices in each layer (i.e., Vi ∩ S ≤ `, for i = 0, . . . , p) such that
each connected component of G0 − S has at most |V (G0 )|/2 vertices. Our main
technical contribution is the following theorem.
Theorem 1. Every n-vertex graph that has a layered `-separator has stack-number at most 5` · log2 n.
We discuss the implications of Theorem 1 for two well-known non-minor-closed classes of graphs. A graph is (g, k)-planar if it can be drawn on a surface of Euler genus at most g with at most k crossings per edge. Then (0, 0)-planar graphs are planar graphs, whose stack-number is at most 4 [24]. Further, (0, k)-planar graphs are k-planar graphs [23]; Bekos et al. [3] have recently proved that 1-planar graphs have bounded stack-number (see Alam et al. [1] for an improved constant). The family of (g, k)-planar graphs is not closed under taking minors¹ even for g = 0, k = 1; thus the result of Blankenship and Oporowski [4,5], stating that proper minor-closed graph families have bounded stack-number, does not apply to (g, k)-planar graphs. Dujmović et al. [12] showed that (g, k)-planar graphs have layered (4g + 6)(k + 1)-separators². This and our Theorem 1 imply the following corollary. For all g ≥ 0 and k ≥ 2, the previously best known bound was O(√n), following from the O(√m) bound for m-edge graphs [21].
Corollary 1. For any fixed g and k, every n-vertex (g, k)-planar graph has
stack-number O(log n).
A (g, d)-map graph G is defined as follows. Embed a graph H on a surface of
Euler genus g and label some of its faces as “nations” so that any vertex of H is
incident to at most d nations; then the vertices of G are the faces of H labeled
as nations and the edges of G connect nations that share a vertex of H. The
(0, d)-map graphs are the well-known d-map graphs [18,7,9,8,6]. The (g, 3)-map
graphs are the graphs of Euler genus at most g [8], thus they are closed under
taking minors. However, for every g ≥ 0 and d ≥ 4, the (g, d)-map graphs are not
closed under taking minors [12], thus the result of Blankenship and Oporowski
[4,5] does not apply to them. The (g, d)-map graphs have layered (2g + 3)(2d + 1)-separators [12]. This and our Theorem 1 imply the following corollary. For all g ≥ 0 and d ≥ 4, the best previously known bound was O(√n) [21].
Corollary 2. For any fixed g and d, every n-vertex (g, d)-map graph has stack-number O(log n).
¹ The n×n×2 grid graph is a well-known example of 1-planar graph with an arbitrarily large complete graph minor. Indeed, contracting the i-th row in the front n × n grid with the i-th column in the back n × n grid, for 1 ≤ i ≤ n, gives a Kn minor.
² More precisely, Dujmović et al. [12] proved that (g, k)-planar graphs have layered treewidth at most (4g + 6)(k + 1) and (g, d)-map graphs have layered treewidth at most (2g + 3)(2d + 1). Just as the graphs of treewidth t have (classical) separators of size t − 1, so do the graphs of layered treewidth ` have layered `-separators [15,16].
A “dual” concept to that of stack layouts is that of queue layouts. A queue layout
of a graph G consists of a total order σ of V (G) and a partition of E(G) into sets
(called queues), such that no two edges in the same queue nest; that is, there are
no edges vw and xy in a single queue with v <σ x <σ y <σ w. If v <σ x <σ y <σ
w we say that xy nests inside vw. The minimum number of queues in a queue
layout of G is called the queue-number of G. Queue layouts, like stack layouts,
have been extensively studied. In particular, it is a long standing open problem
to determine if planar graphs have bounded queue-number. Logarithmic upper
bounds have been obtained via layered separators [11,2]. In particular, a result
similar to Theorem 1 is known for the queue-number: Every n-vertex graph that
has layered `-separators has queue-number O(` log n) [11]; this bound was refined
to 3` · log3 (2n + 1) − 1 by Bannister et al. [2]. These results were established via
a connection with the track-number of a graph [14]. Together with the fact that
planar graphs have layered 2-separators [13,19], these results imply an O(log n)
bound for the queue-number of planar graphs, improving on an earlier result by
Di Battista et al. [10]. The polylog bound on the queue-number of planar graphs
extends to all proper minor-closed families of graphs [15,16]. Our approach to
prove Theorem 1 also gives a new proof of the following result (without using
track layouts). We include it for completeness.
Theorem 2. Every n-vertex graph that has a layered `-separator has queue-number at most 3` · log2 n.
2
Proofs of Theorem 1 and Theorem 2
Let G be a graph and L = {V0 , . . . , Vp } be a layering of G such that G admits a
layered `-separator for layering L. Each edge of G is either an intra-layer edge,
that is, an edge between two vertices in a set Vi , or an inter-layer edge, that is,
an edge between a vertex in a set Vi and a vertex in a set Vi+1 .
A total order on a set of vertices R ⊆ V (G) is a vertex ordering of R. The
stack layout construction computes a vertex ordering σ s of V (G) satisfying the
layer-by-layer invariant, which is defined as follows: For 0 ≤ i < p, the vertices in
Vi precede the vertices in Vi+1 in σ s . Analogously, the queue layout construction
computes a vertex ordering σ q of V (G) satisfying the layer-by-layer invariant.
Let S be a layered `-separator for G with respect to L. Let G1 , . . . , Gk be
the graphs induced by the vertices in the connected components of G − S (the
vertices of S do not belong to any graph Gj ). These graphs are labeled G1 , . . . , Gk
arbitrarily. Recall that, by the definition of a layered `-separator for G, we have
|V (Gj )| ≤ n/2, for each 1 ≤ j ≤ k. Let Si = S ∩ Vi and let ρi be an arbitrary
vertex ordering of Si , for i = 0, . . . , p.
Both the stack and the queue layout constructions recursively construct vertex orderings of V (Gj ) satisfying the layer-by-layer invariant, for j = 1, . . . , k.
Let σ^s_j be the vertex ordering of V (Gj ) computed by the stack layout construction; we also denote by σ^s_{j,i} the restriction of σ^s_j to the vertices in layer Vi . Note that σ^s_j = σ^s_{j,1} , σ^s_{j,2} , . . . , σ^s_{j,p} by the layer-by-layer invariant. Vertex orderings σ^q_j and σ^q_{j,i} are defined analogously for the queue layout construction.
Fig. 1. Illustration for the stack layout construction. Edges incident to vertices in S are black and thick. Edges in graphs G1 , . . . , Gk are represented by gray regions.
We now show how to combine the recursively constructed vertex orderings to
obtain a vertex ordering of V (G). The way this combination is performed differs
for the stack layout construction and the queue layout construction.
Stack layout construction. Vertex ordering σ^s is defined as (refer to Fig. 1)
ρ0 , σ^s_{1,0} , σ^s_{2,0} , . . . , σ^s_{k−1,0} , σ^s_{k,0} , ρ1 , σ^s_{k,1} , σ^s_{k−1,1} , . . . , σ^s_{2,1} , σ^s_{1,1} ,
ρ2 , σ^s_{1,2} , σ^s_{2,2} , . . . , σ^s_{k−1,2} , σ^s_{k,2} , ρ3 , σ^s_{k,3} , σ^s_{k−1,3} , . . . , σ^s_{2,3} , σ^s_{1,3} , . . . .
The vertex ordering σ^s satisfies the layer-by-layer invariant, given that vertex ordering σ^s_j does, for j = 1, . . . , k. Then Theorem 1 is implied by the following.
Lemma 1. G has a stack layout with 5` · log2 n stacks with vertex ordering σ^s .
Proof: We use distinct sets of stacks for the intra- and the inter-layer edges.
Stacks for the intra-layer edges. We assign each intra-layer edge uv with u ∈ S
or v ∈ S to one of ` stacks P1 , . . . , P` as follows. Since uv is an intra-layer edge,
{u, v} ⊆ Vi , for some 0 ≤ i ≤ p. Assume w.l.o.g. that u <σs v. Then u ∈ S and
let it be the x-th vertex in ρi (recall that ρi contains at most ` vertices). Assign uv
to Px . The only intra-layer edges that are not yet assigned to stacks belong to
graphs G1 , . . . , Gk . The assignment of these edges to stacks is the one computed
recursively; however, we use the same set of stacks to assign the edges of all
graphs G1 , . . . , Gk .
We now prove that no two intra-layer edges in the same stack cross. Let e and e′ be two intra-layer edges of G and let both the endpoints of e be in Vi and both the endpoints of e′ be in Vi′ . Assume w.l.o.g. that i ≤ i′. If i < i′, then, since σ^s satisfies the layer-by-layer invariant, the endpoints of e precede those of e′ in σ^s , hence e and e′ do not cross. Suppose now that i = i′. If e and e′ are in some stack Px for x ∈ {1, . . . , `}, then they are both incident to the x-th vertex in ρi , thus they do not cross. If e and e′ are in some stack different from P1 , . . . , P` , then e ∈ E(Gj ) and e′ ∈ E(Gj′ ), for some j, j′ ∈ {1, . . . , k}. If j = j′, then e and e′ do not cross by induction. Otherwise, both the endpoints of e precede both the endpoints of e′ or vice versa, since the vertices in σ^s_{min{j,j′},i} precede those in σ^s_{max{j,j′},i} in σ^s or vice versa, depending on whether i is even or odd; hence e and e′ do not cross.
We now bound the number of stacks we use for the intra-layer edges of G; we
claim that this number is at most ` · log2 n. The proof is by induction on n; the
base case n = 1 is trivial. For any subgraph H of G, let p1 (H) be the number
of stacks we use for the intra-layer edges of H, and let p1 (n0 ) = maxH {p1 (H)}
over all subgraphs H of G with n0 vertices. As proved above, p1 (G) ≤ ` +
max{p1 (G1 ), . . . , p1 (Gk )}. Since each graph Gj has at most n/2 vertices, we get
that p1 (G) ≤ ` + p1 (n/2). By induction p1 (G) ≤ ` + ` · log2 (n/2) = ` · log2 n.
Stacks for the inter-layer edges. We use distinct sets of stacks for the even
inter-layer edges – connecting vertices on layers Vi and Vi+1 with i even – and
for the odd inter-layer edges – connecting vertices on layers Vi and Vi+1 with
i odd. We only describe how to assign the even inter-layer edges to 2` · log2 n
stacks so that no two edges in the same stack cross; the assignment for the odd
inter-layer edges is analogous.
We assign each even inter-layer edge uv with u ∈ S or v ∈ S to one of 2` stacks P′_1 , . . . , P′_{2`} as follows. Since uv is an inter-layer edge, u and v respectively belong to layers Vi and Vi+1 , for some 0 ≤ i ≤ p − 1. If u ∈ S, then u is the x-th vertex in ρi , for some 1 ≤ x ≤ `; assign edge uv to P′_x . If u ∉ S, then v ∈ S is the y-th vertex in ρi+1 , for some 1 ≤ y ≤ `; assign edge uv to P′_{`+y} .
The only even inter-layer edges that are not yet assigned to stacks belong to
graphs G1 , . . . , Gk . The assignment of these edges to stacks is the one computed
recursively; however, we use the same set of stacks to assign the edges of all
graphs G1 , . . . , Gk .
We prove that no two even inter-layer edges in the same stack cross. Let e and e′ be two even inter-layer edges of G. Let Vi and Vi+1 be the layers containing the endpoints of e. Let Vi′ and Vi′+1 be the layers containing the endpoints of e′. Assume w.l.o.g. that i ≤ i′. If i < i′, then i + 1 < i′, given that both i and i′ are even. Then, since σ^s satisfies the layer-by-layer invariant, both the endpoints of e precede both the endpoints of e′, thus e and e′ do not cross. Suppose now that i = i′. If e and e′ are in some stack P′_h for h ∈ {1, . . . , 2`}, then e and e′ are both incident either to the h-th vertex of ρi or to the (h − `)-th vertex of ρi+1 , hence they do not cross. If e and e′ are in some stack different from P′_1 , . . . , P′_{2`} , then e ∈ E(Gj ) and e′ ∈ E(Gj′ ), for j, j′ ∈ {1, . . . , k}. If j = j′, then e and e′ do not cross by induction. Otherwise, j ≠ j′ and then e nests inside e′ or vice versa, since the vertices in σ^s_{min{j,j′},i} precede those in σ^s_{max{j,j′},i} and the vertices in σ^s_{max{j,j′},i+1} precede those in σ^s_{min{j,j′},i+1} in σ^s ; hence e and e′ do not cross.
We now bound the number of stacks we use for the even inter-layer edges of G;
we claim that this number is at most 2`·log2 n. The proof is by induction on n; the
base case n = 1 is trivial. For any subgraph H of G, let p2 (H) be the number of
stacks we use for the even inter-layer edges of H, and let p2 (n0 ) = maxH {p2 (H)}
over all subgraphs H of G with n0 vertices. As proved above, p2 (G) ≤ 2` +
max{p2 (G1 ), . . . , p2 (Gk )}. Since each graph Gj has at most n/2 vertices, we get
that p2 (G) ≤ 2` + p2 (n/2). By induction p2 (G) ≤ 2` + 2` · log2 (n/2) = 2` · log2 n.
The described stack layout uses ` · log2 n stacks for the intra-layer edges,
2` · log2 n stacks for the even inter-layer edges, and 2` · log2 n stacks for the odd
inter-layer edges, thus 5` · log2 n stacks in total. This concludes the proof.
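The recursion underlying the construction can be summarized in code. The following Python sketch is only an illustration of the ordering σ^s, not the authors' implementation; the helpers find_layered_separator and components are assumed (hypothetical) routines returning, respectively, a separator with at most ` vertices per layer and the vertex sets of the connected components after its removal.

def stack_order(G, layers, vertices=None):
    """Recursively build the vertex ordering sigma^s of the stack layout construction.
    G: adjacency dict; layers: dict mapping each vertex to its layer index.
    find_layered_separator and components are assumed helper routines (hypothetical)."""
    if vertices is None:
        vertices = set(G)
    if len(vertices) <= 1:
        return list(vertices)
    S = find_layered_separator(G, layers, vertices)    # at most l vertices per layer
    parts = [stack_order(G, layers, C) for C in components(G, vertices - S)]
    p = max(layers[v] for v in vertices)
    order = []
    for i in range(p + 1):
        order += [v for v in S if layers[v] == i]              # rho_i (arbitrary order)
        blocks = [[v for v in part if layers[v] == i] for part in parts]
        if i % 2 == 1:                                         # reverse G_1,...,G_k on odd layers
            blocks.reverse()
        for b in blocks:
            order += b
    return order

The ordering σ^q of the queue layout construction below is obtained from the same recursion by simply never reversing the per-layer component order.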
Queue layout construction. Vertex ordering σ^q is defined as (refer to Fig. 2)
ρ0 , σ^q_{1,0} , σ^q_{2,0} , . . . , σ^q_{k,0} , ρ1 , σ^q_{1,1} , σ^q_{2,1} , . . . , σ^q_{k,1} , . . . , ρp , σ^q_{1,p} , σ^q_{2,p} , . . . , σ^q_{k,p} .
Fig. 2. Illustration for the queue layout construction.
The vertex ordering σ q satisfies the layer-by-layer invariant, given that vertex
ordering σjq does, for j = 1, . . . , k. Then Theorem 2 is implied by the following.
Lemma 2. G has a queue layout with 3` · log2 n queues with vertex ordering σ q .
Proof: We use distinct sets of queues for the intra- and the inter-layer edges.
Queues for the intra-layer edges. We assign each intra-layer edge uv with
u ∈ S or v ∈ S to one of ` queues Q1 , . . . , Q` as follows. Since uv is an intra-layer edge, {u, v} ⊆ Vi , for some 0 ≤ i ≤ p. Assume w.l.o.g. that u <σq v. Then
u ∈ S and let it be the x-th vertex of ρi . Assign uv to Qx . The only intra-layer
edges that are not yet assigned to queues belong to graphs G1 , . . . , Gk . The
assignment of these edges to queues is the one computed recursively; however,
we use the same set of queues to assign the edges of all graphs G1 , . . . , Gk .
The proof that no two intra-layer edges in the same queue nest is the same
as the proof that no two intra-layer edges in the same stack cross in Lemma 1 (with
the word “nest” replacing “cross” and with σ q replacing σ s ). The proof that the
number of queues we use for the intra-layer edges is at most ` · log2 n is also the
same as the proof that the number of stacks we use for the intra-layer edges is
at most ` · log2 n in Lemma 1.
Queues for the inter-layer edges. We assign each inter-layer edge uv with
u ∈ S or v ∈ S to one of 2` queues Q′_1 , . . . , Q′_{2`} as follows. Since uv is an inter-layer edge, u and v respectively belong to layers Vi and Vi+1 , for some 0 ≤ i ≤ p − 1. If u ∈ S, then u is the x-th vertex in ρi , for some 1 ≤ x ≤ `; assign edge uv to Q′_x . If u ∉ S, then v ∈ S is the y-th vertex in ρi+1 , for some 1 ≤ y ≤ `; assign edge uv to Q′_{`+y} . The only inter-layer edges that are not yet assigned to
queues belong to graphs G1 , . . . , Gk . The assignment of these edges to queues is
the one computed recursively; however, we use the same set of queues to assign
the edges of all graphs G1 , . . . , Gk .
We prove that no two inter-layer edges e and e′ in the same queue nest. Let Vi and Vi+1 be the layers containing the endpoints of e. Let Vi′ and Vi′+1 be the layers containing the endpoints of e′. Assume w.l.o.g. that i ≤ i′. If i < i′, then both endpoints of e precede the endpoint of e′ in Vi′+1 (hence e′ is not nested inside e) and both endpoints of e′ follow the endpoint of e in Vi (hence e is not nested inside e′), since σ^q satisfies the layer-by-layer invariant; thus e and e′ do not nest. Suppose now that i = i′. If e and e′ are in some queue Q′_h for h ∈ {1, . . . , 2`}, then e and e′ are both incident either to the h-th vertex of ρi or to the (h − `)-th vertex of ρi+1 , hence they do not nest. If e and e′ are in some queue different from Q′_1 , . . . , Q′_{2`} , then e ∈ E(Gj ) and e′ ∈ E(Gj′ ), for j, j′ ∈ {1, . . . , k}. If j = j′, then e and e′ do not nest by induction. Otherwise, j ≠ j′ and then the endpoints of e alternate with those of e′ in σ^q , since the vertices in σ^q_{min{j,j′},i} precede those in σ^q_{max{j,j′},i} and the vertices in σ^q_{min{j,j′},i+1} precede those in σ^q_{max{j,j′},i+1} in σ^q ; hence e and e′ do not nest.
0 },i+1 in σ ; hence e and e do not nest.
We now bound the number of queues we use for the inter-layer edges of G;
we claim that this number is at most 2` · log2 n. The proof is by induction on n;
the base case n = 1 is trivial. For any subgraph H of G, let q(H) be the number
of queues we use for the inter-layer edges of H, and let q(n0 ) = maxH {q(H)}
over all subgraphs H of G with n0 vertices. As proved above, q(G) ≤ 2` +
max{q(G1 ), . . . , q(Gk )}. Since each graph Gj has at most n/2 vertices, we get
that q(G) ≤ 2` + q(n/2). By induction q(G) ≤ 2` + 2` · log2 (n/2) = 2` · log2 n.
Thus, the described queue layout uses ` · log2 n queues for the intra-layer
edges and 2` · log2 n queues for the inter-layer edges, thus 3` · log2 n queues in
total. This concludes the proof.
Acknowledgments: The authors wish to thank David R. Wood for stimulating
discussions and comments on the preliminary version of this article.
References
1. Alam, M.J., Brandenburg, F.J., Kobourov, S.G.: On the book thickness of 1-planar
graphs (2015), http://arxiv.org/abs/1510.05891
2. Bannister, M.J., Devanny, W.E., Dujmović, V., Eppstein, D., Wood, D.R.:
Track layouts, layered path decompositions, and leveled planarity (2015),
http://arxiv.org/abs/1506.09145
3. Bekos, M.A., Bruckdorfer, T., Kaufmann, M., Raftopoulou, C.N.: 1-planar graphs
have constant book thickness. In: Bansal, N., Finocchi, I. (eds.) 23rd Annual European Symposium on Algorithms. LNCS, vol. 9294, pp. 130–141. Springer (2015)
4. Blankenship, R.: Book Embeddings of Graphs. Ph.D. thesis, Dept. Math. Louisiana
St. Univ., U.S.A. (2003)
5. Blankenship, R., Oporowski, B.: Book embeddings of graphs and minor-closed
classes. In: 32nd Southeastern International Conference on Combinatorics, Graph
Theory and Computing. Dept. Math. Louisiana St. Univ. (2001)
6. Chen, Z.Z.: Approximation algorithms for independent sets in map graphs. J. Algorithms 41(1), 20–40 (2001)
7. Chen, Z.Z.: New bounds on the edge number of a k-map graph. J. Graph Theory
55(4), 267–290 (2007)
8. Chen, Z.Z., Grigni, M., Papadimitriou, C.H.: Map graphs. J. ACM 49(2), 127–138
(2002)
9. Demaine, E.D., Fomin, F.V., Hajiaghayi, M., Thilikos, D.M.: Fixed-parameter algorithms for (k, r)-center in planar graphs and map graphs. ACM Trans. Alg. 1(1),
33–47 (2005)
10. Di Battista, G., Frati, F., Pach, J.: On the queue number of planar graphs. SIAM
J. Comput. 42(6), 2243–2285 (2013)
11. Dujmović, V.: Graph layouts via layered separators. J. Combin. Th. Ser. B 110,
79–89 (2015)
12. Dujmović, V., Eppstein, D., Wood, D.R.: Genus, treewidth, and local crossing
number. In: Di Giacomo, E., Lubiw, A. (eds.) 23rd International Symposium on
Graph Drawing and Network Visualization. LNCS, vol. 9411, pp. 87–98. Springer
(2015)
13. Dujmović, V., Joret, G., Frati, F., Wood, D.R.: Nonrepetitive colourings of planar
graphs with O(log n) colours. Electr. J. Comb. 20(1), P51 (2013)
14. Dujmović, V., Morin, P., Wood, D.R.: Layout of graphs with bounded tree-width.
SIAM J. Comput. 34(3), 553–579 (2005)
15. Dujmović, V., Morin, P., Wood, D.R.: Layered separators for queue layouts, 3D
graph drawing and nonrepetitive coloring. In: 54th Annual IEEE Symposium on
Foundations of Computer Science. pp. 280–289. IEEE Computer Society (2013)
16. Dujmović, V., Morin, P., Wood, D.R.: Layered separators in minor-closed families
with applications (2014), http://arxiv.org/abs/1306.1595
17. Dujmović, V., Wood, D.R.: On linear layouts of graphs. Discrete Math. Theor.
Comput. Sci. 6(2), 339–358 (2004)
18. Fomin, F.V., Lokshtanov, D., Saurabh, S.: Bidimensionality and geometric graphs.
In: Rabani, Y. (ed.) 23rd Annual ACM-SIAM Symposium on Discrete Algorithms.
pp. 1563–1575. SIAM (2012)
19. Lipton, R.J., Tarjan, R.E.: A separator theorem for planar graphs. SIAM J. Appl.
Math. 36(2), 177–189 (1979)
20. Malitz, S.M.: Genus g graphs have pagenumber O(√g). J. Algorithms 17(1), 85–109 (1994)
21. Malitz, S.M.: Graphs with E edges have pagenumber O(√E). J. Algor. 17(1), 71–84 (1994)
22. Ollmann, L.T.: On the book thicknesses of various graphs. In: Hoffman, F.,
Levow, R.B., Thomas, R.S.D. (eds.) 4th Southeastern Conference on Combinatorics, Graph Theory, and Computing. Congr. Numer., vol. VIII, p. 459 (1973)
23. Pach, J., Tóth, G.: Graphs drawn with few crossings per edge. Combinatorica
17(3), 427–439 (1997)
24. Yannakakis, M.: Embedding planar graphs in four pages. J. Comput. Sys. Sci.
38(1), 36–67 (1989)
| 8 |
Cross-Mode Interference Characterization in
Cellular Networks with Voronoi Guard Regions
arXiv:1702.05018v3 [] 20 Oct 2017
Stelios Stefanatos, Antonis G. Gotsis, and Angeliki Alexiou
Abstract—Advances in cellular networks such as device-to-device communications and full-duplex radios, as well as the inherent elimination of intra-cell interference achieved by network-controlled multiple access schemes, motivate the investigation
of the cross-mode interference properties under a guard region
corresponding to the Voronoi cell of an access point (AP). By
modeling the positions of interfering APs and user equipments
(UEs) as Poisson distributed, analytical expressions for the
statistics of the cross-mode interference generated by either APs
or UEs are obtained based on appropriately defined density
functions. The considered system model and analysis are general
enough to capture many operational scenarios of practical interest, including conventional downlink/uplink transmissions with
nearest AP association, as well as transmissions where not both
communicating nodes lie within the same cell. Analysis provides
insights on the level of protection offered by a Voronoi guard
region and its dependence on type of interference and receiver
position. Numerical examples demonstrate the validity/accuracy
of the analysis in obtaining the system coverage probability for
operational scenarios of practical interest.
Index Terms—Stochastic geometry, cellular networks, guard
region, D2D communications, full-duplex radios, interference.
I. I NTRODUCTION
Characterization of the interference experienced by the
receivers of a wireless network is of critical importance for
system analysis and design [1]. This is especially the case
for the future cellular network, whose envisioned fundamental
changes in its architecture, technology, and operation will have
significant impact on the interference footprint [2]. Interference characterization under these new system features is of
the utmost importance in order to understand their potential
merits as well as their ability to co-exist.
Towards increasing the spatial frequency reuse, two of the
most prominent techniques/features considered for the future
cellular network are device-to-device (D2D) communications
[3] and full-duplex (FD) radios [4]. Although promising,
application of these techniques introduces additional, cross-mode interference. For example, an uplink transmission is
no longer affected only by interfering uplink transmissions
but also by interfering D2D and/or downlink transmissions.
Although it is reasonable to expect that the current practice of
eliminating intra-cell interference by employing coordinated
transmissions per cell will also hold in the future [5], the
continually increasing density of APs and user equipments
(UEs) suggest that inter-cell interference will be the major
limiting factor of D2D- and/or FD-enabled cellular networks,
rendering its statistical characterization critical.
The authors are with the Department of Digital Systems, University of
Piraeus, Piraeus, Greece. Email: {sstefanatos, agotsis, alexiou}@unipi.gr.
A. Previous Work
Stochastic geometry is by now a well accepted framework
for analytically modeling interference in large-scale wireless
networks [6]. Under this framework, most of the numerous
works on D2D-enabled cellular networks (without FD radios)
consider the interfering D2D nodes as uniformly distributed
on the plane, i.e., there is no spatial coordination of D2D
transmissions (see, e.g., [7], [8], [9], [10]). Building on the
approach of [11], various works consider the benefits of
employing spatially coordinated D2D transmissions where, for
each D2D link in the system, a circular guard region (zone)
is established, centered at either the receiver or transmitter,
within which no interfering transmissions are performed [12],
[13]. However, when the D2D links are network controlled
[5], a more natural and easier to establish guard region is the
(Voronoi) cell of a coordinating AP. Under a non-regular AP
deployment [14], this approach results in a random polygon
guard region, which makes the interference characterization a
much more challenging task.
Interference characterization for this type of guard region
has only been partially investigated in [15], [16], [17] and only
for the case of conventional uplink transmissions with nearest
AP association and one active UE per cell (with no cross-mode interference). As the positions of the interfering UEs
are distributed as a Voronoi perturbed lattice process (VPLP)
in this case [18], [19], which is analytically intractable, an
approximation based on a Poisson point process (PPP) model
with a heuristically proposed equivalent density is employed.
This approach of approximating a complicated system model
by a simpler one with appropriate parameters (in this case,
by a PPP of a given density) was also used in [20] for the
characterization of downlink systems (with no cross-mode
interference as well). Reference [19] provides a rigorous
characterization of the equivalent density of the UE-generated
interference both from the “viewpoint” of an AP as well as
its associated UE, with the latter case of interest in case
of cross-mode interference. The analysis reveals significant
differences in the densities corresponding to these two cases
suggesting that the equivalent density is strongly dependent on
the considered receiver position. Interference characterization
for the case of arbitrary receiver position that may potentially
lie outside the Voronoi guard region as, e.g., in the case of
cross-cell D2D links [21], has not been investigated in the
literature, let alone under cross-mode interference conditions.
Investigation of interference statistics under a cell guard
region is also missing in the (much smaller) literature on FD-enabled cellular networks, which typically assumes no spatial coordination for the UE transmissions (see e.g., [22], [23],
[24]).
B. Contributions
This paper considers a stochastic geometry framework for
modeling the cross-mode interference power experienced at
an arbitrary receiver position due to transmissions by APs
or UEs and under the spatial protection of a Voronoi guard
region. Modeling the positions of interfererers (APs or UEs)
as a Poisson point process, the statistical characterization of
the cross-mode interference is pursued via computation of its
Laplace transform. The main contributions of the paper are
the following.
• Consideration of a general system model, which allows for a unified analysis of cross-mode interference statistics. By appropriate choice of the system model parameters, the interference characterization is applicable to multiple operational scenarios, including conventional downlink/uplink communications with nearest AP association and (cross-cell) D2D links where the transmitter-receiver pair does not necessarily lie in a single cell.
• Exact statistical characterization of AP-generated cross-mode interference, applicable, e.g., in the case where interference experienced at an AP is due to transmissions by other FD-enabled APs. An equivalent interferer density is given in a simple closed form, allowing for an intuitive understanding of the interference properties and its dependence on the position of the receiver relative to the position of the AP establishing the Voronoi guard region.
• Determination of a lower bound of the Laplace transform in case of UE-generated cross-mode interference, applicable, e.g., in the case where a UE experiences interference due to other FD-enabled or D2D-operating UEs. The properties of the corresponding equivalent density of interferers are studied in detail, providing insights for various operational scenarios, including the standard uplink communication scenario (with no cross-mode interference), for which a rigorous justification is provided of why the heuristic approaches previously proposed in [16], [17] are accurate.
Simulated examples indicate the accuracy of the analytical results also for cases where the positions of interfering UEs are VPLP distributed, suggesting their use as a basis for determination of performance as well as design of optimal resource allocation algorithms for future, D2D- and/or FD-enabled cellular networks.
C. Notation
The origin of the two-dimensional plane R2 will be denoted
as o. The Euclidean norm of x ∈ R2 is denoted as |x| with
operator | · | also used to denote the absolute value of a scalar
or the Lebesgue measure (area) of a bounded subset of R2 .
The polar form representation of x ∈ R2 will be denoted as
(|x|, ∠x) or |x|∠x, where ∠x is the principal branch of the
angular coordinate of x taking values in [−π, π). The open ball
in R2 , centered at x ∈ R2 and of radius R > 0, is denoted
as B(x, R) , {y ∈ R2 : |y − x| < R}, whereas its boundary
is denoted as C(x, R) , {y ∈ R2 : |y − x| = R}. I(·) is the
indicator (0 − 1) operator, P(·) is the probability measure,
and E(·) is the expectation operator. Functions arccos(·) :
[−1, 1] → [0, π] and arcsin(·) : [−1, 1] → [−π/2, π/2]
are the principal branches of the inverse cosine and sine,
respectively. The Laplace transform of a random variable z
equals Lz (s) , E(e−sz ), for all s ∈ R for which the
expectation exists.
II. S YSTEM M ODEL
A large-scale model of a cellular network with APs and
UEs positioned over R2 is considered. The positions of APs
are modeled as a realization of a homogeneous PPP (HPPP)
Φa ⊂ R2 of density λa > 0. All the communication scenarios
considered in this paper involve three nodes, namely,
• a, so called, typical node, whose position, without loss of generality (w.l.o.g.), will be assumed to coincide with the origin o. The typical node is an AP if o ∈ Φa or a UE, otherwise;
• the closest AP to the typical node, located at x∗ ≜ arg min_{x∈Φa} |x| (x∗ = o in case o ∈ Φa , i.e., the typical node is an AP);
• a (typical) receiver located at an arbitrary position xR ∈ R2 . The typical receiver is an AP if xR ∈ Φa or a UE, otherwise. The case xR = o is also allowed, meaning that the typical node is also the typical receiver.
Let V ∗ denote the Voronoi cell of the AP at x∗ , generated by
the Poisson-Voronoi tessellation of the plane from Φa , i.e.,
V ∗ ≜ {y ∈ R2 : |y − x∗ | < |y − x|, for all x ∈ Φa \ {x∗ }}.
The goal of this paper is to characterize the interference power
experienced at xR due to transmissions from nodes (APs or
UEs) that lie outside V ∗ , i.e., with V ∗ effectively forming a
spatial guard region within which no interference is generated.
Let Φu ⊂ R2 denote the point process representing the
positions of the UEs in the system other than the typical
node and typical receiver, and IxR ,a and IxR ,u denote the
interference power experienced at xR due to transmissions
by all APs and UEs in the system, respectively. Under the
Voronoi guard region scheme described above, the standard
interference power model is adopted in this paper, namely [1],
IxR ,k ≜ Σ_{x∈Φk \V ∗} Pk gx |x − xR |^{−αk} , k ∈ {a, u},   (1)
where gx ≥ 0 is the channel gain of a transmission generated
by a node at x ∈ R2 , assumed to be an independent of x,
exponentially distributed random variable of mean 1 (Rayleigh
fading), αk > 2 is the path loss exponent and Pk > 0 is
the transmit power, which, for simplicity, is assumed fixed
and same for all nodes of the same type. Figure 1 shows an
example realization of V ∗ and the positions of the interferers
for the case xR 6= x∗ 6= o. Note that V ∗ is a random guard
region as it depends on the positions of the APs, therefore, there is no
guarantee that xR lies within V ∗ , i.e., it holds P(xR ∈ V ∗ ) <
1, unless xR is a point on the line segment joining the origin with x∗ (inclusive).
TABLE I
SPECIAL CASES OF SYSTEM MODEL
Condition: xR ≠ x∗ , x∗ ≠ o — Scenario: General case. Corresponds to (a) a D2D link when the typical node sends data to the typical receiver or (b) to a relay-assisted cellular communication where the typical receiver acts as a relay for the link between the typical node and its nearest AP (either downlink or uplink). The typical receiver is not guaranteed to lie within V ∗ .
Condition: xR = o, x∗ ≠ o — Scenario: Typical node is a UE in receive mode and lies within V ∗ . Represents a standard downlink when the nearest AP is transmitting data to the typical node/receiver.
Condition: xR = x∗ , x∗ ≠ o — Scenario: Nearest AP to the typical node is in receive mode with the typical node lying in V ∗ . Represents a standard uplink when the AP receives data from the typical node.
Condition: xR ≠ o, x∗ = o — Scenario: Typical node is an AP. When the typical node/AP is the one transmitting to the typical receiver, a non-standard downlink is established as xR is not necessarily lying within V ∗ .
Condition: xR = x∗ , x∗ = o — Scenario: Typical node is an AP in receive mode; the position of the corresponding transmitter is unspecified and may as well lie outside V ∗ , thus modeling a non-standard uplink.
Fig. 1. Random realization of the system model. APs and UEs are shown as
triangle and circle markers, respectively. The typical node in this example
is also a UE shown as a square. The shaded polygon area indicates the
Voronoi guard region V ∗ within which no node (indicated by open marker)
transmits. All nodes outside V ∗ (filled markers) generate interference that is
experienced by the receiver of interest whose position (xR ) is indicated by
an open diamond marker. This scenario may correspond to a (cross-cell) D2D
link between typical node and receiver or to the receiver acting as relay aiding
the downlink or uplink communication of the typical node with its nearest
AP.
The above model is general enough so that, by appropriately
choosing xR and/or x∗ , IxR ,a and IxR ,u correspond to many
practical instances of (cross-mode) interference experienced
in D2D/FD-enabled as well as conventional cellular networks.
For example, for the case where x∗ 6= xR 6= o, Eq. (1) may
correspond to the AP- and UE-generated interference power
experienced by the receiver of a, potentially cross-cell, D2D
link between the typical nodes at o and xR . Various other
scenarios of practical interest are captured by the model, some
of which are described in Table I. Note that from the scenarios
identified in Table I, only the standard downlink and uplink
scenarios have been considered previously in the literature
[15], [16], [17], [25], however, without consideration of cross-mode interference, i.e., UE-generated and AP-generated interference for downlink and uplink transmissions, respectively.
Reference [19] considers the UE-generated cross-mode interference experienced at the typical node in downlink mode,
however, the results provided cannot be straightforwardly
generalized to the general case.
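As an aside, the guard-region model of (1) is straightforward to simulate; the following Python sketch (illustrative only, with arbitrary parameter values chosen here) draws the APs and UEs as independent HPPPs in a finite window, silences the UEs falling inside V ∗ , and evaluates IxR ,u for one realization.

import numpy as np

rng = np.random.default_rng(0)
lam_a, lam_u, P_u, alpha_u, W = 1e-4, 5e-4, 1.0, 4.0, 2000.0   # window [-W/2, W/2]^2

def hppp(lam):
    n = rng.poisson(lam * W * W)
    return (rng.random((n, 2)) - 0.5) * W

aps, ues = hppp(lam_a), hppp(lam_u)
x_star = aps[np.argmin(np.linalg.norm(aps, axis=1))]     # nearest AP to the typical node at o
xR = np.array([30.0, 10.0])                              # arbitrary receiver position

# A UE lies in V* iff x* is its nearest AP; such UEs are silenced and do not interfere.
nearest_idx = np.argmin(np.linalg.norm(ues[:, None, :] - aps[None, :, :], axis=2), axis=1)
outside = ~np.all(np.isclose(aps[nearest_idx], x_star), axis=1)
gains = rng.exponential(1.0, size=outside.sum())         # exponential (Rayleigh fading) power gains
dists = np.linalg.norm(ues[outside] - xR, axis=1)
I_xR_u = np.sum(P_u * gains * dists ** (-alpha_u))       # one realization of (1) with k = u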
For characterization of the performance of the communication link as well as obtaining insights on the effectiveness of
the guard region V ∗ for reducing interference, it is of interest to
describe the statistical properties of the random variables IxR ,a
and IxR ,u . This is the topic of the following section, where
the marginal statistics of IxR ,a and IxR ,u conditioned on x∗
and treating xR as a given, but otherwise free, parameter, are
investigated in detail. Characterization of the joint distribution
of IxR ,a and IxR ,u is left for future work.
Note that the respective unconditional interference statistics
can be simply obtained by averaging over the distribution
of x∗ , which corresponds to a uniformly distributed ∠x∗ and a Rayleigh distributed |x∗ | of mean 1/(2√λa ) [27].
However, results conditioned on x∗ are also of importance
on their own as they can serve as the mathematical basis
for (optimal) resource allocation algorithms given network
topology information, e.g., decide on whether the typical node
employs D2D or cellular mode given knowledge of x∗ .
For tractability purposes, the following assumption on the
statistics of Φu will be considered throughout the analysis.
Assumption. The positions of the (potentially) interfering
UEs, Φu , is a realization of an HPPP of density λu > 0,
that is independent of the positions of the APs, typical node,
and typical receiver.
Remark: In general, this assumption is not exact since
resource allocation and scheduling decisions over the entire
network affect the distribution of interfering UE positions.
For example, in the conventional uplink scenario with at least
one UE per cell, the transmitting UE positions correspond
to a VPLP process of density equal to λu = λa [17],
[19]. However, as also shown in [17], [19] for the standard
uplink scenario, the HPPP assumption of the UE point process
allows for a tractable, yet accurate approximation of the actual
performance, which, as will be demonstrated in Sec. V, is also
the case for other operational scenarios as well. For generality
purposes, an arbitrary value λu 6= λa is also allowed in
the analysis, which is actually the case, e.g., when one UE
is only allowed to transmit per cell but there exist empty
cells and, therefore, λu < λa (ultra dense networks [26]),
and also serves as an approximate model for the case when
some arbitrary/undefined coordination scheme is employed by
other cells in the system (if at all), that may as well result in
λu > λa .
III. I NTERFERENCE C HARACTERIZATION
Towards obtaining a tractable characterization of the distribution of the (AP or UE) interference power, or, equivalently,
its Laplace transform, the following standard result in PPP
theory is first recalled [27].
Lemma 1. Let I ≜ Σ_{x∈Φ̃} P hx |x − z|^{−α} , z ∈ R2 , P > 0, with Φ̃ an inhomogeneous PPP of density λ : R2 → [0, ∞), and {hx }_{x∈Φ̃} i.i.d. exponential random variables of mean 1. The Laplace transform of I equals
LI (s) = exp( − ∫_{R²} λ(x) γ(sP |x − z|^{−α} ) dx )   (2)
       = exp( −2π ∫_0^∞ λz (r) r γ(sP r^{−α} ) dr ),   (3)
where γ(t) ≜ 1 − 1/(1 + t), and the second equality holds only in the case of a circularly-symmetric density function w.r.t. point z, i.e., λ(z + x) = λz (|x|), for an appropriately defined function λz : [0, ∞) → [0, ∞).
The above result provides a complete statistical characterization in integral form of the interference experienced at
a position z ∈ R2 due to Poisson distributed interferers.
Of particular interest is the case of a circularly-symmetric
density, which only requires a single integration over the radial
coordinate and has previously led to tractable analysis for
various wireless network models of interest such as mobile ad
hoc [1] and downlink cellular [25]. In the following it will
be shown that even though the interference model of Sec.
II suggests a non circularly-symmetric interference density
due to the random shape of the guard region, the Laplace
transform formula of (3) holds exactly for IxR ,a and is a (tight) lower bound for IxR ,u with appropriately defined circularly-symmetric equivalent density functions.
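As an aside, once an equivalent radial density λz (r) is available, the single integral in (3) is easy to evaluate numerically; the following Python sketch (an illustration, not code from the paper; the example density and parameter values are arbitrary) does so with SciPy.

import numpy as np
from scipy.integrate import quad

def laplace_transform(s, P, alpha, density, r_max=np.inf):
    """Evaluate (3) for a circularly-symmetric equivalent density 'density(r)'.
    Uses gamma(s P r^(-alpha)) = s P / (r^alpha + s P) to avoid the r = 0 singularity."""
    integrand = lambda r: density(r) * r * (s * P) / (r ** alpha + s * P)
    integral, _ = quad(integrand, 0.0, r_max, limit=200)
    return np.exp(-2.0 * np.pi * integral)

# Example (hypothetical values): homogeneous density lam outside an implicit hole of radius R
lam, R = 1e-4, 50.0
L = laplace_transform(s=1.0, P=1.0, alpha=4.0, density=lambda r: lam * (r >= R))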
A. AP-Generated Interference
Note that considering the Voronoi cell of a random AP at
x ∈ Φa as a guard region has no effect on the distribution of
the interfering APs. This is because the Voronoi cell of any
AP is a (deterministic) function of Φa , therefore, it does not
impose any constraints on the realization of Φa , apart from
implying that the AP at x does not produce interference. By
well known properties of HPPPs [27], conditioning on the
position x of the non-interfering AP, has no effect on the
distribution of the interfering APs. That is, the interfering AP
positions are still distributed as an HPPP of density λa as in
the case without imposing any guard region whatsoever.
However, when it is the Voronoi cell of the nearest AP to the origin that is considered as a guard region, an implicit guard region is formed. Indeed, the interfering APs, given that the AP at x∗ is not interfering, are effectively distributed as an inhomogeneous PPP with density [25]
λ̃a (x) = λa I(x ∉ B(o, |x∗ |)), x ∈ R2 ,   (4)
i.e., an implicit circular guard zone is introduced around the
typical node (see Fig. 1), since, if this was not the case,
another AP could be positioned at a distance smaller than
|x∗ | from the origin, which is impossible by assumption. This
observation may be used to obtain the Laplace transform of
the AP-generated interference experienced at xR directly from
the formula of (2). However, the following result shows that
the two-dimensional integration can be avoided as the formula
of (3) is also valid in this case with an appropriately defined
equivalent density function.
Proposition 2. The Laplace transform, LIxR ,a (s | x∗ ) ≜ E(e^{−sIxR ,a} | x∗ ), of the AP-generated interference power IxR ,a , conditioned on x∗ , equals the right-hand side of (3) with P = Pa , α = αa and λz (r) = λxR ,a (r), r ≥ 0, where, for xR ≠ o,
λxR ,a (r) ≜ { λa , if r > |x∗ | + |xR |;
              0, if r ≤ ||x∗ | − |xR || and |x∗ | > |xR |;
              λa , if r ≤ ||x∗ | − |xR || and |x∗ | < |xR |;
              λa /2, if r ≤ ||x∗ | − |xR || and |x∗ | = |xR |;
              λa (1 − (1/π) arccos(d)), otherwise,   (5)
with d ≜ ( r² − (|x∗ |² − |xR |²) ) / ( 2r|xR | ), and
λxR ,a (r) = λa I(r ≥ |x∗ |),   (6)
for xR = o.
Proof: See Appendix A.
The following remarks can be made:
1) The interference power experienced at an arbitrary position xR 6= o under the considered guard region
scheme is equal in distribution to the interference power
experienced at the origin without any guard region and
with interferers distributed as an inhomogeneous PPP of
an equivalent, xR -dependent density given by (5).
2) Even though derivation of the statistics of IxR ,a was conditioned on xR and x∗ , the resulting equivalent interferer density depends only on their norms |xR | and |x∗ |. Although the independence from ∠x∗ might have been expected due to the isotropic property of Φa [1], there is no obvious reason why one would expect independence also from ∠xR .
3) λxR ,a (r) is a decreasing function of |x∗ |, corresponding
to a (statistically) smaller AP interference power due to
an increased guard zone area.
4) For xR = o, corresponding to a standard downlink with interference generated from other downlink transmissions, Prop. 2 coincides with the analysis of [25], as expected.
5) The Laplace transform of the interference in the case of xR ≠ x∗ ≠ o was examined previously in [28]. However, the corresponding formulas appear as two-dimensional integrals that offer limited insights compared to the simpler and more intuitive equivalent density formulation given in Prop. 2.
6) The case xR = x∗ ≠ o corresponds to the nearest-neighbor transmission scenario considered in [29] where the validity of λxR ,a (r) in (5) as an equivalent density function for computation of the Laplace transform of the interference power was not observed.
7) In [30], the Laplace transform of the interference power
experienced at xR ∈ R2 due to a Poisson hole process,
i.e., with interferers distributed as an HPPP over R2
except in the area covered by randomly positioned disks
(holes), was considered. The holes were assumed to not
include xR and a lower bound for the Laplace transform
was obtained by considering only a single hole [30,
Lemma 5], which coincides with the result of Prop. 2. This is not surprising as the positions of the APs, conditioned on x∗ , are essentially distributed as a single-hole Poisson process. Note that Prop. 2 generalizes [30, Lemma 5] by allowing xR to be covered by the hole and considers a different proof methodology.
Fig. 2. Equivalent radial density of interfering APs experienced at various xR ∈ R2 , conditioned on |x∗ | = 1/(2√λa ).
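For reproducing curves such as those in Figs. 2 and 3, a direct transcription of (5)–(6) may be useful; the following Python sketch is an illustration under the notation of Prop. 2 and is not code accompanying the paper.

import numpy as np

def lambda_xR_a(r, x_star_norm, xR_norm, lam_a):
    """Equivalent AP-interferer density of Prop. 2, eqs. (5)-(6)."""
    if xR_norm == 0.0:                                  # eq. (6)
        return lam_a * float(r >= x_star_norm)
    if r > x_star_norm + xR_norm:
        return lam_a
    if r <= abs(x_star_norm - xR_norm):
        if x_star_norm > xR_norm:
            return 0.0
        return lam_a if x_star_norm < xR_norm else lam_a / 2.0
    d = (r ** 2 - (x_star_norm ** 2 - xR_norm ** 2)) / (2.0 * r * xR_norm)
    return lam_a * (1.0 - np.arccos(np.clip(d, -1.0, 1.0)) / np.pi)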
Figure 2 shows the normalized density function λxR ,a (r)/λa for various values of |xR | and assuming that |x∗ | = 1/(2√λa ),
the expected distance from the nearest AP. It can be seen that
the presence of the implicit circular guard region results in
reducing the equivalent interferer density in certain intervals
of the radial coordinate r, depending on the value of |xR |.
In particular, when |xR | < |x∗ |, it is guaranteed that no APs
exist within a radius |x∗ | − |xR | > 0 from xR . In contrast,
when |xR | > |x∗ | there is no protection from APs in the close
vicinity of xR .
The case |xR | = |x∗ | is particularly interesting since it
corresponds to the case when the receiver of interest is the
AP at x∗ , experiencing interference from other AP, e.g., when
operating in FD. For x∗ 6= o, corresponding to an uplink
transmission by the typical node to its nearest AP, it can be
easily shown that it holds
λx∗ ,a (r) = λa ( 1/2 + r/(2π|x∗ |) ) + O(r³), r → 0,   (7)
i.e., the guard region results in the serving AP experiencing
about half of the total interfering APs density in its close vicinity, which is intuitive as for asymptotically small distances
from x∗ the boundary of the circular guard region can be
locally approximated as a line that divides the plane in two
halves, one with interferer density λa and one with interferer
density 0. Figure 3 shows the normalized λx∗ ,a (r) for various
values of |x∗ |, where its linear asymptotic behavior as well as
the advantage of a larger |x∗ | are clearly visible.
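A quick numerical check of the asymptotic expansion (7) against the exact density (5) can be done by reusing the lambda_xR_a sketch given above (illustrative values, not results from the paper):

import numpy as np

lam_a, x_star = 1.0, 0.5
for r in (1e-3, 1e-2, 1e-1):
    exact = lambda_xR_a(r, x_star, x_star, lam_a)
    asymptote = lam_a * (0.5 + r / (2.0 * np.pi * x_star))
    print(r, exact, asymptote)        # agreement is up to O(r^3) for small r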
B. UE-Generated Interference
The positions of interfering UEs, conditioned on the realization of Φa , are distributed as an inhomogeneous PPP of
density
λ̃u (x | Φa ) = λu I(x ∉ V ∗ ), x ∈ R2 .   (8)
Fig. 3. Equivalent radial density of interfering APs experienced at x∗ .
Although the expression for λ̃u is very similar to that of
the density λ̃a of interfering APs given in (4), the random
Voronoi guard region V ∗ appearing in (8) is significantly
more complicated than the deterministic circular guard zone
B(o, |x∗ |), which renders the analysis more challenging. To
simplify the following exposition, it will be assumed that a
rotation of the Cartesian coordinate axes is performed such
that ∠x∗ = 0. Note that this rotation has no effect in the
analysis due to the isotropic property of the HPPP [1] and
immediately renders the following results independent of the
value of ∠x∗ in the original coordinate system.
Since it is of interest to examine the UE interference
statistics conditioned only on x∗ , a natural quantity to consider,
that will be also of importance in the interference statistics
analysis, is the probabilistic cell area (PCA) function pc (x |
x∗ ), x ∈ R2 . This function gives the probability that a point
x ∈ R2 lies within V ∗ conditioned only on x∗ , i.e.,1
pc (x | x∗ ) , P(x ∈ V ∗ | x∗ ).
Lemma 3. For all x ∈ R2 , the PCA function equals
pc (x | x∗ ) = { 1, if |x| ≤ |x∗ | and ∠x = 0; e^{−λa |A|} < 1, otherwise },
where A ≜ B(x, |x − x∗ |) \ B(o, |x∗ |). For all x ≠ x∗ , it holds
|A| = { r∗² (|∠x| + θ∗ ) − |x∗ |² |∠x| + |x||x∗ | sin(|∠x|), if x∗ ≠ o; π|x|², if x∗ = o },
with r∗ ≜ |x − x∗ | = √( |x|² + |x∗ |² − 2|x||x∗ | cos(∠x) ) and
θ∗ ≜ { π − arcsin( |x| sin(|∠x|)/r∗ ), if |x| cos(∠x) > |x∗ |; arcsin( |x| sin(|∠x|)/r∗ ), if |x| cos(∠x) ≤ |x∗ |.
1 Recall that, under the system model, x∗ is also the AP closest to the origin.
Fig. 4. Contours of pc (x | x∗ ) (solid lines), corresponding to probabilities 0.1 to 0.9 in steps of 0.1, for |x∗ | = 1/√λa . The points of the line segment joining o to x∗ are the only ones with pc (x | x∗ ) = 1. The corresponding contours of e^{−λa π|x−x∗|²} are also shown (dashed lines).
Proof: The probability that the point x ∈ R2 belongs
to V is equal to the probability that there does not exist a
point of Φa \ {x∗ } within the set A, which equals e−λa π|A|
[27]. For |x| ≤ |x∗ | and ∠x = 0, it is a simple geometrical
observation that A = Ø, therefore, |A| = 0. When x∗ = o,
A = B(x, |x|) \ B(o, 0) = B(x, |x|), and, therefore, |A| =
π|x|2 . For all other cases, |A| can be computed by the same
approach as in the proof of [31, Theorem 1]. The procedure
is straightforward but tedious and is omitted.
A simple lower bound for pc (x | x∗ ) directly follows by
noting that A ⊆ B(x, |x − x∗ |) for all x ∈ R2 .
Fig. 5. Average area of V ∗ conditioned on x∗ as a function of the (normalized) distance of x∗ from the origin.
Corollary 4. The PCA function is lower bounded as
pc (x | x∗ ) ≥ e^{−λa π|x−x∗|²} , x ∈ R2 ,   (9)
with equality if and only if x∗ = o.
Remark: The right-hand side of (9) equals the probability
that x belongs to the Voronoi cell of the AP positioned at x∗
when the latter is not conditioned on being the closest AP to
the origin or any other point in R2 .
Figure 4 depicts pc (x | x∗ ) for the case where |x∗ | = 1/√λa (behavior is similar for other values of |x∗ | > 0). Note
that, unless x∗ = o, pc (x | x∗ ) is not circularly symmetric
w.r.t. any point in R2 , with its form suggesting that points
isotropically distributed in the vicinity of x∗ are more probable
to lie within V ∗ than points isotropically distributed in the
vicinity of o.
The lower bound e^{−λa π|x−x∗|²} , x ∈ R2 , is also shown in
Fig. 4, clearly indicating the probabilistic expansion effect of
the Voronoi cell of an AP, when the latter is conditioned on
being the closest to the origin. This cell expansion effect is
also demonstrated in Fig. 5 where the conditional average cell
area, equal to
E(|V ∗ | | x∗ ) = E( ∫_{R²} I(x ∈ V ∗ ) dx | x∗ ) = ∫_{R²} pc (x | x∗ ) dx,   (10)
is plotted as a function of the distance |x∗ |, with the integral of
(10) evaluated numerically using Lemma 3. Simulation results
are also depicted serving as a verification of the validity of
Lemma 3. It can be seen that the average area of the guard
region increases with |x∗ |, which implies a corresponding
increase of the average number of UEs that lie within the
region. However, as will be discussed in the following, even
though resulting in more UEs silenced on average, an increasing |x∗ | does not necessarily imply improved protection from
UE interference, depending on the receiver position.
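The PCA function of Lemma 3 and the conditional mean cell area (10) can be reproduced numerically; the sketch below (polar-coordinate quadrature, for illustration only and not code from the paper) follows the notation of Lemma 3, with x given in polar form (ρ, φ) and x∗ = (|x∗ |, 0) after the rotation ∠x∗ = 0.

import numpy as np
from scipy.integrate import dblquad

def area_A(rho, phi, x_star):
    """|A| of Lemma 3 for x = (rho, phi) in polar form and x* = (x_star, 0), x != x*."""
    if x_star == 0.0:
        return np.pi * rho ** 2
    r_st = np.sqrt(rho ** 2 + x_star ** 2 - 2.0 * rho * x_star * np.cos(phi))
    s = np.clip(rho * np.sin(abs(phi)) / r_st, -1.0, 1.0)
    theta_st = np.pi - np.arcsin(s) if rho * np.cos(phi) > x_star else np.arcsin(s)
    return (r_st ** 2 * (abs(phi) + theta_st) - x_star ** 2 * abs(phi)
            + rho * x_star * np.sin(abs(phi)))

def pc(rho, phi, x_star, lam_a):
    """PCA function pc(x | x*) of Lemma 3."""
    if phi == 0.0 and rho <= x_star:
        return 1.0
    return np.exp(-lam_a * area_A(rho, phi, x_star))

def mean_cell_area(x_star, lam_a, r_max=10.0):
    """Conditional mean cell area E(|V*| | x*) via (10), in polar coordinates."""
    val, _ = dblquad(lambda rho, phi: pc(rho, phi, x_star, lam_a) * rho,
                     -np.pi, np.pi, lambda _: 0.0, lambda _: r_max)
    return val

# Example call (lam_a = 1, so areas are in units of 1/lam_a); r_max truncates the integral:
# mean_cell_area(1.0, 1.0)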
The interference statistics of the UE-generated interference
power are given in the following result.
Proposition 5. The Laplace transform LIxR ,u (s | x∗ ) ≜ E(e^{−sIxR ,u} | x∗ ) of the UE-generated interference power IxR ,u , conditioned on x∗ , is lower bounded by the right-hand side of (3) with P = Pu , α = αu and λz (r) = λxR ,u (r), r ≥ 0, where
λxR ,u (r) ≜ λu ( 1 − (1/(2π)) ∫_0^{2π} pc ((r, θ) + xR | x∗ ) dθ ),   (11)
with pc (x | x∗ ) as given in Lemma 3.
Proof: See Appendix B.
The following remarks can be made:
1) The statistics of the interfering UE point process when
averaged over V ∗ (equivalently, over Φa \ {x∗ }) do not
correspond to a PPP, even though this is the case when a
fixed realization of V ∗ is considered. Therefore, it cannot
be expected that Lemma 1 applies for LIxR ,u (s | x∗ ).
However, as Prop. 5 shows, when the interfering point
process is treated in the analysis as a PPP of an appropriately defined equivalent density function λxR ,u (r), a
tractable lower bound for LIxR ,u (s | x∗ ) is obtained,
which will be shown to be tight in the numerical results
section.
2) For x∗ 6= o, the non circularly-symmetric form of the
PCA function results in an integral-form expression for
the equivalent density λxR ,u (r) that depends in general
on ∠xR , in contrast to the case for λxR ,a (r) (see Sec. III.A, Remark 2). A closed form expression for the
equivalent density is only available for x∗ = o discussed
below.
3) When the receiver position is specified as |xR |∠xR with
∠xR uniformly distributed over [−π, π), a straightforward extension of the proof of Prop. 5 results in the
same lower bound for the conditioned Laplace transform
of IxR ,u as in Prop. 5, with λxR ,u (r) replaced by
λ_{|xR |,u}(r) ≜ (1/(2π)) ∫_{−π}^{π} λxR ,u (r) d∠xR ,   (12)
which is independent of ∠xR .
Although λxR ,u (r) is not available in closed form when x∗ 6=
o, the numerical integration over a circular contour required
in (11) is very simple. With λxR ,u (r), r ≥ 0, pre-computed,
evaluation of the lower bound for LIxR ,u (s | x∗ ) is of the same
(small) numerical complexity as the evaluation of LIxR ,a (s |
x∗ ). Moreover, a closed-form upper bound for λxR ,u (r) is
available, which can in turn be used to obtain a looser lower
bound for LIxR ,u (s | x∗ ) that is independent of ∠xR . The
bound is tight in the sense that it corresponds to the exact
equivalent density when x∗ = o.
Proposition 6. The equivalent density λxR ,u (r) of Prop. 5 is upper bounded as
λxR ,u (r) ≤ λu [ 1 − e^{−λa π(|x∗ |² + |xR |² + r²)} I0 (2πλa x̄r) ],   (13)
where x̄ ≜ max(|x∗ |, |xR |) and I0 (·) denotes the zero-order modified Bessel function of the first kind. Equality holds only when x∗ = o.
Proof: See Appendix C.
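For completeness, a minimal numerical sketch of (11) and of the resulting lower bound via (3) is given below; it reuses the pc routine of the earlier sketch (the receiver position xR is passed as a complex number in the rotated coordinate system with ∠x∗ = 0) and is provided for illustration only, not as the authors' implementation.

import numpy as np
from scipy.integrate import quad

def lambda_xR_u(r, xR, x_star_norm, lam_a, lam_u, n_theta=256):
    """Equivalent UE-interferer density (11): average pc over the circle of radius r around xR.
    xR is a complex number; pc(rho, phi, x_star, lam_a) is the routine sketched after (10)."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    pts = xR + r * np.exp(1j * thetas)
    avg = np.mean([pc(abs(p), np.angle(p), x_star_norm, lam_a) for p in pts])
    return lam_u * (1.0 - avg)

def laplace_lower_bound(s, Pu, alpha, xR, x_star_norm, lam_a, lam_u, r_max=20.0):
    """Lower bound of Prop. 5 on the Laplace transform of I_{xR,u}, via (3)."""
    integrand = lambda r: (lambda_xR_u(r, xR, x_star_norm, lam_a, lam_u)
                           * r * (s * Pu) / (r ** alpha + s * Pu))
    val, _ = quad(integrand, 0.0, r_max, limit=100)
    return np.exp(-2.0 * np.pi * val)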
An immediate observation of the above result is that, given
xR , the equivalent interfering UE density decreases for any
x∗ 6= o compared to the case x∗ = o. This behavior is expected
due to the cell expansion effect described before. However, as
the bound of (13) is independent on ∠xR , it does not offer
clear information on how the interferering UE density changes
when different values of x∗ 6= o are considered for a given
xR . In order to obtain futher insights on the UE-generated
interference properties, λxR ,u (r) is investigated in detail in the
following section for certain special instances of xR and/or x∗ ,
which are of particular interest in cellular networks.
IV. A NALYSIS OF S PECIAL C ASES OF UE-G ENERATED
I NTERFERENCE
A. xR = x∗
This case corresponds to a standard uplink cellular transmission with nearest AP association, no intra-cell interference,
and interference generated by UEs outside V ∗ operating in
uplink and/or D2D mode. This case was previously investigated in [16], [17], with heuristically introduced equivalent
densities considering only the average performance over x∗ .
In this section, a more detailed investigation of the equivalent
density properties is provided.
For this case, a tighter upper bound than the one in Prop. 6
is available for λx∗ ,u (r), which, interestingly, is independent
of |x∗ |, in the practical case when x∗ 6= o.
Lemma 7. For xR = x∗ , the equivalent density λxR ,u (r) =
λx∗ ,u (r) of Prop. 5 is upper bounded as
λx∗ ,u (r) ≤ λu (1 − e^{−λa πr²} ),   (14)
with equality if and only if x∗ = o.
Proof: Follows by replacing the term pc ((r, θ) + xR | x∗ ) = pc ((r, θ) + x∗ | x∗ ) appearing in (11) with its bound given in Cor. 4, which evaluates to e^{−λa πr²} .
The bound of (14) indicates that λx∗ ,u (r) tends to zero
at least as fast as O(r2 ) for r → 0, irrespective of x∗ .
The following exact statement on the asymptotic behavior of
λx∗ ,u (r) shows that λx∗ ,u (r) ∼ cr2 , r → 0, with the value of
c independent of |x∗ | when x∗ 6= o.
Proposition 8. For xR = x∗ , the equivalent density
λxR ,u (r) = λx∗ ,u (r) of Prop. 5 equals
λx∗ ,u (r) = λu λa bπr2 + O(r3 ), r → 0,
(15)
with b = 1 for x∗ = o and b = 1/2 for x∗ 6= o.
Proof: For x∗ = o, the result follows directly from
Lemma 7. For x∗ 6= o, it can be shown by algebraic manipulation based on Lemma 3, that the term pc ((r, θ) + xR | x∗ ) =
pc ((r, θ) + x∗ | x∗ ) appearing in (11) equals
λu λa (qπ + (−1)q arccos(sin(θ)) + cos(θ) sin(θ)) r2 +O(r3 ),
for r → 0, where q = 0 for θ ∈ [−π, −π/2] ∪ [π/2, π) and
q = 1 for θ ∈ (−π/2, π/2). Substituting this expression in
(11) and performing the integration leads to the result.
Equation (15) indicates that imposing a guard region V ∗
when x∗ 6= o reduces the interfering UE density by half in
the close vicinity of x∗ , compared to the case x∗ = o. This
effect is similar to the behavior of λx∗ ,a (r) discussed in Sec.
III. A. However, λx∗ ,a (0) = λa /2 , whereas λx∗ ,u (0) = 0,
clearly demonstrating the effectiveness of the guard region for
reducing UE-generated interference in the vicinity of x∗ .
For non-asymptotic values of r, the behavior of λx∗ ,u (r) can only be examined by numerical evaluation of (11). Figure 6 depicts the normalized equivalent density λx∗ ,u (r)/λu for various values of |x∗ | in the range of practical interest [0, 3/(2√λa )] (note that P(|x∗ | > 3/(2√λa )) = e^{−λa π(3/(2√λa ))²} ≈ 8.5 × 10−4 ). It can be seen that for non-asymptotic values of r, λx∗ ,u (r) is a decreasing function of
|x∗ | for all r > 0, corresponding to a reduced UE-generated
interference. This is expected since, as evident from Fig. 4, the
cell expansion effect with increasing |x∗ | can only improve the
interference protection at x∗ .
Interestingly, the dependence of λx∗ ,u (r) on |x∗ |, although
existing, can be seen to be rather small. This observation along
with Prop. 8 strongly motivates the consideration of a single
curve as an approximation to the family of curves in Fig. 6.
A natural approach is to consider the upper bound of (14),
which has the benefit of being available in closed form. This
is essentially the approach that was heuristically employed in
[17] without observing that it actually corresponds to a (tight)
bound for the equivalent interferer density.
Fig. 6. Equivalent density function λx∗ ,u (r) for various |x∗ | (solid curves), with the direction of the arrow corresponding to increasing |x∗ |. A piecewise-linear approximation and bound are also shown.
Fig. 7. Equivalent density function λxR ,u (r) when x∗ = o for various |xR |.
Another approach is to consider a piecewise-linear curve,
which, in addition to having a simple closed form, results
in a closed form approximate expression for the bound of
LIx∗ ,u (s | x∗ ) as given below (details are straightforward and
omitted).
Lemma 9. With the piecewise-linear approximation
λx∗,u(r) ≈ λu min(δr, 1), r ≥ 0,   (16)
for some δ > 0, the lower bound for LIx∗,u(s | x∗) equals
exp( −λu π [ C(Pu s)^{2/α} + (1/δ²)( (2/3) F̃(3) − F̃(2) ) ] ),
where C ≜ (2π/α)/sin(2π/α) and F̃(x) ≜ ₂F₁(1, x/α; 1 + x/α; −(sPu δ^α)^{−1}), with ₂F₁(·) denoting the hypergeometric function.
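As a rough numerical illustration of Lemma 9 (a sketch only, not a definitive implementation; the values of α, Pu, λu, δ and s below are assumptions, and scipy's hyp2f1 is used for ₂F₁), the closed-form lower bound can be evaluated as follows.

```python
import numpy as np
from scipy.special import hyp2f1

# Sketch evaluating the closed-form lower bound of Lemma 9 under assumed parameters.
alpha, P_u, lam_u = 4.0, 1.0, 1.0
delta = 1.13118 * np.sqrt(1.0)          # tightest piecewise-linear bound, lam_a = 1 assumed
C = (2.0 * np.pi / alpha) / np.sin(2.0 * np.pi / alpha)

def F_tilde(x, s):
    return hyp2f1(1.0, x / alpha, 1.0 + x / alpha, -1.0 / (s * P_u * delta**alpha))

def laplace_lb(s):
    corr = (2.0 / 3.0) * F_tilde(3.0, s) - F_tilde(2.0, s)
    return np.exp(-lam_u * np.pi * (C * (P_u * s)**(2.0 / alpha) + corr / delta**2))

for s in (0.1, 1.0, 10.0):
    print(s, round(laplace_lb(s), 4))
```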
A piecewise-linear approximation as in (16) was heuristically proposed in [16] for a slightly more general system
model than the one considered here. A closed form formula
for δ was provided, only justified as obtained by means of
curve fitting without providing any details on the procedure.
For the system model considered in this paper, this formula results in δ ≈ 0.82687√λa, and the corresponding piecewise-linear approximation of the equivalent density is also depicted
in Fig. 6. It can be seen that this approximation is essentially
an attempt to capture the average behavior of λx∗ ,u (r) over
|x∗ |, thus explaining its good performance reported in [16].
However, it is not clear why this particular value of δ is preferable to any other (slightly) different value. A more rigorous approach for the selection of δ is to consider the tightest piecewise-linear upper bound for λx∗,u(r), which leads to another (looser) lower bound for LIxR,u(s | x∗). This bound corresponds to a value of δ ≈ 1.13118√λa, found by numerical optimization, and is also shown in Fig. 6.

B. x∗ = o
This case corresponds to the typical node being an AP, i.e.,
a typical AP. Note that, in contrast to the cases when x∗ 6= o,
the Voronoi cell of the typical AP is not conditioned to cover
any point in R2 , therefore, no cell expansion effect is observed
and, for xR 6= o, it is possible that the receiver lies outside
the guard region. This is a different operational scenario from
the common downlink with nearest AP association [25] where
the intended receiver by default lies within the Voronoi cell
of the serving AP. It may arise, for example, in case of D2D
communications, where both communicating UEs receive data
(e.g., control information) from the same AP.
The equivalent interfering UE density is given in Prop. 6 and
is depicted in Fig. 7 for various values of |xR |. Note that the
case xR = o corresponds to the typical AP in receive mode,
whereas for xR 6= o the typical AP transmits. It can be seen
that increasing |xR | effectively results in reduced protection
in the close vicinity of the receiver since the latter is more
probable to lie near the edge or even outside the Voronoi guard
region.
C. xR = o
This case corresponds to the typical node being at receive
mode. One interesting scenario which corresponds to this case
is downlink cellular transmission with nearest AP association,
where interference is due to transmitting UEs (e.g., in uplink
or D2D mode) that lie outside the serving cell.
When x∗ = o, the setup matches the case xR = x∗ with x∗ = o discussed above, and it holds λo,u(r|x∗) = λu(1 − e^{−λa π r²}). For x∗ ≠ o, the asymptotic behavior of λo,u(r|x∗) can be obtained by the same approach as in the proof of Prop. 8.
Proposition 10. For x∗ ≠ o, the equivalent density function λxR,u(r) = λo,u(r|x∗) of Prop. 5 equals
λo,u(r) = (8/π) λu λa |x∗| r + O(r²), r → 0.
Fig. 8. Equivalent density function λo,u(r), normalized by λu and plotted versus r × 2√λa, for various |x∗| (|x∗| × 2√λa = 0, 0.5, 1, 1.5, 2, 2.5, 3), with the direction of the arrow corresponding to increasing |x∗|.
In stark contrast to the behavior of λx∗ ,u (r), λo,u (r) tends
to 0 only linearly (instead of quadratically) as r → 0 when
x∗ 6= o, and with a rate that is also proportional to (instead
of independent of) |x∗ |. This implies that the typical node
is not as well protected from interfering UEs as its nearest
AP, experiencing an increased equivalent interferer density in
its close vicinity as |x∗ | increases. This can be understood by
noting that with increasing |x∗ |, the origin, although contained
in a guard region of (statistically) increasing area, is more
probable to lie near the cell edge (Fig. 1 demonstrates such a
case.)
The strong dependence of λo,u (r) on |x∗ | observed for r →
0 also holds for all values of r. This is shown in Fig. 8 where
the numerically evaluated λo,u (r) is depicted for the same
values of |x∗ | as those considered in Fig. 6. It can be seen
that λo,u(r) increases with |x∗| for all r up to approximately 1–1.5 times the average value of |x∗| (equal to 1/(2√λa)), whereas it decreases with |x∗| for greater r.
This behavior of λo,u(r), especially for r → 0, does not permit a single function to be used as a reasonable approximation or bound for λo,u(r), irrespective of |x∗|, as was done for λx∗,u(r). Noting that the upper
bound of (13) has the same value whenever |x∗ | or |xR | are
zero, it follows that the curves shown in Fig. 7 with |xR |
replaced by |x∗ | are upper bounds for the corresponding curves
of Fig. 8. Unfortunately, it can be seen that the bound is very
loose for small r and/or large |x∗ |.
D. |xR | ∈ (0, |x∗ |), ∠xR = 0
The previous cases demonstrate how different the properties of the UE-generated interference become when different
receiver positions are considered. This observation motivates
the question of which receiver position is best protected from
UE interference given the positions o and x∗ ≠ o of the typical
node and nearest AP, respectively. This question is relevant in,
e.g., relay-aided cellular networks where the communication
between a UE and its nearest AP is aided by another node (e.g.,
an inactive UE) [32]. Although the performance of a relay-aided communication depends on multiple factors, including the distances among the nodes involved and the considered transmission scheme [33], a reasonable choice for the relay position is the one experiencing the least interference.
By symmetry of the system geometry, this position should
lie on the line segment joining the origin with x∗ (inclusive).
However, as can be seen from Figs. 6 and 8, the shape of
the equivalent interferer density does not allow for a natural
ordering of the different values of xR . For example, λx∗ ,u (r) is
smaller than λo,u (r) for small r, whereas the converse holds
for large r. Noting that the receiver performance is mostly
affected by nearby interference [1], a natural criterion to order
the interfering UE densities for varying xR is their asymptotic
behavior as r → 0. For xR = x∗ and xR = o, this is
given in Props. 8 and 10, respectively. For all other positions
the asymptotic density is given by the following result.
Proposition 11. For |xR| ∈ (0, |x∗|), ∠xR = 0, and x∗ ≠ o, the equivalent density function λxR,u(r) of Prop. 5 equals
λxR,u(r) = λu λa · 8(|x∗|/|xR|)² / [9π(|x∗| − |xR|)] · r³ + O(r⁴), r → 0.   (17)
It follows that the optimal receiver position must have
|xR | ∈ (0, |x∗ |) since, in this case, the density scales as O(r3 )
instead of O(r) and O(r2 ) when |xR | = 0 and |xR | = |x∗ |,
respectively. Its value can be easily obtained by minimization
of the expression in (17) w.r.t. |xR |.
Corollary 12. The receiver position that experiences the smallest equivalent UE interferer density in its close vicinity lies on the line segment joining the origin to x∗ and is at distance |x∗|/√2 from the origin. The corresponding equivalent density equals λxR,u(r) = λu λa (32/(9π)) r³ + O(r⁴), r → 0.
V. N UMERICAL E XAMPLES
In this section, numerical examples will be given, demonstrating the performance of various operational scenarios in
cellular networks that can be modeled as special cases of the
analysis presented in the previous sections. The system performance metric that will be considered is LIxR,k(ρ^{αk} θ | x∗),
with k ∈ {a, u} depending on whether AP- or UE-generated
interference is considered, with ρ > 0, θ > 0 given. Note that
this metric corresponds to the coverage probability that the
signal-to-interference ratio (SIR) at xR is greater than θ, when
the distance between transmitter and receiver is ρ, the direct
link experiences a path loss exponent αk and Rayleigh fading,
and the transmit power is equal to Pk [1]. For simplicity and
w.l.o.g., all the following examples correspond to λa = 1 and
αa = αu = 4.
The analytical results of the previous sections will be
compared against simulated system performance where, in
a single experiment, the distribution of interfering APs and
UEs is generated as follows. Given the AP position closest
to the origin, x∗ , the interfering AP positions are obtained as
a realization of a PPP of density as in (4). Afterwards, an
independent realization of a HPPP of density λu is generated,
which, after removing the points lying within V ∗ , represents
the interfering UE positions.
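As a rough illustration of this procedure, the following Python sketch (a hedged outline only, not the authors' simulator; the window size, densities and node positions are assumptions, and guard-region membership is checked via nearest-AP distances) estimates the coverage probability under UE-generated interference for fixed x∗ and xR.

```python
import numpy as np

# Sketch of the Monte Carlo procedure described above: interfering APs follow a
# PPP of density lam_a outside B(o, |x*|) as in (4), and interfering UEs follow an
# HPPP of density lam_u thinned by the Voronoi guard region V* of x*.
rng = np.random.default_rng(0)
lam_a = lam_u = 1.0
alpha, P_u = 4.0, 1.0
R_sim = 8.0                                        # simulation window radius (assumption)
x_star = np.array([0.5 / np.sqrt(lam_a), 0.0])     # nearest-AP position (assumption)
x_R = 0.5 * x_star                                 # receiver position (assumption)
rho = np.linalg.norm(x_R)                          # direct-link distance (typical node at o)
theta_dB = np.arange(-20, 21, 10)

def disk_ppp(lam):
    n = rng.poisson(lam * np.pi * R_sim**2)
    r, ang = R_sim * np.sqrt(rng.random(n)), 2 * np.pi * rng.random(n)
    return np.column_stack((r * np.cos(ang), r * np.sin(ang)))

trials, cov = 2000, np.zeros(theta_dB.size)
for _ in range(trials):
    aps = disk_ppp(lam_a)
    aps = aps[np.linalg.norm(aps, axis=1) > np.linalg.norm(x_star)]   # interfering APs
    aps = np.vstack((aps, x_star))
    ues = disk_ppp(lam_u)
    d_star = np.linalg.norm(ues - x_star, axis=1)
    d_near = np.linalg.norm(ues[:, None, :] - aps[None, :, :], axis=2).min(axis=1)
    ues = ues[d_star > d_near]                     # keep UEs outside the guard region V*
    d_i = np.linalg.norm(ues - x_R, axis=1)
    I = np.sum(rng.exponential(size=d_i.size) * P_u * d_i**(-alpha)) + 1e-12
    S = rng.exponential() * P_u * rho**(-alpha)    # Rayleigh-faded direct link
    cov += (S / I) > 10**(theta_dB / 10)
print(np.column_stack((theta_dB, cov / trials)))
```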
Fig. 9. Coverage probability of the link between the typical node and an isotropically positioned receiver under UE interference for (a) λu = λa , (b)
λu = 10λa .
In addition, for cases where the UE-generated interference
is of interest, a VPLP process of density λu = λa is also
simulated, corresponding to scenarios where one UE from each
Voronoi cell in the system transmits (as in the standard uplink
scenario with nearest AP association).
A. D2D Link Not Necessarily Contained in a Single Cell
In this example, UE-generated interference is considered at
a receiver position xR 6= o with ∠xR uniformly distributed
in [−π, π) and ρ = |xR |. This case may model a D2D link
where the typical node is a UE that directly transmits to a
receiver isotropically distributed at a distance |xR | > 0 . A
guard region V ∗ is established by the closest AP to the typical
node, however, it is not guaranteed to include xR , i.e., the D2D
nodes may not be contained within the same cell, which is a
common scenario that arises in practice, referred to as cross-cell D2D communication [21].
With interference generated from other UEs in D2D and/or
uplink mode, a lower bound for LIxR ,u (ραu θ|x∗ ) can be
obtained using Prop. 5 with an equivalent density function
λ|xR |,u (r) as given in (12). Figure 9 shows this lower bound
for λu = λa (Fig. 9a) and λu = 10λa (Fig. 9b). The
position x∗ is assumed to be isotropically distributed with |x∗| = 1/(2√λa) (the average distance of the closest AP from the typical node), and various values of the ratio |xR|/|x∗|
are considered. It can be seen that, compared to the simulation
under the PPP interference assumption, the quality of the
analytical lower bound depends on λu . For λu = λa , it
is very close to the exact coverage probability, whereas for
λu = 10λa , it is reasonably tight and able to capture the
behavior of the exact coverage probability over varying |xR |.
Also, for λu = λa and a VPLP interference, the analytical
expression provides a reasonably accurate approximation for
the performance of this analytically intractable case. As expected, performance degradation is observed with increasing
|xR |, due to both increasing path loss of the direct link as well
as increased probability of the receiver lying outside V ∗ .
B. AP-to-D2D-Receiver Link
In this example, AP-generated interference is considered at
a receiver position xR 6= o with ∠xR uniformly distributed
in [−π, π) and ρ = |x∗ − xR |. This case is similar to the
previous, however, considering AP-generated interference and
the receiver at xR receiving data from the AP node at x∗ . This
case models the scenario where the typical node establishes a
D2D connection with a node at xR , and the AP at x∗ sends
data to xR via a dedicated cellular link (e.g., for control purposes or for implementing a cooperative transmission scheme).
Note that, in contrast to the previous case, the link distance
ρ is random and equal to ρ = (|x∗|² + |xR|² − 2|x∗||xR| cos ψ)^{1/2},
with ψ uniformly distributed in [−π, π). The exact coverage
probability of this link can be obtained using Prop. 2 followed
by a numerically computed expectation over ψ.
Figure 10 shows E(LIxR,a(ρ^{αa} θ | x∗)) for an isotropically distributed x∗ with |x∗| = 1/(2√λa), and for various values
of the ratio |xR |/|x∗ |. Note that the case |xR | = 0 corresponds
to the standard downlink transmission model with nearest
AP association [25]. Monte Carlo evaluation of the coverage
probability (not shown here) perfectly matches the analytical
curves. It can be seen that increasing |xR | reduces the coverage
probability for small SIR but increases it for high SIR. This is
in direct correspondence with the behavior of the equivalent interferer
density λxR ,a (r) with increasing |xR | shown in Fig. 2.
C. Uplink with Nearest AP Association
In this example, UE-generated interference is considered
at a receiver position xR = x∗ with ρ = |x∗ | and λu = λa
corresponding to the conventional uplink transmission scenario
with nearest AP association and one active UE per cell (no
cross-mode interference). Figure 11 shows the lower bound
Fig. 10. Coverage probability versus θ (dB) for the link between the AP positioned at x∗ (|x∗| = 1/(2√λa)) and an isotropically positioned receiver, for |xR|/|x∗| = 0, 1/4, 1/2, 3/4, 1, 5/4.
of LIxR ,u (ραu θ|x∗ ) obtained by Prop. 5 as well as the
looser, but more easily computable, lower bound obtained
using the closed form expression given in Lemma 7 with δ = 1.13118√λa (resulting in the tightest bound possible with
a piecewise-linear equivalent density function). The coverage probability is computed for an AP at a distance |x∗| = c/(2√λa) from the typical node with c = 1/2, 1, 2, roughly corresponding to a small, average, and large uplink distance (performance is independent of ∠x∗).
As was the case in Fig. 9a, Prop. 5 provides a very tight
lower bound for the actual coverage probability (under the PPP
model for the interfering UE positions). The bound of Lemma
7, although looser, is nevertheless a reasonable approximation
of the performance, especially for smaller values of |x∗ |.
Compared to the VPLP model, the PPP model provides an
optimistic performance prediction that is, however, reasonably
tight, especially for large |x∗ |. This observation motivates its
usage as a tractable approximation of the actual interference
statistics as was also reported in [16], [17]. Interestingly,
the bound of Lemma 7 happens to provide an even better
approximation for the VPLP performance for |x∗| close to or smaller than 1/(2√λa).
Fig. 11. Coverage probability versus θ (dB) of the uplink between the typical node and its nearest AP (λu = λa), for |x∗| × 2√λa = 1/2, 1, 2.
Fig. 12. Coverage probability under UE interference for the links corresponding to the cases described in Sec. V. D.
D. Effect of Guard Region on UE-Generated Interference
Protection
In order to see the effectiveness of a Voronoi guard region
for enhancing link quality under UE-generated interference,
the performance under the following operational cases is
examined.
• Case A (no guard region is imposed): This results in the
standard transmission model under an HPPP of interferer positions of density λu [1]. The exact coverage probability is well known (see, e.g., [1, Eq. 3.29]).
• Case B (transmitter imposes a Voronoi guard region): This case corresponds to x∗ = o, xR ≠ o, modeling a non-standard downlink transmission (see also description in Table I).
• Case C (receiver imposes a Voronoi guard region): This case corresponds to xR = x∗ ≠ o, modeling a non-standard uplink transmission (see also description in Table I).
Cases B and C correspond to the analysis considered in Sec.
V. B. Assuming the same link distance ρ for all cases, Fig. 12
shows the analytically obtained coverage probability (exact for
Case A, lower bound for Cases B and C), assuming λu = λa .
It can be seen that imposing a Voronoi guard region (Cases B
and C) is always beneficial, as expected. However, for large
link distances, Case B provides only marginal gain as the
receiver is very likely to be located at the edge or even outside
the guard region. In contrast, the receiver is always guaranteed
to be protected under case C, resulting in the best performance
and significant gains for large link distances.
VI. C ONCLUSION
This paper considered the analytical characterization of
cross-mode inter-cell interference experienced in future cellular networks. By employing a stochastic geometry framework,
tractable expressions for the interference statistics were obtained that are exact in the case of AP-generated interference
and serve as a tight lower bound in the case of UE-generated
interference. These expressions are based on appropriately
defined equivalent interferer densities, which provide an intuitive quantity for obtaining insights on how the interference
properties change according to the type of interference and
receiver position. The considered system model and analysis
are general enough to capture many operational scenarios
of cellular networks, including conventional downlink/uplink
transmissions with nearest AP association as well as D2D
transmissions between UEs that do not necessarily lie in the
same cell. The analytical expressions of this paper can be
used for sophisticated design of critical system aspects such
as mode selection and resource allocation towards getting the
most out of D2D- and/or FD-enabled cellular communications. Interesting topics for future research include the investigation of the
joint properties of AP- and UE-generated interference as well
as extension of the analysis to the case of heterogeneous,
multi-tier networks.
A PPENDIX A
P ROOF OF P ROPOSITION 2
Let Φ̃a ≜ Φa \ {x∗} denote the interfering AP point
process. Given x∗ , Φ̃a is an inhomogeneous PPP of density
λ̃a (x), x ∈ R2 , defined in (4). Density λ̃a (x) is circularly
symmetric w.r.t. xR = o, which, using Lemma 1, directly
leads to the Laplace transform expression of (3) with radial
density as in (6). Considering the case xR 6= o, it directly
follows from Lemma 1 and (2) that
Z
LIxR ,a (s) = exp −
λ̃a (x + xR )γ(sPa |x|−αa )dx
R2
by a change of integration variable. By switching to polar
coordinates (centered at o) for the integration, the right-hand
side of (3) results with P = Pa , α = αa and λxR (r) =
λxR ,a (r), where
Z 2π
1
λ̃a ((r, θ) + xR )dθ
λxR ,a (r) =
2π 0
Z
λa 2π
=
I {(r, θ) + xR ∈
/ B(o, |x∗ |)} dθ
2π 0
Z
λa 2π
=
I {(r, θ) ∈
/ B(−xR , |x∗ |)} dθ
2π 0
Z
λa 2π
=
I {(r, θ) ∈
/ B((|xR |, 0), |x∗ |)} dθ,
2π 0
with xR representing the polar coordinates of the receiver in
the first equality, with a slight abuse of notation. The last
equality follows by noting that the integral does not depend on
the value of ∠xR . Evaluation of the final integral is essentially
the evaluation of an angle (see Fig. 13), which can be easily
obtained by elementary Euclidean geometry, resulting in the
expression of (5).
Fig. 13. Geometrical figure depicting the angle θ0, which is equal to the quantity ∫₀^{2π} I((r, θ) ∉ B((|xR|, 0), |x∗|)) dθ appearing in the proof of Prop. 2.
A PPENDIX B
P ROOF OF P ROPOSITION 5
It holds
LIxR,u(s | x∗) = E[ E[ e^{−s IxR,u} | Φa ] | x∗ ]
(a)= E[ exp( −∫_{R²} λ̃u(x|Φa) γ(sPu |x − xR|^{−αu}) dx ) | x∗ ]
≥ exp( −∫_{R²} E[ λ̃u(x + xR|Φa) | x∗ ] γ(sPu |x|^{−αu}) dx ),
where (a) follows by noting that, conditioned on Φa, the interfering UE point process equals Φu \ V∗, which is distributed as an inhomogeneous PPP of density λ̃u(x|Φa), x ∈ R², as given in (8), and using Lemma 1. The inequality is an application of Jensen's inequality with a change of integration variable. The result follows by noting that E[λ̃u(x|Φa) | x∗] = λu[1 − pc(x | x∗)] and switching to polar coordinates for the integration.
A PPENDIX C
P ROOF OF P ROPOSITION 6
Replacing the PCA function in (11) with its lower bound as per Cor. 4 results in the equivalent density bound
λxR,u(r) ≤ λu ( 1 − (1/2π) ∫₀^{2π} e^{−λa π |(r,θ)+xR−x∗|²} dθ )
(a)≤ λu ( 1 − (e^{−λa π |x∗|²}/2π) ∫₀^{2π} e^{−λa π |(r,θ)+xR|²} dθ )
(b)= λu ( 1 − (e^{−λa π |x∗|²}/2π) ∫₀^{2π} e^{−λa π |(r,θ)+(|xR|,0)|²} dθ )
(c)= λu ( 1 − (e^{−λa π (|x∗|² + |xR|² + r²)}/2π) ∫₀^{2π} e^{λa π 2r|xR| cos θ} dθ ),
where (a) follows by application of the triangle inequality, (b) by noting that the integral is independent of ∠xR, and (c) since |(r, θ) + (|xR|, 0)|² = r² + |xR|² − 2r|xR| cos(π − θ) (cosine law). The integral in the last equation is equal to 2πI₀(λa 2π|xR| r). By exchanging the roles of x∗ and xR in (a)
and following the same reasoning, another upper bound of the
form of (c) results, with the only difference that the integrand
has |x∗ | instead of |xR | and evaluates to 2πI0 (λa 2π|x∗ |r).
Considering the minimum of these two bounds leads to (13).
R EFERENCES
[1] M. Haenggi and R. K. Ganti, “Interference in Large Wireless Networks,”
Found. Trends Netw., vol. 3, no. 2, pp. 127–248, 2008.
[2] F. Boccardi, R. W. Heath, A. Lozano, T. L. Marzetta, and P. Popovski,
“Five disruptive technology directions for 5G,” IEEE Commun. Mag.,
vol. 52, pp. 74–80, Feb. 2014.
[3] K. Doppler, M. Rinne, C. Wijting, C. Ribeiro, and K. Hugl, “Deviceto-Device Communication as an Underlay to LTE-Advanced Networks,”
IEEE Commun. Mag., vol. 50, no. 3, pp. 170–177, Mar. 2012.
[4] A. Sabharwal, P. Schniter, D. Guo, D. Bliss, S. Rangarajan, and R. Wichman, “In-band full-duplex wireless: Challenges and opportunities,” IEEE
J. Select. Areas Commun., vol. 32, pp. 1637–1652, Sept. 2014.
[5] L. Lei and Z. Zhong, “Operator controlled device-to-device communications in LTE-Advanced networks,” IEEE Wireless Commun., vol. 19,
no. 3, pp. 96–104, Jun. 2012.
[6] M. Haenggi, J. G. Andrews, F. Baccelli, O. Dousse, and M.
Franceschetti, “Stochastic geometry and random graphs for the analysis
and design of wireless networks,” IEEE J. Select. Areas Commun., vol.
27, pp. 1029-1046, Sept. 2009.
[7] Q. Ye, M. Al-Shalash, C. Caramanis, and J. G. Andrews, “Resource optimization in device-to-device cellular systems using time-frequency hopping,” IEEE Trans. Wireless Commun., vol. 13, no. 10, pp. 5467–5480,
Oct. 2014.
[8] X. Lin, J. G. Andrews, and A. Ghosh, “Spectrum sharing for deviceto-device communication in cellular networks,” IEEE Trans. Wireless
Commun., vol. 13, no. 12, pp. 6727–6740, Dec. 2014.
[9] N. Lee, X. Lin, J. G. Andrews, and R. W. Heath, Jr., “Power control for
D2D underlaid cellular networks: Modeling, algorithms and analysis,”
IEEE J. Sel. Areas Commun., vol. 33, no. 1, pp. 1–13, Jan. 2015.
[10] S. Stefanatos, A. Gotsis, and A. Alexiou, “Operational region of D2D
communications for enhancing cellular network performance,” IEEE
Trans. Wireless Commun., vol. 14, no. 11, pp. 5984–5997, Nov. 2015.
[11] A. Hasan and J. G. Andrews, “The guard zone in wireless ad hoc
networks,” IEEE Trans. Wireless Commun., vol. 6, no.3, pp. 897–906,
Mar. 2007.
[12] G. George, R. K. Mungara, and A. Lozano, “An Analytical Framework for Device-to-Device Communication in Cellular Networks,” IEEE
Trans. Wireless Commun., vol. 14, no. 11, pp. 6297–6310, Nov. 2015.
[13] Z. Chen and M. Kountouris, “Decentralized opportunistic access for
D2D underlaid cellular networks,” Jul. 2016. [Online]. Available:
http://arxiv.org/abs/1607.05543.
[14] J. G. Andrews, R. K. Ganti, N. Jindal, M. Haenggi, and S. Weber, “A
primer on spatial modeling and analysis in wireless networks,” IEEE
Commun. Mag., vol. 48, no. 11, pp. 156–163, Nov. 2010.
[15] H. ElSawy and E. Hossain, “On stochastic geometry modeling of cellular
uplink transmission with truncated channel inversion power control,”
IEEE Trans. Wireless Commun., vol. 13, no. 8, pp. 4454–4469, Aug.
2014.
[16] H. Lee, Y. Sang, and K. Kim, “On the uplink SIR distributions in
heterogeneous cellular networks,” IEEE Commun. Lett., vol. 18, no. 12,
pp. 2145–2148, Dec. 2014.
[17] S. Singh, X. Zhang, and J. G. Andrews, “Joint rate and SINR coverage
analysis for decoupled uplink-downlink biased cell associations in HetNets,” IEEE Trans. Wireless Commun., vol. 14, no. 10, pp. 5360–5373,
Oct. 2015.
[18] B. Błaszczyszyn and D. Yogeshwaran, “Clustering comparison of point
processes with applications to random geometric models,” in Stochastic
Geometry, Spatial Statistics and Random Fields, vol. 2120, Lecture
Notes in Mathematics. Cham, Switzerland: Springer-Verlag, 2015, pp.
31–71.
[19] M. Haenggi, “User point processes in cellular networks,” IEEE Wireless
Commun. Lett., vol. 6, no. 2, pp. 258–261, Apr. 2017.
[20] M. Di Renzo, W. Lu, and P. Guan, “The intensity matching approach:
A tractable stochastic geometry approximation to system-level analysis
of cellular networks,” IEEE Trans. Wireless Commun., vol. 15, no. 9,
pp. 5963–5983, Sep. 2016.
[21] H. Zhang, Y. Ji, L. Song, and H. Zhu “Hypergraph based resource
allocation for cross-cell device-to-device communications,” in Proc.
IEEE Int. Conf. Commun. (ICC), Kuala Lumpur, Malaysia, May 2016,
pp. 1–6.
[22] Z. Tong and M. Haenggi, “Throughput analysis for full-duplex wireless
networks with imperfect self-interference cancellation,” IEEE Trans.
Commun., vol. 63, no. 11, pp. 4490–4500, Nov. 2015.
[23] C. Psomas, M. Mohammadi, I. Krikidis, and H. A. Suraweera, “Impact
of directionality on interference mitigation in full-duplex cellular net-
works," IEEE Trans. Wireless Commun., vol. 16, no. 1, pp. 487–502, Jan. 2017.
[24] K. S. Ali, H. ElSawy, and M.-S. Alouini, "Modeling cellular networks with full-duplex D2D communication: a stochastic geometry approach," IEEE Trans. Commun., vol. 64, no. 10, pp. 4409–4424, Oct. 2016.
[25] J. G. Andrews, F. Baccelli, and R. K. Ganti, "A tractable approach to coverage and rate in cellular networks," IEEE Trans. Commun., vol. 59, no. 11, pp. 3122–3134, Nov. 2011.
[26] S. Stefanatos and A. Alexiou, "Access point density and bandwidth partitioning in ultra-dense wireless networks," IEEE Trans. Commun., vol. 62, no. 9, pp. 3376–3384, Sept. 2014.
[27] F. Baccelli and B. Błaszczyszyn, Stochastic Geometry and Wireless Networks. Delft, The Netherlands: Now Publishers Inc., 2009.
[28] Z. Chen and M. Kountouris, "Guard zone based D2D underlaid cellular networks with two-tier dependence," in Proc. IEEE Int. Conf. Commun. Workshop (ICCW), London, 2015, pp. 222–227.
[29] M. Haenggi, "The local delay in Poisson networks," IEEE Trans. Inf. Theory, vol. 59, pp. 1788–1802, Mar. 2013.
[30] Z. Yazdanshenasan, H. S. Dhillon, M. Afshang, and P. H. J. Chong, "Poisson hole process: theory and applications to wireless networks," IEEE Trans. Wireless Commun., vol. 15, no. 11, pp. 7531–7546, Nov. 2016.
[31] S. Sadr and R. S. Adve, "Handoff rate and coverage analysis in multi-tier heterogeneous networks," IEEE Trans. Wireless Commun., vol. 14, no. 5, pp. 2626–2638, May 2015.
[32] H. E. Elkotby and M. Vu, "Uplink user-assisted relaying deployment in cellular networks," IEEE Trans. Wireless Commun., vol. 14, no. 10, pp. 5468–5483, Oct. 2015.
[33] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, 2012.
Optimal Spectrum Sensing Policy in RF-Powered
Cognitive Radio Networks
arXiv:1803.09193v1 [] 25 Mar 2018
Hae Sol Lee, Muhammad Ejaz Ahmed, and Dong In Kim
Abstract—An orthogonal frequency division multiple access
(OFDMA)-based primary user (PU) network is considered, which
provides different spectral access/energy harvesting opportunities
in RF-powered cognitive radio networks (CRNs). In this scenario,
we propose an optimal spectrum sensing policy for opportunistic
spectrum access/energy harvesting under both the PU collision
and energy causality constraints. PU subchannels can have different traffic patterns and exhibit distinct idle/busy frequencies, due
to which the spectral access/energy harvesting opportunities are
application specific. Secondary user (SU) collects traffic pattern
information through observation of the PU subchannels and
classifies the idle/busy period statistics for each subchannel. Based
on the statistics, we invoke stochastic models for evaluating SU
capacity by which the energy detection threshold for spectrum
sensing can be adjusted with higher sensing accuracy. To this end,
we employ the Markov decision process (MDP) model obtained
by quantizing the amount of SU battery and the duty cycle
model obtained by the ratio of average harvested energy and
energy consumption rates. We demonstrate the effectiveness of
the proposed stochastic models through comparison with the
optimal one obtained from an exhaustive method.
Index Terms—OFDMA, RF-powered cognitive radio networks
(CRNs), traffic patterns, spectral access, energy harvesting,
spectrum sensing.
I. I NTRODUCTION
Recently, there have been many proposals on designing
efficient circuits and devices for radio frequency (RF) energy
harvesting suitable for low-power wireless applications [1] [4]. The maximum power available for RF energy harvesting
at a free space distance of 40 meters is known to be 7µW and
1µW for 2.4GHz and 900MHz frequency, respectively [5].
With the RF energy harvesting capability, wireless devices, especially mobile devices, can operate perpetually without periodic energy replenishment. In cognitive radio networks (CRNs), secondary
users (SUs) equipped with RF energy harvesting device can
opportunistically not only access primary user (PU) channels
but also harvest RF energy carried on PU channels through
spectrum sensing. Hence, selecting PU channel for harvesting
or transmitting through accurate spectrum sensing is a crucial
component for SUs to achieve an optimal performance.
Most works on RF energy harvesting in CRNs rely on a
predefined assumption on PU channel idle time distribution [6]
- [15] and focus on optimizing SU spectral access based on the
battery level. Exploiting multiple channels will offer different
idle channel distributions and PU’s signal strengths. Therefore,
it will give SU more chances to choose between transmitting data and harvesting energy, which in turn improves the
transmission and energy harvesting efficiency, as demonstrated
in [16] and [17]. However, in practice, the channel idle time
distribution depends on specific traffic patterns carried over the
PU channel [18]. Therefore, it is of paramount importance for
SU to be aware of PU traffic patterns so that it can adapt its
harvesting/transmission strategies accordingly. The challenge,
however, lies in efficient classification of PU traffic patterns
based on the applications’ fingerprints (traffic features).
Existing solutions for traffic pattern identification fall into the following three categories: 1) port-based, 2) signature-based, and 3) deep packet inspection. But these approaches do not perform well under the dynamic nature of the traffic patterns. In this paper, we propose a Dirichlet process mixture model (DPMM) to efficiently classify various applications
(traffic patterns). The DP is a family of Bayesian nonparametric (BNP) models which are mostly used for density
estimation, clustering, model selection/averaging, etc. The DPs
are nonparametric which means that the number of hidden
traffic applications is not known in advance. Due to the
nonparametric nature of these models, they do not require
the number of clusters (applications) to be known a priori.
Moreover, such models can adapt dynamically over time as the
number of traffic patterns grows. The proposed DPMM traffic
classification is unsupervised, and based only on observations
without any control overhead.
Based on the classification of PU traffic patterns, we propose
an optimal spectrum sensing policy for opportunistic spectral
access/energy harvesting in RF-powered CRNs. Towards this,
SU collects the information on distinct traffic patterns through
observation of the PU subchannels and identifies the PU traffic
patterns by classifying the received data packets into distinct
features. Following the DPMM approach developed in [19],
[20], we can classify the PU subchannels with idle/busy period
statistics which will be used for optimal spectral access/energy
harvesting. For this, we need to obtain the appropriate energy
detection thresholds associated with distinct traffic patterns, so
as to establish the optimal spectrum sensing policy with higher
sensing accuracy.
Suppose the energy detection threshold is low, then SU will
likely identify the PU subchannel as busy due to noise/cochannel interference. Hence, the probability of the PU subchannel being recognized as idle is low, even if the PU
subchannel in fact is idle, resulting in less transmission opportunity for SU. On the other hand, if the threshold is high,
the probability of the PU subchannel being recognized as
busy is low, causing SU to transmit aggressively. This will
result in collision of PU and SU transmissions, reducing their
transmission efficiency, and also incur the energy depletion
of SU device due to its frequent transmissions. Therefore,
we invoke two stochastic models for evaluating SU capacity,
and then derive an optimal energy detection threshold for
spectrum sensing to maximize the SU capacity. To this end, we
employ the Markov decision process (MDP) model obtained
by quantizing the amount of SU battery and the duty cycle
model obtained by considering the average energy harvesting
and energy consuming rates. We confirm that the SU capacity
obtained from two stochastic models can be close to the
optimal capacity obtained from an exhaustive method.
The rest of the paper is organized as follows. Section II
describes the system model along with the key assumptions.
In Section III, we give an overview of traffic classification
based on the DPMM approach. In Section IV, we formulate an
optimization problem using the duty cycle model for deriving
an optimal energy detection threshold, while in Section V we
propose a stochastic model based on the MDP for SU with
some statistic information about PU subchannels. Finally, the
performance obtained from analysis is examined in Section
VI, and concluding remarks are given in Section VII.
II. S YSTEM M ODEL
We consider a RF-powered CRN as shown in Fig. 1, where
SU is equipped with RF energy harvesting capability and
performs opportunistic transmission or energy harvesting by
accessing the corresponding PU subchannel. We assume a
PU network which employs orthogonal frequency division
multiple access (OFDMA) with a total of Nc subchannels and
synchronous time-slot based communication across PUs. Here,
the frequency band is divided into several non-overlapping
narrow frequency subbands assigned to different PUs.
Fig. 1. Illustration of cognitive radio network with energy harvesting.
We assume the subchannel of each PU shows independent
idle/busy time statistics, varying with K traffic sources. If
SU can identify K traffic sources on PU subchannels, it
will increase opportunities for SU transmission and energy
harvesting. To identify K traffic sources and classify their
traffic patterns, we consider the three features, such as the
packet length, packet interarrival time, and variance in packet
length. The three features are observed by SUs by inspecting
the packet header from PU traffic. It is assumed that SUs
collaborate with each other in sharing these features via a
common control channel.
Traffic classification is important in the problem under
consideration. Each application follows a unique pattern, and
recognizing this pattern is important for the following two
reasons: 1) we can predict how frequently the energy arrives
(packet interarrival). 2) we can predict for how long the
subchannel is occupied. For each traffic application those
values are different, so estimating them saves energy by avoiding the undue spectrum sensing that would otherwise be needed when the traffic pattern is unknown. If
traffic classification is removed, we would waste undue energy
by blindly sensing channels. In our approach we can use this
energy to transmit data instead.
We assume the packet arrival rate on subchannel c follows
a random process with mean λc . After clustering subchannels,
SU identifies the harvest subchannel, denoted by ch , for energy
harvesting such that λch is maximum. Similarly, the transmit
subchannel, denoted by ct , is identified for transmission such
that λct is minimum. From the subchannel with maximum
λch , SU can harvest more energy as ch is the subchannel with
frequent energy/packet arrivals. Similarly, for the subchannel
with lowest packet arrivals, i.e., ct , a possible PU collision
is less likely, compared to the other subchannels. We define
the current state (idle/busy) of the selected subchannels for
harvesting and transmission, i.e., Sch ∈ {0(idle), 1(busy)},
Sct ∈ {0(idle), 1(busy)}. Both subchannels coexist unless
all Nc subchannels have the same traffic pattern. If they all
have the same traffic pattern and have the same subchannel
states, the two subchannels are randomly selected. Given the
subchannels ch and ct , we define Pr(Sch = 0) = pci h and
Pr(Sct = 0) = pci t as the probabilities of the corresponding
subchannels for harvesting and transmission being idle.
Note that the energy harvesting can be performed over
multiple PU subchannels, and the channel gain affects largely
the performance of harvesting and transmission, both of which
need to be addressed further. Moreover, the energy harvesting using cooperation among multiple PU subchannels can
increase the rate capacity of SU, especially under the energy
causality in the RF-powered CRN considered herein. Due to
the complexity of determining the optimal energy detection
threshold through the traffic classification, thereby the optimal
sensing policy, if consider such multi-channel cooperation,
in this paper we demonstrate an improvement in the rate
capacity by choosing the best subchannel either for energy
harvesting or for transmission. Such multi-channel cooperation
will be an interesting issue for further extending the framework
developed in this paper to improve the rate capacity of SU with
self-powering.
A. SU Battery Model
The idle and busy probabilities on each PU subchannel for
harvesting and transmission can be estimated through the PU
traffic patterns identification. Here, we assume that SU battery
is charged by the energy harvesting which stores energy into
a rechargeable battery of finite capacity Bmax ∈ R+ for a
non-negative real number R+ . As shown in Fig. 2 below, SU
selects active mode or sleep mode for which the action can be
denoted as at ∈ {0(sleep), 1(active)} in the slot t.
Fig. 2. The frame structure of SU with energy harvesting. at represents the
mode selection of SU in a slot t.
In Fig. 2, we denote a time slot by Tslot where each Tslot
is divided into sensing and transmission time durations. We
represent the sensing time duration with Ts , the transmission
time duration with Tt = Tslot − Ts , and the residual battery
level in a slot t with Bt . For simplicity, we assume that SU
always has data to transmit.
SU selects the mode (active or sleep) according to the
battery level observed in each time slot. Then, by comparing
the residual battery level Bt at current slot t with sufficient
energy for transmission, we define the action as
at = { 1, if Bt ≥ es + et; 0, if Bt < es + et },   (1)
where es is the energy consumption for sensing and et the
energy consumption for transmission.
In active mode, SU performs spectrum sensing with the
energy es = Ts Ps consumed over Ts with sensing power Ps .
We represent the sensing outcome as observations, i.e., ot ∈
{0(idle), 1(busy)}. If SU observation is busy (ot = 1) after
sensing the subchannel ct , SU does not transmit data in the
transmission phase (Tt ) of a slot. On the contrary, if ot = 0,
SU transmits data that consumes et = Tt (Pt /η + Pnc ) in the
transmission phase where Pt is the SU transmit power, Pnc
the non-ideal circuit power, and η (0 < η ≤ 1) the efficiency
of power amplifier [21]. Note that if ot = 1 in active mode,
SU can still harvest energy from the subchannel ch . But we
do not account for this as clustering can secure ct with higher
pci t , which can be ignored here for tractable analysis. On the
other hand, in sleep mode, SU turns off its transceiver (except
an energy harvesting circuit) until next slot begins.
To find the amount of residual energy in battery in each
time slot, the harvested energy Eth and energy consumption
Etc in the slot t are defined as
Eth = g(1 − at)ϕPpTtSch,   (2)
Etc = at[es + (1 − ot)et],   (3)
where Pp is the PU transmit power and g is the subchannel
gain between PU and SU. Here we assume that SU receives
the subchannel gain information via a common control subchannel. Therefore, the residual energy in battery in the next
slot t + 1 can be updated as
Bt+1 = min(Bt − Etc + Eth, Bmax).   (4)
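The following Python sketch (an illustration only; all numeric values are assumptions, and the sensing outcome is idealized purely to keep the example short) walks the battery dynamics (1)–(4) over a few slots.

```python
import numpy as np

# Minimal sketch of the SU battery dynamics (1)-(4); the sensing outcome o_t is
# idealized (perfect detection) here, and every numeric value is an assumption.
rng = np.random.default_rng(1)
T_slot, T_s = 1.0, 0.1
T_t = T_slot - T_s
P_s, P_t, P_nc, eta = 0.1, 1.0, 0.05, 0.35
P_p, phi, g = 1.0, 0.6, 0.8                 # PU power, harvesting efficiency, channel gain
e_s, e_t = T_s * P_s, T_t * (P_t / eta + P_nc)
B, B_max = 1.0, 10.0
p_idle_ct, p_idle_ch = 0.7, 0.2             # idle probabilities of c_t and c_h (assumed)
for t in range(10):
    a = 1 if B >= e_s + e_t else 0                       # mode selection, Eq. (1)
    S_ch = int(rng.random() > p_idle_ch)                 # 1 = harvest subchannel busy
    o = int(rng.random() > p_idle_ct) if a else 0        # idealized sensing outcome o_t
    E_h = g * (1 - a) * phi * P_p * T_t * S_ch           # harvested energy, Eq. (2)
    E_c = a * (e_s + (1 - o) * e_t)                      # consumed energy, Eq. (3)
    B = min(B - E_c + E_h, B_max)                        # battery update, Eq. (4)
    print(t, a, o, round(B, 3))
```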
B. Spectrum Sensing and Duty Cycle Model
The sensing probability varies with the specific energy detection threshold and the subchannel condition associated with the PU subchannel. The PU signal and noise are assumed to be modeled as independent circularly symmetric complex Gaussian (CSCG) random processes with mean zero and variances σp² and σw², respectively. Then, from [24], the probabilities of false alarm Pf(ǫ) and detection Pd(ǫ) for the transmission subchannel ct are evaluated as
Pf(ǫ) = Pr(ot = 1 | Sct = 0, at = 1) = Q( (ǫ/σw² − 1) √Ns ),   (5)
Pd(ǫ|g) = Pr(ot = 1 | Sct = 1, at = 1, g) = Q( (ǫ/[(gσp²/σw² + 1)σw²] − 1) √Ns ),   (6)
where ǫ ∈ R+ is a detection threshold for the energy detector,
Ns denotes the number of samples, and Q(x) denotes the Q-function.
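A small Python sketch of (5)–(6) is given below (illustrative only; the variances, channel gain and sample count are assumptions, and scipy's survival function is used as the Q-function).

```python
import numpy as np
from scipy.stats import norm

# Sketch of the false-alarm and detection probabilities (5)-(6) under assumed values.
Q = norm.sf                                   # Q-function
sigma_w2, sigma_p2, g, Ns = 1.0, 2.0, 0.8, 200
eps = np.linspace(0.8, 2.0, 5) * sigma_w2     # candidate detection thresholds
Pf = Q((eps / sigma_w2 - 1.0) * np.sqrt(Ns))                                      # Eq. (5)
Pd = Q((eps / ((g * sigma_p2 / sigma_w2 + 1.0) * sigma_w2) - 1.0) * np.sqrt(Ns))  # Eq. (6)
for e, pf, pd in zip(eps, Pf, Pd):
    print(f"eps={e:.2f}  Pf={pf:.3f}  Pd={pd:.3f}")
```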
In order to analyze a stochastic performance of SU, we need
to find how often the current battery level has enough energy
to transmit, i.e., the probability of SU being active Pr(at =
1). Note that we take into account the available energy to
formulate the SU’s action policy in (7) below. For this we
consider the duty-cycling behavior between active mode and
sleep mode to formulate Pr(at = 1). Thus, we define Pr(at =
1) = Tactive /(Tactive + Tsleep ), where Tactive and Tsleep are
the average times spent in active and sleep modes, respectively.
Assuming that the energy harvested in sleep mode must equal
the energy consumed during active mode, the probability of
SU being active can be expressed as
Pa(ǫ|g) = Pr(at = 1) = ρh / [ρh + ρc(ǫ|g)],   (7)
ρh = gξ po^ch,   (8)
ρc(ǫ|g) = es + { [1 − Pf(ǫ)] pi^ct + [1 − Pd(ǫ|g)] po^ct } et,   (9)
where ξ = ϕPp Tt and pco is the probability of the subchannel
c ∈ {ch , ct } being busy with pco = 1 − pci . In the above, we
have assumed that during sleep mode, SU consumes no energy
but harvests the energy with rate ρh . On the other hand, during
active mode, SU consumes the energy at rate ρc . Hence, the
probability of SU being active based on the duty cycle model
above represents the ratio of the energy harvesting rate to the
sum of the energy harvesting and energy consuming rates.
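The duty-cycle quantity in (7)–(9) is easy to evaluate numerically; the sketch below is a minimal illustration in which Pf and Pd are plugged in as plain numbers (they could come from the previous sketch) and every other value is an assumption.

```python
# Minimal sketch of the duty-cycle model, Eqs. (7)-(9), under assumed parameters.
g, phi, P_p, T_t = 0.8, 0.6, 1.0, 0.9
e_s, e_t = 0.01, 0.5
p_i_ct, p_o_ct, p_o_ch = 0.7, 0.3, 0.8        # idle/busy probabilities (assumed)
Pf_val, Pd_val = 0.05, 0.9                    # assumed sensing probabilities
xi = phi * P_p * T_t
rho_h = g * xi * p_o_ch                                               # Eq. (8)
rho_c = e_s + ((1 - Pf_val) * p_i_ct + (1 - Pd_val) * p_o_ct) * e_t   # Eq. (9)
Pa = rho_h / (rho_h + rho_c)                                          # Eq. (7)
print(round(Pa, 3))
```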
III. T RAFFIC C LASSIFICATION
Sensing multiple PU subchannels may be required to
identify idle/busy subchannels among them, which carry K
traffic applications (patterns), without any prior information.
This may not lead to a performance gain of SU under
the energy causality in the RF-powered CRN because of
undue sensing time and high energy consumption costs. To
overcome this, we attempt to classify traffic patterns with
observed features and select the best subchannel either for
energy harvesting or for transmission accordingly. In this
paper, we consider the following three traffic features with
regard to application fingerprinting: packet length (pl ), packet
4
interarrival time (pt ), and variance in packet length (∆), which
are sufficient for acquiring the channel state information, as
detailed in [22] and [23]. Then, the packet length vector
Plen is represented as Plen = {pl1 , pl2 , · · · , plN }, where
N is the number of packet length samples. Similarly, Pint
and ∆ are represented as Pint = {pt1 , pt2 , · · · , ptN } and
∆ = {[var(pl )]W1 = 0, [var(pl )]W2 , · · · , [var(pl )]WN },
respectively. Here, [var(pl )]Wn is the temporal variance of
packet length in a window Wn of size n, spanning over
[1, . . . , n](st)th observations.
With regard to some application backgrounds, we provide
more detailed description of each feature as follows:
1) Packet Length: Packet lengths for different traffic payloads are likely to be different. For example, the UDP
packet size is longer, the gaming packet size may vary
depending on game dynamics, and the VoIP packet has
smaller packet lengths to minimize jitter. So we use
the packet length as one feature point for identifying
different traffic applications.
2) Packets Interarrival Time: Packet interarrival times for
different applications also vary depending upon the
requirements of applications. For example, in VoIP, the
inter-arrival time is small to avoid annoying effects
caused by jitter.
3) Variance in Packet Length: The packet lengths may
change in every connection. For example, by investigating real wireless traces in [26], [27], for gaming data,
we have observed that packet lengths vary significantly
during the communication.
Let xn be the feature vector of the nth training feature point,
given by xn = [pln, ptn, [var(pl)]Wn]^T, and X = [x1, · · · , xN].
The matrix of observations X is observed by SU by examining
the packet header of PU.
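A short Python sketch of assembling the N × 3 feature matrix X is given below (for illustration only; the packet trace used here is a synthetic placeholder, not real PU traffic).

```python
import numpy as np

# Sketch of building the N x 3 feature matrix X = [packet length, interarrival
# time, windowed variance of packet length] from a synthetic packet trace.
rng = np.random.default_rng(4)
pkt_len = rng.choice([60, 576, 1500], size=200).astype(float)   # packet lengths (bytes)
arrival = np.cumsum(rng.exponential(0.02, size=200))            # packet arrival times (s)
inter = np.diff(arrival, prepend=arrival[0])                    # packet interarrival times
var_len = np.array([pkt_len[:n + 1].var() for n in range(len(pkt_len))])  # [var(p_l)]_{W_n}
X = np.column_stack((pkt_len, inter, var_len))
print(X.shape, X[:3].round(3))
```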
In practice, we do not know how many traffic applications
are active in the PU network, and therefore, we would like to
learn it from the data observed. The Dirichlet processes can
be employed to have a mixture model (DPMM) with infinite
components (applications) which can be viewed as taking the
limit of the finite mixture model for K → ∞. The theoretical
model of data generation for the DPMM is
G ∼ DP(α, Go),  θk ∼ G,  xn ∼ p(θk),   (10)
A. DPMM Representation
The model defined above is a theoretical one. In order
to realize DPMM models, Stick-breaking process, Chinese
restaurant process, and Polya-urn process are used. Here we
represent the DPMM in (10) using the Stick-breaking process.
Consider two infinite collections of independent random variables Vk ∼ Beta(1, α) and θk∗ ∼ Go (i.i.d.) for k = {1, 2, . . . }.
The Stick-breaking process of G is given by
πk = Vk ∏_{j=1}^{k−1} (1 − Vj),   (11)
G = ∑_{k=1}^{∞} πk δθk∗,   (12)
where V = {V1 , V2 , . . .} with V0 = 0. The mixing weights
{πk } are given by breaking a stick of unit length into infinitely
small segments. In the DPMM, the vector π represents the
infinite vector of mixing weights and {θ1∗ , θ2∗ , . . . } are the
atoms which correspond to mixing components. Since Zn is
the cluster assignment random variable to the feature point xn ,
the data for the DPMM is generated as
i.i.d.
Fig. 3. Graphical model of an DPMM. Observations are represented by the
shaded node. Nodes are random variables, edges are dependencies, and plates
are replications.
xn
θk
P∞
∗
where G =
k=1 πk δθk ∼ DP (α, Go ), Go is the base
∗
distribution and δθk is used as a short notation for δ(θ = θk∗ )
which is a delta function that takes 1 if θ = θk∗ and 0
otherwise. θk are the cluster parameters sampled from G where
k ∈ {1, 2, . . . }. The generative distribution p(θk ) is governed
by the cluster parameters θk and is used to generate the
observations xn . Then, the multimodal
probability distribution
P
∗
can be defined as p(xn ) = ∞
k=1 πk p(·|δθk ) which is called
the mixture distribution with mixing weights πk and mixing
components p(·|δθk∗ ). The graphical model of the DPMM is
shown in Fig. 3. α is the scalar hyperparameter of the DPMM
and affects the number of clusters obtained. Zn is the cluster
assignment variable such that the feature point xn belongs
to the kth cluster. Larger the value of α, the more clusters,
while smaller the value of α, the fewer clusters. Note that
the value of α indicates the strength of belief in Go . A large
value means that most of the samples will be distinct and have
values concentrated on Go .
1) Draw Vk |α ∼ Beta(1, α), k ∈ {1, 2, . . . }.
i.i.d.
2) Draw θk∗ ∼ G0 , k ∈ {1, 2, . . . }.
3) For the feature point xn , do:
(a) Draw Zn |{v1 , v2 , . . . } ∼ Multinomial(π).
(b) Draw xn |zn ∼ p(xn |θz∗n ).
Here we restrict ourselves to the DPMM for which the
observable data are drawn from Normal distribution and where
the base distribution for the DP is the corresponding conjugate
distribution.
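To make the stick-breaking construction and the generative steps above concrete, the following Python sketch draws synthetic data from a truncated DPMM (the truncation level, the base distribution G0 over the 3-dimensional cluster means, and all numeric values are assumptions for illustration).

```python
import numpy as np

# Sketch of the stick-breaking construction (11)-(12) and the generative process
# above, truncated to K_trunc components.
rng = np.random.default_rng(2)
alpha, K_trunc, N = 1.0, 50, 500
V = rng.beta(1.0, alpha, size=K_trunc)
pi = V * np.concatenate(([1.0], np.cumprod(1.0 - V)[:-1]))       # Eq. (11)
pi /= pi.sum()                                                    # renormalize after truncation
mu = rng.normal([500.0, 0.05, 100.0], [200.0, 0.02, 50.0],
                size=(K_trunc, 3))                                # atoms theta_k* ~ G0 (assumed)
z = rng.choice(K_trunc, size=N, p=pi)                             # cluster assignments Z_n
X = rng.normal(mu[z], 1.0)                                        # observations x_n ~ p(x | theta_{z_n})
print("distinct clusters used:", np.unique(z).size)
```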
B. Inference for DPMM
Since the Dirichlet processes are nonparametric, we cannot
use the EM algorithm to estimate the random variables {Zn }
(which store the cluster assignments) for our DPMM model
in (10) due to the fact that EM is generally used for inference
in a mixture model, but here G is nonparametric, making EM
5
difficult. Hence, in order to estimate these assignment variables
in the paradigm of Bayesian nonparametrics, there exist two
candidate approaches for inferences: First, a sampling-based
approach uses Markov chain Monte Carlo (MCMC) to cluster
the traffic patterns X. The MCMC based sampling (also known
as Gibbs sampling) approach is more accurate in classifying
feature points. The second approach is based on variational
methods which convert inference problems into optimization
problems [20]. The main idea that governs variational inference is that it formulates the computation of marginal or
conditional probability in terms of an optimization problem
that depends on a number of free parameters, i.e., variational
parameters. We discuss both approaches in the following
subsections with their detailed comparison in Table I, which
are confirmed with the simulation results.
1) Collapsed Gibbs Sampling for Traffic Classification: In
our model, the data follow multivariate normal distribution,
Nk (~µk , Σk ), where the parameters are 3-dimensional mean
vector and covariance matrix. The conjugate distributions for
mean vector µ
~ k and covariance matrix Σk are given by
µk ∼ N (~µ0 , Σk /κ0 ) and Σk ∼ Inverse-Wishartν0 (Λ−1
~
0 ),
respectively. The Dirichlet hyperparameters, here symmetric
α/K, encodes our beliefs about how uniform/skewed the class
mixture weights are. The parameters to the Normal times
Inverse-Wishart prior, Λ−1
0 , ν0 , κ0 imply our prior knowledge
regarding the shape and position of the mixture densities.
For instance, ~µ0 specifies where we believe the mean of the
mixture densities are expected to be, where κ0 is the number
of pseudo-observations we are willing to ascribe to our belief.
The hyper-parameters Λ−1
0 , ν0 behave similarly for the mixture
density covariance.
Collapsed Gibbs sampler requires to select the base distribution Go which is a conjugate prior of the generative
distribution p(xn |θz∗n ), in order to solve analytically and be
able to sample directly from p(Zn |Z−n , X). The posterior
distribution under our model is
!
N
Y
P (Z, Θ, π, α) ∝ P (X|Z, Θ)P (Θ|G0 )
P (zn |π)
n=1
· P (π|α)P (α).
(13)
By integrating-out certain parameters, the posterior distribution is given by [10]
P (zi = k|Z−n , X, Θ, π, α) ∝ P (xn |zn , Θ)P (zn |Z−n , α).
(14)
For the first term in the above equation,
we
use
multivariate
i
h
(κn +1)
, since we
µn , Λκnn (ν
Student-t distribution, i.e., tνn −2 ~
n −2)
chose Inverse-Wishart as conjugate prior for Σk and Normal
distribution for µ
~ k , where the second term is called Chinese
restaurant process and is given by
mk
, if k ≤ K+ ,
n
−
1+α
(15)
P (zn = k|Z−n ) =
α
, if k > K+
n−1+α
where Z−n = Z/zn , K + is the number
PNof classes containing
at least one data point, and mk =
n=1 I(zi = k) is the
number of data points in class k.
The steps involved in collapsed Gibbs sampling are enumerated below as:
1) Initialize the cluster assignments {zn } randomly.
2) Repeat until convergence:
(a) Randomly select xn .
(b) Fix all other assignments zm for every m ≠ n: Z−n .
(c) Sample zn ∼ p(Zn |Z−n , X) from (15).
(d) If zn > K, then update K = K + 1.
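A compact illustration of one such sweep is sketched below in Python (not the authors' implementation). To stay short it replaces the Normal/Inverse-Wishart conjugacy with spherical Gaussian clusters of known variance sigma2 and a conjugate Normal prior N(mu0, tau02·I) on the cluster means, so the Student-t predictive reduces to a Gaussian one; all numeric values are assumptions.

```python
import numpy as np

# Hedged sketch of the collapsed Gibbs sweep outlined in the steps above,
# simplified to spherical Gaussian clusters with known variance.
rng = np.random.default_rng(3)

def log_pred(x, pts, mu0, tau02, sigma2):
    """Log posterior-predictive density of x under a (possibly empty) cluster."""
    m = len(pts)
    if m == 0:
        mean, var = mu0, tau02 + sigma2
    else:
        tau_n2 = 1.0 / (1.0 / tau02 + m / sigma2)
        mean = tau_n2 * (mu0 / tau02 + pts.sum(axis=0) / sigma2)
        var = tau_n2 + sigma2
    d = x - mean
    return -0.5 * (d @ d) / var - 0.5 * x.size * np.log(2.0 * np.pi * var)

def gibbs_sweep(X, z, alpha, mu0, tau02, sigma2):
    for n in rng.permutation(X.shape[0]):
        z[n] = -1                                  # step 2(b): hold out x_n
        labels = sorted(k for k in set(z.tolist()) if k >= 0)
        logw = [np.log(np.sum(z == k)) + log_pred(X[n], X[z == k], mu0, tau02, sigma2)
                for k in labels]                   # existing clusters, CRP weight m_k
        logw.append(np.log(alpha) + log_pred(X[n], X[:0], mu0, tau02, sigma2))  # new cluster
        w = np.exp(np.array(logw) - max(logw)); w /= w.sum()
        pick = rng.choice(len(w), p=w)             # step 2(c)
        z[n] = labels[pick] if pick < len(labels) else max(labels, default=-1) + 1
    return z

# Toy usage: two well-separated synthetic traffic patterns in the 3-d feature space.
X = np.vstack((rng.normal(0.0, 1.0, (100, 3)), rng.normal(6.0, 1.0, (100, 3))))
z = np.zeros(200, dtype=int)                       # step 1: initial assignments
for _ in range(15):                                # step 2: repeated sweeps
    z = gibbs_sweep(X, z, alpha=1.0, mu0=np.zeros(3), tau02=25.0, sigma2=1.0)
print("estimated number of clusters:", len(set(z.tolist())))
```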
An overall framework for PU traffic pattern classification
and application-specific optimization is shown in Fig. 4.
[Fig. 4 block diagram: PU network traffic → Preprocessing → Feature extraction → DPMM data generation → Gibbs sampling inference / Variational inference → Traffic classification → Idle/busy distribution estimation → PU application-specific optimal harvesting/transmission strategy.]
Fig. 4. A flow diagram for classification of PU traffic patterns.
2) Variational Bayes Inference for Traffic Classification:
The main idea that governs variational inference is that it
formulates the computation of marginal or conditional probability in terms of an optimization problem that depends on
the number of free parameters, i.e., variational parameters.
In other words, we choose a family of distributions over the
latent variables with its own set of variational parameters ν,
i.e., q(W1:M |ν). We are interested in finding the setting of
the parameters that makes our approximation q closest to the
posterior distribution. Consequently, we can use q with the
fitted parameters in place of the posterior.
Here we assume N observations, X = {xn }N
n=1 and M
latent variables, i.e., W = {Wm }M
.
The
fixed
parameters
m=1
H could be the parametrization of the distribution over the
6
observations or the latent variables. With the given notations,
the posterior distribution under Bayesian paradigm is
p(W|X, H) = R
p(W, X|H)
.
p(W, X|H)
W
(16)
The posterior density in (16) is in an intractable form (often
involving integrals) which cannot easily be solved analytically.
Thus we rely on an approximate inference method, i.e., variational Bayes inference. Our goal is to find an approximation
of the posterior distribution p(W|X, H) as well as the model
evidence p(X|H). We introduce a distribution q(W) defined
over the latent variables and observe that for any choice of
q(W), the following decomposition holds
log p(X|H) = L(q) + KL(q||p)
(17)
(
(18)
where
)
p(X, W|H)
L(q) = q(W) log
dW
q(W)
)
(
Z
p(W|X, H)
dW.
KL(q||p) = − q(W) log
q(W)
Z
(Eq [log p(Zn |V)] + Eq [log p(xn |Zn )])
n=1
− Eq [log q(V, θ , Z)].
∗
(20)
To maximize the bound, we must find a family of variational
distributions that approximate the distributions of infinitedimensional random measure G, where G is expressed in
terms of V = {V1 , V2 , . . . } and θ ∗ = {θ1∗ , θ2∗ , . . . }. The
factorized family of variational distributions for mean-field
inference can be expressed as
q(v, θ ∗ , z) =
K−1
Y
k=1
1 We
q(vk |ζk )
K
Y
k=1
q(θk∗ |εk )
N
Y
(22)
Since multiple parameter options exist for each latent variable
under the variational distribution, we need to optimize the
bounds in (20) based on ν above.
Optimizing by employing the coordinate ascent algorithm
from [9], we optimize the bounds in (20) with respect to the
variational parameters ν in (22). From the coordinate ascent
algorithm, we acquire the following statistics: the number of
traffic patterns (K) and their corresponding parameters θk∗ .
These statistics are utilized by SU for optimal harvesting and
transmission strategy in the sequel.
C. Comparison of Inference Algorithms
Table I presents the detailed comparison of the two algorithms. The collapsed Gibbs sampling is more accurate
than the variational Bayes inference, but the former has some
limitations such as slow convergence [19] and difficult to
diagnose convergence. We confirm the comparison results, in
terms of accuracy and time complexity, through simulations.
The variational inference is biased (underfitting), whereas the
Gibbs sampling’s bias diminishes as the number of runs for the
Markov chain increases. For non-conjugate prior distributions,
the latter is preferred which is much faster than the former.
The latter is deterministic, which means that we always obtain
the same optimal value, given the same starting value and
an objective function without huge local optima problems.
The latter is quicker since it approximates the posterior using
optimization with free variables.
TABLE I
C OMPARISON OF INFERENCE ALGORITHMS .
log p(X|H) ≥ Eq [log p(V|H)] + Eq [log p(θ ∗ |G0 )]
N
X
ν = {ζ1 , . . . , ζK−1 , ε1 , . . . , εK , ϑ1 , . . . , ϑN }.
(19)
From (19), we see that KL(q||p) is the Kullback-Leibler
divergence between q(W) and the posterior distribution
p(W|X, H), where the KL divergence satisfies KL(q||p) ≥ 0
with equality if and only if q(W) = p(W|X, H), i.e., q(·) is
the true posterior. It follows from (17) that L(q) ≤ log p(X|θ),
in other words, L(q) is a lower bound on p(X|θ). Therefore,
we can maximize the lower bound L(q) by optimizing with
respect to q(W), which is equivalent to minimizing the KL
divergence. We consider a restricted family of distributions
q(W) and then seek the member from this family for which
the KL divergence is minimized.
The latent variables1 for DPMM are the stick lengths, atoms,
and cluster assignment variables: W = {V, θ ∗ , Z} and the
hyperparameters are the scaling parameter α and the parameter
for the conjugate base distribution G0 , H = {α, G0 }. Thus,
the marginal distribution of the data (evidence) lower bound
is evaluated as
+
where q(vk |ζk ) are beta distributions parameterized with ζk ,
q(θk∗ |εk ) are exponential family distributions with natural
parameter εk , and q(zn |ϑn ) are multinomial distributions with
parameter σn . The latent variables in Fig. 3 are governed by
the same distribution, but, following the fully factorized variational variables in the mean-field variational approximation,
given there is an independent distribution for each variable.
Thus, the variational parameters are defined by
q(zn |ϑn ) (21)
n=1
use the variational inference to pick a family of distributions over the
latent variables W with its own variational parameters ν, and then set ν to
render q(·|ν) close to the posterior of interest.
Item                          Gibbs sampling     Variational inference
Speed                         Slower             Faster
Bias                          Not biased         Biased
Computational requirement     Higher             Lower
For non-conjugate dist.       Not preferred      Preferred
Deterministic                 No                 Yes
Accuracy                      Higher             Lower
Convergence diagnosis         Difficult          Easy
Converges to true posterior   Yes                No
Inference approach            MCMC sampling      Optimization
Approximates                  Integrals          Data distribution
IV. OPTIMAL ENERGY DETECTION THRESHOLD ESTIMATION
After selecting the subchannels for harvesting/transmission,
SU uses ch to harvest energy and ct to transmit information.
Specifically, the goal is to adjust the detection threshold ǫ of the SU energy detector under the energy causality and PU collision constraints. Increasing the detection threshold leads to more frequent SU transmissions and, as a result, a higher probability of accessing occupied spectrum, which may cause collisions with PU transmissions; it also incurs excessive energy usage, which is undesirable for an energy-constrained SU. On the other hand, lowering the detection threshold ǫ reduces unnecessary sensing and transmission actions and consequently saves energy for future transmissions, but it also decreases the probability of accessing the
unoccupied spectrum, causing an achievable rate loss to SU.
Thus, it is of vital importance to finely tune the SU energy
detection threshold ǫ for optimal sensing subject to the design
constraints, i.e., PU protection and energy causality.
A. Problem Formulation
The achievable rate capacity of SU is defined as C = W log2(1 + SNR) for the signal-to-noise ratio (SNR) of the SU link when the PU transmit subchannel ct of bandwidth W
is idle. Then the SU rate capacity is expressed as
R(ǫ|g) = (Tt/Tslot) C Pr(at = 1, ot = 0, Sct = 0 | g)
       = (Tt/Tslot) C (1 − Pf(ǫ)) Pa(ǫ|g) p_i^{ct}.      (23)
Here we have used the relation Pr(at = 1, ot = 0, Sct = 0|g) = Pr(ot = 0|at = 1, Sct = 0) Pr(at = 1|Sct = 0, g) Pr(Sct = 0), based on Bayes' rule. The rate capacity
above converges to a specific value due to the Q-function
characteristic of Pf and Pa as the energy detection threshold
ǫ increases.
In this case, however, the PU performance is degraded due
to the collision of PU and SU transmissions as the latter
becomes more aggressive. Therefore, we should put some
constraint on the collision probability, which is evaluated as
Pc(ǫ|g) = Pr(at = 1, ot = 0 | Sct = 1, g) = (1 − Pd(ǫ|g)) Pa(ǫ|g).      (24)
Now we can formulate an optimization problem to find an
appropriate value of ǫ, which leads to an optimal spectrum
sensing policy for maximizing the SU rate capacity as
max_ǫ R(ǫ|g)   s.t.   Pc(ǫ|g) ≤ P̄c      (25)

where P̄c is the target probability of collision with which PU can be sufficiently protected.
B. Distinctions of RF-Powered CRNs from General CRNs
As defined in (7), the probability of action for SU in
RF-powered CRNs is a function of its battery level, given
the energy causality (i.e., self-powering) is applied to SU
for joint opportunistic energy harvesting and transmission,
unlike general CRNs. Therefore, as formulated in (23) - (25),
the optimal sensing policy in the RF-powered CRNs should
take into account the energy state, unlike that in the general
CRNs. For this, we have acquired the channel state information
through the traffic classification developed in this paper, which
is a crucial factor for determining the optimal sensing policy
in the RF-powered CRNs.
C. Optimal Energy Detection Threshold
To find an optimal energy detection threshold ǫ in (25), we define an objective function O(ǫ|g) = (1 − Pf(ǫ)) Pa(ǫ|g), which is the part of (23) affected by ǫ, and then reformulate the optimization problem above as

max_ǫ O(ǫ|g)   s.t.   Pc(ǫ|g) ≤ P̄c.      (26)
In addition, we define the constraint function as Φ(ǫ, P̄c) = Pc(ǫ|g) − P̄c to obtain a proper threshold range [0, ǫc], where ǫc is the solution of the equation

Φ(ǫc, P̄c) = 0.      (27)
We will find the following Propositions 1 and 2 useful in
obtaining the solution ǫc of (27).
Proposition 1. The collision probability Pc(ǫ|g) in (24) and the objective function O(ǫ|g) in (26) can each be classified into one of two types of function f(ǫ|g), as follows:
1) f(ǫ1|g) < f(ǫ2|g) for ǫ1 < ǫ2 ≤ ǫm, f(ǫ3|g) > f(ǫ4|g) for ǫm ≤ ǫ3 < ǫ4, and lim_{ǫ→∞} f(ǫ|g) = γ1, where ǫm = arg max_ǫ f(ǫ|g);
2) f(ǫ1|g) < f(ǫ2|g) for ǫ1 < ǫ2, and lim_{ǫ→∞} f(ǫ|g) = γ1,
where

γ1 = ρh / (ρh + es + et).      (28)
Proof. See Appendix A.
Proposition 2. The constraint function in (27) yields a unique solution ǫc if γ1 > P̄c. If γ1 ≤ P̄c, there are either two solutions or none.

According to Proposition 2, we have the constraint range [0, ǫc] if γ1 > P̄c; otherwise, the constraint range is [0, ∞). Following the IEEE 802.22 WRAN, where P̄c = 0.1, we regard γ1 ≤ P̄c as an extreme case corresponding to an extremely low energy harvesting rate, namely ρh ≪ (es + et) in (28). Thus, we may take the constraint range to be [0, ǫc] as the only practical option.
To find ǫc , we resort to the secant method as a root-finding
algorithm where the constraint function can be approximated
by a secant line through two points of the function. Starting with two initial iterates ǫ0 and ǫ1, the next iterate ǫ2 is obtained by finding the point at which the secant line passing through (ǫ0, Φ(ǫ0, P̄c)) and (ǫ1, Φ(ǫ1, P̄c)) crosses zero,

[Φ(ǫ1, P̄c) − Φ(ǫ0, P̄c)] / (ǫ1 − ǫ0) · (ǫ2 − ǫ1) + Φ(ǫ1, P̄c) = 0,      (29)

which yields the solution

ǫ2 = ǫ1 − Φ(ǫ1, P̄c) (ǫ1 − ǫ0) / [Φ(ǫ1, P̄c) − Φ(ǫ0, P̄c)].      (30)
Hence, we can derive the recurrence relation

ǫk = [ǫk−2 Φ(ǫk−1, P̄c) − ǫk−1 Φ(ǫk−2, P̄c)] / [Φ(ǫk−1, P̄c) − Φ(ǫk−2, P̄c)].      (31)
Since the value of P̄c is small, a solution can be obtained quickly by setting ǫ0 and ǫ1 close to zero. We then iterate until |ǫk − ǫk−1| becomes very small, as described in Algorithm 1. With the constraint range [0, ǫc] fixed, we can find an optimal energy detection threshold that maximizes the objective function O(ǫ).
Algorithm 1 Optimization algorithm
1:  Initialize ǫ0 and ǫ1.
2:  for k = 1, 2, 3, ... do
3:      Update ǫk using the recurrence relation (31)
4:      if |ǫk − ǫk−1| is sufficiently small then
5:          ǫc = ǫk; return ǫc
6:      end if
7:  end for
8:  if ∇O(ǫc) > 0 then
9:      ǫc is the optimal solution.
10: else
11:     Initialize ǫ0 = ǫc.
12:     for k = 1, 2, 3, ... do
13:         Update ǫk using the recurrence relation (32)
14:         if |ǫk − ǫk−1| is sufficiently small then
15:             ǫ∗ = ǫk; return ǫ∗
16:         end if
17:     end for
18: end if
We use the gradient descent method to find the maximum value in the constraint range, with the recurrence relation

ǫk = ǫk−1 + β ∇O(ǫk−1)      (32)
for the step size β. We use a fixed value for β to avoid the
complication of calculation and find an optimal one. In the first
step of the recurrence, we assume ǫ0 = ǫc . From Proposition 1,
if ∇O(ǫc ) > 0, ǫc is an optimal point as the objective function
is an increasing function in [0, ǫc ]. If ∇O(ǫc ) < 0, we continue
the recurrence process to find the maximum point given the
objective function is a type-1 function with unique maximum
point. Algorithm 1 for finding the optimal ǫ∗ is stated above.
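The following is a minimal numerical sketch of Algorithm 1, combining the secant recurrence (31) with the fixed-step gradient search (32). The collision-probability and objective functions below are stand-in closed forms chosen only so the example runs; in the system they come from (24) and (26), and all numerical values are placeholders.

```python
import numpy as np

# Hypothetical surrogates with the right qualitative shape:
# Pc decreasing in eps, O rising to a single peak and then falling.
def collision_prob(eps):
    return 0.6 * np.exp(-2.0 * eps)

def objective(eps):
    return eps * np.exp(-2.0 * eps)

P_C_TARGET = 0.1                                  # target collision probability

def constraint(eps):
    # Phi(eps, P_c) = Pc(eps|g) - P_c, cf. (27)
    return collision_prob(eps) - P_C_TARGET

def secant_root(phi, eps0=0.0, eps1=0.1, tol=1e-8, max_iter=100):
    # Secant recurrence of (31)
    for _ in range(max_iter):
        p0, p1 = phi(eps0), phi(eps1)
        if p1 == p0:
            break
        eps2 = (eps0 * p1 - eps1 * p0) / (p1 - p0)
        if abs(eps2 - eps1) < tol:
            return eps2
        eps0, eps1 = eps1, eps2
    return eps1

def gradient_search(f, eps_start, beta=0.05, eps_max=np.inf, tol=1e-8, max_iter=10000):
    # Fixed-step update of (32), kept inside [0, eps_c]
    eps = eps_start
    for _ in range(max_iter):
        grad = (f(eps + 1e-6) - f(eps - 1e-6)) / 2e-6   # numerical gradient
        new_eps = min(max(eps + beta * grad, 0.0), eps_max)
        if abs(new_eps - eps) < tol:
            break
        eps = new_eps
    return eps

eps_c = secant_root(constraint)                   # end point of the feasible range [0, eps_c]
grad_at_c = (objective(eps_c + 1e-6) - objective(eps_c - 1e-6)) / 2e-6
eps_star = eps_c if grad_at_c > 0 else gradient_search(objective, eps_c, eps_max=eps_c)
print(f"eps_c = {eps_c:.4f}, eps* = {eps_star:.4f}")
```

With these surrogates the gradient at ǫc is negative, so the search enters the second branch of Algorithm 1 and returns the interior maximizer of the objective within [0, ǫc].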
Fig. 5 illustrates a whole process for subchannel clustering,
spectral access and energy harvesting. The three traffic features
are used to classify the PU traffic patterns through the BNP
subchannel clustering. Based on the obtained idle and busy period statistics from the output of MCMC (Gibbs sampling), we
find the corresponding subchannels ct and ch with maximum
energy harvesting and transmission probabilities, respectively.
If the residual energy in SU battery Bt is less than the
required energy for transmission, ch is selected to harvest
energy and otherwise, ct to transmit data. Then, SU senses
the selected subchannel using the optimal energy detection
threshold obtained from the gradient descent method above.
After sensing, if the sensing result is ot = 0, SU transmits
data and otherwise, turns off the transmission. The residual
energy in battery is then updated using (4). To enable this, SU has to acquire the information of p_i^{c_h} and p_i^{c_t} from the subchannel clustering, which in turn influences the energy harvesting and consuming rates ρh and ρc, respectively. Therefore, the statistics obtained through the accurate clustering process and the resulting sensing parameter, the energy detection threshold, play a crucial role in obtaining the optimal sensing policy for SU.
Fig. 5. A flow diagram for spectral access and energy harvesting.
V. MARKOV DECISION PROCESS FORMULATION
Unlike the duty-cycling behavior between active mode and
sleep mode, we introduce the MDP model to accurately predict
the actions of SU which vary with the evolution of the residual
energy in battery. Since the idle/busy time distributions (used to estimate harvesting and transmission opportunities) are obtained from clustering the traffic patterns, we need to make decisions in real time. Even if we know the distribution parameters, we still need to automate the decision process, because SU harvesting and transmission is a real-time process; the obtained parameters help us to achieve early convergence. For this purpose, the MDP, which is a Markov-chain based approach, is incorporated given the idle/busy time distributions for the traffic applications. Therefore, the MDP renders a good solution for online/real-time decision making.
A. State Space of Battery
SU decides whether to harvest energy or transmit based
on the battery level. The event at = 1 means that the residual energy in the battery is sufficient to transmit. To find the probability of this event, we use the MDP formulation, assuming the state is the discretization of the battery capacity, and evaluate the steady-state probabilities of the battery levels with sufficient energy to transmit.
discretize thejcurrentk residual energy in battery bt in Nb levels
denotes the maximum amount of energy
where Nb = Bmax
eq
quanta that can be stored in battery. Here, one energy quantum
t)
corresponds to eq = (esn+e
where nτ represents the number
τ
of states that enter harvesting mode. In general, if Nb is
sufficiently high, the discrete model can be considered as a
good approximation of the continuous one. Then, (4) can be
rewritten in terms of energy quanta as
bt+1 = min(bt − ect + eht , Nb )
(33)
j hk
l cm
E
E
where eht = eqt and ect = eqt . Here, the floor is used to
have a conservative harvesting performance, while the ceiling
to assure a required energy consumption. Thus, the worst case
of the battery level is assumed.
B. Transition Probability Matrix
Fig. 6. The battery state transition with nτ energy harvesting states for Nb state Markov chain model.
We define the transition probability matrix U for the Nb-state Markov chain model in Fig. 6 as

U(ǫ|g) = [ Uh(ǫ|g) | Ua(ǫ|g) ]^T      (34)

where Uh(ǫ|g) denotes the Nb × nτ block associated with battery levels that induce SU to enter harvesting mode, and Ua(ǫ|g) the Nb × (Nb − nτ) block associated with battery levels that induce SU to enter active mode. The two blocks, given in (35) and (36), are sparse matrices whose only nonzero components are defined as

u_{i,i} = 1_{[0 ≤ E_t^h < eq]} p_o^{c_h} + p_i^{c_h},   (i < nτ)      (37)

u_{i,j} = 1_{[(i−j)eq ≤ E_t^h < (i−j+1)eq]} p_o^{c_h},   (0 < j < nτ, j < i < Nb)      (38)

u_{i,i+nτ} = (1 − Pf(ǫ)) p_i^{c_t} + (1 − Pd(ǫ|g)) p_o^{c_t},   (i < nτ)      (39)

u_{i,i} = Pf(ǫ) p_i^{c_t} + Pd(ǫ|g) p_o^{c_t},   (nτ ≤ i ≤ Nb − 1).      (40)
In the Markov chain model with Nb states as shown in
Fig. 6, the harvesting state i ∈ {0, 1, ..., nτ − 1} changes
to state j (j ≥ i) through energy harvesting as the current
battery level is insufficient for transmission. The active state
i ∈ {nτ , nτ + 1, ..., Nb − 1} with sufficient energy to transmit
will change to (i − nτ )-state or fail to transfer and return to
i-state again.
Here, u_{i,i} (i < nτ) is the probability that energy harvesting is successful when ch is busy but the harvested energy is insufficient to reach a higher battery level, or that ch is idle. u_{i,j} (0 < j < nτ, j < i < Nb) is the probability that energy harvesting is successful when ch is busy and the state changes from j to i. u_{i,i+nτ} (i < nτ) is the probability that ct is idle with no false alarm and a successful transmission, or that ct is busy with a missed detection and a collision. u_{i,i} (nτ ≤ i ≤ Nb − 1) is the probability that ct is idle with a false alarm, or that ct is busy with no missed detection.
C. Steady-State Probability and Optimal Threshold Algorithm

We define the steady-state probability vector of the Nb-state Markov chain as Π = [π0, π1, ..., π_{Nb−1}], where Π is the left eigenvector of U(ǫ|g) corresponding to the unit eigenvalue:

Π U(ǫ|g) = Π.      (41)
To derive the steady-state probability vector Π, we need the following assumption.

Assumption 1. The maximum number of energy quanta Nb should be large enough to satisfy the following conditions:

e_t^h + nτ − 1 ≤ Nb − 1      (42a)

E_t^h / eq ≤ Nb − nτ      (42b)

E_t^h / eq < Nb − nβ + 1      (42c)

Nb > (gϕPpTt/(es + et) + 1) nτ − 1.      (42d)
Assumption 1 implies that the maximum state Nb − 1 must be greater than the sum of the harvested energy quanta e_t^h and the maximum harvesting-state index nτ − 1. This means that the maximum state number should always be greater than the maximum state reachable through harvesting. Thus, (33) can be rewritten as

bt+1 = bt − e_t^c + e_t^h.      (43)
With Assumption 1, we define the number of energy quanta charged through energy harvesting as

nκ = { n ∈ N ∪ {0} | n eq ≤ E_t^h < (n + 1) eq } = g nτ ϕ Pp Tt / (es + et).      (44)
Proposition 3. The steady-state probability vector Π can be evaluated as

πi = τ / (nκκ + nττ),   (0 ≤ i < nτ)
πi = κ / (nκκ + nττ),   (nτ ≤ i < nτ + nκ)
πi = 0,                 (nτ + nκ ≤ i < Nb)      (45)

where

κ = p_o^{c_h}      (46)

τ = (1 − Pf(ǫ)) p_i^{c_t} + (1 − Pd(ǫ|g)) p_o^{c_t}.      (47)
Proof. See Appendix B.
Using the steady-state probability vector Π obtained from Proposition 3, the probability of SU entering active mode, P_a^M(ǫ|g), can be derived as

P_a^M(ǫ|g) = Σ_{i=nτ}^{Nb−1} πi = nκκ / (nκκ + nττ) = (nκ/nτ)κ / ((nκ/nτ)κ + τ).      (48)

If nτ is sufficiently large, we can approximate nκ/nτ ≅ gϕPpTt/(es + et) in (48), which yields

P_a^M(ǫ|g) = gϕPpTtκ / (gϕPpTtκ + (es + et)τ).      (49)

To find the optimal energy detection threshold, we define the MDP objective function using P_a^M(ǫ|g) obtained from the MDP as O^M(ǫ|g) = (1 − Pf(ǫ)) P_a^M(ǫ|g), and then express the optimization problem again as

max_ǫ O^M(ǫ|g)   s.t.   Pc(ǫ|g) ≤ P̄c.      (50)

Proposition 4. The MDP objective function O^M(ǫ|g) can be classified into one of two types of function f^M(ǫ|g), as follows:
1) f^M(ǫ1|g) < f^M(ǫ2|g) for ǫ1 < ǫ2 ≤ ǫm, f^M(ǫ3|g) > f^M(ǫ4|g) for ǫm ≤ ǫ3 < ǫ4, and lim_{ǫ→∞} f^M(ǫ|g) = γ2, where ǫm = arg max_ǫ f^M(ǫ|g);
2) f^M(ǫ1|g) < f^M(ǫ2|g) for ǫ1 < ǫ2, and lim_{ǫ→∞} f^M(ǫ|g) = γ2,
where

γ2 = gϕPpTtκ / (gϕPpTtκ + es + et).      (51)

Proof. See Appendix C.

By Proposition 4, we can optimize O^M(ǫ|g) in the same way as in Algorithm 1 of Section IV.
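The closed form (45) can be checked numerically against the definition (41). The sketch below builds an idealized chain of our own construction, in which every harvesting slot adds exactly nκ quanta with probability κ and every active slot consumes nτ quanta with probability τ; it is a simplification of the full transition matrix in (34)-(40), used only to illustrate the steady-state computation.

```python
import numpy as np

n_tau, n_kappa, N_b = 3, 4, 20          # placeholder sizes
kappa, tau = 0.6, 0.7                   # kappa = p_o^{c_h}; tau as in (47)

U = np.zeros((N_b, N_b))
for i in range(N_b):
    if i < n_tau:                       # harvesting mode
        U[i, min(i + n_kappa, N_b - 1)] += kappa
        U[i, i] += 1.0 - kappa
    else:                               # active mode
        U[i, i - n_tau] += tau
        U[i, i] += 1.0 - tau

# Left eigenvector of U for eigenvalue 1, i.e., Pi U = Pi as in (41).
w, v = np.linalg.eig(U.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

# Closed form (45).
D = n_kappa * kappa + n_tau * tau
pi_closed = np.zeros(N_b)
pi_closed[:n_tau] = tau / D
pi_closed[n_tau:n_tau + n_kappa] = kappa / D

print(np.allclose(pi, pi_closed, atol=1e-8))                  # True
print("P_a^M =", pi[n_tau:].sum(), "vs", n_kappa * kappa / D) # matches (48)
```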
VI. RESULTS

The simulation results for the proposed duty cycle and MDP based stochastic models are presented in this section. To show the effectiveness of the proposed scheme by comparing how close it comes to the actual capacity, we define

β = (1/Nt) Σ_{t=1}^{Nt} (Tt/Tslot) C at (1 − ot)(1 − Sct)      (52)
for the actual capacity obtained from simulation based on the
Monte-Carlo method where Nt is the number of simulation
iterations using the exhaustive search for energy detection
threshold. We use real wireless traces available online [26]
[27] for 3G network. We utilize three sources (UDP, VoIP,
Game) in our data set. Unless otherwise stated, the values
of the parameters used here are listed in Table II, which are
mainly drawn from [25].
TABLE II
SIMULATION PARAMETERS

Symbol      Definition                              Value
W           Bandwidth                               1 MHz
Tslot       Slot duration                           10 ms
Ts          Sensing duration                        2 ms
Tt          Transmission duration                   98 ms
B0          Initial energy                          0 J
Ps          Sensing power                           110 mW
Pt          Transmit power                          50 mW
Pnc         Non-ideal circuit power                 115.9 mW
η           Efficiency of power amplifier           -5.65 dB
ϕ           Energy harvesting efficiency            0.2
σp²/σw²     SNR of PU signal at SU transmitter      -10 dB
P̄c          Target probability of collision         0.1
Bmax        Maximum capacity of battery             1 mJ
A. Traffic-Awareness via Clustering
Before evaluating the performance of the proposed scheme,
we confirm the performance of traffic patterns clustering that
results from the proposed MCMC based sampling and the
variational inference. In Fig. 7, we compare the accuracy of
the MCMC based sampling, variational inference, and K-means (as a baseline) clustering algorithms when data points (observations) are generated from a mixture of 3 different (UDP, VoIP, Game) traffic patterns. The K-means algorithm groups the given data into K clusters by minimizing the within-cluster distances. Unlike the other approaches, the K-means algorithm cannot estimate the number of clusters, and it can only be run with a fixed, assumed number of traffic sources. We see that the Bayesian approaches offer higher accuracy than the K-means algorithm. This is because the Bayesian approaches approximate a prior probability and a likelihood function derived from a statistical model for the observed data, whereas K-means considers only the differences in observed traffic
values. In the variational inference, we observe some errors
compared to the MCMC method because we approximate the
latent variables assumed by the mean-field theory. In Fig. 8,
we compare the two Bayesian approaches in terms of their
elapsed times.
Fig. 7. Clustering accuracy of the proposed MCMC and variational inference, and K-means algorithms.

Fig. 8. Elapsed time of the MCMC and variational inference algorithms.

Fig. 9. SU capacity versus SU transmit power Pt and energy detection threshold ǫ (top: VoIP with p_i^{c_h} = 0.2; bottom: Game with p_i^{c_h} = 0.5).

We notice that the elapsed times increase as the
number of data points increases, and it is confirmed that the
variational inference shows less elapsed time than the MCMC
method. Hence, if we can derive a set of equations used to
iteratively update the parameters well, the former converges
faster than the latter requiring a large amount of sampling
work.
B. SU Optimal Sensing Threshold
Fig. 9 shows the SU achievable rate capacity based on the
duty cycle model with varying energy detection threshold ǫ and
SU transmit power Pt . Note that the probability of the harvest
subchannel ch being idle was measured as p_i^{c_h} = (0.2, 0.5) for
the (VoIP, Game) traffic applications, respectively. We observe
an interesting trade-off in choice of the variables ǫ and Pt .
For small ǫ, the false alarm probability increases, resulting
in low transmission probability. To the contrary, for large ǫ,
it decreases and SU more likely transmits data if its residual
energy is enough for transmission. We notice that the capacity
converges to a specific value proportional to γ1 in (28), which
is the ratio of the energy harvesting rate to the sum of the
energy harvesting and consuming rates. Hence, an optimal ǫ
balances the sensing accuracy trade-off.
Fig. 10. An optimal point of SU transmit power Pt and energy detection threshold ǫ for (VoIP, Game) traffic ((a) VoIP, p_i^{c_h} = 0.2; (b) Game, p_i^{c_h} = 0.5).

Fig. 11. SU rate capacity versus energy detection threshold ǫ when Pt = 0.2 W.

Likewise, there is the
energy causality trade-off in the SU transmit power. For large
Pt , the probability of SU being active decreases, while for
small Pt , it increases but the SNR decreases. We see the VoIP
traffic offering better performance than the Game traffic, as
the former yields higher harvesting rate with continuous and
short intervals between voice packets. However, the latter with
pci h = 0.5 shows the packet intervals changing dynamically,
resulting in low harvesting rate.
Fig. 10 shows an optimal point of the actual capacity in (52),
the duty cycle and MDP models for (VoIP, Game) traffic. The
black line shows an optimal ǫ obtained from (26) according
to Pt , where the optimal ǫ decreases slightly as Pt increases.
This is because a tight energy causality due to the increased
Pt requires less transmission opportunity. We can see that the
optimal point of VoIP traffic has a larger value of Pt than that
of Game traffic. In VoIP traffic, the increased Pt results in
low transmission opportunity but increases the SNR of SU,
and a small value of pci h guarantees the energy causality. It
means that VoIP traffic subchannel idle/busy statistics, which
show higher harvesting opportunity than Game traffic, offset
the tight energy causality due to the increased Pt . The optimal
point of the duty cycle and MDP models offer almost the same
performance as the actual capacity from simulation. In Figs.
9 and 10, it is evident that the subchannels carrying distinct
traffic patterns exhibit different harvesting rates, and hence the
appropriate values of ǫ should be determined considering both
the energy causality and PU collision constraints.
C. SU Performance by Clustering Algorithms
We evaluate the SU achievable rate capacity by the proposed
clustering algorithms with respective threshold optimization.
We assume 3 different (UDP, VoIP, Game) traffic sources
with 10 subchannels, respectively. In this setting, SU selects
the subchannels ct and ch from Nc = 30 subchannels with
450 data points generated by using MCMC and variational
inference, respectively. Then, the optimal energy detection
threshold for the rate capacity is determined based on the duty
cycle and MDP models. Fig. 11 shows the SU achievable
rate capacity for varying energy detection threshold ǫ. The
black line represents actual capacity using ǫ obtained from
(26) with accurate clustering information. We see that the
MCMC is closer to the optimal line than variational inference,
while the duty cycle and MDP models offer almost the same
performance in optimizing ǫ. This clearly shows the higher
sensing accuracy in selecting ct and ch subchannels of the
MCMC than variational inference.
Fig. 12 shows the SU achievable rate capacity obtained by
using the optimal value ǫ∗ for varying SU transmit power Pt .
We see that the variational inference reaches its maximum when Pt = 0.24 W, whereas the optimal actual capacity peaks slightly later, at Pt = 0.26 W, as does the MCMC. This means that accurate
clustering information leads to higher energy harvesting rate,
which allows SU to increase the residual energy in battery.
Hence, SU can increase the maximum achievable rate capacity
with higher transmit power Pt .
Fig. 12. SU rate capacity versus SU transmit power Pt with optimal ǫ∗ .
VII. CONCLUSION
We have proposed an optimal spectrum sensing policy for
maximizing the SU capacity in OFDMA based CRNs which
is powered by energy harvesting. SU collected traffic pattern
information through observation of PU subchannels and classified the idle/busy period statistics for each PU subchannel
using the MCMC and variational inference algorithms. Based
on these statistics, we developed the stochastic SU capacity
models which are the duty cycle based one defined by the
times spent in active and sleep mode, and the MDP model
based on the evolution of the residual energy in battery. The
energy detection threshold was optimized to maximize the SU
capacity while satisfying the energy causality and PU collision constraints according to traffic patterns. We have shown
the performance trade-off of the BNP subchannel clustering
algorithms by comparing the accuracy and elapsed time of
algorithms. It was shown that SU can optimize the stochastic
model by selecting the threshold referring to the idle/busy
period statistics of PU subchannels. It was also shown that the
proposed duty cycle and MDP model achieve similar capacity
to that of the actual capacity from simulation based on the
Monte-Carlo method.
ACKNOWLEDGMENT
This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government
under Grant 2014R1A5A1011478.
APPENDIX A
PROOF OF PROPOSITION 1
We define A(ǫ|g) = [1 − Pd (ǫ|g)] and B(ǫ|g) = Pa (ǫ|g)
for the proof of the collision probability Pc (ǫ|g) in (24).
df(ǫ|g)/dǫ = (dA(ǫ|g)/dǫ) B(ǫ|g) + A(ǫ|g) (dB(ǫ|g)/dǫ).      (53)
The increasing function A(ǫ|g) and the decreasing function
B(ǫ|g) satisfy the following properties:
lim_{ǫ→0} A(ǫ|g) < lim_{ǫ→0} B(ǫ|g)      (54)

lim_{ǫ→∞} A(ǫ|g) > lim_{ǫ→∞} B(ǫ|g)      (55)

A(ǫe|g) = B(ǫe|g)      (56)

dA(ǫ|g)/dǫ > − dB(ǫ|g)/dǫ.      (57)
The same results can be obtained even if A(ǫ) = [1 − Pf (ǫ)]
for the proof of the objective function O(ǫ|g) in (26).
Proposition 5. If 0 < ǫ < ǫe, then df(ǫ|g)/dǫ > 0.

Proof. We have A(ǫ|g) < B(ǫ|g) from (54) and (56), and using (57), we obtain the following inequalities:

(dA(ǫ|g)/dǫ) B(ǫ|g) > −(dB(ǫ|g)/dǫ) B(ǫ|g)      (58a)

(dA(ǫ|g)/dǫ) B(ǫ|g) > −(dB(ǫ|g)/dǫ) A(ǫ|g)      (58b)

df(ǫ|g)/dǫ > 0.      (58c)
Proposition 6. If ǫe < ǫ and df(ǫ|g)/dǫ < 0, then df(ǫ + ∆|g)/dǫ < 0, where ∆ ∈ R+.

Proof.

dA(ǫ + ∆|g)/dǫ < dA(ǫ|g)/dǫ      (59a)

(dA(ǫ + ∆|g)/dǫ) / A(ǫ|g) < (dA(ǫ|g)/dǫ) / A(ǫ|g)      (59b)

(dA(ǫ + ∆|g)/dǫ) / A(ǫ + ∆|g) < (dA(ǫ|g)/dǫ) / A(ǫ|g).      (59c)

Inequality (59a) is satisfied because ǫi < ǫe, where ǫi is the inflection point of A(ǫ|g), and (59c) is satisfied by the increasing function property A(ǫ + ∆|g) > A(ǫ|g).

(dA(ǫ|g)/dǫ) / A(ǫ|g) < (−dB(ǫ + ∆|g)/dǫ) / B(ǫ|g)      (60a)

(−dB(ǫ + ∆|g)/dǫ) / B(ǫ|g) < (−dB(ǫ + ∆|g)/dǫ) / B(ǫ + ∆|g)      (60b)

(dA(ǫ + ∆|g)/dǫ) / A(ǫ + ∆|g) < (−dB(ǫ + ∆|g)/dǫ) / B(ǫ + ∆|g).      (60c)

Inequality (60a) is satisfied because ǫe < ǫ, and (60b) is satisfied by the decreasing function property B(ǫ + ∆|g) < B(ǫ|g). Finally, from (59c), the inequality (60c) holds.

From Propositions 5 and 6, the equation df(ǫ|g)/dǫ = 0 has one solution or no solution, so that the function f(ǫ|g) converges to γ1 = (lim_{ǫ→∞} A(ǫ|g))(lim_{ǫ→∞} B(ǫ|g)).
APPENDIX B
PROOF OF PROPOSITION 3
From (41), we have

Π (I − U(ǫ|g)) = 0      (61a)

(I − U(ǫ|g))^T Π^T = 0      (61b)

G(ǫ|g) Π^T = 0      (61c)

where we define G(ǫ|g) = (I − U(ǫ|g))^T. If Assumption 1 holds, then G(ǫ|g) is given by

G(ǫ|g) = [ G1(ǫ|g)  G2(ǫ|g) ]      (62)

where G1(ǫ|g) and G2(ǫ|g) are the sparse blocks given in (63) and (64), with nonzero entries ±α and ±β. The steady-state vector Π^T is a null vector of G(ǫ|g), and hence the vector in (45) is such a null vector.
APPENDIX C
PROOF OF PROPOSITION 4
We define A(ǫ) = (1 − Pf(ǫ)) and B(ǫ|g) = Pa(ǫ|g) for the proof of the MDP objective function O^M(ǫ|g).
df^M(ǫ|g)/dǫ = (dA(ǫ)/dǫ) B(ǫ|g) + A(ǫ) (dB(ǫ|g)/dǫ).      (65)

The increasing function A(ǫ) and the decreasing function B(ǫ|g) satisfy properties (54)-(57). From Proposition 5, we can derive that if 0 < ǫ < ǫe, then df^M(ǫ|g)/dǫ > 0. Also, from Proposition 6, we can derive that if ǫe < ǫ and df^M(ǫ|g)/dǫ < 0, then df^M(ǫ + ∆|g)/dǫ < 0, where ∆ ∈ R+. Thus, the equation df^M(ǫ|g)/dǫ = 0 has one solution or no solution, and the function f^M(ǫ|g) converges to γ2 = (lim_{ǫ→∞} A(ǫ))(lim_{ǫ→∞} B(ǫ|g)).
REFERENCES
[1] T. Chen, Y. Yang, H. Zhang, H. Kim, and K. Horneman, “Network energy
saving technologies for green wireless access networks,” IEEE Wireless
Commun., vol. 18, no. 5, pp. 30-38, Oct. 2011.
[2] C. Han et al., “Green radio: Radio techniques to enable energy-efficient
wireless networks,” IEEE Commun. Mag., vol. 49, no. 6, pp. 46-54, Jun.
2011.
[3] X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “Wireless networks
with RF energy harvesting: A contemporary survey,” IEEE Commun.
Surveys & Tutorials, vol. 17, no. 2, pp. 757-789, Second Quarter 2015.
[4] X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “Wireless charging
technologies: Fundamentals, standards, and network applications,” IEEE
Commun. Surveys & Tutorials, vol. 18, no. 2, pp. 1413-1452, Second
Quarter 2016.
[5] A. M. Zungeru, L. Ang, S. Prabaharan, and K. P. Seng, “Radio frequency
energy harvesting and management for wireless sensor networks,” Green
Mobile Devices and Networks: Energy Optimization and Scavenging
Techniques, ch. 13, pp. 341-368, CRC Press 2012.
[6] S. Lee, R. Zhang, and K. Huang, “Opportunistic wireless energy harvesting in cognitive radio networks,” IEEE Trans. Wireless Commun., vol.
12, pp. 4788-4799, Sep. 2013.
[7] S. Park, H. Kim, and D. Hong, “Cognitive radio networks with energy
harvesting,” IEEE Trans. Wireless Commun., vol. 12, pp. 1386-1397, Mar.
2013.
[8] Q. Zhao, L. Tong, A. Swami, and Y. Chen, “Decentralized cognitive
MAC for opportunistic spectrum access in ad hoc networks: A POMDP
framework,” IEEE J. Select. Area. Commun., vol. 25, no. 3, pp. 589-600,
2007.
[9] D. Blei and M. Jordan, “Variational inference for Dirichlet process
mixtures,” Bayesian Analysis, vol. 1, no. 1, pp. 121-144, Aug. 2006.
[10] F. Wood and M. J. Black, “A nonparametric bayesian alternative to spike
sorting,” Journal of Neuroscience Methods, vol. 173, no. 1, pp. 1-12, Jun.
2008.
[11] T. L. Griffiths and Z. Ghahramani, “Infinite latent feature models and the
Indian buffet process,” Gatsby Computational Neuroscience Unit, Tech.
Rep. 2005-001, 2005.
[12] S. Park, J. Heo, B. Kim, W. Chung, H. Wang, and D. Hong, “Optimal
mode selection for cognitive radio sensor networks with RF energy
harvesting,” IEEE Proc. PIMRC 2012, pp. 2155-2159, 2012.
[13] S. Park, H. Kim, and D. Hong, “Cognitive radio networks with energy
harvesting,” IEEE Trans. Wireless Commun., vol. 12, no. 3, pp. 1386-1397,
Mar. 2013.
[14] C. M. Bishop, “Mixtures of gaussians,” in Pattern recognition and
machine learning 1st ed., Springer-Verlag New York, ch. 9, sec. 2, pp.
430-435.
[15] A. Sultan, “Sensing and transmit energy optimization for an energy
harvesting cognitive radio,” IEEE Wireless Commun. Letters, vol. 1, no.
5, pp. 500-503, Oct. 2012.
[16] D. T. Hoang, D. Niyato, P. Wang, D. I. Kim, “Opportunistic channel
access and RF energy harvesting in cognitive radio network,” IEEE
Journal on Selected Areas in Communications - Cognitive Radio Series,
vol. 32, pp. 2039-2052, Nov. 2014.
[17] D. T. Hoang, D. Niyato, P. Wang, D. I. Kim, “Performance optimization
for cooperative multiuser cognitive radio networks with RF energy
harvesting capability,” IEEE Trans. Wireless Commun., vol. 14, pp. 3614-3629, July 2015.
[18] M. E. Ahmed, J. B. Song, Z. Han, and D. Y. Suh, “Sensing-transmission
edifice using Bayesian nonparametric traffic clustering in cognitive radio
networks,” IEEE Trans. Mobile Computing, vol. 13, pp. 2141-2155, Sep.
2014.
[19] M. E. Ahmed, D. I. Kim, J. Y. Kim, and Y. A. Shin, “Energy-arrival-aware detection threshold in wireless-powered cognitive radio networks,”
IEEE Trans. Vehic. Technol., vol. 66, pp. 9201-9213, Oct. 2017.
[20] M. E. Ahmed, D. I. Kim, and K. W. Choi, “Traffic-aware optimal
spectral Access in wireless powered cognitive radio networks,” IEEE
Trans. Mobile Computing, vol. 17, pp. 734-745, Mar. 2018.
[21] J. Xu, and R. Zhang, “Throughput optimal policies for energy harvesting
wireless transmitters with non-ideal circuit power,” IEEE J. Select. Area.
Commun., vol. 32, pp. 322-332, Feb. 2014.
[22] M. E. Ahmed, J. B. Song, N. T. Nguyen, and Z. Han, “Nonparametric
Bayesian identification of primary users’ payloads in cognitive radio
networks,” IEEE Proc. ICC 2012, pp. 1586-1591, June 2012.
[23] M. E. Ahmed, J. B. Song, Z. Han, and D. Y. Suh, “Sensing-transmission
edifice using Bayesian nonparametric traffic clustering in cognitive radio
networks,” IEEE Trans. Mobile Computing, vol. 13, no. 9, pp. 2141-2155,
Sept. 2014.
[24] W. S. Chung, S. S. Park, S. M. Lim, and D. S. Hong, “Spectrum sensing
optimization for energy-harvesting cognitive radio systems,” IEEE Trans.
Wireless Commun., vol. 13, pp. 2601-2613, May 2014.
[25] Y. Pei, Y. C. Liang, K. C. Teh, and K. H. Li, “Energy-efficient design of
sequential channel sensing in cognitive radio networks: Optimal sensing
strategy, power allocation, and sensing order,” IEEE J. Select. Area.
Commun., vol. 29, pp. 1648-1659, Aug. 2011.
[26] [Online]. Available FTP: http://crawdad.cs.dartmouth.edu/meta.php?name=snu/wowviaw
[27] [Online]. Available FTP: http://crawdad.cs.dartmouth.edu/meta.php?name=kaist/wibro
Combining Strategic Learning and Tactical Search in Real-Time Strategy Games
Nicolas A. Barriga and Marius Stanescu and Michael Buro
arXiv:1709.03480v1 [] 11 Sep 2017
Department of Computing Science
University of Alberta, Canada
{barriga|astanesc|mburo}@ualberta.ca
Abstract
A commonly used technique for managing AI complexity in
real-time strategy (RTS) games is to use action and/or state
abstractions. High-level abstractions can often lead to good
strategic decision making, but tactical decision quality may
suffer due to lost details. A competing method is to sample the
search space which often leads to good tactical performance
in simple scenarios, but poor high-level planning.
We propose to use a deep convolutional neural network (CNN) to select among a limited set of abstract action choices, and to utilize the remaining computation time
for game tree search to improve low level tactics. The CNN
is trained by supervised learning on game states labelled by
Puppet Search, a strategic search algorithm that uses action
abstractions. The network is then used to select a script — an
abstract action — to produce low level actions for all units.
Subsequently, the game tree search algorithm improves the
tactical actions of a subset of units using a limited view of the
game state only considering units close to opponent units.
Experiments in the µRTS game show that the combined algorithm results in higher win-rates than either of its two independent components and other state-of-the-art µRTS agents.
To the best of our knowledge, this is the first successful application of a convolutional network to play a full RTS game on
standard game maps, as previous work has focused on subproblems, such as combat, or on very small maps.
1 Introduction
In recent years, numerous challenging research problems
have attracted AI researchers to using real-time strategy (RTS) games as a test-bed in several areas, such as case-based reasoning and planning (Ontañón et al. 2007), evolutionary computation (Barriga, Stanescu, and Buro 2014),
machine learning (Synnaeve and Bessière 2011), deep learning (Usunier et al. 2017; Foerster et al. 2017; Peng et al.
2017) and heuristic and adversarial search (Churchill and
Buro 2011; Barriga, Stanescu, and Buro 2015). Functioning
AI solutions to most RTS sub-problems exist, but combining
those doesn’t come close to human level performance1 .
To cope with large state spaces and branching factors
in RTS games, recent work focuses on smart sampling of
Copyright c 2017, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
1 http://www.cs.mun.ca/~dchurchill/starcraftaicomp/report2015.shtml#mvm
the search space (Churchill and Buro 2013; Ontañón 2017;
2016; Ontañón and Buro 2015) and state and action abstractions (Uriarte and Ontañón 2014; Stanescu, Barriga, and
Buro 2014; Barriga, Stanescu, and Buro 2017b). The first
approach produces strong agents for small scenarios. The
latter techniques work well on larger problems because of
their ability to make good strategic choices. However, they
have limited tactical ability, due to their necessarily coarsegrained abstractions. One compromise would be to allocate
computational time for search-based approaches to improve
the tactical decisions, but this allocation would come at the
expense of allocating less time to strategic choices.
We propose to train a deep convolutional neural network (CNN) to predict the output of Puppet Search, thus
leaving most of the time free for use by a tactical search algorithm. Puppet Search is a strategic search algorithm that
uses action abstractions and has shown good results, particularly in large scenarios. We will base our network on previous work on CNNs for state evaluation (Stanescu et al.
2016), reformulating the earlier approach to handle larger
maps.
This paper’s contributions are a network architecture capable of scaling to larger map sizes than previous approaches, a policy network for selecting high-level actions,
and a method of combining the policy network with a tactical search algorithm that surpasses the performance of both
individually.
The remainder of this paper is organized as follows: Section 2 discussed previous related work, Section 3 describes
our proposed approach and Section 4 provides experimental
results. We then conclude and outline future work.
2 Related Work
Ever since the revolutionary results in the ImageNet competition (Krizhevsky, Sutskever, and Hinton 2012), CNNs have
been applied successfully in a wide range of domains. Their
ability to learn hierarchical structures of spatially invariant
local features make them ideal in settings that can be represented spatially. These include uni-dimensional streams in
natural language processing (Collobert and Weston 2008),
two-dimensional board games (Silver et al. 2016), or three-dimensional video analysis (Ji et al. 2013).
These diverse successes have inspired the application of
CNNs to games. They have achieved human-level perfor-
Table 1: Input feature planes for Neural Network. 25 planes for the evaluation network and 26 for the policy network.

Feature                # of planes   Description
Unit type              6             Base, Barracks, worker, light, ranged, heavy
Unit health points     5             1, 2, 3, 4, or ≥ 5
Unit owner             2             Masks to indicate all units belonging to one player
Frames to completion   5             0−25, 26−50, 51−80, 81−120, or ≥ 121
Resources              7             1, 2, 3, 4, 5, 6−9, or ≥ 10
Player                 1             Player for which to select strategy
mance in several Atari games, by using Q-learning, a well
known reinforcement learning (RL) algorithm (Mnih et al.
2015). But the most remarkable accomplishment may be AlphaGo (Silver et al. 2016), a Go playing program that last
year defeated Lee Sedol, one of the top human professionals, a feat that was thought to be at least a decade away.
As much an engineering as a scientific accomplishment, it
was achieved using a combination of tree search and a series of neural networks trained on millions of human games
and self-play, running on thousands of CPUs and hundreds
of GPUs.
These results have sparked interest in applying deep learning to games with larger state and action spaces. Some limited success has been found in micromanagement tasks for
RTS games (Usunier et al. 2017), where a deep network
managed to slightly outperform a set of baseline heuristics. Additional encouraging results were achieved for the
task of evaluating RTS game states (Stanescu et al. 2016).
The network significantly outperforms other state-of-the-art
approaches at predicting game outcomes. When it is used
in adversarial search algorithms, they perform significantly
better than using simpler evaluation functions that are three
to four orders of magnitude faster.
Most of the described research on deep learning in multiagent domains assumes full visibility of the environment
and lacks communication between agents. Recent work addresses this problem by learning communication between
agents alongside their policy (Sukhbaatar, Szlam, and Fergus 2016). In their model, each agent is controlled by a
deep network which has access to a communication channel through which they receive the summed transmissions
of other agents. The resulting model outperforms models
without communication, fully-connected models, and models using discrete communication on simple imperfect information combat tasks. However, symmetric communication prevents handling heterogeneous agent types, limitation
later removed by (Peng et al. 2017) which use a dedicated bidirection communication channel and recurrent neural networks. This would be an alternative to the search algorithm
we use for the tactical module on section 4.3, in cases where
there is no forward model of the game, or there is imperfect
information.
A new search algorithm that has shown good results particularly in large RTS scenarios, is Puppet Search (Barriga, Stanescu, and Buro 2015; 2017a; 2017b). It is an action abstraction mechanism that uses fast scripts with a
few carefully selected choice points. These scripts are usu-
ally hard-coded strategies, and the number of choice points
will depend on the time constraints the system has to meet.
These choice points are then exposed to an adversarial lookahead procedure, such as Alpha-Beta or Monte Carlo Tree
Search (MCTS). The algorithm then uses a forward model
of the game to examine the outcome of different choice combinations and decide on the best course of action. Using a restricted set of high-level actions results in low branching factor, enabling deep look-ahead and favouring strong strategic
decisions. Its main weakness is its rigid scripted tactical micromanagement, which led to modest results on small sized
scenarios where good micromanagement is key to victory.
3 Algorithm Details
We build on previous work on RTS game state evaluation (Stanescu et al. 2016) applied to µRTS (see figure 1).
This study presented a neural network architecture and experiments comparing it to simpler but faster evaluation functions. The CNN-based evaluation showed a higher accuracy
at evaluating game states. In addition, when used by stateof-the-art search algorithms, they perform significantly better than the faster evaluations. Table 1 lists the input features
their network uses.
The network itself is composed of two convolutional layers followed by two fully connected layers. It performed
very well on 8×8 maps. However, as the map size increases, so does the number of weights on the fully connected layers, which eventually dominates the weight set.
To tackle this problem, we designed a fully convolutional
network (FCN) which only consists of intermediate convolutional layers (Springenberg et al. 2014) and has the advantage of being an architecture that can fit a wide range of
board sizes.
Table 2 shows the architectures of the evaluation network
and the policy network we use, which only differ in the first
and last layers. The first layer of the policy network has an
extra plane which indicates which player’s policy it is computing. The last layer of the evaluation network has two outputs, indicating if the state is a player 1 or player 2 win,
while the policy network has four outputs, each corresponding to one of four possible actions. The global averaging
used after the convolutional layers does not use any extra
weights, compared to a fully connected layer. The benefit is
that the number of network parameters does not grow when
the map size is increased. This allows for a network to be
quickly pre-trained on smaller maps, and then fine-tuned on
the larger target map.
Table 2: Neural Network Architecture

Evaluation Network                      Policy Network
Input 128x128, 25 planes                Input 128x128, 26 planes
          2x2 conv. 32 filters, pad 1, stride 1, LReLU
          Dropout 0.2
          3x3 conv. 32 filters, pad 0, stride 2, LReLU
          Dropout 0.2
          2x2 conv. 48 filters, pad 1, stride 1, LReLU
          Dropout 0.2
          3x3 conv. 48 filters, pad 0, stride 2, LReLU
          Dropout 0.2
          2x2 conv. 64 filters, pad 1, stride 1, LReLU
          Dropout 0.2
          3x3 conv. 64 filters, pad 0, stride 2, LReLU
          Dropout 0.2
          1x1 conv. 64 filters, pad 0, stride 1, LReLU
1x1 conv. 2 filters,                    1x1 conv. 4 filters,
pad 0, stride 1, LReLU                  pad 0, stride 1, LReLU
          Global averaging over 16x16 planes
2-way softmax                           4-way softmax
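For concreteness, the following is a sketch of the fully convolutional architecture in Table 2 written with PyTorch; the original implementation uses Caffe, so the class name, the LeakyReLU slope, and other minor details here are our own choices, while the layer shapes follow the table.

```python
import torch
import torch.nn as nn

class MicroRTSFCN(nn.Module):
    def __init__(self, in_planes=26, out_classes=4):   # 25 planes / 2 classes for the evaluation net
        super().__init__()
        def block(cin, cout):
            return [
                nn.Conv2d(cin, cout, kernel_size=2, stride=1, padding=1),
                nn.LeakyReLU(0.1), nn.Dropout2d(0.2),
                nn.Conv2d(cout, cout, kernel_size=3, stride=2, padding=0),
                nn.LeakyReLU(0.1), nn.Dropout2d(0.2),
            ]
        self.features = nn.Sequential(
            *block(in_planes, 32), *block(32, 48), *block(48, 64),
            nn.Conv2d(64, 64, kernel_size=1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, out_classes, kernel_size=1), nn.LeakyReLU(0.1),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # global averaging over the 16x16 maps

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return torch.softmax(x, dim=1)        # 4-way (policy) or 2-way (evaluation) output

net = MicroRTSFCN()
scores = net(torch.zeros(1, 26, 128, 128))    # no dense layers, so the map size can vary
print(scores.shape)                           # torch.Size([1, 4])
```

Because every layer is convolutional or a pooling operation, the parameter count is independent of the board size, which is the property that allows pre-training on small maps and fine-tuning on larger ones.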
Figure 1: µRTS screenshot from a match between scripted
LightRush and HeavyRush agents. Light green squares are
resources, dark green are walls, dark grey are barracks and
light grey the bases. Numbers indicate resources. Grey circles are worker units, small yellow circles are light combat units and big yellow ones are heavy combat units. Blue
lines show production, red lines an attack and grey lines
moving direction. Units are outlined blue (player 1) and red
(player 2). µRTS can be found at https://github.com/
santiontanon/microrts.
Puppet Search requires a forward model to examine the
outcome of different actions and then choose the best one.
Most RTS games do not have a dedicated forward model or
simulator other than the game itself. This is usually too slow
to be used in a search algorithm, or even unavailable due
to technical constraints such as closed source code or being
tied to the graphics engine. Using a policy network for script
selection during game play allows us to bypass the need for
a forward model of the game. Granted, the forward model is
still required during the supervised training phase, but execution speed is less of an issue in this case, because training
is performed offline. Training the network via reinforcement
learning would remove this constraint completely.
Finally, with the policy network running significantly
faster (3ms versus a time budget of 100ms per frame for
search-based agents) than Puppet Search we can use the unused time to refine tactics. While the scripts used by Puppet Search and the policy network represent different strategic choices, they all share very similar tactical behaviour.
Their tactical ability is weak in comparison to state-of-the-art search-based bots, as previous results (Barriga, Stanescu,
and Buro 2017b) suggest.
For this reason, the proposed algorithm combines an
FCN for strategic decisions and an adversarial search algo-
rithm for tactics. The strategic component handles macromanagement: unit production, workers, and sending combat
units towards the opponent. The tactical component handles
micro-management during combat.
The complete procedure is described by Algorithm 1. It
first builds a limited view of the game state, which only includes units that are close to opposing units (line 2). If this
limited state is empty, all available computation time is assigned to the strategic algorithm, otherwise, both algorithms
receive a fraction of the total time available. This fraction is
decided empirically for each particular algorithm combination. Then, in line 9 the strategic algorithm is used to compute actions for all units in the state, followed by the tactical algorithm that computes actions for units in the limited
state. Finally, the actions are merged (line 11) by replacing
the strategic action in case both algorithms produced actions
for a particular unit.
Algorithm 1 Combined Strategy and Tactics
1:  procedure GETCOMBINEDACTION(state, stratAI, tactAI, stratTime, tactTime)
2:      limState ← EXTRACTCOMBAT(state)
3:      if ISEMPTY(limState) then
4:          SETTIME(stratAI, stratTime + tactTime)
5:      else
6:          SETTIME(stratAI, stratTime)
7:          SETTIME(tactAI, tactTime)
8:      end if
9:      stratActions ← GETACTION(stratAI, state)
10:     tactActions ← GETACTION(tactAI, limState)
11:     return MERGE(stratActions, tactActions)
12: end procedure
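The sketch below mirrors the control flow of Algorithm 1 in plain Python. The unit dictionaries, the combat-extraction rule (a fixed Manhattan radius), and the two callable agents are hypothetical stand-ins for the µRTS interfaces; only the structure of the procedure follows the algorithm, and the time budgets are passed as arguments rather than set on the agents.

```python
def extract_combat(state, radius=3):
    """Limited view: units within `radius` tiles of an opposing unit."""
    combat = {}
    for uid, u in state.items():
        if any(o["owner"] != u["owner"] and
               abs(o["x"] - u["x"]) + abs(o["y"] - u["y"]) <= radius
               for o in state.values()):
            combat[uid] = u
    return combat

def get_combined_action(state, strat_ai, tact_ai, strat_time, tact_time):
    lim_state = extract_combat(state)
    if not lim_state:                        # no combat: all time goes to strategy
        strat_actions = strat_ai(state, strat_time + tact_time)
        tact_actions = {}
    else:
        strat_actions = strat_ai(state, strat_time)
        tact_actions = tact_ai(lim_state, tact_time)
    merged = dict(strat_actions)             # tactical actions override strategic ones
    merged.update(tact_actions)
    return merged

# Toy usage with dummy agents that assign a placeholder action to every unit.
state = {1: {"owner": 0, "x": 2, "y": 2}, 2: {"owner": 1, "x": 3, "y": 2}}
strat = lambda s, t: {uid: "strategic-move" for uid in s}
tact = lambda s, t: {uid: "tactical-attack" for uid in s}
print(get_combined_action(state, strat, tact, strat_time=20, tact_time=80))
```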
4 Experiments and Results
All experiments were performed in machines running Fedora 25, with an Intel Core i7-7700K CPU, with 32GB of
RAM and an NVIDIA GeForce GTX 1070 with 8GB of
RAM. The Java version used for µRTS was OpenJDK 1.8.0,
Caffe git commit 365ac88 was compiled with g++ 5.3.0, and
pycaffe was run using python 2.7.13.
The Puppet Search version we used for all the following experiments utilizes alpha-beta search over a single
choice point with four options. The four options are WorkerRush, LightRush, RangedRush and HeavyRush, and were
also used as baselines in the following experiments. More
details about these scripts can be found in (Stanescu et al.
2016).
Two other recent algorithms were also used as benchmarks, Na¨veMCTS (Ontañón 2013) and Adversarial Hierarchical Task Networks (AHTNs) (Ontañón and Buro 2015).
Na¨veMCTS is an MCTS variant with a sampling strategy that exploits the tree structure of Combinatorial MultiArmed Bandits — bandit problems with multiple variables.
Applied to RTS games, each variable represents a unit, and
the legal actions for each of those units are the values that
each variable can take. Naı̈veMCTS outperforms other game
tree search algorithms on small scenarios. AHTNs are an alternative approach, similar to Puppet Search, that instead of
sampling from the full action space, uses scripted actions to
reduce the search space. It combines minimax tree search
with HTN planning.
All experiments were performed on 128x128 maps ported
from the StarCraft: Brood War maps used for the AIIDE
competition. These maps, as well as implementations of
Puppet Search, the four scripts, AHTN and Naı̈veMCTS are
readily available in the µRTS repository.
4.1 State Evaluation Network
The data for training the evaluation network was generated
by running games between a set of bots using 5 different
maps, each with 12 different starting positions. Ties were
discarded, and the remaining games were split into 2190
training games, and 262 test games. 12 game states were
randomly sampled from each game, for a total of 26,280
training samples and 3,144 test samples. Data is labelled by
a Boolean value indicating whether the first player won. All
evaluation functions were trained on the same dataset.
The network’s weights are initialized using Xavier initialization (Glorot and Bengio 2010). We used adaptive moment
estimation (ADAM) (Kingma and Ba 2014) with default values of β1 = 0.9, β2 = 0.999, ε = 10−8 and a base learning
rate of 10−4 . The batch size was 256.
The evaluation network reaches 95% accuracy in classifying samples as wins or losses. Figure 2 shows the accuracy
of different evaluation functions as game time progresses.
The functions compared are the evaluation network, Lanchester (Stanescu, Barriga, and Buro 2015), the simple linear
evaluation with hard-coded weights that comes with µRTS,
and a version of the simple evaluation with weights optimized using logistic regression. The network’s accuracy is
even higher than previous results in 8x8 maps (Stanescu et
al. 2016). The accuracy drop of the simple evaluation in the
early game happens because it does not take into account
units currently being built. If a player invests resources in
units or buildings that take a long time to complete, its score
lowers, despite the stronger resulting position after their
completion. The other functions learn appropriate weights
to mitigate this issue.
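The accuracy curves of Figure 2 are aggregated in 200-frame buckets. The helper below is our own small sketch of that aggregation, with fabricated toy values in the usage example; it is not taken from the paper's code.

```python
import numpy as np

def bucketed_accuracy(frames, predicted_winner, true_winner, bucket=200):
    """Win-prediction accuracy per `bucket`-frame interval of game time."""
    frames = np.asarray(frames)
    correct = np.asarray(predicted_winner) == np.asarray(true_winner)
    ids = frames // bucket
    return {int(b) * bucket: float(correct[ids == b].mean()) for b in np.unique(ids)}

# Toy example: four sampled states with their predicted and true winners.
print(bucketed_accuracy([50, 150, 250, 900], [0, 1, 1, 0], [0, 0, 1, 0]))
```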
Table 5 shows the performance of PuppetSearch when using the Lanchester evaluation function and the neural network. The performance of the network is significantly better
(P-value = 0.0011) than Lanchester’s, even though the network is three orders of magnitude slower. Evaluating a game
state using Lanchester takes an average of 2.7µs, while the
evaluation network uses 2,574µs.
Table 6 shows the same comparison, but with Puppet
Search searching to a fixed depth of 4, rather than having
100ms per frame. The advantage of the neural network is
much more clear, as execution speed does not matter in this
case. (P-value = 0.0044)
4.2 Policy Network
We used the same procedure as in the previous subsection,
but now we labelled the samples with the outcome of a 10
second Puppet Search using the evaluation network. The resulting policy network has an accuracy for predicting the
correct puppet move of 73%, and a 95% accuracy for predicting any of the top 2 moves.
Table 3 shows the policy network coming close to Puppet
Search and defeating all the scripts.
4.3 Strategy and Tactics
Finally, we compare the performance of the policy network
and Puppet Search as the strategic part of a combined strategic/tactical agent. We will do so by assigning a fraction of
the allotted time to the strategic algorithm and the remainder
to the tactical algorithm, which will be Naı̈veMCTS in our
experiments. We expect the policy network to perform better in this scenario, as it runs significantly faster than Puppet
Figure 2: Comparison of evaluation accuracy between the
neural network and the built-in evaluation function in µRTS.
The accuracy of predicting the game winner is plotted
against game time. Results are aggregated in 200-frame
buckets. Shaded areas represent one standard error.
Table 3: Policy network versus Puppet Search: round-robin tournament using 60 different starting positions per match-up.

               PS      Policy net.  LightRush  HeavyRush  RangedRush  WorkerRush  Avg.
PS             -       55.8         87.5       66.67      91.7        93.3        65.8
Policy net.    44.2    -            94.2       71.7       100         61.7        61.9
LightRush      12.5    5.8          -          71.7       100         100         48.3
HeavyRush      33.3    28.3         28.3       -          100         100         48.3
RangedRush     8.3     0            0          0          -           100         18.1
WorkerRush     6.7     38.3         0          0          0           -           7.5
Table 4: Mixed Strategy/Tactics agents: round-robin tournament using 60 different starting positions per match-up.

                    Policy   PS                Policy  Light   Heavy   Ranged          Worker  Naïve
                    -Naïve   -Naïve   PS       net.    Rush    Rush    Rush    AHTN-P  Rush    MCTS    Avg.
Policy net.-Naïve   -        56.7     97.5     100.0   100.0   95.8    100.0   72.5    74.2    98.3    88.3
PS-Naïve            43.3     -        81.7     79.2    90.0    94.2    93.3    90.0    90.8    93.3    84.0
PS                  2.5      18.3     -        63.3    86.7    69.2    92.5    96.7    95.0    93.3    68.6
Policy net.         0.0      20.8     36.7     -       94.2    71.7    100.0   57.5    61.7    97.5    60.0
LightRush           0.0      10.0     13.3     5.8     -       71.7    100.0   100.0   100.0   96.7    55.3
HeavyRush           4.2      5.8      30.8     28.3    28.3    -       100.0   100.0   100.0   74.2    52.4
RangedRush          0.0      6.7      7.5      0.0     0.0     0.0     -       100.0   100.0   86.7    33.4
AHTN-P              27.5     10.0     3.3      42.5    0.0     0.0     0.0     -       64.2    68.3    24.0
WorkerRush          25.8     9.2      5.0      38.3    0.0     0.0     0.0     35.8    -       71.7    20.6
NaïveMCTS           1.7      6.7      6.7      2.5     3.3     25.8    13.3    31.7    28.3    -       13.3
Search while maintaining similar action performance.
The best time split between strategic and tactical algorithm was determined experimentally to be 20% for Puppet
Search and 80% for Naı̈veMCTS. The policy network uses a
fixed time (around 3ms), and the remaining time is assigned
to the tactical search.
Table 4 shows that both strategic algorithms greatly benefit from blending with a tactical algorithm. The gains are
Table 5: Evaluation network versus Lanchester: round-robin tournament using 60 different starting positions per match-up and 100ms of computation time.

            PS CNN   PS Lanc.   LightRush   HeavyRush   Avg.
PS CNN      -        59.2       89.2        72.5        73.6
PS Lanc.    40.8     -          64.2        67.5        57.5
LightRush   10.8     35.8       -           71.7        39.4
HeavyRush   27.5     32.5       28.3        -           29.4
Table 6: Evaluation network versus Lanchester: round-robin tournament on 20 different starting positions per match-up, searching to depth 4.

            PS CNN   PS Lanc.   LightRush   HeavyRush   Avg.
PS CNN      -        80         95          82.5        85.8
PS Lanc.    20       -          82.5        90          64.2
LightRush   5        17.5       -           70          30.8
HeavyRush   17.5     10         30          -           19.2
The gains are more substantial for the policy network, which now scores 56.7% against its Puppet Search counterpart. It also has a 4.3% higher overall win rate despite markedly poorer results against WorkerRush and AHTN-P. This seems to be due to a strategic mistake on the part of the policy network which, if its cause can be detected and corrected, would lead to even higher performance.
5 Conclusions and Future Work
We have extended previous research that used CNNs to accurately evaluate RTS game states in small maps to larger
map sizes usually used in commercial RTS games. The average win prediction accuracy at all game times is higher
compared to smaller scenarios. This is probably because strategic decisions are more important than tactical decisions in larger maps, and strategic development is easier for the network to quantify. Although the network is several orders of magnitude slower than competing simpler evaluation
functions, its accuracy makes it more effective. When the
Puppet Search high-level adversarial search algorithm uses
the CNN, its performance is better than when using simpler
but faster functions.
We also trained a policy network to predict the outcome
of Puppet Search. The win rate of the resulting network is
similar to that of the original search, with some exceptions
against specific opponents. However, while slightly weaker
in playing strength, a feed-forward network pass is much
faster. This speed increase created the opportunity for using
the saved time to fix the shortcomings introduced by high-level abstractions. A tactical search algorithm can micromanage units in contact with the enemy, while the policy
chosen by the network handles routine tasks (mining, march-
ing units toward the opponent) and strategic tasks (training
new units). The resulting agent was shown to be stronger
than the policy network alone in all tested scenarios, but
can only partially compensate for the network’s weaknesses
against specific opponents.
Looking into the future, we recognize that most tactical search algorithms, like the MCTS variant we used,
have the drawback of requiring a forward model of the
game. Using machine learning techniques to make tactical
decisions would eliminate this requirement. However, this
has proved to be a difficult goal, as previous attempts by
other researchers have had limited success on simple scenarios (Usunier et al. 2017; Synnaeve and Bessière 2016). Recent research avenues based on integrating concepts such as
communication (Sukhbaatar, Szlam, and Fergus 2016), unit
grouping and bidirectional recurrent neural networks (Peng
et al. 2017) suggest that strong tactical networks might soon
be available.
The network architecture presented in this paper, being
fully convolutional, can be used on maps of any (reasonable)
size without increasing its number of parameters. Hence,
future research could include assessing the speed-up obtained by taking advantage of transfer learning from smaller
maps to larger ones. Also of interest would be to determine
whether different map sizes can be mixed within a training set. It would also be interesting to investigate the performance of the networks on maps that have not previously
been seen during training.
Because the policy network exhibits some weaknesses
against specific opponents, further experiments should be
performed to establish whether this is due to a lack of appropriate game state samples in the training data or other
reasons. A related issue is our reliance on labelled training data, which could be resolved by using reinforcement
learning techniques, such as DQN (deep Q network) learning. However, full RTS games are difficult for these techniques, mainly because the only available reward is the
outcome of the game. In addition, action choices near the
endgame (close to the reward) have very little impact on the outcome of the game, while early ones (when there is no reward) matter most. There are several strategies available that could help overcome these issues, such as curriculum learning (Bengio et al. 2009), reward shaping (Devlin,
Kudenko, and Grześ 2011), or implementing double DQN
learning (Hasselt, Guez, and Silver 2016). These strategies
have proved useful on adversarial games, games with sparse
rewards, or temporally extended planning problems respectively.
References
Barriga, N. A.; Stanescu, M.; and Buro, M. 2014. Building placement optimization in real-time strategy games. In
Workshop on Artificial Intelligence in Adversarial Real-Time Games, AIIDE.
Barriga, N. A.; Stanescu, M.; and Buro, M. 2015. Puppet
Search: Enhancing scripted behaviour by look-ahead search
with applications to Real-Time Strategy games. In Eleventh
Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 9–15.
Barriga, N. A.; Stanescu, M.; and Buro, M. 2017a. Combining scripted behavior with game tree search for stronger,
more robust game AI. In Game AI Pro 3: Collected Wisdom
of Game AI Professionals. CRC Press. chapter 14.
Barriga, N. A.; Stanescu, M.; and Buro, M. 2017b. Game
tree search based on non-deterministic action scripts in real-time strategy games. IEEE Transactions on Computational
Intelligence and AI in Games (TCIAIG).
Bengio, Y.; Louradour, J.; Collobert, R.; and Weston, J.
2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, 41–48.
ACM.
Churchill, D., and Buro, M. 2011. Build order optimization
in StarCraft. In AI and Interactive Digital Entertainment
Conference, AIIDE (AAAI), 14–19.
Churchill, D., and Buro, M. 2013. Portfolio greedy search
and simulation for large-scale combat in StarCraft. In IEEE
Conference on Computational Intelligence in Games (CIG),
1–8. IEEE.
Collobert, R., and Weston, J. 2008. A unified architecture
for natural language processing: Deep neural networks with
multitask learning. In Proceedings of the 25th international
conference on Machine learning, 160–167. ACM.
Devlin, S.; Kudenko, D.; and Grześ, M. 2011. An empirical
study of potential-based reward shaping and advice in complex, multi-agent systems. Advances in Complex Systems
14(02):251–278.
Foerster, J.; Nardelli, N.; Farquhar, G.; Torr, P. H. S.; Kohli,
P.; and Whiteson, S. 2017. Stabilising experience replay for
deep Multi-Agent reinforcement learning. In Thirty-fourth
International Conference on Machine Learning.
Glorot, X., and Bengio, Y. 2010. Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics,
249–256.
Hasselt, H. v.; Guez, A.; and Silver, D. 2016. Deep reinforcement learning with double q-learning. In Proceedings
of the Thirtieth AAAI Conference on Artificial Intelligence,
2094–2100. AAAI Press.
Ji, S.; Xu, W.; Yang, M.; and Yu, K. 2013. 3d convolutional neural networks for human action recognition. IEEE
transactions on pattern analysis and machine intelligence
35(1):221–231.
Kingma, D. P., and Ba, J. 2014. Adam: A method for
stochastic optimization. CoRR abs/1412.6980.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012.
Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, 1097–1105.
Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.;
Fidjeland, A. K.; Ostrovski, G.; et al. 2015. Humanlevel control through deep reinforcement learning. Nature
518(7540):529–533.
Ontañón, S., and Buro, M. 2015. Adversarial hierarchical-task network planning for complex real-time games. In Proceedings of the 24th International Conference on Artificial
Intelligence (IJCAI), 1652–1658.
Ontañón, S.; Mishra, K.; Sugandh, N.; and Ram, A. 2007.
Case-based planning and execution for real-time strategy
games. In ICCBR ’07, 164–178. Berlin, Heidelberg:
Springer-Verlag.
Ontañón, S. 2013. The combinatorial multi-armed bandit
problem and its application to real-time strategy games. In
AIIDE.
Ontañón, S. 2016. Informed monte carlo tree search for
real-time strategy games. In Computational Intelligence and
Games (CIG), 2016 IEEE Conference on, 1–8. IEEE.
Ontañón, S. 2017. Combinatorial multi-armed bandits for
real-time strategy games. Journal of Artificial Intelligence
Research 58:665–702.
Peng, P.; Yuan, Q.; Wen, Y.; Yang, Y.; Tang, Z.; Long, H.;
and Wang, J. 2017. Multiagent Bidirectionally-Coordinated
nets for learning to play StarCraft combat games.
Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.;
van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 2016. Mastering the
game of go with deep neural networks and tree search. Nature 529(7587):484–489.
Springenberg, J. T.; Dosovitskiy, A.; Brox, T.; and Riedmiller, M. 2014. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
Stanescu, M.; Barriga, N. A.; and Buro, M. 2014. Hierarchical adversarial search applied to real-time strategy games.
In Proceedings of the Tenth AAAI Conference on Artificial
Intelligence and Interactive Digital Entertainment (AIIDE),
66–72.
Stanescu, M.; Barriga, N. A.; and Buro, M. 2015. Using
Lanchester attrition laws for combat prediction in StarCraft.
In Eleventh Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 86–
92.
Stanescu, M.; Barriga, N. A.; Hess, A.; and Buro, M. 2016.
Evaluating real-time strategy game states using convolutional neural networks. In IEEE Conference on Computational Intelligence and Games (CIG).
Sukhbaatar, S.; Szlam, A.; and Fergus, R. 2016. Learning
multiagent communication with backpropagation. In Lee,
D. D.; Sugiyama, M.; Luxburg, U. V.; Guyon, I.; and Garnett, R., eds., Advances in Neural Information Processing
Systems 29. Curran Associates, Inc. 2244–2252.
Synnaeve, G., and Bessière, P. 2011. A Bayesian model
for plan recognition in RTS games applied to StarCraft.
In AAAI., ed., Proceedings of the Seventh Artificial Intelligence and Interactive Digital Entertainment Conference
(AIIDE 2011), Proceedings of AIIDE, 79–84.
Synnaeve, G., and Bessière, P. 2016. Multiscale Bayesian
modeling for RTS games: An application to StarCraft AI.
IEEE Transactions on Computational intelligence and AI in
Games 8(4):338–350.
Uriarte, A., and Ontañón, S. 2014. Game-tree search over
high-level game states in RTS games. In Proceedings of the
Tenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE’14, 73–79.
Usunier, N.; Synnaeve, G.; Lin, Z.; and Chintala, S. 2017.
Episodic exploration for deep deterministic policies: An application to starcraft micromanagement tasks. In 5th International Conference on Learning Representations.
Timeliness in Lossless Block Coding
Jing Zhong and Roy D. Yates
arXiv:1802.09167v1 [] 26 Feb 2018
WINLAB, ECE Department
Rutgers, the State University of New Jersey
{jzhong, ryates}@winlab.rutgers.edu
Abstract—We examine lossless data compression from an
average delay perspective. An encoder receives input symbols
one per unit time from an i.i.d. source and submits binary
codewords to a FIFO buffer that transmits bits at a fixed rate to
a receiver/decoder. Each input symbol at the encoder is viewed
as a status update by the source and the system performance is
characterized by the status update age, defined as the number of
time units (symbols) the decoder output lags behind the encoder
input. An upper bound on the average status age is derived from
the exponential bound on the probability of error in streaming
source coding with delay. Apart from the influence of the error
exponent that describes the convergence of the error, this upper
bound also scales with the constant multiplier term in the error
probability. However, the error exponent does not lead to an
accurate description of the status age for small delay and small
blocklength. An age optimal block coding scheme is proposed
based on an approximation of the average age by converting the
streaming source coding system into a D/G/1 queue. We compare
this scheme to the error exponent optimal coding scheme which
uses the method of types. We show that maximizing the error
exponent is not equivalent to minimizing the average status age.
I. INTRODUCTION
In this era of ubiquitous connectivity and computing with
mobile devices, real-time status updates ensure that a monitor
(receiver) stays current about the status of interest of the
source. This requires the status updates to be as timely as
possible. In [1], [2], a new delay metric, the average status
age, was introduced to measure the end-to-end timeliness of a
status updating system. In the context of updates delivered
through queues, general expressions of the average status
age have been derived for single and multiple sources. In
[3], [4], [5], status age analysis has been also applied to
other communication systems, including random networks that
deliver packets out of order and multi-class queueing systems.
Many real-time data compression and communication systems with low latency requirements can be modeled as status
updating systems in which the applications require the source
to be reconstructed at the decoder in a timely manner. These
range from real-time video surveillance to remote telesurgery
[6]. The analysis of these timely compression and communication problems can be simplified to a real-time source coding
problem over a data network. The timely decoding of messages
at receiver must balance data compression delays against
network congestion deriving from insufficient compression.
Streaming source coding with a delay constraint was first
discussed in [7] and [8], in which the decoding error probability is bounded exponentially as a function of the delay
constraint. In [9], the error exponent analysis is generalized to
distributed streaming sources, and an achievable error exponent is obtained by fixed rate coding schemes using random
binning. Compared to this previous work, we are interested
in the following question: how timely can the streaming
source coding system be if we are allowed to choose any
lossless fixed-to-variable block coding scheme? We approach
this question by first connecting the status update age to the
error exponent in lossless streaming source coding. Although
the error analysis provides an upper bound on the timeliness
measure, we will see that maximizing the exponent does not
optimize the timeliness of decoded source messages.
In this work, we start in Section 2 with the system model
of lossless streaming source coding problem, and derive an
expression of average status age in lossless block coding
schemes. In Section 3, we show the upper bound of the
average status age as a function of the error exponent. We
use an example of block coding scheme to demonstrate how
the constant term in the error probability leads to the difference
between the bound and the actual average status age. We then
propose a method to find the age-minimizing optimal block
coding scheme for average age in Section 4. We show in
Section 5 that this generally differs from the error-exponent
maximizing coding scheme that uses the method of types. We
conclude with a summary of our work and possible future
extensions in Section 6.
II. SYSTEM MODEL
The streaming source coding system introduced in [7] and
[8] is illustrated in Figure 1. We assume that the channel
between the encoder and decoder is a constant rate bit pipe
with zero propagation delay. Starting at time t = 1, discrete
memoryless source symbols with finite alphabet X arrive
at each time unit sequentially, so the source symbol Xi
arrives at time i. In this work, we focus on fixed-to-variable
length block coding schemes. The encoder groups every B
message symbols into a single block and maps entire blocks
into variable-length bit strings. The k th symbol block is Yk
such that Yk+1 = XkB+1 XkB+2 · · · X(k+1)B . The encoded
sequence is then fed into a first-in-first-out (FIFO) buffer,
which outputs one binary bit to the decoder through the
channel every 1/R seconds. If the buffer is empty, it outputs
a gibberish bit e independent of any codewords. In fixed-to-variable length coding, the decoder is able to determine
whether the next received bit is a gibberish bit or not, since
the generation time of the next symbol block is known to the decoder [7].

Fig. 1. System diagram of streaming source coding controlled by the FIFO buffer: a source symbol generator feeds a lossless encoder, whose binary codewords enter a FIFO buffer that is emptied at rate R over the bit pipe towards the decoder at the receiver (monitor).

When the decoder receives the prefix-free codeword, it reconstructs the corresponding message block immediately. The delivery time of the block Yk is denoted by Dk. Note that all the message symbols contained in a single message block are decoded at the same time and thus have the same delivery times.

In the source coding problem, the status age is defined as the age of the most recently decoded symbol from when that symbol was generated. That is, if the most recently decoded symbol at time t is Xi, which was produced by the source at time i, the instantaneous status age is ∆(t) = t − i. We observe that ∆(t) is a random process that varies in time with the receiver's reconstruction of the source. The time-average status age of the coding system observed by the receiver is given by

∆ = lim_{T→∞} (1/T) ∫_0^T ∆(t) dt.   (1)

In streaming block coding, the status age at time t is the same as the age of the last symbol in the most recently decoded block, since all symbols in the same block are decoded simultaneously. Fig. 2 shows a sample realization of status age, as a function of time, at the receiver. We observe that kB is the arrival time of symbol block Yk at the input of the encoder. The age is a sawtooth function that increases linearly in time in the absence of any symbol blocks and is reset to Dk − kB at time Dk when symbol block Yk is decoded at the receiver.

Fig. 2. Example of variation in status age at a receiver of streaming fixed-to-variable length coding with blocklength B = 3.

Using the same approach as [1], the integration of the sawtooth area is equivalent to the sum of disjoint polygon areas Qk shown in Fig. 2. The average status age for block coding can be expressed as

∆ = lim_{N→∞} (1/(BN)) Σ_{k=1}^{N} Qk   (2)
  = lim_{N→∞} (1/N) Σ_{k=1}^{N} (Dk − kB) + B/2.   (3)

From a queueing perspective, we can view a block Yk as a customer arriving at time kB and departing at time Dk. The system time of this customer is Dk − kB. Since Dk ≥ kB, the average status age is always lower bounded by B/2. The interarrival times of customers are deterministic since the interarrival time of symbol blocks is exactly the block length B. As a result, the term E[Dk − kB] in (3) is the expected system time that block Yk spends in the queue. Let E[T] ≜ E[Dk − kB], for an arbitrary customer k when the queue has reached steady-state. Thus,

∆ = E[T] + B/2.   (4)

Intuitively, the timeliness metric ∆ is translated into an end-to-end average delay measure in block coding schemes. In (4) the expected system time in a queue is given by the sum

E[T] = E[S] + E[W],   (5)

where E[S] and E[W] are the expected service time and the expected waiting time. In streaming block coding, each encoded bit takes 1/R time units to be transmitted by the FIFO buffer, thus the service time of the symbol block Yk with corresponding binary code length Lk is Sk = Lk/R, and E[S] = E[L]/R. Applying the upper bound for the G/G/1 queue in [10], the expected waiting time is upper bounded by

E[W] ≤ (E[L²] − E²[L]) / (2R(BR − E[L])).   (6)

Thus the average status age is upper bounded by

∆ ≤ (E[L²] − E²[L]) / (2R(BR − E[L])) + E[L]/R + B/2.   (7)
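As a concrete illustration (not part of the original derivation), the bound in (7) can be evaluated directly from the codeword-length distribution of a given block code. The following Python sketch assumes a fixed-to-variable code described only by its block probabilities and codeword lengths; the prefix code and the value of a at the end are arbitrary choices used for demonstration.

# Sketch: evaluate the D/G/1 upper bound (7) on the average status age.
def age_upper_bound(block_probs, code_lengths, B, R):
    """Upper bound (7): Var[L]/(2R(BR - E[L])) + E[L]/R + B/2."""
    EL = sum(p * l for p, l in zip(block_probs, code_lengths))
    EL2 = sum(p * l * l for p, l in zip(block_probs, code_lengths))
    if EL >= B * R:                      # queue unstable, age diverges
        return float('inf')
    waiting = (EL2 - EL ** 2) / (2 * R * (B * R - EL))
    return waiting + EL / R + B / 2

# Example: a B = 2 block code over a ternary source where the block 'AA'
# gets a 1-bit codeword and every other block a 4-bit codeword, R = 3/2.
a = 0.8                                  # P(A); q = a^2 must exceed 1/3
q = a ** 2
print(age_upper_bound([q, 1 - q], [1, 4], B=2, R=1.5))

When E[L] reaches BR the buffer can no longer keep up and the bound, like the age itself, diverges, which is the sharp-transition behaviour discussed in the next section.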
III. CONNECTING STATUS AGE TO ERROR EXPONENT
In [11], the traditional block coding error exponent is
generalized to a streaming source coding problem, and the
delay-constrained error exponent is introduced to describe
the convergence rate of the symbol-wise error probability as
a function of the decoding delay. At time n, the decoder
estimates the k th source symbol as x̂k (n), for all k < n.
A delay constrained error exponent Es (R) is said to be
achievable if and only if for all ε > 0 and decoding delay
δ > 0, there exists a constant K < ∞ for a fixed-delay δ
encoder-decoder pair such that
Pr[x̂_{n−δ}(n) ≠ x_{n−δ}] ≤ K · 2^{−δ(ES(R)−ε)},   for all n > δ.   (8)
In a lossless block coding system, a symbol block is successfully decoded only after the entire encoded bit sequence
corresponding to that block departs from the FIFO buffer. An
error occurs at time n if some queueing delays cause some
encoded bits of the symbol n − δ to be still in the buffer. The
exponential convergence rate of error in delay δ comes from
the randomness of the lengths of encoded bit sequences.
Proposition 1: A block coding scheme with achievable error
exponent ES (R) has average status age ∆ satisfying
∆ ≤ K · 2^{2ES(R)} / (2^{ES(R)} − 1)² ≜ f(K, ES(R)).   (9)
Proof of this proposition is shown in Appendix A. Despite the
fact that the constant K may vary for different ES (R), we
observe that as ES (R) → ∞, f (K, ES (R)) → K. This tells
us that both the error exponent and the constant term K influence
the status age in a streaming coding system. Although the
exponent ES (R) provides a tight upper bound on the error
probability when the delay constraint δ is large enough, it does
not accurately describe the complicated error events when δ
is small. In the following discussion, we will use a prefix
block code example to show the effect of K and explain why
an exponential bound is sometimes not good enough for the
description of delay.
The following simple example was used in [11, Sec. 2.2.2]
to demonstrate a non-asymptotic error exponent result for a
prefix-free block code. Consider a source with alphabet size 3
and distribution PX (A) = a, PX (B) = PX (C) = (1 − a)/2,
where a ∈ [0, 1]. Assume the encoder has no information about
the source distribution, and chooses a block encoding strategy
with blocklength B = 2 as follows: It maps the block AA → 0
and all other blocks to 4-bit sequences led by a single 1, e.g.,
AB → 1000. This coding scheme is not adapted to the source
distribution, but it was shown to provide a convenient way
to obtain a closed form expression of the error probability. It
is also assumed that the channel rate is R = 3/2, meaning
that the FIFO buffer outputs 3 bits every 2 symbol periods.
The average age ∆ is finite if and only if the average codeword length is less than the channel rate BR, i.e. a² + 4(1 − a²) < 3. That is, a > 1/√3, or q > 1/3 if we define q = a².
Compared to [11] that seeks an exponential upper bound for
the error probability, the calculation of (9) requires an exact
expression including the constant K. The error probability of
this coding system is upper bounded by
Pe ≤ 1                                                  for 0 ≤ δ < 1,
Pe ≤ 1 − q + η^{−3/2} [q(1 − η)] · 2^{δ(3/2) log2 η}    for 1 ≤ δ < 3,
Pe ≤ η^{−9/2} [1 − q(1 − η³)] · 2^{δ(3/2) log2 η}       for δ ≥ 3,   (10)

where η = (−1 + √(1 + 4(1 − q)/q)) / 2. The detailed procedure to obtain Pe is described in Appendix B.
Fig. 3. Sample average age for prefix block code with blocklength B = 2. (The figure plots the average status age against the offered load H(X)/R for the numerical simulation, the upper bound on the D/G/1 queue, and the upper bound from the error exponent.)
We observe in (10) that the achievable error exponent that satisfies (8) is −(3/2) log2 η. However, the exponential bound is loose for small delay, specifically 0 ≤ δ < 3. Intuitively, the decoder is still waiting for the binary codeword when 0 ≤ δ < 1, so the probability is always upper bounded by 1 for any decoding strategy; when 1 ≤ δ < 3, the decoder can successfully recover the message if the codeword is 0 and the buffer delivers this bit no later than δ; when δ ≥ 3, we have the most general case that can be described by the exponential bound. The error exponent analysis usually requires δ to be large enough, but the accumulation of average age also counts small values of δ that lead to large probability of error. Following the steps of the proof of Proposition 1 in Appendix A, we express the upper bound on the average status age as

∆ ≤ 8 − 3q (1 − η^{3/2} + η^{5/2} + 2η/3) + 3K · 2^{−2ES(R)} / (2^{ES(R)} − 1) + K · 2^{−ES(R)} / (2^{ES(R)} − 1)²,   (11)

where K = η^{−9/2} [1 − q(1 − η³)] is the constant term from the exponential bound when δ ≥ 3 in (10).
Figure 3 depicts a comparison among the numerical simulation of average age, the upper bound obtained from the D/G/1 queue in (7) and the upper bound obtained from the error exponent in (11). In this plot, the channel rate R is fixed at 3/2, and the symbol probability a is varied within (1/√3, 1] such that the entropy H(X) is also varied. The sharp transition effect occurs as the offered load of the system approaches 1 for all curves. This is because the average codeword length approaches R, and the number of bits queued in the FIFO buffer becomes unbounded. This effect occurs earlier than H(X)/R = 1 since the coding scheme is not adapted to the source distribution. We observe that the bound obtained from the D/G/1 waiting time (7) is tight to the true simulation, while the delay-constrained error exponent provides only a loose characterization of status age in a block coding system. As a → 1, i.e. H(X) → 0, the upper bound obtained from the error exponent (11) is dominated by the constant terms that come from the sum of high error probabilities in the small δ region.
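The qualitative behaviour of Figure 3 can be reproduced with a short simulation. The sketch below is an added illustration rather than the authors' code: it treats the bit pipe as a continuous-rate server (ignoring bit-slot granularity), uses the example block code of this section, and compares the empirical average age from (3)-(4) with the D/G/1 bound (7); the horizon length, seed and values of a are arbitrary.

import random

def simulate_age(a, n_blocks=200000, B=2, R=1.5, seed=0):
    """Empirical average status age of the example block code (B = 2)."""
    rng = random.Random(seed)
    q = a ** 2                       # P(block 'AA'), encoded with 1 bit
    depart = 0.0                     # departure time of the previous block
    total_system_time = 0.0
    for k in range(1, n_blocks + 1):
        arrival = k * B              # block k is complete at time kB
        length = 1 if rng.random() < q else 4
        depart = max(arrival, depart) + length / R
        total_system_time += depart - arrival
    return total_system_time / n_blocks + B / 2     # eq. (4): E[T] + B/2

def age_bound(a, B=2, R=1.5):
    """D/G/1 upper bound (7) for the same code."""
    q = a ** 2
    EL, EL2 = q * 1 + (1 - q) * 4, q * 1 + (1 - q) * 16
    return (EL2 - EL ** 2) / (2 * R * (B * R - EL)) + EL / R + B / 2

for a in (0.7, 0.8, 0.9):            # a must exceed 1/sqrt(3) for stability
    print(a, round(simulate_age(a), 3), round(age_bound(a), 3))

As in Figure 3, the simulated age stays close to the bound (7) over the stable range and both blow up as the offered load approaches the transition point.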
IV. AGE OPTIMAL BLOCK CODE

Fig. 4. The illustration of the convex hull algorithm and the representation of code trees in the coordinate. (The figure marks the codebooks C1 (Huffman), C2 and C3 and the direction of decreasing age.)

A block coding scheme is age-optimal if it minimizes the average status age for a given B and R. Since the bound in (7) is simple and reasonably tight, we use it as an approximation of the average status age and treat it as a penalty function with respect to the variable L. Figure 4 depicts a graphical representation of all the possible codebooks in the two-dimensional space constructed by E[L] and E[L²]. It is proved in [12] that the set of all possible codebooks forms a convex hull for block coding schemes, and a linear approximation algorithm is introduced to iteratively search all code trees lying on the lower left boundary of the convex hull. The non-linear penalty function in (7) is approximated by a linear function

f(L) = α E[L] + β E[L²],
and it is assumed there is an efficient algorithm
Find best(α, β) which returns the codebook that minimizes
the penalty function f (L) given any α, β ∈ [0, 1]. The
algorithm starts from two extreme cases: (α, β) = (1, 0) and
(α, β) = (0, 1). Note (α, β) = (1, 0) is the penalty function
that returns a code C1 that is a Huffman code. Furthermore,
(α, β) = (0, 1) returns the minimum second moment code
C2 . Given any two codebooks C1 and C2 , new values are
assigned to α and β as follows:
α′ = E[L²](C1) − E[L²](C2),   (12)
β′ = E[L](C2) − E[L](C1).   (13)
Afterwards, Find best(α′, β′) is called to search for a possible
codebook between the points of C1 and C2 . Intuitively, this
operation works as follows: we first draw a line segment l that
connects the points C1 and C2 , and then Find best identifies
the lowest line l0 parallel to l that meets the boundary of the
convex hull of all codebooks. If l0 lies below l, then l0 contains
at least one new codebook C3 . This step repeats iteratively by
renewing the value of α and β at each step for the C1 − C3
and C3 − C2 line segments, until we find all the feasible code
trees. A detailed description of the algorithm can be found in
[12].
Then the problem becomes how to efficiently return the best
codebook given a linear penalty function f (L). In [13], it is
shown that the problem of source coding for linear penalties
can be reduced to a coin collector’s problem, and a recursive
Package-Merge algorithm is introduced to solve the problem
in linear time.
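A minimal sketch of this search is given below. It is not the Package-Merge implementation of [13]: the helper find_best simply enumerates all codeword-length vectors satisfying the Kraft inequality (feasible only for small alphabets) and assigns shorter codewords to more probable blocks, while hull_codebooks performs the convex-hull iteration of [12] using (12)-(13). The source distribution in the usage example is the one of this section with blocklength B = 2; the maximum codeword length is an arbitrary cap.

from itertools import combinations_with_replacement

def find_best(probs, alpha, beta, max_len=8):
    """Return codeword lengths minimizing alpha*E[L] + beta*E[L^2] over
    prefix-free codes (Kraft inequality). 'probs' must be sorted in
    decreasing order; the shortest lengths go to the most likely blocks."""
    best, best_val = None, float('inf')
    for lengths in combinations_with_replacement(range(1, max_len + 1), len(probs)):
        if sum(2.0 ** -l for l in lengths) > 1.0 + 1e-12:   # Kraft check
            continue
        EL = sum(p * l for p, l in zip(probs, lengths))
        EL2 = sum(p * l * l for p, l in zip(probs, lengths))
        val = alpha * EL + beta * EL2
        if val < best_val:
            best, best_val = lengths, val
    return best

def hull_codebooks(probs):
    """Convex-hull iteration: start from (alpha, beta) = (1, 0) (Huffman-type
    code) and (0, 1) (minimum second moment code), then search between
    neighbouring codebooks using eqs. (12)-(13)."""
    def moments(lengths):
        EL = sum(p * l for p, l in zip(probs, lengths))
        EL2 = sum(p * l * l for p, l in zip(probs, lengths))
        return EL, EL2
    c1, c2 = find_best(probs, 1.0, 0.0), find_best(probs, 0.0, 1.0)
    found, stack = {c1, c2}, [(c1, c2)]
    while stack:
        left, right = stack.pop()
        (el_l, el2_l), (el_r, el2_r) = moments(left), moments(right)
        alpha, beta = el2_l - el2_r, el_r - el_l            # eqs (12)-(13)
        if alpha <= 0 or beta <= 0:
            continue
        c3 = find_best(probs, alpha, beta)
        if c3 not in found:
            found.add(c3)
            stack += [(left, c3), (c3, right)]
    return sorted(found)

# Blocks of length B = 2 over {A, B, C} with P = (0.6, 0.3, 0.1).
probs = sorted((x * y for x in (0.6, 0.3, 0.1) for y in (0.6, 0.3, 0.1)), reverse=True)
for lengths in hull_codebooks(tuple(probs)):
    print(lengths)

Each returned length vector is a candidate codebook on the lower-left boundary of the hull; the age-optimal code is then the candidate minimizing the penalty (7) for the given R.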
Figure 5(a) depicts a numerical example of the average age
using age optimal code with different blocklength B. We use
the three symbols A, B, C with P (A) = 0.6, P (B) = 0.3 and
P (C) = 0.1. And the channel rate R is varied above H(X)
to vary the offered load. When R is large compared to H(X),
the average age grows almost linearly with the blocklength B.
Encoding with large blocklength is a losing proposition, since
what we gain by reducing the output rate of the encoder is
forfeited because the delay of the system is dominated by long
interarrival times of large blocks. Hence, the optimal strategy
in the high FIFO rate region is to choose the smallest possible
blocklength B. In contrast, as R decreases, the sharp transition
effect occurs earlier for smaller B since the corresponding
average code length is larger. Since the redundancy of block
coding decays with the blocklength B, the threshold of the transition approaches H(X)/R = 1 as B increases. We say that B is a valid blocklength for rate R if and only if R is larger than
the code rate using blocklength B. In this region of transition,
it is complicated to obtain the optimal blocklength analytically.
V. COMPARISON TO OPTIMAL ERROR EXPONENT CODE
It is shown in [11] that the optimal error exponent ES (R)
can be achieved by a prefix-free block coding scheme that uses
the method of types. For a message block of length B, the
encoder first describes the type of the message block τ using
O(|X| log2 B) bits, then represents the index of the realization
within this type by BH(τ ) bits, where H(τ ) is the entropy
of the type.
Figure 5(b) compares our age-optimal code to the coding
scheme using method of types. We use the same source
distribution as in Section 4 with blocklength B = 3. In this
example, we observe that the age optimal code outperforms
Huffman code when the offered load is high, implying that
Huffman code is nearly optimal in the low load region.
Although the type coding scheme achieves the largest error
exponent, it gives slightly larger average status age compared
to the other two block coding schemes. This is because
the type coding is asymptotically optimal in error, but the
minimization of average age requires us to choose small
blocklength since the term B/2 dominates in (3) when the
channel rate R is much larger than the source entropy H(X).
VI. CONCLUSION AND FUTURE WORK

We applied the status age analysis to a real-time lossless source coding system in which source symbols are encoded sequentially and sent to an interested recipient through an error-free channel, and showed that the timeliness metric is strongly connected to the end-to-end delay of the coding system. We connected the average age to the source coding error exponent with delay and discussed why the error exponent does not describe the delay in a non-asymptotic setup. Exploiting the quasi-linear property of the average age expression in block coding, we also proposed the age optimal block coding scheme that minimizes the average age. By comparing this scheme to the optimal coding scheme for error exponent which uses the method of types, we further showed that maximizing the error exponent is not equivalent to minimizing the average status age.

While we have focused here on toy examples, this work is a starting point for the application of status age analysis to real-time data compression. We examined how timely the streaming source coding system can be using lossless block coding schemes. We presented the connection between the age and error exponent numerically for the small blocklength regime, although the asymptotic behavior remains unknown as the blocklength becomes large. A primary reason is that the exact expression of the constant K of the error probability in (8) becomes too complicated when the blocklength gets large. Nevertheless, in practical settings of high speed networks, techniques for handling large blocklengths will be needed. Similarly, sources with memory and age-optimized universal coding schemes also merit attention. Finally, this work has shown that optimal real-time compression over a network depends strongly on the available network resources, even if the network is just a fixed-rate bit pipe. More realistic scenarios with shared network resources are also likely to be a rich source of unsolved problems.

Fig. 5. Two numerical examples: (a) age optimal code with different blocklength B (curves for B = 2, 3, 4); (b) comparison between the age optimal block code and the optimal error exponent code using the method of types with blocklength B = 3 (curves for the method of types, the Huffman code and the age optimal code). In both panels the average status age is plotted against the offered load H(X)/R.

APPENDIX

A. Proof of Proposition 1

For any lossless coding scheme, the delivery time Dk in (3) can also be defined as

Dk = min{ t | x̂k(n) = xk, for all n ≥ t }.   (14)

Using the fact that Dk − k only takes non-negative integer values, the differential time of the k-th symbol in the coding system can be written as

E[Dk − k] = Σ_{t=0}^{∞} Pr[Dk − k > t]
          = Σ_{t=0}^{∞} Pr{∃ x̂k(n) ≠ xk, for some n ≥ t + k}
          = Σ_{t=0}^{∞} Pr[ ∪_{n≥t+k} x̂k(n) ≠ xk ].   (15)

Following from the union bound of all the possible error events, we obtain

E[Dk − k] ≤ Σ_{t=0}^{∞} Σ_{n≥t+k} Pr[x̂k(n) ≠ xk].   (16)

Note that for any lossless block code in a point-to-point transmission system controlled by a FIFO buffer, the error probability of the whole sequence is equivalent to the symbol-wise error probability, since a source symbol is decoded only after all previous symbols were successfully decoded in advance. That is, Pr[x̂_1^k(n) ≠ x_1^k] = Pr[x̂k(n) ≠ xk]. Using the upper bound on the error exponent in (8), we have

E[Dk − k] ≤ Σ_{t=0}^{∞} K Σ_{δ=t}^{∞} 2^{−δ ES(R)} = K · 2^{2ES(R)} / (2^{ES(R)} − 1)².   (17)
B. Error Probability of Prefix Block Code Example

Let Bk be the number of bits in the buffer after the transition at time 2k. Then Bk forms a Markov chain, since the incoming codeword every two symbol times is either of length 1 or of length 4. That is, B_{k+1} = Bk − 2 with probability q = a², and B_{k+1} = Bk + 1 with probability 1 − q. Note that the boundary condition is Bk > 0. The stationary distribution of Bk is obtained as

μ_j = Λ η^j,   (18)

where Λ is the normalizer such that Σ_{j≥0} μ_j = 1 and η = −1/2 + (1/2)√(1 + 4(1 − q)/q). Thus we rewrite μ_j = Λ η^j = η^j − η^{j+1}. The stationary distribution exists iff η < 1, requiring q ∈ (1/3, 1]. Denote L_{k+1} as the next incoming codeword length after the buffer state Bk, and β_δ = ⌊3(δ − 1)/2⌋. Following the stationary distribution in (18), we can bound the symbol-wise error probability using the outage probability of the buffer and obtain

Pe ≤ Pr[L_{k+1} = 1] Pr[Bk > β_δ − 1] + Pr[L_{k+1} = 4] Pr[Bk > β_δ − 4]
   = 1                                                 for 0 ≤ δ < 1,
   = 1 − q + η^{−3/2} [q(1 − η)] · 2^{δ(3/2) log2 η}   for 1 ≤ δ < 3,
   = η^{−9/2} [1 − q(1 − η³)] · 2^{δ(3/2) log2 η}      for δ ≥ 3.   (19)
REFERENCES
[1] S. Kaul, R. Yates, and M. Gruteser, “Real-time status: How often should
one update?” in Proc. INFOCOM, Apr. 2012, pp. 2731–2735.
[2] R. Yates and S. Kaul, “Real-time status updating: Multiple sources,” in
Proc. IEEE Int. Symp. Inform. Theory, Jul. 2012, pp. 2666–2670.
[3] C. Kam, S. Kompella, and A. Ephremides, “Age of information under
random updates,” in Proc. IEEE Int. Symp. Inform. Theory, Jul. 2013,
pp. 66–70.
[4] M. Costa, M. Codreanu, and A. Ephremides, “Age of information with
packet management,” in Proc. IEEE Int. Symp. Inform. Theory, 2014,
pp. 1583–1587.
[5] L. Huang and E. Modiano, “Optimizing age-of-information in a multiclass queueing system,” in Proc. IEEE Int. Symp. Inform. Theory, Jun.
2015, pp. 1681–1685.
[6] S. Butner and M. Ghodoussi, “Transforming a surgical robot for human
telesurgery,” IEEE Trans. Robotics and Automation, vol. 19, no. 5, pp.
818–824, Oct. 2003.
[7] C. Chang and A. Sahai, “The error exponent with delay for lossless
source coding,” in IEEE Inf. Theory Workshop, Mar. 2006, pp. 252–
256.
[8] C. Chang and A. Sahai, “Delay-constrained source coding for a peak
distortion measure,” in Proc. IEEE Int. Symp. Inform. Theory, 2007, pp.
576–580.
[9] S. C. Draper, C. Chang, and A. Sahai, “Lossless coding for distributed
streaming sources,” IEEE Trans. Inf. Theory, vol. 60, no. 3, pp. 1447–
1474, 2014.
[10] K. Marshall and R. V. Evans, “Some inequalities in queuing,” Operations
Research, pp. 651–668, 1968.
[11] C. Chang, “Streaming source coding with delay,” Ph.D. dissertation, UC
Berkeley, 2007.
[12] L. L. Larmore, “Minimum delay codes,” SIAM Journal on Computing,
1989.
[13] M. B. Baer, “Source coding for quasiarithmetic penalties,” IEEE Trans.
Inf. Theory, vol. 52, no. 10, pp. 4380–4393, 2006.
Semantic Web 0 (0) 1
IOS Press
Towards a Question Answering System over the Semantic Web
Dennis Diefenbach a, Andreas Both b, Kamal Singh a and Pierre Maret a
a Université de Lyon, CNRS UMR 5516 Laboratoire Hubert Curien, France
b DATEV eG, Germany
arXiv:1803.00832v1 [] 2 Mar 2018
Abstract. Thanks to the development of the Semantic Web, a lot of new structured data has become available on the Web in
the form of knowledge bases (KBs). Making this valuable data accessible and usable for end-users is one of the main goals of
Question Answering (QA) over KBs. Most current QA systems query one KB, in one language (namely English). The existing
approaches are not designed to be easily adaptable to new KBs and languages.
We first introduce a new approach for translating natural language questions to SPARQL queries. It is able to query several KBs
simultaneously, in different languages, and can easily be ported to other KBs and languages. In our evaluation, the impact of
our approach is proven using 5 different well-known and large KBs: Wikidata, DBpedia, MusicBrainz, DBLP and Freebase as
well as 5 different languages namely English, German, French, Italian and Spanish. Second, we show how we integrated our
approach, to make it easily accessible by the research community and by end-users.
To summarize, we provided a conceptional solution for multilingual, KB-agnostic Question Answering over the Semantic Web.
The provided first approximation validates this concept.
Keywords: Question Answering, Multilinguality, Portability, QALD, SimpleQuestions
1. Introduction
Question Answering (QA) is an old research field
in computer science that started in the sixties [28]. In
the Semantic Web, a lot of new structured data has become available in the form of knowledge bases (KBs).
Nowadays, there are KBs about media, publications,
geography, life-science and more1 . The core purpose
of a QA system over KBs is to retrieve the desired information from one or many KBs, using natural language questions. This is generally addressed by translating a natural language question to a SPARQL query.
Current research does not address the challenge of
multilingual, KB-agnostic QA for both full and keyword questions (Table 1).
There are multiple reasons for that. Many QA approaches rely on language-specific tools (NLP tools),
e.g., SemGraphQA [2], gAnswer [51] and Xser [46].
Therefore, it is difficult or impossible to port them
to a language-agnostic system. Additionally, many
1 http://lod-cloud.net
approaches make particular assumptions on how the
knowledge is modelled in a given KB (generally referred to as “structural gap” [10]). This is the case of
AskNow [15] and DEANNA [47].
There are also approaches which are difficult to port
to new languages or KBs because they need a lot of
training data which is difficult and expensive to create.
This is for example the case of Bordes et al. [4]. Finally
there are approaches where it was not proven that they
scale well. This is for example the case of SINA [37].
In this paper, we present an algorithm that addresses
all of the above drawbacks and that can compete, in
terms of F-measure, with many existing approaches.
This publication is organized as follows. In section 2,
we present related works. In section 3 and 4, we describe the algorithm providing the foundations of our
approach. In section 5, we provide the results of our
evaluation over different benchmarks. In section 6, we
show how we implemented our algorithm as a service
so that it is easily accessible to the research community, and how we extended a series of existing services
QA system                                | Lang               | KBs                                            | Type
gAnswer [51] (QALD-3 Winner)             | en                 | DBpedia                                        | full
Xser [46] (QALD-4 & 5 Winner)            | en                 | DBpedia                                        | full
UTQA [34]                                | en, es, fa         | DBpedia                                        | full
Jain [25] (WebQuestions Winner)          | en                 | Freebase                                       | full
Lukovnikov [29] (SimpleQuestions Winner) | en                 | Freebase                                       | full
Ask Platypus (https://askplatyp.us)      | en                 | Wikidata                                       | full
WDAqua-core1                             | en, fr, de, it, es | Wikidata, DBpedia, Freebase, DBLP, MusicBrainz | full & key
Table 1
Selection of QA systems evaluated over the most popular benchmarks. We indicated their capabilities with respect to multilingual questions, different KBs and different typologies of questions (full = “well-formulated natural language questions”, key = “keyword questions”).
so that our approach can be directly used by end-users.
We conclude with section 7.
2. Related work
In the context of QA, a large number of systems
have been developed in the last years. For a complete
overview, we refer to [10]. Most of them were evaluated on one of the following three popular benchmarks: WebQuestions [3], SimpleQuestions [4] and
QALD2 .
WebQuestions contains 5810 questions that can be
answered by one reified statement. SimpleQuestions
contains 108442 questions that can be answered using a single, binary-relation. The QALD challenge versions include more complex questions than the previous ones, and contain between 100 and 450 questions, and are therefore, compared to the other, small
datasets.
The high number of questions of WebQuestions and
SimpleQuestions led to many supervised-learning approaches for QA. Especially deep learning approaches
became very popular in the recent years like Bordes
et al. [4] and Zhang et al. [49]. The main drawback
of these approaches is the training data itself. Creating a new training dataset for a new language or a new
KB might be very expensive. For example, Berant et
al. [3], report that they spent several thousands of dollars for the creation of WebQuestions using Amazon
Mechanical Turk. The problem of adapting these approaches to new dataset and languages can also be seen
by the fact that all these systems work only for English
questions over Freebase.
A list of the QA systems that were evaluated with
QALD-3, QALD-4, QALD-5, QALD-6 can be found
in Table 3. According to [10] less than 10% of the approaches were applied to more than one language and
5% to more than one KB. The reason is the heavy use
of NLP tools or NL features like in Xser [46], gAnswer [51] or QuerioDali [27].
The problem of QA in English over MusicBrainz3 was
proposed in QALD-1, in the year 2011. Two QA systems tackled this problem. Since then the MusicBrainz
KB4 completely changed. We are not aware of any QA
system over DBLP5 .
In summary, most QA systems work only in English
and over one KB. Multilinguality is poorly addressed
while portability is generally not addressed at all.
The fact that QA systems often reuse existing techniques and need several services to be exposed to the
end-user, leads to the idea of developing QA systems
in a modular way. At least four frameworks tried to
achieve this goal: QALL-ME [17], openQA [30], the
Open Knowledge Base and Question-Answering (OKBQA) challenge6 and Qanary [5, 12, 38]. We integrated our system as a Qanary QA component called
WDAqua-core1. We choose Qanary for two reasons.
First, it offers a series of off-the-shelf services related
to QA systems and second, it allows to freely configure
a QA system based on existing QA components.
3. Approach for QA over Knowledge Bases
In this section, we present our multilingual, KB-agnostic approach for QA. It is based on the observation that many questions can be understood from the semantics of the words in the question while the syntax of the question has less importance. For example, consider the question “Give me actors born in Berlin”. This question can be reformulated in many ways like “In Berlin were born which actors?” or as a keyword question “Berlin, actors, born in”. In this case by knowing the semantics of the words “Berlin”, “actors”, “born”, we are able to deduce the intention of the user. This holds for many questions, i.e. they can be correctly interpreted without considering the syntax as the semantics of the words is sufficient for them. Taking advantage of this observation is the main idea of our approach. The KB encodes the semantics of the words and it can tell what is the most probable interpretation of the question (w.r.t. the knowledge model described by the KB).
Our approach is decomposed in 4 steps: question expansion, query construction, query ranking and response decision. A conceptual overview is given in Figure 1. In the following, the processing steps are described. As a running example, we consider the question “Give me philosophers born in Saint-Etienne”. For the sake of simplicity, we use DBpedia as KB to answer the question. However, it is important to recognize that no assumptions either about the language or the KB are made. Hence, even the processing of the running example is language- and KB-agnostic.

Fig. 1. Conceptual overview of the approach

3.1. Expansion

Following a recent survey [10], we call a lexicalization a name of an entity, a property or a class. For example, “first man on the moon” and “Neil Armstrong” are both lexicalizations of dbr:Neil_Armstrong.
In this step, we want to identify all entities, properties and classes which the question could refer to. To achieve this, we use the following rules:

– All IRIs are searched whose lexicalization (up to stemming) is an n-gram N (up to stemming) in the question.
– If an n-gram N is a stop word (like “is”, “are”, “of”, “give”, . . . ), then we exclude the IRIs associated to it. This is due to the observation that the semantics are important to understand a question and the fact that stop words do not carry a lot of semantics. Moreover, by removing the stop words the time needed in the next step is decreased.

An example is given in Table 2. The stop words and the lexicalizations used for the different languages and KBs are described in section 5.1. In this part, we used a well-known Apache Lucene Index7 technology which allows fast retrieval, while providing a small disk and memory footprint.

2 http://www.sc.cit-ec.uni-bielefeld.de/qald/
3 https://musicbrainz.org
4 https://github.com/LinkedBrainz/MusicBrainz-R2RML
5 http://dblp.uni-trier.de
6 http://www.okbqa.org/
is to construct all possible queries containing the IRIs
in R which give a non-empty result-set. Let V be the
set of variables. Based on the complexity of the questions in current benchmarks, we restrict our approach
to queries satisfying 4 patterns:
SELECT / ASK var
WHERE { s1 s2 s3 . }
SELECT / ASK var
WHERE { s1 s2 s3 .
s4 s5 s6 . }
3.1. Expansion
Following a recent survey [10], we call a lexicalization, a name of an entity, a property or a class. For example, “first man on the moon” and “Neil Armstrong”
are both lexicalizations of dbr:Neil_Armstrong.
In this step, we want to identify all entities, properties and classes, which the question could refer to. To
achieve this, we use the following rules:
with
s1, ..., s6 ∈ R ∪ V
and
var ∈ {s1, ...,s6} ∩ V
7 https://lucene.apache.org
n   | start | end | n-gram        | resource
1   | 2     | 3   | philosophers  | dbrc:Philosophes
2   | 2     | 3   | philosophers  | dbr:Philosophes
3   | 2     | 3   | philosophers  | dbo:Philosopher
4   | 2     | 3   | philosophers  | dbrc:Philosophers
5   | 2     | 3   | philosophers  | dbr:Philosopher
6   | 2     | 3   | philosophers  | dbr:Philosophy
7   | 2     | 3   | philosophers  | dbo:philosophicalSchool
8   | 3     | 4   | born          | dbr:Born,_Netherlands
9   | 3     | 4   | born          | dbr:Born_(crater)
10  | 3     | 4   | born          | dbr:Born_auf_dem_Darß
11  | 3     | 4   | born          | dbr:Born,_Saxony-Anhalt
...
42  | 3     | 4   | born          | dbp:bornAs
43  | 3     | 4   | born          | dbo:birthDate
44  | 3     | 4   | born          | dbo:birthName
45  | 3     | 4   | born          | dbp:bornDay
46  | 3     | 4   | born          | dbp:bornYear
47  | 3     | 4   | born          | dbp:bornDate
48  | 3     | 5   | born in       | dbp:bornIn
49  | 3     | 5   | born in       | dbo:birthPlace
50  | 3     | 5   | born in       | dbo:hometown
52  | 5     | 6   | saint         | dbr:SAINT_(software)
53  | 5     | 6   | saint         | dbr:Saint
54  | 5     | 6   | saint         | dbr:Boxers_and_Saints
55  | 5     | 6   | saint         | dbr:Utah_Saints
56  | 5     | 6   | saint         | dbr:Saints,_Luton
57  | 5     | 6   | saint         | dbr:Baba_Brooks
58  | 5     | 6   | saint         | dbr:Battle_of_the_Saintes
59  | 5     | 6   | saint         | dbr:New_York_Saints
...
106 | 5     | 6   | saint         | dbp:saintPatron
107 | 5     | 6   | saint         | dbp:saintsDraft
108 | 5     | 6   | saint         | dbp:saintsSince
109 | 5     | 6   | saint         | dbo:patronSaint
110 | 5     | 6   | saint         | dbp:saintsCollege
111 | 5     | 6   | saint         | dbp:patronSaintOf
112 | 5     | 6   | saint         | dbp:patronSaint(s)
113 | 5     | 6   | saint         | dbp:patronSaint’sDay
114 | 5     | 7   | saint etienne | dbr:Saint_Etienne_(band)
115 | 5     | 7   | saint etienne | dbr:Saint_Etienne
116 | 5     | 7   | saint etienne | dbr:Saint-Étienne
117 | 6     | 7   | etienne       | dbr:Étienne
Table 2
Expansion step for the question “Give me philosophers born in Saint Étienne”. The first column enumerates the candidates that were found. Here, 117 possible entities, properties and classes were found from the question. The second, third and fourth columns indicate the position of the n-gram in the question and the n-gram itself. The last column is for the associated IRI. Note that many possible meanings are considered: line 9 says that “born” may refer to a crater, line 52 that “saint” may refer to a software and line 114 that the string “Saint Étienne” may refer to a band.
, i.e. all queries containing one or two triple patterns that can be created starting from the IRIs in R. Moreover, for entity linking, we add the following two patterns:

SELECT ?x
WHERE { VALUES ?x {iri} . }

SELECT ?x
WHERE { VALUES ?x {iri} .
        iri ?p iri1 . }

with iri, iri1 ∈ R, i.e. all queries returning directly one of the IRIs in R with possibly one additional triple. Note that these last queries just give back directly an entity and should be generated for a question like: “What is Apple Company?” or “Who is Marie Curie?”.
An example of generated queries is given in Figure 2. The main challenge is the efficient construction of these SPARQL queries. The main idea is to perform in the KB graph a breadth-first search of depth 2 starting from every IRI in R. While exploring the KB for all IRIs rj ∈ R (where rj ≠ ri) the distance dri,rj between two resources is stored. These numbers are used when constructing the queries shown above. For a detailed algorithm of the query construction phase, please see section 4. Concluding, in this section, we computed a set of possible SPARQL queries (candidates). They are driven by the lexicalizations computed in section 3.1 and represent the possible intentions expressed by the question of the user.
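The following sketch illustrates the enumeration of such query candidates for a small set R. It is a simplified stand-in for the actual construction: only the entity-linking templates and the single-triple-pattern template are generated, duplicates are not removed, and the pruning to queries with non-empty result sets (which relies on the distances of section 4 or on a SPARQL endpoint) is omitted; the IRIs in R are illustrative.

from itertools import product

def candidate_queries(R, max_vars=2):
    """Enumerate SPARQL candidates from a set R of IRIs (Section 3.2 templates)."""
    variables = ["?v%d" % i for i in range(max_vars)]
    terms = list(R) + variables
    queries = []
    # Entity-linking templates: return one IRI of R directly.
    for iri in R:
        queries.append("SELECT ?x WHERE { VALUES ?x { %s } }" % iri)
        for other in R:
            if other != iri:
                queries.append(
                    "SELECT ?x WHERE { VALUES ?x { %s } . %s ?p %s . }"
                    % (iri, iri, other))
    # Single triple pattern; the projected variable must occur in the pattern.
    for s, p, o in product(terms, repeat=3):
        used_vars = [t for t in (s, p, o) if t in variables]
        if used_vars:
            queries.append(
                "SELECT %s WHERE { %s %s %s . }" % (used_vars[0], s, p, o))
    return queries

R = ["dbo:Philosopher", "dbo:birthPlace", "dbr:Saint-Étienne"]
qs = candidate_queries(R)
print(len(qs))
print(qs[0])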
3.3. Ranking
Now the computed candidates need to be ordered by
their probability of answering the question correctly.
Hence, we rank them based on the following features:
– Number of the words in the question which are
covered by the query. For example, the first query
in Figure 2 is covering two words (“Saint” and
“born”).
– SELECT DISTINCT ?y WHERE {
dbr:Saint_(song) ?p ?x .
?x dbo:hometown ?y . }
– SELECT ?x {
VALUES ?x { dbr:Saint_Etienne_(band) } }
– SELECT DISTINCT ?y WHERE {
?x dbo:birthPlace dbr:Saint-Etienne .
?x dbo:birthDate ?y . }
– SELECT DISTINCT ?y WHERE {
?x ?p dbr:Philosophy .
?x dbo:birthDate ?y . }
Fig. 2. Some of the 395 queries constructed for the question “Give
me philosophers born in Saint Etienne.”. Note that all queries could
be semantically related to the question. The second one is returning
“Saint-Etienne” as a band, the third one the birth date of people born
in the city of “Saint-Etienne” and the forth one the birth date of
persons related to philosophy.
– The edit distance of the label of the resource and
the word it is associated to. For example, the edit
distance between the label of dbp:bornYear
(which is “born year”) and the word “born” is 5.
– The sum of the relevance of the resources, (e.g.
the number of inlinks and the number of outlinks
of a resource). This is a knowledge base independent choice, but it is also possible to use a specific
score for a KB (like page-rank).
– The number of variables in the query.
– The number of triples in the query.
If no training data is available, then we rank the queries
using a linear combination of the above 5 features,
where the weights are determined manually. Otherwise we assume a training dataset of questions together
with the corresponding answers set, which can be used
to calculate the F-measure for each of the SPARQL
query candidates. As a ranking objective, we want to
order the SPARQL query candidates in descending order with respect to the F-measure. In our exemplary
implementation we rank the queries using RankLib8
with Coordinate Ascent [31]. At test time the learned
model is used to rank the queries; the top-ranked query is executed against a SPARQL endpoint, and the result is computed. An example is given in Figure 3. Note that we do not use syntactic features. However, it is possible to use them to further improve the ranking.

8 https://sourceforge.net/p/lemur/wiki/RankLib/
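As an illustration of the untrained variant (a hand-weighted linear combination of the five features above), consider the sketch below. The weights and the feature values of the two candidate queries are invented for the example; in the trained setting this scoring function is replaced by the model learned with RankLib.

def score(query, question_words, weights=(1.0, -0.5, 0.2, -0.3, -0.3)):
    """Linear ranking score over the five features of Section 3.3.
    'query' is a dict holding the features of one generated SPARQL query."""
    w_cov, w_edit, w_rel, w_vars, w_triples = weights
    coverage = len(query["covered_words"]) / max(1, len(question_words))
    return (w_cov * coverage
            + w_edit * query["edit_distance"]
            + w_rel * query["relevance"]      # e.g. based on in-/out-links
            + w_vars * query["num_variables"]
            + w_triples * query["num_triples"])

candidates = [
    {"covered_words": {"philosophers", "born", "saint", "etienne"},
     "edit_distance": 1, "relevance": 7.2, "num_variables": 1, "num_triples": 2},
    {"covered_words": {"born", "saint"},
     "edit_distance": 4, "relevance": 5.0, "num_variables": 2, "num_triples": 2},
]
question = "give me philosophers born in saint etienne".split()
best = max(candidates, key=lambda q: score(q, question))
print(best)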
3.4. Answer Decision
The computations in the previous section lead to a
list of ranked SPARQL queries candidates representing our possible interpretations of the user’s intentions.
Although the quality of this processing step is high
(as shown in several experiments), an additional confidence score is computed. We construct a model based
on logistic regression. We use a training set consisting
of SPARQL queries and the labels true or false. True
indicates if the F-score of the SPARQL query is bigger than a threshold θ1 or false otherwise. Once the
model is trained, it can compute a confidence score
pQ ∈ [0, 1] for a query Q. In our exemplary implementation we assume a correctly ordered list of SPARQL
query candidates computed in section 3.3. Hence, it
only needs to be checked whether pQ1 > θ2 is true for
the first ranked query Q1 of the SPARQL query candidates, or otherwise it is assumed that the whole candidate list is not reflecting the user’s intention. Hence,
we refuse to answer the question. We answer the question if it is above a threshold θ2 otherwise we do
not answer it. Note that pQ can be interpreted as the
confidence that the QA system has in the generated
SPARQL query Q, i.e. in the generated answer.
3.5. Multiple KBs
Note that the approach can also be extended, as it is,
to multiple KBs. In the query expansion step, one has
just to take in consideration the labels of all KBs. In
the query construction step, one can consider multiple
KBs as one graph having multiple unconnected components. The query ranking and answer decision step
are literally the same.
3.6. Discussion
Overall, we follow a combinatorial approach with
efficient pruning, that relies on the semantics encoded
in the underlying KB.
In the following, we want to emphasize the advantages
of this approach using some examples.
– Joint disambiguation of entities and relations:
For example, for interpreting the question “How
6
D. Diefenbach et al. / Towards a Question Answering System over the Semantic Web
1. SELECT DISTINCT ?x WHERE {
?x dbp:birthPlace dbr:Saint-Etienne .
?x rdf:type dbo:Philosopher . }
2. SELECT DISTINCT ?y WHERE {
?x
?x
dbo:birthPlace dbr:Saint-Etienne .
dbo:philosophicalSchool ?y . }
3. SELECT DISTINCT ?x WHERE {
?x
dbp:birthPlace dbr:Saint-Etienne . }
4. SELECT DISTINCT ?x WHERE {
?x
dbo:hometown dbr:Saint-Etienne . }
Fig. 3. The top 4 generated queries for the question “Give me
philosophers born in Saint Étienne.”. (1) is the query that best
matches the question; (2) gives philosophical schools of people born
in Saint-Étienne; (3)(4) give people born in Saint-Étienne or that live
in Saint-Étienne. The order can be seen as a decreasing approximation to what was asked.
many inhabitants has Paris?” between the hundreds of different meanings of “Paris” and “inhabitants” the top ranked queries contain the resources called “Paris” which are cities, and the
property indicating the population, because only
these make sense semantically.
– Portability to different KBs: One problem in
QA over KBs is the semantic gap, i.e. the difference between how we think that the knowledge is
encoded in the KB and how it actually is. For example, in our approach, for the question “What is
the capital of France?”, we generate the query
SELECT ?x WHERE {
dbr:France dbp:capital ?x .
}
which probably most users would have expected,
but also the query
SELECT ?x {
VALUES ?x {
dbr:List_of_capitals_of_France
}
}
which refers to an overview article in Wikipedia
about the capitals of France and that most of the
users would probably not expect. This important feature makes it possible to port the approach to different KBs, since it is independent of how the knowledge is encoded.
– Ability to bridge over implicit relations: We are
able to bridge over implicit relations. For example, given “Give me German mathematicians” the
following query is computed:
SELECT DISTINCT ?x WHERE {
?x ?p1 dbr:Mathematician .
?x ?p2 dbr:Germany .
}
Here ?p1 is:
• dbo:field
• dbo:occupation,
• dbo:profession
and ?p2 is:
• dbo:nationality,
• dbo:birthPlace,
• dbo:deathPlace,
• dbo:residence.
Note that all these properties could be intended
for the given question.
– Easy to port to new languages: The only parts
where the language is relevant are the stop word
removal and stemming. Since these are very easy
to adapt to new languages, one can port the approach easily to other languages.
– Permanent system refinement: It is possible to
improve the system over time. The system generates multiple queries. This fact can be used to
easily create new training datasets, as shown
in [11]. Using these datasets one can refine the
ranker to perform better on the asked questions.
– System robust to malformed questions and
keyword questions: There are no NLP tools used
in the approach which makes it very robust to
malformed questions. For this reason, keyword
questions are also supported.
A disadvantage of our exemplary implementation is
that the identification of relations relies on a dictionary. Note that non-dictionary-based methods follow one of two strategies. Either they try to learn ways to express the relation from big training corpora (like in [4]), so that the problem is shifted to creating suitable training sets, or text corpora are used to either extract lexicalizations for properties (like in [3]) or learn
word-embeddings (like in [22]). Hence, possible improvements might be applied to this task in the future.
4. Fast candidate generation
In this section, we explain how the SPARQL queries
described in section 3.2 can be constructed efficiently.
Let R be a set of resources. We consider the KB as a
directed labeled graph G:
Definition 1. (Graph) A directed labeled graph is an ordered triple G = (V, E, f ), such that:
– V is a non-empty set, called the vertex set;
– E is a set, called the edge set, such that E ⊂ {(v, w) : v, w ∈ V}, i.e. a subset of the pairs of V;
– for a set L, called the label set, f is a function f : E → L, i.e. a function that assigns to each edge a label p ∈ L. We indicate an edge with label p as e = (v, p, w).
To compute the pairwise distance in G between every resource in R, we do a breadth-first search from every resource in R in an undirected way (i.e. we traverse
the graph in both directions).
We define a distance function d as follows. Assume
we start from a vertex r and find the following two
edges e1 = (r, p1 , r1 ), e2 = (r1 , p2 , r2 ). We say that
dr,p1 = 1, dr,r1 = 2, dr,p2 = 3 and so on. When an edge
is traversed in the opposite direction, we add a minus
sign. For example, given the edges e1 = (r, p1 , r1 ) and
e2 = (r2 , p2 , r1 ), we say dr,p2 = −3. For a vertex or
edge r, and a variable x we artificially set dr,x to be any
possible integer number. Moreover, we set d x,y = dy,x
for any x, y. The algorithm to compute these numbers
can be found in Algorithm 1.
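The following in-memory sketch illustrates the first traversal step of this bookkeeping (the graph representation and names are assumptions made for the example; the actual implementation performs the traversal over an HDT index, and Algorithm 1 continues one step further):

# Sketch: signed distances for elements of R after one traversal step.
def first_step_distances(triples, R):
    d = {}
    for (s, p, o) in triples:
        if s in R:                      # edge leaves s: positive distances
            if p in R: d[(s, p)] = 1
            if o in R: d[(s, o)] = 2
        if o in R:                      # edge reached against its direction: negative
            if p in R: d[(o, p)] = -1
            if s in R: d[(o, s)] = -2
    return d

triples = [("dbr:France", "dbp:capital", "dbr:Paris")]
R = {"dbr:France", "dbp:capital", "dbr:Paris"}
print(first_step_distances(triples, R))
# {('dbr:France', 'dbp:capital'): 1, ('dbr:France', 'dbr:Paris'): 2,
#  ('dbr:Paris', 'dbp:capital'): -1, ('dbr:Paris', 'dbr:France'): -2}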
The algorithm of our exemplary implementation
simply traverses the graph starting from the nodes in
R in a breadth-first search manner and keeps track
of the distances as defined above. The breadth-first
search is done by using HDT [16] as an indexing structure9 . Note that HDT was originally developed as an
exchange format for RDF files that is queryable. A
rarely mentioned feature of HDT is that it is perfectly
suitable for performing breadth-first search operations
over RDF data. In HDT, the RDF graph is stored as
an adjacency list which is an ideal data structure for
breadth-first search operations. This is not the case for
traditional triple-stores. The use of HDT at this point
9 https://www.w3.org/Submission/2011/03/
Data: Graph G = (V, E, f) and a set R of edges and labels
Result: The pairwise distance between elements in R
for r ∈ R ∩ V do
    for e1 = (r, p1, r1) ∈ E do
        if p1 ∈ R then d_{r,p1} = 1; if r1 ∈ R then d_{r,r1} = 2
        for e2 = (r1, p2, r2) ∈ E do
            if p2 ∈ R then d_{r,p2} = 3; if r2 ∈ R then d_{r,r2} = 4
            if p1, p2 ∈ R then d_{p1,p2} = 2; if p1, r2 ∈ R then d_{p1,r2} = 3
        end
        for e2 = (r2, p2, r1) ∈ E do
            if p2 ∈ R then d_{r,p2} = −3; if r2 ∈ R then d_{r,r2} = −4
            if p1, p2 ∈ R then d_{p1,p2} = −2; if p1, r2 ∈ R then d_{p1,r2} = −3
        end
    end
    for e1 = (r1, p1, r) ∈ E do
        if p1 ∈ R then d_{r,p1} = −1; if r1 ∈ R then d_{r,r1} = −2
        for e2 = (r1, p2, r2) ∈ E do
            if p2 ∈ R then d_{r,p2} = 3; if r2 ∈ R then d_{r,r2} = 4
            if p1, p2 ∈ R then d_{p1,p2} = 2; if p1, r2 ∈ R then d_{p1,r2} = 3
        end
        for e2 = (r2, p2, r1) ∈ E do
            if p2 ∈ R then d_{r,p2} = 3; if r2 ∈ R then d_{r,r2} = 4
            if p1, p2 ∈ R then d_{p1,p2} = 2; if p1, r2 ∈ R then d_{p1,r2} = 3
        end
    end
end
Algorithm 1: Algorithm to compute the pairwise distance between every resource in a set R appearing in a KB.
is key for two reasons, (1) the performance of the
breadth-first search operations, and (2) the low footprint of the index in terms of disk and memory space.
Roughly, a 100 GB RDF dump can be compressed to
a HDT file of a size of approx. 10 GB [16].
Based on the numbers above, we now want to construct
all triple patterns with K triples and one projection
variable recursively. Given a triple pattern T , we only
Data: Graph G = (V, E, f), a set R of vertices and edges, and their pairwise distance d
Result: All connected triple patterns in G from a set R of vertices and edges with maximal K triple patterns
L = ∅              # list of triple patterns
V_{s,o} = ∅        # set of variables in subject or object position
V_p = ∅            # set of variables in predicate position
k = 0
Function generate(L, k)
    for s1 ∈ (R ∩ V) ∪ V_{s,o} ∪ {x_{k,1}} do
        for s2 ∈ (R ∩ P) ∪ V_p ∪ {x_{k,2}} do
            for s3 ∈ (R ∩ V) ∪ V_{s,o} ∪ {x_{k,3}} do
                if k = 0 ∧ d_{s2,s3} = −1 ∧ d_{s1,s2} = 1 ∧ d_{s1,s3} = 2 then L ← L ∪ {(s1, s2, s3)}
                for T ∈ L(k) do
                    b1 = true; b2 = true; b3 = true; b4 = true
                    for (t1, t2, t3) ∈ T do
                        if not (s1 = t1 ∧ d_{t1,s2} = 2 ∧ d_{t1,s3} = 3 ∧ d_{t2,s2} = −2 ∧ d_{t2,s3} = −3 ∧ d_{t3,s2} = −3 ∧ d_{t3,s3} = −4) then b1 = false
                        if not (s1 = t3 ∧ d_{t1,s2} = 3 ∧ d_{t1,s3} = 4 ∧ d_{t2,s2} = 2 ∧ d_{t2,s3} = 3 ∧ d_{t3,s2} = 1 ∧ d_{t3,s3} = 2) then b2 = false
                        if not (s3 = t1 ∧ d_{t1,s2} = −1 ∧ d_{t1,s3} = −4 ∧ d_{t2,s2} = −2 ∧ d_{t2,s3} = −3 ∧ d_{t3,s2} = −1 ∧ d_{t3,s3} = −2) then b3 = false
                        if not (s3 = t3 ∧ d_{t1,s2} = −3 ∧ d_{t1,s3} = 2 ∧ d_{t2,s2} = −2 ∧ d_{t2,s3} = −1 ∧ d_{t3,s2} = −1) then b4 = false
                    end
                    if b1 = true ∨ b2 = true ∨ b3 = true ∨ b4 = true then
                        L ← L ∪ (T ∪ (s1, s2, s3));
                        V_{s,o} ← V_{s,o} ∪ {s1, s3};
                        V_p ← V_p ∪ {s2}
                    end
                    if k ≠ K then
                        return generate(L, k + 1)
                    end
                end
            end
        end
    end
Algorithm 2: Recursive algorithm to create all connected triple patterns from a set R of resources with maximal K triple patterns. L contains the triple patterns created recursively and L(k) indicates the triple patterns with exactly k triples. Note that the “if not” conditions very often are not fulfilled. This guarantees the speed of the process.
want to build connected triple-pattern while adding
triples to T . This can be done recursively using the algorithm described in Algorithm 2. Note that thanks to
the numbers collected during the breadth-first search
operations, this can be performed very fast. Once the
triple patterns are constructed, one can choose any of
the variables, which are in subject or object position,
as a projection variable.
The decision to generate a SELECT or an ASK query is made depending on some regex expressions over the beginning of the question.
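A possible regex-based decision is sketched below; the concrete expressions used by the implementation are not spelled out here, so the pattern is only an assumption:

import re

ASK_PATTERN = re.compile(r"^(is|are|was|were|do|does|did)\b", re.IGNORECASE)

def query_form(question):
    # Yes/no style questions become ASK queries, everything else a SELECT.
    return "ASK" if ASK_PATTERN.match(question.strip()) else "SELECT"

print(query_form("Did Elvis Presley have children?"))              # ASK
print(query_form("Give me philosophers born in Saint Etienne."))   # SELECT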
5. Evaluation
To validate the approach w.r.t. multilinguality, portability and robustness, we evaluated our approach using multiple benchmarks for QA that appeared in the
last years. The different benchmarks are not comparable and they focus on different aspects of QA. For
example SimpleQuestions focuses on questions that
can be solved by one simple triple-pattern, while LCQuAD focuses on more complex questions. Moreover,
the QALD questions address different challenges including multilinguality and the use of keyword questions. Unlike previous works, we do not focus on one
benchmark, but we analyze the behaviour of our approach under different scenarios. This is important, because it shows that our approach is not adapted to one
particular benchmark, as it is often done by existing
QA systems, and proves its portability.
We tested our approach on 5 different datasets namely
Wikidata10 , DBpedia11 , MusicBrainz12 , DBLP13 and
Freebase14 . Moreover, we evaluated our approach on
five different languages namely: English, German,
French, Italian and Spanish. First, we describe how we
selected stop words and collected lexicalizations for
the different languages and KBs, then we describe and
discuss our results.
5.1. Stop Words and lexicalizations
As stop words, we use the lists, for the different languages, provided by Lucene, together with some words
which are very frequent in questions like “what”,
“which”, “give”.
Depending on the KB, we followed different strategies to collect lexicalizations. Since Wikidata has
a rich number of lexicalizations, we simply took
all lexicalizations associated to a resource through
rdfs:label15 , skos:prefLabel16 and skos:
altLabel. For DBpedia, we only used the English
DBpedia, where first all lexicalizations associated to
a resource through the rdfs:label property were
collected. Secondly, we followed the disambiguation
and redirect links to get additional ones and took also
into account available demonyms dbo:demonym
(i.e. to dbr:Europe we associate also the lexicalization “European”). Thirdly, by following the interlanguage links, we associated the labels from the
other languages to the resources. DBpedia properties are poorly covered with lexicalizations, especially
10 https://www.wikidata.org/
11 http://dbpedia.org
12 https://musicbrainz.org
when compared to Wikidata. For example, the property dbo:birthPlace has only one lexicalization
namely “birth place”, while the corresponding property over Wikidata P19 has 10 English lexicalizations
like “birthplace”, “born in”, “location born”, “birth
city”. In our exemplary implementation two strategies
were implemented. First, since we aim at a QA system for the Semantic Web, we can also take into account interlinkings between properties of different KBs, s.t. lexicalizations are merged from all KBs currently
considered. There, the owl:sameAs links from DBpedia relations to Wikidata are used and every lexicalization present in Wikidata is associated to the corresponding DBpedia relation. Secondly, the DBpedia
abstracts are used to find more lexicalizations for the
relations. To find new lexicalizations of a property p
we follow the strategy proposed by [18]. We extracted
from the KB the subject-object pairs (x,y) that are connected by p. Then the abstracts are scanned and all
sentences are retrieved which contain both label(x)
and label(y). At the end, the segments of text between
label(x) and label(y), or label(y) and label(x) are extracted. We rank the extracted text segments and we
choose the most frequent ones. This was done only for
English.
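A simplified sketch of this extraction (sentence splitting and names are assumptions; the implementation follows the strategy of [18]):

from collections import Counter

def property_lexicalizations(pairs, abstracts, label, top_n=10):
    # pairs: subject-object pairs (x, y) connected by the property p;
    # label(r) returns a lexicalization of resource r.
    segments = Counter()
    for (x, y) in pairs:
        lx, ly = label(x), label(y)
        for text in abstracts:
            for sentence in text.split("."):
                if lx in sentence and ly in sentence:
                    for a, b in ((lx, ly), (ly, lx)):
                        start, end = sentence.find(a), sentence.find(b)
                        if 0 <= start < end:
                            segments[sentence[start + len(a):end].strip()] += 1
    return [s for s, _ in segments.most_common(top_n)]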
For MusicBrainz we used the lexicalizations attached
to purl:title17 , foaf:name18 , skos:altLabel
and rdfs:label. For DBLP only the one attached
to rdfs:label. Note that MusicBrainz and DBLP contain only a few properties. We aligned them manually
with Wikidata and moved the lexicalizations from one
KB to the other. The mappings can be found under
http://goo.gl/ujbwFW and http://goo.gl/ftzegZ respectively. This took in total 1 hour of manual work.
For Freebase, we considered the lexicalizations attached to rdfs:label. We also followed the few
available links to Wikidata. Finally, we took the 20
most prominent properties in the training set of the
SimpleQuestions benchmark and looked at the lexicalizations of them in the first 100 questions of SimpleQuestions. We extracted manually the lexicalizations
for them. This took 1 hour of manual work. We did not
use the other (75810 training and 10845 validation)
questions, i.e. unlike previous works we only took a
small fraction of the available training data.
13 http://dblp.uni-trier.de
14 https://developers.google.com/freebase/
15 rdfs: http://www.w3.org/2000/01/rdf-schema#
16 skos: http://www.w3.org/2004/02/skos/core#
17 purl: http://purl.org/dc/elements/1.1/
18 foaf: http://xmlns.com/foaf/
5.2. Experiments
To show the performance of the approach on different scenarios, we benchmarked it using the following
benchmarks.
5.2.1. Benchmarks
QALD: We evaluated our approach using the QALD
benchmarks. These benchmarks are well suited to assess the performance on multiple languages and over both full natural language questions and keyword questions. We
followed the metrics of the original benchmarks. Note
that the metrics changed in QALD-7. The results are
given in Table 3 together with state-of-the-art systems.
To find these, we used Google Scholar to select all publications about QA systems that cited one of the QALD
challenge publications. Note that, in the past, QA systems were evaluated only on one or two of the QALD
benchmarks. We provide, for the first time, an estimation of the differences between the benchmark series.
Over English, we outperformed 90% of the proposed
approaches. We do not beat Xser [46] and UTQA [34].
Note that these systems required additional training data beyond that provided in the benchmark, which came at a significant cost in terms of manual effort.
Moreover, the robustness of these systems over keyword questions is probably not guaranteed. We cannot
prove this claim because for these systems neither the
source code nor a web-service is available.
Due to the manual effort required to do an error analysis for all benchmarks and the limited space, we restricted ourselves to the QALD-6 benchmark. The error sources
are the following. 40% are due to lexical gap (e.g.
for “Who played Gus Fring in Breaking Bad?” the
property dbo:portrayer is expected), 28% come
from wrong ranking, 12% are due to the missing support of superlatives and comparatives in our implementation (e.g. “Which Indian company has the most
employees?”), 9% from the need of complex queries
with unions or filters (e.g. the question “Give me a
list of all critically endangered birds.” requires a filter on dbo:conservationStatus equal “CR”),
6% come from out of scope questions (i.e. question
that should not be answered), 2% from too ambiguous
questions (e.g. “Who developed Slack?” is expected to
refer to a “cloud-based team collaboration tool” while
we interpret it as “linux distribution”). One can see that
keyword queries always perform worst as compared
to full natural language queries. The reason is that the
formulation of the keyword queries does not allow to
decide if the query is an ASK query or if a COUNT
is needed (e.g. “Did Elvis Presley have children?” is
formulated as “Elvis Presley, children”). This means
that we automatically get these questions wrong.
To show the performance over Wikidata, we consider
the QALD-7 task 4 training dataset. This originally
provided only English questions. The QALD-7 task 4
training dataset reuses questions over DBpedia from
previous challenges where translations in other languages were available. We moved these translations to
the dataset. The results can be seen in Table 4. Except
for English, keyword questions are easier than full natural language questions. The reason is the formulation
of the questions. For keyword questions the lexical gap
is smaller. For example, the keyword question corresponding to the question “Qui écrivit Harry Potter?”
is “écrivain, Harry Potter”. Stemming does not suffice
to map “écrivit” to “écrivain”, lemmatization would
be needed. This problem is much smaller for English,
where the effect described over DBpedia dominates.
We can see that the best performing language is English, while the worst performing language is Italian.
This is mostly related to the smaller number of lexicalizations for Italian. Note that the performance of the
QA approach over Wikidata correlates with the number of lexicalizations for resources and properties for
the different languages as described in [26]. This indicates that the quality of the data, in different languages, directly affects the performance of the QA system. Hence, we can derive that our results will probably improve as the data quality is increased. Finally, we outperform the presented QA system over this
benchmark.
SimpleQuestions: SimpleQuestions contains 108442
questions that can be solved using one triple pattern.
We trained our system using the first 100 questions
in the training set. The results of our system, together
with the state-of-the-art systems are presented in Table 5. For this evaluation, we restricted the generated queries with one triple-pattern. The system performance is 14% below the state-of-the-art. Note that
we achieve this result by considering only 100 of the
75810 questions in the training set, and investing 1
hour of manual work for creating lexicalizations for
properties manually. Concretely, instead of generating
a training dataset with 80,000 questions, which can cost several thousands of euros, we invested 1 hour of manual work with the result of losing (only) 14% in
accuracy!
Note that the SimpleQuestions dataset is highly skewed
QA system
Lang
Type
Total
P
R
F
Runtime
Ref
QA system
Lang
Type
Total
QALD-3
P
R
F
Runtime
Ref
QALD-7
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
gAnswer [51]∗
WDAqua-core1
WDAqua-core1
en
en
de
de
fr
en
fr
es
full
key
key
full
key
full
full
full
100
100
100
100
100
100
100
100
0.64
0.71
0.79
0.79
0.83
0.40
0.70
0.77
0.42
0.37
0.31
0.28
0.27
0.40
0.26
0.24
0.51
0.48
0.45
0.42
0.41
0.40
0.38
0.37
1.01
0.79
0.22
0.30
0.26
≈1s
0.37
0.27
[51]
-
WDAqua-core1
WDAqua-core1
WDAqua-core1
RTV [19]
Intui2 [13]
SINA [37]∗
DEANNA [47]∗
SWIP [35]
Zhu et al. [50]∗
it
it
es
en
en
en
en
en
en
full
key
key
full
full
full
full
full
full
100
100
100
99
99
100
100
99
99
0.79
0.84
0.80
0.32
0.32
0.32
0.21
0.16
0.38
0.23
0.23
0.23
0.34
0.32
0.32
0.21
0.17
0.42
0.36
0.36
0.36
0.33
0.32
0.32
0.21
0.17
0.38
0.30
0.24
0.23
≈ 10-20s
≈ 1-50 s
-
[6]
[6]
[37]
[51]
[6]
[50]
Xser [46]
WDAqua-core1
WDAqua-core1
gAnswer [51]
CASIA [24]
WDAqua-core1
WDAqua-core1
en
en
en
en
en
de
fr
full
key
full
full
full
key
key
50
50
50
50
50
50
50
0.72
0.76
0.56
0.37
0.32
0.92
0.92
0.71
0.40
0.30
0.37
0.40
0.20
0.20
0.72
0.52
0.39
0.37
0.36
0.33
0.33
0.32s
0.46s
0.973 s
0.04s
0.06s
[42]
[42]
[42]
-
WDAqua-core1
WDAqua-core1
en
en
full
key
100
100
0.37
0.35
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
Intui3 [14]
ISOFT [33]
it
es
de
it
es
fr
en
en
key
key
full
full
full
full
full
full
50
50
50
50
50
50
50
50
0.92
0.92
0.90
0.92
0.90
0.86
0.23
0.21
0.20
0.20
0.20
0.20
0.20
0.18
0.25
0.26
0.33
0.33
0.32
0.32
0.32
0.29
0.24
0.23
0.04s
0.05s
0.06s
0.16s
0.06s
0.09s
-
[42]
[42]
WDAqua-core1
Sorokin et al. [39]
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
es
en
de
fr
fr
es
de
it
key
full
key
key
full
full
full
full
100
100
100
100
100
100
100
100
Hakimov [23]∗
en
full
50
0.52
0.13
0.21
-
[23]
WDAqua-core1
it
key
100
QALD-4
QALD-5
Xser [46]
UTQA [34]
UTQA [34]
UTQA [34]
WDAqua-core1
WDAqua-core1
en
en
es
fs
en
en
full
full
full
full
full
key
50
50
50
50
50
50
0.74
0.55
0.53
0.56
0.60
0.72
0.53
0.51
0.41
0.27
0.73
0.65
0.54
0.52
0.47
0.37
0.62s
0.50s
[43]
[34]
[34]
[34]
-
AskNow[15]
QAnswer[36]
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
en
en
de
de
fr
fr
it
it
full
full
full
key
full
key
full
key
50
50
50
50
50
50
50
50
0.32
0.34
0.92
0.90
0.90
0.90
0.88
0.90
0.34
0.26
0.16
0.16
0.16
0.16
0.18
0.16
0.33
0.29
0.28
0.28
0.28
0.28
0.30
0.28
0.20s
0.19s
0.19s
0.18s
0.20s
0.18s
[15]
[43]
-
WDAqua-core1
WDAqua-core1
SemGraphQA[2]
YodaQA[1]
QuerioDali[27]
es
es
en
en
en
full
key
full
full
full
50
50
50
50
50
0.88
0.90
0.19
0.18
?
0.14
0.14
0.20
0.17
?
0.25
0.25
0.20
0.18
?
0.20s
0.20s
?
[43]
[43]
[27]
WDAqua-core1
WDAqua-core1
en
de
full
full
100
100
0.55
0.73
0.34
0.29
0.42
0.41
1.28s
0.41s
-
WDAqua-core1
WDAqua-core1
SemGraphQA [2]
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
AMUSE [22]
de
en
en
fr
fr
es
es
it
en
key
key
full
key
full
full
key
key
full
100
100
100
100
100
100
100
100
100
0.85
0.51
0.70
0.78
0.57
0.69
0.83
0.75
-
0.27
0.30
0.25
0.23
0.22
0.19
0.18
0.17
-
0.41
0.37
0.37
0.36
0.32
0.30
0.30
0.28
0.26
0.30s
1.00s
0.34s
0.46s
0.45s
0.35s
0.34s
-
[44]
[22]
WDAqua-core1
AMUSE [22]
AMUSE [22]
it
es
de
full
full
full
100
100
100
0.62
-
0.15
-
0.24
0.20
0.16
0.43s
-
[22]
[22]
QALD-6
WDAqua-core1
WDAqua-core1
WDAqua-core1
en
fr
en
full
key
key
100
100
100
0.25
0.18
0.14
0.28
0.16
0.16
0.25
0.16
0.14
1.24s
0.32s
0.88s
-
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
WDAqua-core1
de
de
fr
it
it
full
key
full
key
full
100
100
100
100
100
0.10
0.12
0.12
0.06
0.04
0.10
0.10
0.10
0.06
0.04
0.10
0.10
0.10
0.06
0.04
0.34s
0.28s
0.42s
0.28s
0.34s
-
Table 3
This table (left column and upper right column) summarizes the results obtained by the QA systems evaluated with QALD-3 (over DBpedia 3.8), QALD-4 (over DBpedia 3.9), QALD-5 (over DBpedia
2014), QALD-6 (over DBpedia 2015-10), QALD-7 (2016-04). We
indicated with “∗” the systems that did not participate directly in the
challenges, but were evaluated on the same benchmark afterwards.
We indicate the average running times of a query for the systems
where we found them. Even if the runtime evaluations were executed on different hardware, it still helps to give an idea about the
scalability.
QA System
Lang
Type
Total
P
R
F
Runtime
Ref
0.39
0.38
0.37
0.35
1.68s
0.80s
-
0.31
0.27
0.27
0.27
0.24
0.18
0.19
0.32
0.28
0.30
0.31
0.26
0.20
0.20
0.31
0.29
0.27
0.27
0.27
0.24
0.18
0.18
0.45s
1.13s
1.14s
1.05s
0.65s
0.82s
1.00s
[39]
-
0.17
0.18
0.16
0.44s
-
QALD-7 task 4, training dataset
Table 4
The table shows the results of WDAqua-core1 over the QALD-7 task
4 training dataset. We used Wikidata (dated 2016-11-28).
QA System | Lang | Type | Total | Accuracy | Runtime | Ref
WDAqua-core1∗ | en | full | 21687 | 0.571 | 2.1 s | –
Dai et al.∗ | en | full | 21687 | 0.626 | – | [7]
Bordes et al. | en | full | 21687 | 0.627 | – | [4]
Yin et al. | en | full | 21687 | 0.683 | – | [48]
Golub and He | en | full | 21687 | 0.709 | – | [21]
Lukovnikov et al. | en | full | 21687 | 0.712 | – | [29]
Table 5
This table summarizes the QA systems evaluated over SimpleQuestions. Every system was evaluated over FB2M except the ones marked with (∗) which were evaluated over FB5M.
Benchmark | Lang | Type | Total | P | R | F | Runtime
LC-QuAD | en | full | 5000 | 0.59 | 0.38 | 0.46 | 1.5 s
WDAquaCore0Questions | mixed | mixed | 689 | 0.79 | 0.46 | 0.59 | 1.3 s
Table 6
This table summarizes the results of WDAqua-core1 over some newly appeared benchmarks.
Dataset | Lang | Type | Total | P | R | F | Runtime
DBpedia | en | full | 100 | 0.55 | 0.34 | 0.42 | 1.37 s
All KBs supported | en | full | 100 | 0.49 | 0.39 | 0.43 | 11.94 s
Table 7
Comparison on QALD-6 when querying only DBpedia and multiple KBs at the same time.
towards certain properties (it contains 1629 properties, the 20 most frequent properties cover nearly 50%
of the questions). Therefore, it is not clear how the
other QA systems behave with respect to properties
not appearing in the training dataset and with respect
to keyword questions. Moreover, it is not clear how to
port the existing approaches to new languages and it is
not possible to adapt them to more difficult questions.
These points are solved using our approach. Hence, we
provided here, for the first time, a quantitative analysis
of the impact of big training data corpora on the quality of a QA system.
LC-QuAD & WDAquaCore0Questions: Recently,
a series of new benchmarks have been published. LCQuAD [41] is a benchmark containing 5000 English
questions and it concentrates on complex questions.
WDAquaCore0Questions [11] is a benchmark containing 689 questions over multiple languages and addressing mainly Wikidata, generated from the logs of
a live running QA system. The questions are a mixture
of real-world keyword and malformed questions. In
Table 6, we present the first baselines for these benchmarks.
Multiple KBs: The only available benchmark that
tackles multiple KBs was presented in QALD-4 task 2.
The KBs are rather small and perfectly interlinked.
This is not the case over the considered KBs. We
therefore evaluated the ability to query multiple KBs
differently. We ran the questions of the QALD-6
benchmark, which was designed for DBpedia, both
over DBpedia (only) and over DBpedia, Wikidata,
MusicBrainz, DBLP and Freebase. Note that, while
the original questions have a solution over DBpedia,
a good answer could also be found over the other
datasets. We therefore manually checked whether the
answers that were found in other KBs are right (independently from which KB was chosen by the QA system to answer it). The results are presented in Table 7.
WDAqua-core1 chose 53 times to answer a question
over DBpedia, 39 over Wikidata and the other 8 times
over a different KB. Note that we get better results
when querying multiple KBs. Globally we get better
recall and lower precision which is expected. While
scalability is an issue, we are able to pick the right KB
to find the answer!
Note: We did not tackle the WebQuestions benchmark for the following reasons. While it has been
shown that WebQuestions can be addressed using non-
reified versions of Freebase, this was not the original goal of the benchmark. More than 60% of the QA systems benchmarked over WebQuestions are tailored towards its reification model. There are two important points here. First, most KBs in the Semantic Web use binary statements. Secondly, in the Semantic Web community, many different reification models
have been developed as described in [20].
5.2.2. Setting
All experiments were performed on a virtual machine with 4 cores of an Intel Xeon E5-2667 v3 3.2 GHz,
16 GB of RAM and 500 GB of SSD disk. Note that
the whole infrastructure was running on this machine,
i.e. all indexes and the triple-stores needed to compute
the answers (no external service was used). The original data dumps sum up to 336 GB. Note that across
all benchmarks we can answer a question in less than
2 seconds except when all KBs are queried at the same
time which shows that the algorithm should be parallelized for further optimization.
6. Provided Services for Multilingual and
Multi-KB QA
We have presented an algorithm that can be easily
ported to new KBs and that can query multiple KBs
at the same time. In the evaluation section, we have
shown that our approach is competitive while offering the advantage of being multilingual and robust to
keyword questions. Moreover, we have shown that it
runs on moderate hardware. In this section, we describe how we integrated the approach into an actual service and how we combine it with existing services so that
it can be directly used by end-users.
First, we integrated WDAqua-core1 into Qanary [5,
12], a framework to integrate QA components. This
way WDAqua-core1 can be accessed via RESTful interfaces for example to benchmark it via Gerbil for
QA [45]. It also allows combining it with services that are already integrated into Qanary, like a speech recognition component based on Kaldi19 and a language detection component based on [32]. Moreover, the integration into Qanary allows reusing Trill [8], a reusable
front-end for QA systems. A screenshot of Trill using
in the back-end WDAqua-core1 can be found in Figure 4.
Secondly, we reused and extended Trill to make it eas-
19 http://kaldi-asr.org
Fig. 4. Screenshot of Trill, using in the back-end WDAqua-core1,
for the question “Give me museums in Lyon.”.
ily portable to new KBs. While Trill originally supported only DBpedia and Wikidata, it now also supports MusicBrainz, DBLP and Freebase. We designed the extension so that it can be easily ported to
new KBs. Enabling the support to a new KB is mainly
reduced to writing an adapted SPARQL query for the
new KB. Additionally, the extension allows selecting
multiple KBs at the same time.
Thirdly, we adapted some services that are used in Trill
to be easily portable to new KBs. These include SPARQLToUser [9], a tool that generates a human readable
version of a SPARQL query and LinkSUM [40] a service for entity summarization. All these tools now support the 5 mentioned KBs and the 5 mentioned languages.
A public online demo is available under:
www.wdaqua.eu/qa
7. Conclusion and Future Work
In this paper, we introduced a novel concept for QA
aimed at multilingual and KB-agnostic QA. Due to the
described characteristics of our approach, portability is ensured, which is a significant advantage in comparison
to previous approaches. We have shown the power of
our approach in an extensive evaluation over multiple
benchmarks. Hence, we clearly have shown our contributions w.r.t. qualitative (language, KBs) and quantitative improvements (outperforming many existing systems and querying multiple KBs) as well as the capability of our approach to scale for very large KBs like
DBpedia.
We have applied our algorithm and adapted a set of existing services so that end-users can query, using multiple languages, multiple KBs at the same time, using
a unified interface. Hence, we provided here a major
step towards QA over the Semantic Web following our
larger research agenda of providing QA over the LOD
cloud.
In the future, we want to tackle the following points.
First, we want to parallelize our approach, s.t. when
querying multiple KBs acceptable response times will
be achieved. Secondly, we want to query more and
more KBs (hints to interesting KBs are welcome).
Thirdly, from different lessons learned from querying
multiple KBs, we want to give a set of recommendations for RDF datasets, s.t. they are fit for QA. And
fourth, we want to extend our approach to also query
reified data. Fifth, we would like to extend the approach to be able to answer questions including aggregates and functions. We believe that our work can
further boost the expansion of the Semantic Web since
we presented a solution that allows end-users to easily consume RDF data directly, while requiring low hardware investments.
Note: There is a Patent Pending for the presented
approach. It was submitted on 18 January 2018 to the
EPO and has the number EP18305035.0.
Acknowledgements
This project has received funding from the European
Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 642795.
References
[1] Petr Baudiš and Jan Šedivỳ. 2015. QALD Challenge and the
YodaQA System: Prototype Notes. (2015).
[2] Romain Beaumont, Brigitte Grau, and Anne-Laure Ligozat.
2015.
SemGraphQA@QALD-5: LIMSI participation at
QALD-5@CLEF. CLEF.
[3] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang.
2013. Semantic Parsing on Freebase from Question-Answer
Pairs.. In EMNLP.
[4] Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale Simple Question Answering with Memory Networks. CoRR abs/1506.02075 (2015).
arXiv:1506.02075 http://arxiv.org/abs/1506.02075
[5] A. Both, D. Diefenbach, K. Singh, S. Shekarpour, D. Cherix,
and C. Lange. 2016. Qanary – a methodology for vocabularydriven open question answering systems. In ESWC 2016.
[6] Philipp Cimiano, Vanessa Lopez, Christina Unger, Elena
Cabrio, Axel-Cyrille Ngonga Ngomo, and Sebastian Walter.
2013. Multilingual question answering over linked data (qald3): Lab overview. Springer.
[7] Zihang Dai, Lei Li, and Wei Xu. 2016. Cfo: Conditional focused neural question answering with large-scale knowledge
bases. arXiv preprint arXiv:1606.01994 (2016).
[8] D. Diefenbach, S. Amjad, A. Both, K. Singh, and P. Maret.
2017. Trill: A reusable Front-End for QA systems. In ESWC
P&D.
[9] D. Diefenbach, Y. Dridi, , K. Singh, and P. Maret. 2017. SPARQLtoUser: Did the question answering system understand me?.
In ISWC NLIWoD3 Workshop.
[10] D. Diefenbach, V. Lopez, K. Singh, and P. Maret. 2017. Core
techniques of question answering systems over knowledge
bases: a survey. Knowledge and Information systems (2017),
1–41.
[11] D. Diefenbach, T. Pellissier, K. Singh, and P. Maret. 2017.
Question Answering Benchmarks for Wikidata. In ISWC
P&D.
[12] D. Diefenbach, K. Singh, A. Both, D. Cherix, C. Lange, and S.
Auer. 2017. The Qanary Ecosystem: getting new insights by
composing Question Answering pipelines. In ICWE.
[13] Corina Dima. 2013. Intui2: A prototype system for question
answering over linked data. Proceedings of the Question Answering over Linked Data lab (QALD-3) at CLEF (2013).
[14] Corina Dima. 2014. Answering natural language questions
with Intui3. In Conference and Labs of the Evaluation Forum
(CLEF).
[15] Mohnish Dubey, Sourish Dasgupta, Ankit Sharma, Konrad
Höffner, and Jens Lehmann. 2016. AskNow: A Framework for
Natural Language Query Formalization in SPARQL. In International Semantic Web Conference. Springer.
[16] Javier D Fernández, Miguel A Martínez-Prieto, Claudio
Gutiérrez, Axel Polleres, and Mario Arias. 2013. Binary RDF
representation for publication and exchange (HDT). Web Semantics: Science, Services and Agents on the World Wide Web
19 (2013), 22–41.
[17] Ó. Ferrández, Ch. Spurk, M. Kouylekov, and al. 2011. The
QALL-ME Framework: A specifiable-domain multilingual
Question Answering architecture. J. Web Sem. (2011).
[18] Daniel Gerber and A-C Ngonga Ngomo. 2011. Bootstrapping
the linked data web. In 1st Workshop on Web Scale Knowledge
Extraction@ ISWC, Vol. 2011.
[19] Cristina Giannone, Valentina Bellomaria, and Roberto Basili.
2013. A HMM-based approach to question answering against
linked data. Proceedings of the Question Answering over
Linked Data lab (QALD-3) at CLEF (2013).
[20] José M Giménez-García, Antoine Zimmermann, and Pierre
Maret. 2017. NdFluents: An Ontology for Annotated Statements with Inference Preservation. In European Semantic Web
Conference. Springer, 638–654.
[21] David Golub and Xiaodong He. 2016. Character-level question
answering with attention. arXiv preprint arXiv:1604.00727
(2016).
[22] Sherzod Hakimov, Soufian Jebbara, and Philipp Cimiano.
2017. AMUSE: Multilingual Semantic Parsing for Question
Answering over Linked Data. In International Semantic Web
Conference. Springer, 329–346.
[23] Sherzod Hakimov, Christina Unger, Sebastian Walter, and
Philipp Cimiano. 2015. Applying semantic parsing to question
answering over linked data: Addressing the lexical gap. In Natural Language Processing and Information Systems. Springer.
[24] Shizhu He, Yuanzhe Zhang, Kang Liu, and Jun Zhao. 2014.
CASIA@ V2: A MLN-based Question Answering System
over Linked Data. Proc. of QALD-4 (2014).
[25] Sarthak Jain. 2016. Question Answering over Knowledge Base
using Factual Memory Networks. In Proceedings of NAACLHLT.
[26] Lucie-Aimée Kaffee, Alessandro Piscopo, Pavlos Vougiouklis, Elena Simperl, Leslie Carr, and Lydia Pintscher. 2017. A
Glimpse into Babel: An Analysis of Multilinguality in Wikidata. In Proceedings of the 13th International Symposium on
Open Collaboration. ACM, 14.
[27] V. Lopez, P. Tommasi, S. Kotoulas, and J. Wu. 2016. QuerioDALI: Question Answering Over Dynamic and Linked
Knowledge Graphs. In International Semantic Web Conference. Springer, 363–382.
[28] Vanessa Lopez, Victoria Uren, Marta Sabou, and Enrico Motta.
2011. Is question answering fit for the semantic web? a survey.
Semantic Web 2, 2 (2011).
[29] Denis Lukovnikov, Asja Fischer, Jens Lehmann, and Sören
Auer. 2017. Neural Network-based Question Answering over
Knowledge Graphs on Word and Character Level. In Proceedings of the 26th International Conference on World Wide Web.
International World Wide Web Conferences Steering Committee, 1211–1220.
[30] E. Marx, R. Usbeck, A. Ngonga Ngomo, K. Höffner, J.
Lehmann, and S. Auer. 2014. Towards an Open Question Answering Architecture. In SEMANTiCS.
[31] Donald Metzler and W Bruce Croft. 2007. Linear featurebased models for information retrieval. Information Retrieval
10, 3 (2007), 257–274.
[32] Shuyo Nakatani. 2010. Language Detection Library for Java.
(2010). https://github.com/shuyo/language-detection.
[33] Seonyeong Park, Hyosup Shim, and Gary Geunbae Lee. 2014.
ISOFT at QALD-4: Semantic similarity-based question answering system over linked data. In CLEF.
[34] Amir Pouran-ebn veyseh. 2016. Cross-Lingual Question Answering Using Common Semantic Space. In NAACL-HLT
2016.
[35] Camille Pradel, Ollivier Haemmerlé, and Nathalie Hernandez.
2012. A semantic web interface using patterns: the SWIP system. In Graph Structures for Knowledge Representation and
Reasoning. Springer.
[36] Stefan Ruseti, Alexandru Mirea, Traian Rebedea, and Stefan
Trausan-Matu. 2015. QAnswer-Enhanced Entity Matching for
Question Answering over Linked Data. CLEF.
[37] Saeedeh Shekarpour, Edgard Marx, Axel-Cyrille Ngonga
Ngomo, and Sören Auer. 2015. Sina: Semantic interpretation
of user queries for question answering on interlinked data. Web
Semantics: Science, Services and Agents on the World Wide
Web 30 (2015).
[38] K. Singh, A. Both, D. Diefenbach, and S. Shekarpour. 2016.
Towards a Message-Driven Vocabulary for Promoting the Interoperability of Question Answering Systems. In ICSC 2016.
[39] Daniil Sorokin and Iryna Gurevych. 2017. End-to-end Representation Learning for Question Answering with Weak Supervision. In ESWC 2017 Semantic Web Challenges.
[40] Andreas Thalhammer, Nelia Lasierra, and Achim Rettinger.
2016. LinkSUM: Using Link Analysis to Summarize Entity
Data. In Web Engineering: 16th International Conference,
ICWE 2016, Lugano, Switzerland, June 6-9, 2016. Proceedings. Lecture Notes in Computer Science, Vol. 9671. Springer
International Publishing, Cham, 244–261. https://doi.org/10.
1007/978-3-319-38791-8_14
[41] Priyansh Trivedi, Gaurav Maheshwari, Mohnish Dubey, and
Jens Lehmann. 2017. LC-QuAD: A corpus for complex question answering over knowledge graphs. In International Semantic Web Conference. Springer, 210–218.
[42] Christina Unger, Corina Forascu, Vanessa Lopez, AxelCyrille Ngonga Ngomo, Elena Cabrio, Philipp Cimiano, and
Sebastian Walter. 2014. Question answering over linked data
(QALD-4). In Working Notes for CLEF 2014 Conference.
[43] Christina Unger, Corina Forascu, Vanessa Lopez, AxelCyrille Ngonga Ngomo, Elena Cabrio, Philipp Cimiano, and
Sebastian Walter. 2015. Answering over Linked Data (QALD5). In Working Notes for CLEF 2015 Conference.
[44] Christina Unger, Axel-Cyrille Ngonga Ngomo, Elena Cabrio,
and Cimiano. 2016. 6th Open Challenge on Question Answering over Linked Data (QALD-6). In The Semantic Web: ESWC
2016 Challenges.
[45] Ricardo Usbeck, Michael Röder, Michael Hoffmann, Felix
Conrads, Jonathan Huthmann, Axel-Cyrille Ngonga-Ngomo,
Christian Demmler, and Christina Unger. 2016. Benchmarking
Question Answering Systems. Semantic Web Journal (2016).
to appear.
[46] Kun Xu, Yansong Feng, and Dongyan Zhao. 2014. Xser@
QALD-4: Answering Natural Language Questions via Phrasal
Semantic Parsing. (2014).
[47] Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya
Ramanath, Volker Tresp, and Gerhard Weikum. 2012. Natural language questions for the web of data. In Proceedings of
the 2012 Joint Conference on Empirical Methods in Natural
Language Processing and Computational Natural Language
Learning. Association for Computational Linguistics.
[48] Wenpeng Yin, Mo Yu, Bing Xiang, Bowen Zhou, and Hinrich
Schütze. 2016. Simple question answering by attentive convolutional neural network. arXiv preprint arXiv:1606.03391
(2016).
[49] Yuanzhe Zhang, Kang Liu, Shizhu He, Guoliang Ji, Zhanyi
Liu, Hua Wu, and Jun Zhao. 2016. Question Answering over
Knowledge Base with Neural Attention Combining Global
Knowledge Information. arXiv preprint arXiv:1606.00979
(2016).
[50] Chenhao Zhu, Kan Ren, Xuan Liu, Haofen Wang, Yiding Tian,
and Yong Yu. 2015. A Graph Traversal Based Approach to
Answer Non-Aggregation Questions Over DBpedia. arXiv
preprint arXiv:1510.04780 (2015).
[51] Lei Zou, Ruizhe Huang, Haixun Wang, Jeffer Xu Yu, Wenqiang He, and Dongyan Zhao. 2014. Natural language question
answering over RDF: a graph data driven approach. In Proceedings of the 2014 ACM SIGMOD international conference
on Management of data. ACM.
Polychronous Interpretation of Synoptic, a Domain Specific
Modeling Language for Embedded Flight-Software
L. Besnard, T. Gautier, J. Ouy, J.-P. Talpin, J.-P. Bodeveix, A. Cortier,
M. Pantel, M. Strecker, G. Garcia, A. Rugina, J. Buisson, F. Dagnat
L. Besnard, T. Gautier, J. Ouy, J.-P. Talpin
INRIA Rennes - Bretagne Atlantique / IRISA, Campus de Beaulieu, F-35042 Rennes Cedex, France
{Loic.Besnard, Thierry.Gautier, Julien.Ouy, Jean-Pierre.Talpin}@irisa.fr
J.-P. Bodeveix, A. Cortier, M. Pantel, M. Strecker
IRIT-ACADIE, Université Paul Sabatier, 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France
{bodeveix, cortier, pantel, strecker}@irit.fr
G. Garcia
Thales Alenia Space, 100 Boulevard Midi, F-06150 Cannes, France
[email protected]
A. Rugina
EADS Astrium, 31 rue des Cosmonautes, Z.I. du Palays, F-31402 Toulouse Cedex 4, France
[email protected]
J. Buisson, F. Dagnat
Institut Télécom / Télécom Bretagne, Technopôle Brest Iroise, CS83818, F-29238 Brest Cedex 3, France
{jeremy.buisson, Fabien.Dagnat}@telecom-bretagne.eu
The SPaCIFY project, which aims at bringing advances in MDE to the satellite flight software industry, advocates a top-down approach built on a domain-specific modeling language named Synoptic.
In line with previous approaches to real-time modeling such as Statecharts and Simulink, Synoptic
features hierarchical decomposition of application and control modules in synchronous block diagrams and state machines. Its semantics is described in the polychronous model of computation,
which is that of the synchronous language SIGNAL.
1 Introduction
In collaboration with major European manufacturers, the SPaCIFY project aims at bringing advances in
MDE to the satellite flight software industry. It focuses on software development and maintenance phases
of satellite lifecycle. The project advocates a top-down approach built on a Domain-Specific Modeling
Language (DSML) named Synoptic. The aim of Synoptic is to support all aspects of embedded flightsoftware design. As such, Synoptic consists of heterogeneous modeling and programming principles
defined in collaboration with the industrial partners and end users of the SPaCIFY project.
Used as the central modeling language of the SPaCIFY model driven engineering process, Synoptic
allows to describe different layers of abstraction: at the highest level, the software architecture models the
functional decomposition of the flight software. This is mapped to a dynamic architecture which defines
the thread structure of the software. It consists of a set of threads, where each thread is characterized by
properties such as its frequency, its priority and its activation pattern (periodic, sporadic).
A mapping establishes a correspondence between the software and the dynamic architecture, by
specifying which blocks are executed by which threads. At the lowest level, the hardware architecture
permits to define devices (processors, sensors, actuators, busses) and their properties.
Finally, mappings describe the correspondence between the dynamic and hardware architecture on
the one hand, by specifying which threads are executed by which processor, and describe a correspondence between the software and hardware architecture on the other hand, by specifying which data is
carried by which bus for instance. Figure 1 depicts these layers and mappings.
Figure 1: Global view: layers and architecture mappings
The aim is to synthesize as much of this mapping as possible, for example by appealing to internal or
external schedulers. However, to allow for human intervention, it is possible to give a fine-grained mapping, thus overriding or bypassing machine-generated schedules. In any case, consistency of the resulting
dynamic architecture is verified by the SPaCIFY tool suite, based on the properties of the software and
dynamic model.
At each step of the development process, it is also useful to model different abstraction levels of the
system under design inside the same layer (functional, dynamic or hardware architecture). Synoptic offers
this capability by providing an incremental design framework and refinement features.
To summarize, Synoptic deals with data-flow diagrams, mode automata, blocks, components, dynamic and hardware architecture, mapping and timing.
The functional part of the Synoptic language allows to model software architecture. The corresponding sub-language is well adapted to model synchronous islands and to specify interaction points between
these islands and the middleware platform using the concept of external variables.
Synchronous islands and middleware form a Globally Asynchronous and Locally Synchronous (GALS)
system.
Software architecture The development of the Synoptic software architecture language has been
tightly coordinated with the definition of the GeneAuto language [1]. Synoptic uses essentially two
types of modules, called blocks in Synoptic, which can be mutually nested: data-flow diagrams and
mode automata. Nesting favors a hierarchical design and enables viewing the description at different
levels of detail.
By embedding blocks in the states of state machines, one can elegantly model operational modes:
each state represents a mode, and transitions correspond to mode changes. In each mode, the system may
be composed of other sub-blocks or have different connection patterns among components.
Apart from structural and behavioral aspects, the Synoptic software architecture language allows to
define temporal properties of blocks. For instance, a block can be parameterized with a frequency and a
worst case execution time which are taken into account in the mapping onto the dynamic architecture.
Synoptic is equipped with an assertion language that allows to state desired properties of the model
under development. We are mainly interested in properties that permit to express, for example, coherence
of the modes (“if component X is in mode m1, then component Y is in mode m2” or “. . . can eventually
move into mode m2”). Specific transformations extract these properties and pass them to the verification
tools.
The main purpose of this paper is to describe a formal semantics of Synoptic, expressed in terms
of the synchronous language SIGNAL [2, 3]. SIGNAL is based on “synchronized data-flow” (flows with
synchronization): a process is a set of equations on elementary flows describing both data and control.
The SIGNAL formal model provides the capability to describe systems with several clocks (polychronous
systems) as relational specifications. A brief overview of the abstract syntax of Synoptic is provided in
Section 2. Then Section 3 describes the interpretation of each one of these constructions in the model of
the SIGNAL language.
2 An overview of Synoptic
Blocks are the main structuring elements of Synoptic. A block block x A defines a functional unit of
compilation and of execution that can be called from many contexts and with different modes in the
system under design. A block x encapsulates a functionality A that may consist of sub-blocks, automata
and data-flows. A block x is implicitly associated with two signals x.trigger and x.reset. The signal
x.trigger starts the execution of A. The specification A may then operate at its own pace until the next
x.trigger is signaled. The signal x.reset is delivered to x at some x.trigger and forces A to reset its state
and variables to initial values.
(blocks) A, B ::=
block x A | dataflow x A | automaton x A | A | B
Data-flows inter-connect blocks with data and events (e.g. trigger and reset signals). A flow can
simply define a connection from an event x to an event y, written event x → y, combine data y and z
by a simple operation f to form the flow x, written data y f z → x or feed a signal y back to x, written
data y $init v → x. In a feedback loop, the signal x is initially defined by x_0 = v. Then, at each occurrence n > 0 of the signal y, it takes its previous value x_n = y_{n−1}. The execution of a data-flow is controlled by
its parent clock. A data-flow simultaneously executes each connection it is composed of every time it is
triggered by its parent block.
(data f low) A, B ::=
data y $init v → x | data y f z → x | event x → y | A | B
Actions are sequences of operations on variables that are performed during the execution of automata.
An assignment x = y f z defines the new value of the variable x from the current values of y and z by the
function f . The skip stores the new values of variables that have been defined before it, so that they
become current past it. The conditional if x then A else B executes A if the current value of x is true and
executes B otherwise. A sequence A; B executes A and then B.
(action) A, B ::=
skip | x = y f z | if x then A else B | A; B
Automata schedule the execution of operations and blocks by performing timely guarded transitions.
An automaton receives control from its trigger and reset signals x.trigger and x.reset as specified by its
parent block. When an automaton is first triggered, or when it is reset, its starts execution from its initial
state, specified as initial state S. On any state S : do A, it performs the action A. From this state, it may
perform an immediate transition to new state T , written S → on x T , if the value of the current variable
x is true. It may also perform a delayed transition to T, written S on x T, that waits for the next trigger before resuming execution (in state T). If no transition condition applies, it then waits for the next trigger
and resumes execution in state S. States and transitions are composed as A | B. The timed execution of
an automaton combines the behavior of an action or a data-flow. The execution of a delayed transition or
of a stutter is controlled by an occurrence of the parent trigger signal (as for a data-flow). The execution
of an immediate transition is performed without waiting for a trigger or a reset (as for an action).
(automaton) A, B ::=
state S : do A | S → on x T | S on x T | A | B

3 Polychronous interpretation of Synoptic
The model of computation on which Synoptic relies is that of the polychronous data-flow language
S IGNAL. This section describes how Synoptic programs are interpreted into this core language.
3.1 A brief introduction to SIGNAL
In SIGNAL, a process P consists of the composition of simultaneous equations x = f (y, z) over signals
x, y, z. A delay equation x = y $init v defines x every time y is present. Initially, x is defined by the value v,
and then, it is defined by the previous value of y. A sampling equation x = y when z defines x by y when z
is true. Finally, a merge equation x = y default z defines x by y when y is present and by z otherwise. An
equation x = y f z can use a boolean or arithmetic operator f to define all of the nth values of the signal x
by the result of the application of f to the nth values of the signals y and z. The synchronous composition
of processes P | Q consists of the simultaneous solution of the equations in P and in Q. It is commutative
and associative. The process P/x restricts the signal x to the lexical scope of P.
P, Q ::= x = y f z | P/x | P | Q
(process)
In SIGNAL, the presence of a value along a signal x is an expression noted ^x. It is true when x is present. Otherwise, it is absent. Specific processes and operators are defined in SIGNAL to manipulate clocks explicitly. We only use the simplest one, x ^= y, that synchronizes all occurrences of the signals x and y.
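For instance, assuming that y and z are synchronous and take the values below (⊥ marking the absence of a value at an instant), these operators yield the following illustrative traces:

y               : 1    2     3    4
z               : true false true false
x = y $init 0   : 0    1     2    3
u = y when z    : 1    ⊥     3    ⊥
v = u default x : 1    1     3    3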
3.2 Interpretation of blocks
The execution of a block is driven by the trigger t of its parent block. The block resynchronizes with
that trigger every time, itself or one of its sub-blocks, makes an explicit reference to time (e.g. a skip for
an action or a delayed transition S T for an automaton). Otherwise, the elapse of time is sensed from
outside the block, whose operations (e.g., on ci ), are perceived as belonging to the same period as within
[ti ,ti+1 [. The interpretation implements this feature by encoding actions and automata using static single
assignment. As a result, and from within a block, every non-time-consuming sequence of actions A; B or
transitions A → B defines the value of all its variables once and defines intermediate ones in the flow of
its execution.
3.3 Interpretation of data-flow
Data-flows are structurally similar to SIGNAL programs and equally combined using synchronous composition. The interpretation [[A]]^t_r = ⟨⟨P⟩⟩ of a data-flow (Fig. 2) is parameterized by the reset and trigger signals of the parent block and returns a process P (the input term A and the output term P are marked by [[A]] and ⟨⟨P⟩⟩ for convenience). A delayed flow data y $init v → x initially defines x by the value v. It is
reset to that value every time the reset signal r occurs. Otherwise, it takes the previous value of y in time.
[[ dataflow f A]]^t_r = ⟨⟨ [[A]]^t_r | ∏_{x∈in(A)} x ^= t ⟩⟩
[[ data y $init v → x]]^t_r = ⟨⟨ x = (v when r) default (y $init v) | x ^= y ⟩⟩
[[ data y f z → x]]^t_r = ⟨⟨ x = y f z ⟩⟩
[[ event y → x]]^t_r = ⟨⟨ x = when y ⟩⟩
[[A | B]]^t_r = ⟨⟨ [[A]]^t_r | [[B]]^t_r ⟩⟩
Figure 2: Interpretation of data-flow connections
In Fig. 2, we write ∏_{i≤n} P_i for a finite product of processes P_1 | . . . | P_n. Similarly, ⋁_{i≤n} e_i is a finite merge e_1 default . . . default e_n.
A functional flow data y f z → x defines x by the product of (y, z) by f . An event flow event y → x
connects y to define x. Particular cases are the operator ?(y) to convert an event y to a boolean data and
the operator ˆ(y) to convert the boolean data y to an event. We write in(A) and out(A) for the input and
output signals of a data-flow A.
By default, the convention of Synoptic is to synchronize the input signals of a data-flow to the parent
trigger. It is, however, possible to define alternative policies. One is to down-sample the input signals at
the pace of the trigger. Another is to adapt or resample them at that trigger.
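For instance, applying these rules to a flow whose only external input is y, feeding a delayed flow into x and combining x and y into z, gives:

[[ dataflow f ( data y $init 0 → x | data x g y → z)]]^t_r
= ⟨⟨ x = (0 when r) default (y $init 0) | x ^= y | z = x g y | y ^= t ⟩⟩

Since y is the only input of the flow, it is the only signal synchronized to the parent trigger t, and x is reset to 0 at every occurrence of r.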
3.4 Interpretation of actions
The execution of an action A starts at an occurrence of its parent trigger and shall end before the next
occurrence of that event. During the execution of an action, one may also wait and synchronize with
this event by issuing a skip . A skip has no behavior but to signal the end of an instant: all the newly
computed values of signals are flushed in memory and execution is resumed upon the next parent trigger.
Action x! sends the signal x to its environment. Execution may continue within the same symbolic instant
unless a second emission is performed: one shall issue a skip before that. An operation x = y f z takes the
current value of y and z to define the new value of x by the product with f . A conditional if x then A else B
executes A or B depending on the current value of x.
As a result, only one new value of a variable x should at most be defined within an instant delimited
by a start and an end or a skip. Therefore, the interpretation of an action consists of its decomposition in
static single assignment form. To this end, we use an environment E to associate each variable with its
definition, an expression, and a guard, that locates it (in time).
SPaCIFY Project
85
An action holds an internal state s that stores an integer n denoting the current portion of the actions
that is being executed. State 0 represents the start of the program and each n > 0 labels a skip that
materializes a synchronized sequence of actions.
The interpretation [[A]]s,m,g,E = ⟨⟨P⟩⟩n,h,F of an action A (Fig. 3) takes as parameters the state variable
s, the state m of the current section, the guard g that leads to it, and the environment E. It returns a
process P, the state n and guard h of its continuation, and an updated environment F. We write usegE (x)
for the expression that returns the definition of the variable x at the guard g, and defg(E) for storing the final values of all variables x defined in E (i.e., x ∈ V(E)) at the guard g.

usegE(x) = if x ∈ V(E) then ⟨⟨E(x)⟩⟩ else ⟨⟨(x $init 0) when g⟩⟩
defg(E) = ∏x∈V(E) x = usegE(x)
Execution is started with s = 0 upon receipt of a trigger t. It is also resumed from a skip at s = n with a
trigger t. Hence the signal t is synchronized to the state s of the action. The signal r is used to inform the
parent block (an automaton) that the execution of the action has finished (it is back to its initial state 0).
An end resets s to 0, stores all variables x defined in E with an equation x = usegE (x) and finally stops (its
returned guard is 0). A skip advances s to the next label n + 1 when it receives control upon the guard
e and flushes the variables defined so far. It returns a new guard (s $init 0) = n + 1 to resume the actions
past it. An action x! emits x when its guard e is true. A sequence A; B evaluates A to the process P and
passes its state nA , guard gA , environment EA to B. It returns P | Q with the state, guard and environment
of B. Similarly, a conditional evaluates A with the guard g when x to P and B with g when not x to Q. It
returns P | Q but with the guard gA default gB . All variables x ∈ X, defined in both EA and EB , are merged
in the environment F.
[[do A]]rt = ⟨⟨(P | s ˆ= t | r = (s = 0)) /s⟩⟩ where ⟨⟨P⟩⟩n,h,F = [[A; end]]s,0,((s pre 0)=0),∅
[[end]]s,n,g,E = ⟨⟨s = 0 when g | defg(E)⟩⟩0,0,∅
[[skip]]s,n,g,E = ⟨⟨s = n + 1 when g | defg(E)⟩⟩n+1,((s pre 0)=n+1),∅
[[x!]]s,n,g,E = ⟨⟨x = 1 when g⟩⟩n,g,E
[[x = y f z]]s,n,g,E = ⟨⟨x = e⟩⟩n,g,E⊎{x↦e} where e = ⟨⟨f(usegE(y), usegE(z)) when g⟩⟩
[[A; B]]s,n,g,E = ⟨⟨P | Q⟩⟩nB,gB,EB where ⟨⟨P⟩⟩nA,gA,EA = [[A]]s,n,g,E and ⟨⟨Q⟩⟩nB,gB,EB = [[B]]s,nA,gA,EA
[[if x then A else B]]s,n,g,E = ⟨⟨P | Q⟩⟩nB,(gA default gB),(EA⊎EB)
    where ⟨⟨P⟩⟩nA,gA,EA = [[A]]s,n,(g when usegE(x)),E and ⟨⟨Q⟩⟩nB,gB,EB = [[B]]s,nA,(g when not usegE(x)),E

Figure 3: Interpretation of timed sequential actions
In Fig. 3, we write E ⊎ F to merge the definitions in the environments E and F. For all variables x ∈ V(E) ∪ V(F) in the domains of E and F,

(E ⊎ F)(x) = E(x),                  if x ∈ V(E) \ V(F)
             F(x),                  if x ∈ V(F) \ V(E)
             E(x) default F(x),     if x ∈ V(E) ∩ V(F)
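For readers who prefer an operational view, the following small Python sketch (purely illustrative; it is not part of the Synoptic or SIGNAL toolchains) mimics the E ⊎ F merge over dictionaries whose values are definition expressions kept as plain strings.

    # Illustrative only: environments map variable names to defining expressions.
    def merge_envs(E, F):
        """Mimic E (+) F: keep E's or F's definition, or merge both with 'default'."""
        merged = {}
        for x in set(E) | set(F):
            if x in E and x not in F:
                merged[x] = E[x]
            elif x in F and x not in E:
                merged[x] = F[x]
            else:
                merged[x] = "(%s) default (%s)" % (E[x], F[x])
        return merged

    # Example: x is defined in both branches of a conditional.
    E = {"x": "y + 1 when gA"}
    F = {"x": "z when gB", "w": "0 when gB"}
    print(merge_envs(E, F))   # x -> "(y + 1 when gA) default (z when gB)"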
Note that an action cannot be reset from the parent clock because it is not synchronized to it. A sequence
of emissions x!; x! yields only one event along the signal x because they occur at the same (logical) time,
as opposed to x!; skip ; x! which sends the second one during the next trigger.
3.5 Interpretation of automata
An automaton describes a hierarchic structure consisting of actions that are executed upon entry in a
state by immediate and delayed transitions. An immediate transition occurs during the period of time
allocated to a trigger. Hence, it does not synchronize to it. Conversely, a delayed transition occurs
upon synchronization with the next occurrence of the parent trigger event. As a result, an automaton
is partitioned in regions. Each region corresponds to the amount of calculation that can be performed
within the period of a trigger, starting from a given initial state.
Notations We write →A and ⇢A for the immediate and delayed transition relations of an automaton A. We write pred→A(S) = {T | (T, x, S) ∈ →A} and succ→A(S) = {T | (S, x, T) ∈ →A} (resp. pred⇢A(S) and succ⇢A(S)) for the predecessor and successor states of the immediate (resp. delayed) transitions →A (resp. ⇢A) from a state S in an automaton A. Finally, we write ~S for the region of a state S. It is defined by an equivalence relation:

∀S, T ∈ S(A), ((S, x, T) ∈ →A) ⇔ ~S = ~T

For any state S of A, written S ∈ S(A), it is required that the restriction of →A to the region ~S is acyclic. Notice that, still, a delayed transition may take place between two states of the same region.
Interpretation An automaton A is interpreted by a process [[ automaton x A]]rt parameterized by its
parent trigger and reset signals. The interpretation of A defines a local state s. It is synchronized to the
parent trigger t. It is set to 0, the initial state, upon receipt of a reset signal r and, otherwise, takes the previous value of s′, which denotes the next state. The interpretation of all states is performed concurrently.
We give all states Si of an automaton A a unique integer label i = ⌈Si⌉ and designate with ⌈A⌉ its number of states. S0 is the initial state and, for each state of index i, we call Ai its action and xij the guard of an immediate or delayed transition from Si to Sj.

[[automaton x A]]rt = ⟨⟨t ˆ= s | s = (0 when r) default (s′ $init 0) | ∏Si∈S(A) [[Si]]s /s s′⟩⟩
The interpretation [[Si]]s of all states 0 ≤ i < ⌈A⌉ of an automaton (Fig. 4) is implemented by a series of
mutually recursive equations that define the meaning of each state Si depending on the result obtained for
its predecessors S j in the same region. Since a region is by definition acyclic, this system of equations
has therefore a unique solution.
The interpretation of state Si starts with that of its actions Ai . An action Ai defines a local state
si synchronized to the parent state s = i of the automaton. The automaton stutters with s′ = s if the evaluation of the action is not finished: it is in a local state si ≠ 0.
Interpreting the actions Ai requires the definition of a guard gi and of an environment Ei . The guard
gi defines when Ai starts. It requires the local state to be 0 or the state Si to receive control from a
predecessor S j in the same region (with the guard x ji ).
The environment Ei is constructed by merging the environments Fj returned by its immediate predecessors Sj.
Once these parameters are defined, the interpretation of Ai returns a process Pi together with an exit guard
hi and an environment Fi holding the value of all variables it defines.
Upon evaluation of Ai, delayed transitions from Si are checked. This is done by the definition of a
process Qi which, first, checks if the guard xi j of a delayed transition from Si evaluates to true with Fi . If
so, variables defined in Fi are stored with defhi (Fi ).
All delayed transitions from Si to S j are guarded by hi (one must have finished evaluating i before
moving to j) and a condition gi j , defined by the value of the guard xi j . The default condition is to stay in
the current state s while si ≠ 0 (i.e. until mode i is terminated).
Hence, the next state from i is defined by the equation s′ = s′i. The next state equation of each state is composed with the others to form the product ∏i<⌈A⌉ s′ = s′i that is merged as s′ = ∨i<⌈A⌉ s′i.
∀i < ⌈A⌉, [[Si]]s = (Pi | Qi | si ˆ= when (s = i) | s′ = s′i) /si where
    ⟨⟨Pi⟩⟩n,hi,Fi = [[Ai]]si,0,gi,Ei
    Qi = ∏(Si,xij,Sj)∈⇢A defhi when (useFi(xij)) (Fi)
    Ei = ⊎Sj∈pred→A(Si) Fj
    gi = 1 when ((si $init 0) = 0) default ∨(Sj,xji,Si)∈→A (useEj(xji))
    gij = hi when (useFi(xij)), ∀(Si, xij, Sj) ∈ ⇢A
    s′i = (s when si ≠ 0) default ∨(Si,xij,Sj)∈⇢A (j when gij)

Figure 4: Recursive interpretation of a mode automaton
4 Conclusion
Synoptic has a formal semantics, defined in terms of the synchronous language SIGNAL. On the one hand, this allows for neat integration of verification environments for ascertaining properties of the system under development. On the other hand, a formal semantics makes it possible to encode the metamodel in a proof assistant. In this sense, Synoptic will profit from the formal correctness proof and subsequent certification of a code generator that is under way in the GeneAuto project. Moreover, the formal model of SIGNAL is the basis for the Eclipse-based polychronous modeling environment SME [3, 4].
SME is used to transform Synoptic diagrams and generate executable C code.
References
[1] A. Toom, T. Naks, M. Pantel, M. Gandriau and I. Wati: GeneAuto: An Automatic Code Generator for a safe
subset of SimuLink/StateFlow. European Congress on Embedded Real Time Software (ERTS’08), Société des
Ingénieurs de l’Automobile, (2008).
[2] P. Le Guernic, J.-P. Talpin and J.-C. Le Lann: Polychrony for system design. Journal for Circuits, Systems and
Computers, Special Issue on Application Specific Hardware Design, World Scientific, (2003).
[3] Polychrony and SME. Available at http://www.irisa.fr/espresso/Polychrony.
[4] C. Brunette, J.-P. Talpin, A. Gamatié and T. Gautier: A metamodel for the design of polychronous systems.
The Journal of Logic and Algebraic Programming, 78, Elsevier, (2009).
arXiv:1601.07260v1 [] 27 Jan 2016
Quiver: Using Control Perturbations to Increase the
Observability of Sensor Data in Smart Buildings
Jason Koh, Bharathan Balaji, Vahideh Akhlaghi, Rajesh Gupta
Yuvraj Agarwal
University of California, San Diego
{jbkoh, bbalaji, vakhlagh, gupta}@eng.ucsd.edu
Carnegie Mellon University
[email protected]
Abstract—Modern buildings consist of hundreds of sensors and
actuators for monitoring and operation of systems such as HVAC,
light and security. To enable portable applications in next generation smart buildings, we need models and standardized ontologies
that represent these sensors across diverse types of buildings.
Recent research has shown that extracting information such as
sensor type with available metadata and timeseries data analysis
is difficult due to heterogeneity of systems and lack of support for
interoperability. We propose perturbations in the control system
as a mechanism to increase the observability of building systems
to extract contextual information and develop standardized models. We design Quiver, an experimental framework for actuation
of building HVAC system that enables us to perturb the control
system safely. Using Quiver, we demonstrate three applications
using empirical experiments on a real commercial building – colocation of data points, identification of point type and mapping of
dependency between actuators. Our results show that we can colocate data points in HVAC terminal units with 98.4 % accuracy
and 63 % coverage. We can identify point types of the terminal
units with 85.3 % accuracy. Finally, we map the dependency
links between actuators with an accuracy of 73.5 %, with 8.1 %
and 18.4 % false positives and false negatives respectively.
I. INTRODUCTION
Cyber-Physical Systems (CPS) represent the continuing advance of instrumentation of existing systems (such as automobiles [1]) and networks (such as transportation [2], the energy grid [3]) with a growing list of sensors and actuators. These sensing mechanisms (from tire-pressure meters to synchrophasors) provide a greater awareness of complete systems with increasingly finer resolutions of time and distance
scales. Yet, deployment and maintenance of growing sensing
infrastructure presents a significant challenge [4], [5], [6],
one that researchers have tried to address through analysis
of the sensory data. Actuation also provides us with increased
observational capabilities by actively modulating a system and
observing responses. Active control has been shown to be effective
across disparate disciplines – seismic structural design [7],
aerodynamic stability analysis [8] and fault tolerant control [9].
However, control is seldom used across most systems and
much of the literature is based on simulation studies. We
propose that control perturbations be used across a variety
of CPS applications such as information discovery, modeling,
control optimization and privacy protection.
In this paper, we explore use of perturbation control to
address sensor data management problem. Our thesis is that
a carefully selected and controlled modulation of control
algorithms can be used to discover sensor conditions and
reduce sensor maintenance work required for creation of CPS
applications. We focus on buildings as a driver application to
test this hypothesis. Buildings consist of hundreds to thousands
of sensors and actuators that are used for management of
air conditioning, lighting, fire safety and water. Prior work
has addressed many challenges to enable development of
smart building applications – standardized API for information
access [10], data management [11], [12], and semantic ontology [13]. These solutions have led to innovative applications
such as occupancy based control [14], [15], human in the loop
control [16], [17] and energy disaggregation [18]. A major
challenge in adoption of these applications is that they are
not portable across different building systems due to lack of
standardized ontologies [19], [20].
Recent works have focused on standardizing building information using existing sensor metadata and available timeseries
data to enable portable applications [19], [20], [21], [22].
These works illustrate that modern buildings consist of a
wide variety of sensors and actuators with varying naming
conventions across vendors. Oftentimes the metadata available
is not understandable to a non-expert and sometimes no
metadata is available to understand the data context. Even
when researchers used the available metadata, timeseries data
and applied state-of-the-art algorithms to identify the type of
the sensor, the overall accuracy was low [21], [22] or required
significant manual effort [19], [20]. Identifying sensor type,
however, is just one step towards creation of standardized
models that can be used by portable applications. Other
pertinent problems include determining sensor location, relationship among sensors/actuators and models which capture
the behavior of the system.
Active control is a promising approach to address the
lack of information available, as carefully designed control
perturbations can reveal insights into system behavior that is
not observed in regular operation. Recently, Pritoni et al. [23]
showed that the mapping between the Air Handler Units
(AHU) and the corresponding terminal units in the building
Heating, Ventilation and Air Conditioning (HVAC) system
can be inferred with 79% accuracy with control perturbations
compared to 32% accuracy with data analysis alone. Control
perturbations have also been studied for Fault Detection and
Diagnosis (FDD) [24], [25] and fault tolerant control [26], [27]
in HVAC systems as it eliminates mundane manual testing and
fixes some classes of faults automatically.

We expand on these ideas and show that active control mechanisms can be used as an integral part of a data model. Control based interventions are not used in practice because of equipment and safety issues. We empirically explore control in a real building HVAC system. We build Quiver, a control framework that allows us to do control experiments safely on the HVAC system by constraining control inputs to satisfy criteria such as range of values, frequency of actuation and dependency between actuators. We deploy Quiver in our building testbed and use it to demonstrate three example applications that exploit control perturbations. First, we show that perturbations can be used to identify co-located sensors, which has been shown to be difficult with data alone [28]. We co-locate data points in HVAC terminal units with 98.4% accuracy and 63% coverage. Second, we identify the point types of terminal units given ground truth point types of one terminal unit using transfer learning classification. We identify point types with 85.3% accuracy across 8 zones. Third, we map the dependency between sensors and actuators in the control system using control perturbations and probabilistic analysis. We identify dependency links between actuators with 73.5% accuracy, with 8.1% false positives and 18.4% false negatives across 5 zones.

II. OUR BUILDING TESTBED

Modern buildings consist of hundreds of networked sensors and actuators for operation and maintenance of various systems. These systems are typically overseen by a Building Management System (BMS), which helps configure, monitor, analyze and maintain the various systems. The sensors, actuators and the configuration parameters in the BMS are together referred to as points. We focus on building HVAC systems, where BMSes are most commonly used.

Our testbed is a 150,000 sq ft building, constructed in 2004, that houses a few thousand occupants in 466 rooms. The HVAC system consists of an Air Handler Unit (AHU) that supplies cool air to the building via ductwork using chilled water supplied by a central plant. A heat exchanger supplies hot water to the rest of the building using hot water supplied by the central plant. The cool air and hot water are used by local terminal units called Variable Air Volume (VAV) boxes to regulate the temperature of rooms. The area serviced by a VAV box is referred to as a thermal zone, which consists of a large room or multiple small rooms in our building. Figure 1 shows a schematic of the VAV box with the sensors and actuators installed for its operation.

Fig. 1. Sensors and actuators in a Variable Air Volume (VAV) unit that provides local control of temperature in the HVAC system [29].

VAVs have been commonplace since the 1990s [30], and their basic working is well understood. The VAV regulates the amount of cool air provided using a damper, and if the zone needs to be heated, it regulates the hot water in the heating coil using a valve. The temperature sensor in the thermal zone provides feedback on how much cooling or heating is required. However, in a real VAV box, there are over 200 points that govern its working [31]. The essential sensors include: Zone Temperature, Supply Air Flow, Reheat Valve Position and
Damper Position; and the actuator points include: Reheat Valve
Command, Thermostat Slider Adjust and Damper Command.
These actuators are controlled using many configuration points
such as Temperature Setpoint, Occupied Command, Air Flow
Setpoint, etc. These configuration points account for majority
of the points, and include nuanced parameters that ensure
minimum airflow, set the PID loop settings, etc.
Not all of these 200 points are reported to the BMS, and
only the essential sensors and control points are exposed to
limit resource usage and information overload for building
managers. In our building testbed, 14 to 17 points are reported
to the BMS for each VAV box. The points exposed to BMS
changes depending on the vendor, type of VAV and the
installation version used by the vendor. Even though the same
model of VAV is used across all zones in our building, there
are minor variations due to configuration changes, presence of
supply/exhaust fans or lack of heating.
A. Data Collection and Control
The points in our building communicate with the BMS
using BACnet [32], a standard building network protocol. We
connect our server to this network to collect data and control
the points in our building. We use BuildingDepot [33], an open
source RESTful web service based building datastore to manage the points in the building, provide appropriate permissions
to developers, and search using a tagging mechanism. Our
control framework Quiver works on top of BuildingDepot to
manage control inputs from our experiments. Figure 2 depicts
the system architecture of our deployment.

Fig. 2. System architecture of Quiver. Data collection and control is done via the BACnet protocol using the BuildingDepot web service [33]. Quiver ensures that the control sequences of our experiments are safe and rolls back the system to its original behavior in case of failure.

BACnet is a well developed protocol with which developers can not only read and write points, but also schedule hourly control, mark holidays on a calendar, and even manage programs running in the embedded VAV controller. For simplicity, we only focus on read and write points, i.e., in BACnet terminology Input, Output and Value points. These points can have floating point, binary or multi-state values, and in BACnet
a floating point that can be written to is referred to as an Analog Output. Each of these Output points has an associated priority
array. The default operation is performed at the lowest priority
and the highest levels are reserved for emergency operations
such as fire safety. Once a higher level priority is written to,
the lower levels are ignored. An explicit write with value “0”
needs to be written to the higher level priority in order to
relinquish control to the lower levels.
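As a rough illustration of how such a priority array resolves to a single commanded value, consider the following sketch. It is our own simplification for exposition, not the BACnet standard logic nor any particular library.

    # Simplified model of a BACnet-style priority array (illustrative only).
    # Index 0 is the highest priority; None marks a relinquished slot.
    def resolve(priority_array, relinquish_default):
        for value in priority_array:
            if value is not None:
                return value            # highest non-empty slot wins
        return relinquish_default       # every slot relinquished

    slots = [None] * 16
    slots[7] = 72.0                     # an experiment writes at its assigned level
    print(resolve(slots, 70.0))         # -> 72.0
    slots[7] = None                     # relinquishing the slot at the end of a run
    print(resolve(slots, 70.0))         # -> 70.0, back to default operation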
The university Facilities Management provides us with
a fixed priority level in this priority array for our control
experiments. We need to relinquish control back to the default
priority level after our control experiments to ensure that our
interference does not affect the regular operation of the HVAC
system. Quiver ensures that all the points are relinquished after
an experiment.
B. Points in Variable Air Volume Box
Figure 3 shows the points associated with VAV in our
building BMS and how these points relate to each other. At the
top of the figure, we have the zone Temperature Setpoint and
Occupied Command, which in combination with thermostat
input determine the temperature guardband within which the
VAV is trying to keep the zone temperature. The temperature
guardband is indicated by Heating and Cooling Setpoints,
which represent the lower and upper bounds of temperature
respectively. There are three occupancy modes: Occupied,
Standby and Unoccupied during which the temperature bands
are 4°F, 8°F and 12°F respectively. During the Occupied
mode, minimum amount of airflow is maintained to ensure
indoor air quality. The Thermostat Adjust allows changing the
temperature setting by ±1°F, and the Temporary Occupancy
maps to a button on the thermostat which when pressed puts
the zone to Occupied mode for two hours during nights/weekends.
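A minimal sketch of how the heating and cooling setpoints could follow from these parameters is shown below; it reflects our reading of the description (a band centered on the adjusted setpoint), not the vendor's actual control code.

    # Our reading of the guardband logic (illustrative; the real logic is vendor code).
    GUARDBAND = {"occupied": 4.0, "standby": 8.0, "unoccupied": 12.0}   # degrees F

    def setpoints(temperature_setpoint, mode, thermostat_adjust=0.0):
        """Return (heating_setpoint, cooling_setpoint) around the adjusted setpoint."""
        center = temperature_setpoint + max(-1.0, min(1.0, thermostat_adjust))
        half_band = GUARDBAND[mode] / 2.0
        return center - half_band, center + half_band

    print(setpoints(72.0, "occupied", 0.5))   # -> (70.5, 74.5)
    print(setpoints(72.0, "unoccupied"))      # -> (66.0, 78.0)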
The Heating and Cooling Setpoints determine the behavior
of the VAV control system with the measured Zone Temperature completing the feedback loop. These three points
determine the Cooling and Heating Command of the thermal
zone. The Cooling Command determines the amount of cool
air required for the zone and determines an appropriate Supply
Air Flow Setpoint that is between the designed minimum and
maximum supply air flow. When the Cooling Command is
high (∼ 100%), feedback is sent to the AHU to decrease the
supply air temperature to meet the cooling needs of the thermal
zone. The Heating Command determines the amount of reheat
required by controlling the Reheat Valve Command. During
heating, the airflow is set to the minimum to reduce chilled
airflow from AHU, and this airflow is increased when high
Heating Command (∼ 100%) fails to heat up the thermal zone
sufficiently. A high Heating Command also sends a signal to
the heat exchanger to increase the supply water temperature.
Note that only one of Heating or Cooling Commands can be
>0% at a time.
The Supply Air Flow Setpoint determined by the cooling/heating requirements in turn determines the Damper Command which is the amount of damper actuation required to
match the setpoint to the measured Supply Air Flow. The
Damper Position sensor also provides feedback to set the
appropriate Damper Command. There is a separate PID loop
associated with setting each of Heating Command, Cooling
Command, Supply Air Flow Setpoint and Damper Command,
and there are PID parameters such as those that govern
proportional gain and integration time, but these are hidden
from the BMS.
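For intuition, the mapping from Cooling Command to an airflow setpoint can be sketched as a simple interpolation between the design limits. This is only our approximation, since the embedded PID logic is hidden from the BMS.

    def supply_air_flow_setpoint(cooling_command, min_flow, max_flow):
        """Interpolate between design minimum and maximum flow (illustrative)."""
        fraction = max(0.0, min(100.0, cooling_command)) / 100.0
        return min_flow + fraction * (max_flow - min_flow)

    print(supply_air_flow_setpoint(0.0, 150.0, 900.0))     # minimum flow (cu. ft/min)
    print(supply_air_flow_setpoint(100.0, 150.0, 900.0))   # maximum flow; AHU is asked for colder air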
III. QUIVER: CONTROL FRAMEWORK
Quiver is built upon our building testbed. In this section,
we show the utility of Quiver on the VAV box to demonstrate
our control perturbation applications.
Given that exercising control over a building control system can lead to unintended, and potentially dangerous consequences, we ensure safety for the control perturbations
we perform by: (a) global time synchronization across the
computational servers running different parts of Quiver, (b)
identification of a range of ‘safe’ values for control through
metadata and data history, (c) dependency checks based on
domain knowledge, and (d) status tracking to ensure we
relinquish control at the end of our experiment and are able to
rollback the changes made in case of an application crash. We
have designed these safeguards based on five years of control
experience with BMSes and discussions with our university
Facilities Management.
We perform basic checks such as checking the range of
values exercised (i.e. min/max), type of data for each input
based on metadata available in BACnet as well as self-imposed
limits. For instance, we force the Temperature Setpoint to be
between 62o F − 78o F . Each point marked as read/write in
Figure 3 can be written to using Quiver. We synchronize time
across all our servers – BACnet Connector, BuildingDepot and
Quiver – using our university Network Time Protocol server. The time synchronization across these servers is necessary to ensure correct sequence of operations and proper data analysis.

Fig. 3. BMS points associated with VAV in our building testbed. The dependency between the points, as shown by arrows, is mapped based on domain knowledge. Read-only points are either sensors or configuration points which cannot be changed. Read/write points can be changed via BACnet.

1 The basic control framework will be presented as a poster abstract (not peer reviewed) in an upcoming conference.
A local database keeps track of all the read/write points in
the testbed building. For example, we keep a track of whether
the point is part of a current experiment, the last value that was
written to the point, the reset (default) value of the point, the
timestamp of the last write to that point, and the thermal zone
that the point belongs to. This database is used to keep track
of the status of the experiment as well as allow developers
to query it through APIs. The database is also used to ensure
that all points are reset to their default values at the end of
an experiment and, if an experiment crashes due to an error
condition, a rollback is performed to restore the points to a
safe (default) state.
Quiver limits the frequency of control of a single point, or
dependent points, to allow time for HVAC control changes to
take effect and to avoid problems such as damper oscillations
which reduces equipment life. Currently, we have set the minimum time between consecutive writes to 10 minutes. Before
writing to a BACnet point, Quiver checks for dependency
between the point and other points which have been written to
by referring to the local database. We conservatively assume that
all the control points in a particular VAV are related to each
other, and thus, only one control input can be written to a VAV
every 10 minutes. Note that there are 237 thermal zones in the
building, and each of these zones can be controlled in parallel.
We set a minimum delay of 10 seconds between each write so
that the BMS does not get overloaded. After each write, Quiver
ensures that the BMS has accepted the input and throws an
exception in case the write is rejected after retrying two times.
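A condensed sketch of the kind of checks described in this section is given below. The helper names (safe_write, bacnet_write) are ours and do not correspond to the actual Quiver API.

    import time

    MIN_WRITE_INTERVAL = 600                           # one control input per VAV per 10 minutes
    LIMITS = {"Temperature Setpoint": (62.0, 78.0)}    # example of a self-imposed range
    last_write = {}                                    # vav_id -> timestamp of last accepted write

    def safe_write(vav_id, point, value, bacnet_write):
        """Range check, rate limit, and retry before giving up (illustrative)."""
        low, high = LIMITS.get(point, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            raise ValueError("value outside the allowed range for %s" % point)
        if time.time() - last_write.get(vav_id, 0.0) < MIN_WRITE_INTERVAL:
            raise RuntimeError("too soon: dependent points in this VAV were written recently")
        for attempt in range(3):                       # initial attempt plus two retries
            if bacnet_write(vav_id, point, value):
                last_write[vav_id] = time.time()
                return True
            time.sleep(10)                             # pacing so the BMS is not overloaded
        raise RuntimeError("BMS rejected the write")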
Upon further experimentation, we found that the VAV
embedded controller has some built-in safety features that
also limit the amount of control available. For example, the
VAV does not allow the Supply Air Flow Setpoint to be set
beyond the minimum and maximum values. It also does not
allow heating when the Zone Temperature exceeds the Heating
Setpoint and disallows cooling when the temperature is lower
than the Heating Setpoint. The Damper Command represents
the change in Damper Position required, and resets itself after
the damper is actuated appropriately. This behavior is different
from the rest of the actuator points such as Supply Air Flow
Setpoint, which match the sensor value as much as possible.
We incorporate these constraints in our experiments, and
integrate them in the next version of the control framework.
IV. QUIVER: CONTROL EXPERIMENTS
We use Quiver to learn more information about the sensor
and actuator points inside a building using control perturbations. We define a control perturbation as any change made to actuators that deviates from typical HVAC operation. We
confine all of our control experiments to nights/weekends, or
in unoccupied zones only, to alleviate any effects on occupant
comfort. We focus our control experiments towards addressing
three important smart building applications:
• Identifying points which are co-located with a VAV box.
• Identifying point types within a VAV given ground truth point types of one VAV unit.
• Mapping the dependency between VAV actuator points.

All of our data analysis is implemented using the Python Scikit-Learn library [34].

Fig. 4. A sample co-location experiment, where the Temperature Setpoint oscillates between 62°F and 78°F for four hours at night (top graph). The VAV points which react to these changes (remaining four graphs) can be co-located using temporal data analysis of this controlled period.
A. Using Control Perturbation to Determine Location
The location of sensor and actuator points is not readily
available in the BMS for older buildings. In buildings where
location information is available, it is usually inconsistent
due to errors in the manual labeling process [20], [21]. It is
also difficult to co-locate points using historical data alone
as many VAVs function in a similar manner, and the variation
of data is not enough to distinguish them apart [28]. Control
perturbations can be used to force the control system to
unusual operating points, and co-located points that respond
to this perturbation can be clustered together by data analysis.
We assume that we already know the type of points in
the building which can be obtained using recently proposed
methods [20], [22], but do not know if they are co-located
or how these points relate to each other or affect the control
system. We do not use the location information already integrated into Quiver for these experiments. We perturb the
actuator point identified as the Temperature Setpoint (TS) of
a randomly chosen zone, and identify the corresponding co-located points using the temporal data features of other points. Towards the end of this section, we discuss how we can relax the assumption of knowing the point type a priori.
Figure 4 shows an example control sequence, where we
change TS 4 times across 4 hours between low (62°F) and high (78°F), and the corresponding VAV points in the same
zone that react to its changes. We chose such an oscillation
of TS as it deviates substantially from normal operation and
we can easily distinguish the controlled zone from the rest
of the zones under normal operation. This control sequence
was chosen empirically, and we show that even such simple
control sequences can be effective for co-location of points.
However, as we show with our experiments, the choice of control sequence does affect the quality of the results. We do not
focus on design of generic control sequences in this paper.
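For concreteness, the oscillation used in this experiment can be written as a short schedule of writes; quiver_write below is a hypothetical wrapper over the framework of Section III, not the actual API.

    import time

    def oscillate_setpoint(zone, quiver_write, low=62.0, high=78.0,
                           pulses=4, hold_s=3600):
        """Alternate the Temperature Setpoint between high and low values."""
        for i in range(pulses):
            quiver_write(zone, "Temperature Setpoint", high if i % 2 == 0 else low)
            time.sleep(hold_s)                               # hold for one hour before the next change
        quiver_write(zone, "Temperature Setpoint", None)     # relinquish (hypothetical convention)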
We extract basic features such as range (max - min),
mean and standard deviation from the observed timeseries
data. We also extract the Dynamic Time Warping (DTW)
distance [35] between the applied TS signal and the point
under consideration. DTW compensates for the time delay in
the reaction and change in sensor values due to a control action
and quantifies the difference between the shape of the signals.
In addition, we exploit our pulse control and analyze the Fast
Fourier Transform (FFT) after normalizing the data and use
the Euclidean distance between the FFT of the point signal and
FFT of the TS signal. We refer to this feature as “L2 norm
of FFT” or “LFT”. We ignore frequencies beyond 0.0005 Hz,
i.e. a period of 30 minutes, because we only focus on changes
caused by our low frequency control sequence.
We extract these features for all the VAV points in the
building and identify the outlier points. In principle, the point
which deviates the most from regular control operations would
be co-located with our TS point with high probability.
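A minimal sketch of these features, assuming regularly sampled timeseries aligned to the controlled period (one-minute sampling is our assumption), is given below.

    import numpy as np

    def dtw_distance(a, b):
        """Plain O(n*m) dynamic time warping distance between two sequences."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    def lft(signal, reference, sample_period_s=60, cutoff_hz=0.0005):
        """L2 norm between low-frequency FFT magnitudes of two normalized signals."""
        def spectrum(x):
            x = (np.asarray(x, float) - np.mean(x)) / (np.std(x) + 1e-9)
            freqs = np.fft.rfftfreq(len(x), d=sample_period_s)
            return np.abs(np.fft.rfft(x))[freqs <= cutoff_hz]
        s, r = spectrum(signal), spectrum(reference)
        k = min(len(s), len(r))
        return float(np.linalg.norm(s[:k] - r[:k]))

    def basic_features(x):
        x = np.asarray(x, float)
        return {"range": float(x.max() - x.min()),
                "mean": float(x.mean()), "std": float(x.std())}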
Figure 5 shows the distribution of all the Zone Temperature
(ZT) points in our building across three features – DTW, LFT
and range – with a control sequence of two changes to TS.
The zone under control is marked in red, and as observed, the
red point is far away from most of the points from the other
zones in the building. However, there are still a few points
which also differ significantly from most zones and it is
difficult to distinguish the red point from those outliers. When
we examine the data from the experiment where we made 4
changes to TS, the corresponding changes to the ZT in the
same zone resulted in much higher variation from those in
uncontrolled zones. This is captured by our features as shown
in Figure 6.

Fig. 5. Co-location of Zone Temperature by perturbing the Temperature Setpoint. Two changes of the Temperature Setpoint are applied across 2 hours in the controlled zone.

Fig. 6. Co-location of Zone Temperature by perturbing the Temperature Setpoint. Four pulses of the Temperature Setpoint are applied across 4 hours in the controlled zone.

Fig. 7. Comparison of the L2 norm between the FFT of VAV points and the FFT of the controlled Temperature Setpoint. The points corresponding to the controlled VAV have a much lower L2 norm compared to regular zones for eight point types, but the feature fails to capture the difference in Damper Command.

Hence, with the help of a well designed control
perturbation it is possible to mold the behavior of the control
system for end use applications.
We analyze the data for other point types to check if we
can co-locate the zonal points successfully. In practice, we find
that LFT feature alone is sufficient to distinguish the controlled
zone points from the rest. Figure 7 compares the LFT
of the controlled zone with other zones for point types: Zone
Temperature (ZT), Supply Air Flow Setpoint (SAFS), Supply
Air Flow (SAF), Reheat Valve Command (RVC), Heating
Command (HC), Damper Position (DP), Cooling Command
(CC), Heating Setpoint (HS) and Cooling Setpoint (CS). We
performed this control experiment on eight zones in our building, and we co-located the listed points with 98.6% accuracy.
We only failed to identify the correct Damper Position point
for one of the zones, leading to a drop in accuracy.
The Damper Command (DC) is a differential actuator that
sets the change that needs to be made to the damper. There
are several VAVs in the building which constantly change their
DC for minor variation in the air flow, and the features we
extracted – DTW, FFT, mean, variance, number of changes
– failed to differentiate the DC of the zone under control
from the rest (Figure 7). More sophisticated data analysis or
perturbation signals are required for co-location of DC points.
We could only co-locate DC points in two of the eight zones
with our current method.
Another issue with these control experiments is that we
can only co-locate those points which react to changes in
TS (see Figure 3). Points such as Occupied Command (OC)
and Thermostat Adjust, which are external inputs to the VAV
control system, cannot be co-located. To remedy this, we
perform a second set of experiments which oscillates the OC
similar to our TS control perturbations. We successfully colocated OC and points such as Cooling Setpoint and Supply Air Flow that have been already co-located with their
corresponding TS point. Thus, all of these points can be
marked as being co-located in the same VAV. We performed
the Occupied Command oscillation experiments across four
zones with 100% success rate in their co-location results. We
could not perform similar experiments on the thermostat points
(Thermostat Adjust and Temporary Occupancy). They cannot
be controlled by our platform as thermostats produce their data
continuously, and we acknowledge this is a limitation of our
proposed method.
We repeated our TS control experiments on a hot day, and
found that the same control perturbations cannot co-locate
heating related points – Heating Command (HC), Reheat Valve Command (RVC) – as they are not triggered sufficiently due to the hot outside weather. The zone cannot be cooled down enough to activate heating (HC, RVC) when its TS is changed to the high value. We need to change our control perturbations to excite these points specifically. Thus, the perturbation signature needs to be sensitive to external conditions and confounding factors.

Note that in Figure 7, the LFT feature of the controlled zone for most point types except ZT and DC differs significantly from the rest of the points. The controlled zone's LFT feature for ZT is not distinguishable from the LFT features of other point types in other zones, though it can be co-located within the same point type. In contrast to DC's divergent operational behavior, ZT's signal response is slower than that of the other types due to the heat capacity of zones. If we assume that we do not know the point type, then the points except ZT and DC can be obtained as outliers from the points belonging to normal zones. However, we would need to design an appropriate threshold or clustering technique to identify the outlier points correctly.

Overall, with our control perturbation based co-location, we successfully co-located 10 out of 16 point types (63% coverage) with 98.4% accuracy across eight VAV units. We co-located the Occupied Command points using auxiliary control experiments for four zones. We failed to co-locate Damper Command due to its divergent behavior. We also do not co-locate 4 point types which do not respond to control perturbations.

B. Identification of Point Type

We now look at the inverse problem of identifying point type given the co-located points in a VAV. Identification of point type is essential for third party applications to interpret timeseries data, and prior work has shown that data analysis alone fails to identify point types accurately [21], [22].

As shown towards the end of Section IV-A, it is possible to identify co-located points even when the individual point types are unknown. For this problem, we assume that we know the co-location of all the VAV point types and the ground truth point types for one zone. We use this information along with the timeseries data to identify point types of other zones in the building. When our data analysis fails to identify certain point types, we use control experiments to increase the coverage of point type identification. Note that the VAV points we focus on cover 79% of all points in our building testbed.

Fig. 8. Accuracy of point type identification using transfer learning across ten supervised classifiers, with one year of historical data for VAV points belonging to one zone.

With the help of the zone for which ground truth is known a priori, we train supervised classifiers using features extracted from one year of timeseries data and ground truth point types as labels. We slice one year of data into 53 weeks (partial weeks for the starting and ending week), and extract features such as mean, variance, dominant frequency, noise, skewness and kurtosis [36] for each week. Noise is represented by the error variance between the original data and its piece-wise constant approximation. Skewness measures the symmetry of the data values across the mean while kurtosis measures the peak of the data distribution at the mean. The vector of
features for one week of data represents one row of training
data for our classifier. We train many standard classifiers –
Gaussian Naive Bayes (GNB), Linear Support Vector Machine
(LSVM), Random Forest (RF), Radial Basis Function SVM
(RBF SVM), Nearest Neighbours (NN), Decision Tree (DT),
Adaboost, and Bernoulli Naive Bayes (BNB). Our test results
show high accuracy (∼100%). We use these trained models
for identifying point types of different zones with the same
features extracted from one year of data. This process is called
transfer learning, and Figure 8 summarizes the results obtained
across all the classifiers for different point types in one zone.
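A compact sketch of this transfer-learning step with scikit-learn is shown below. It uses a Random Forest and a subset of the features above (the paper selects the best classifier per point type); the helper names and one-minute sampling are our assumptions.

    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.ensemble import RandomForestClassifier

    def weekly_features(series, samples_per_week=10080):       # one-minute data assumed
        """One feature row per week: mean, variance, noise, skewness, kurtosis."""
        rows = []
        for start in range(0, len(series), samples_per_week):
            w = np.asarray(series[start:start + samples_per_week], dtype=float)
            if len(w) < 2:
                continue
            step = max(1, len(w) // 48)
            approx = np.repeat(w[::step], step)[:len(w)]        # piece-wise constant approximation
            rows.append([w.mean(), w.var(), np.var(w - approx), skew(w), kurtosis(w)])
        return rows

    def label_zone(labeled_series, labels, unlabeled_series):
        """Train on the ground-truth zone, then label each point of another zone."""
        X, y = [], []
        for series, label in zip(labeled_series, labels):
            rows = weekly_features(series)
            X.extend(rows)
            y.extend([label] * len(rows))
        clf = RandomForestClassifier(n_estimators=100).fit(X, y)
        predicted = []
        for series in unlabeled_series:
            votes = list(clf.predict(weekly_features(series)))
            predicted.append(max(set(votes), key=votes.count))  # majority vote per point
        return predicted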
We find that some point types such as Temperature Setpoint
(TS) are readily identified even with simple features such
as mean and variance across most classifiers, while other
point types work only with specific classifiers or require more
features. We experimented with 10 classifiers for three zones,
and chose the classifier which works best for each point type for analysis with other zones. Some point types such as
Heating Command (HC), Cooling Command (CC) and Zone
Temperature (ZT) are not identifiable by any classifier with
all the features we used. This is because the behavior of
these point types is similar to each other or to other identified
point types during regular operation. We repeated this analysis
across 8 zones and found similar results. Our results are
summarized in Table I. These results are corroborated by
prior work that used more sophisticated data features for type
identification [19], [21], [22].
We leverage control perturbations to identify the point types that could not be identified using data analysis. As points such
as TS and Occupied Command (OC) are easily identifiable
using data analysis, we use these points to create our control
perturbations. We put the zone in Occupied mode, and increase the TS to 78°F to force the VAV into a heating mode
for 3 hours. The same perturbation is applied for the ground
truth zone as well as all the zones for which the point type
needs to be identified. We then extract the same data features
for this control period across all the points, and use transfer
learning to label points in different zones. As the behavior of
all the controlled zones is forced to be similar, the behavior of the same point type across the zones is similar. However, the control sequence also forces the non-identified point types – SAF, SAFS, ZT, HC – to be different from the rest of the points. Thus, with this experiment, we are able to identify the rest of the point types successfully. Our results across the 8 zones are summarized in Table I. Overall, our point type identification accuracy is 85.3%.

TABLE I
RESULTS OF POINT TYPE IDENTIFICATION USING BOTH DATA ANALYSIS AND ANALYSIS WITH CONTROL PERTURBATIONS.

Point Type           | Data analysis           | With control perturbations
                     | Classifier | Accuracy   | Classifier | Accuracy
Temperature Setpoint | RF         | 100%       | -          | -
Occupied Command     | RBF SVM    | 99.2%      | -          | -
Thermostat Adjust    | AdaBoost   | 93.1%      | -          | -
Cooling Setpoint     | LSVM       | 88.9%      | -          | -
Heating Setpoint     | DT         | 87.4%      | -          | -
Zone Temperature     | RF         | 65.0%      | RF         | 88%
Min Supply Flow      | RF         | 77.8%      | -          | -
Max Supply Flow      | BNB        | 100%       | -          | -
Cooling Command      | DT         | 47.0%      | DT         | 88%
Heating Command      | DT         | 32.1%      | -          | -
Reheat Valve Cmd     | RBF SVM    | 78.4%      | -          | -
Supply Air Flow SP   | NN         | 80.3%      | -          | -
Supply Air Flow      | LSVM       | 90.6%      | -          | -
Damper Command       | RF         | 96.2%      | -          | -
Damper Position      | RF         | 80.3%      | -          | -
Our data analysis or control experiments could not distinguish between point types Heating Command (HC) and Reheat
Valve Command (RVC) as they are virtually identical in their
behavior. However, it is possible to separate these two points
using dependency analysis, which we describe next.
C. Dependency Mapping of Points
We now focus on understanding the working of the
VAV, and how the points relate to each other. As the type
of points exposed are different across vendors and equipment,
it is necessary to understand the context of these points, and
map it to a model that can be used by other applications. These
models can be built using domain knowledge, technical documents and historical data analysis [37], [38] as demonstrated
by our dependency graph in Figure 3. We propose control
perturbations as an alternative to these methods, which can be
used for either verifying already developed models or used for
older buildings where information available is insufficient for
modeling using other methods.
We assume that we already know the point type and the
co-located points in a VAV. We focus on modeling the dependencies between the actuator points (or read/write points).
We write to the Temperature Setpoint (TS) of a zone with a
randomly chosen value every 20 minutes for 6 hours. The goal
of this control sequence is to identify the points that react to
changes in TS, and so we choose random values within limits
for perturbing different operating points of the control system. For every change in TS, we analyze the behavior of the VAV points for 10 minutes, and note the points whose values change during this period. The threshold of change is one standard deviation of the values observed over the past 12 minutes. We chose these times so that we can isolate changes that occur due to the change in TS rather than other external factors such as solar radiation. For the duration of the experiment, we calculate a final probability for each of the points as the ratio of the number of changes observed for the point and the number of changes in TS. We repeat this experiment by perturbing all the actuator points – Occupied Command (OC), Cooling Command (CC), Heating Command (HC), Supply Air Flow Setpoint (SAFS) and Damper Command (DC). Note that we ignore the points related to minimum and maximum supply air flow as they are constants.

Fig. 9. Color map showing the changes induced by control perturbations of each actuator point on other actuators. Probabilities are calculated as the ratio of the number of changes observed in a non-controlled actuator and the number of changes made by the controlled actuator.
Figure 9 shows a color map representing the probabilities
obtained by perturbing each of these points. The changes to TS
affect all the actuators except OC and DC, while changes to
actuators like CC cause changes only in DC and SAFS. With
the help of this color map, we can understand which points are
being affected by each of the actuator points. However, this
does not precisely decide the dependency between points as
points which are lower in the dependency tree such as SAFS
get affected by almost all of the actuation experiments. We
find the behavior of DC to be unpredictable, and the changes
that occurred in DC with our control perturbations were smaller than our set threshold. We perform these perturbations across five zones.
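One way to compute the probabilities of Figure 9 from logged data is sketched below; the change test is our approximation of the description above (deviation beyond one standard deviation of the recent history).

    import numpy as np

    def change_probability(write_times, samples, window_s=600, history_s=720):
        """P(dependent change | controlled change) for one candidate point.

        write_times: timestamps (s) of writes to the controlled actuator.
        samples: list of (timestamp, value) pairs for the candidate point.
        """
        t = np.array([ts for ts, _ in samples], dtype=float)
        v = np.array([val for _, val in samples], dtype=float)
        changed = 0
        for w in write_times:
            past = v[(t >= w - history_s) & (t < w)]
            after = v[(t >= w) & (t < w + window_s)]
            if len(past) < 2 or len(after) == 0:
                continue
            if np.any(np.abs(after - past.mean()) > past.std()):
                changed += 1
        return changed / len(write_times) if len(write_times) else 0.0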
Figure 10 shows the relationships obtained as a result of
our analysis. The green solid links indicate relationships which
are true and confirmed with the analysis. The red dashed links
show relationships which are not true, but are shown to be
related by the analysis. In general, the above experiments
cannot identify the true links when a “cycle” is formed in
the graph. Here, by “cycle” we mean that there are multiple
paths from one point to another in the graph. We perform more
control experiments to negate the red links, and also confirm
the blue dotted links which form a cycle with the green links.
Fig. 10. Dependency links obtained by perturbing each of the actuators and checking which other actuators react to these changes. The links which require further testing are verified using conditional control perturbations. (Legend: green solid – true link, no test required; red dashed – false link, test required; blue dotted – true link, test required.)
Consider the cycle formed between the points TS, CC and
SAFS. In order to verify if the link between TS and SAFS is
correct, we perform a conditional control perturbation, where
we change TS but force CC to be unchanged, which is called
Graph Surgery [39]. We repeat this experiment for at least four
changes of TS. If TS were directly affecting SAFS, we would
observe that SAFS changes even when CC is held constant.
We verify each of the red and blue links this way. When there
are more than three points in a cycle, such as that with TS,
OC, HC and RVC, we ensure several combinations of TS
and OC are performed to test validity of the link. In some
of these experiments, we preset the TS value to a fixed value
for appropriate conditions that can activate other points such
as HC. As we note in Section III, an external variable, ZT, may disable HC or CC in certain conditions even though ZT is not an actuator. We let the VAV control system settle to a steady
state after our change of TS before performing any dependency
experiments.
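As an illustration, the conditional test for the TS-CC-SAFS cycle could be scripted as follows; quiver_write and read_point are hypothetical wrappers, and the clamped value for CC is arbitrary.

    import time

    def ts_affects_safs_directly(zone, quiver_write, read_point,
                                 changes=4, settle_s=600):
        """Change TS while holding CC fixed and count reactions of SAFS (illustrative)."""
        quiver_write(zone, "Cooling Command", 50.0)          # clamp the intermediate point
        observed = 0
        for i in range(changes):
            before = read_point(zone, "Supply Air Flow Setpoint")
            quiver_write(zone, "Temperature Setpoint", 78.0 if i % 2 == 0 else 62.0)
            time.sleep(settle_s)                             # let the system settle
            if abs(read_point(zone, "Supply Air Flow Setpoint") - before) > 0.0:
                observed += 1
        quiver_write(zone, "Temperature Setpoint", None)     # relinquish (hypothetical convention)
        quiver_write(zone, "Cooling Command", None)
        return observed / float(changes)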
We performed these experiments on five zones in our
building, and verified the links with 73.5 % accuracy with
a false positive rate of 18.4 % and false negative rate of 8.1
%. All of the false positive and 64.7 % of the false negative
are due to the external variable, ZT. Thus, we can discover
the dependency between actuator points in the VAV using
our control experiments. However, in order to discover the
complete dependency map as shown in Figure 3, we need to
use maximum likelihood data analysis as the behavior of read-only points cannot be controlled directly.
V. RELATED WORK
The problem of discovering system characteristics and models using available data is called system identification [40], and
is a well studied subject in control systems research. Using
control perturbations for system identifications is also well
known [40], [41], and the design of control perturbations,
also referred to as auxiliary or secondary signals, has been
studied for modeling different types of control systems [42],
[43]. System identification techniques have also been used for
HVAC modeling [44], and some prior work have explored
using control signals for system identification [45], [46].
However, all of these works focus on modeling the control
system or perform control optimizations and do not address
identification of contextual information such as location, point
type or dependency graphs. Moreover, the control perturbation
methods used for buildings are only verified using simulations.
Active HVAC control on real systems has been used for
fault diagnosis [24] and fault tolerant control [26]. These
works and other simulation based studies which propose fault
tolerant control [47], [48] assume the contextual information
about the system is already available. We focus on discovering
contextual information using active control.
Co-location of sensors has been studied before [49], [28] but
these works use sophisticated data analysis algorithms. These methods
fail when the points from different locations have similar
data characteristics during regular usage. We show that with
perturbations we can excite the local control system to unique
operating points and co-locate points with high accuracy using
simple data analytics.
Point type identification has also been studied earlier using
both metadata [19], [20] and data analytics [21], [22]. These
works show that metadata alone can be unreliable and requires
significant manual input for accurate type identification, and
the data analytics based method is useful for some point
types but fails for others. Our results conform to the data
analysis works, and we show that perturbations can be used
to identify the points which are difficult with data analysis.
We also note that Hong et al. [22] focus on transfer learning
across buildings, while we focus on transfer learning within
the building across the different instances of VAV units.
In addition, we focus on creating dependency graphs between points in the HVAC system. Pritoni et al. [23] use control of AHU discharge air temperature to map the VAV units
to their corresponding AHUs. We present control techniques
to isolate relationships within the VAV unit.
We have presented a detailed view of how HVAC VAV
units work in practice, and how our Quiver control framework
has been designed for safe control of the VAV unit, and
provides features such as query of current status and rollbacks
in case of crashes. Dawson-Haggerty et al. [50] provide
similar mechanisms to provide safe operations, but the onus of
rollbacks and error checking is on the application developer.
They also provide support for multi-user control with timed
leases and locks, while we only support single user control.
VI. LIMITATIONS AND FUTURE WORK
We have shown that control perturbations can be useful in
discovery of contextual information in HVAC VAV units in our
building testbed. This is just a first step, and more research
is needed to extend these results to other types of equipment
and control systems. We hope that the research community
embraces control perturbations as a tool and generalizes the
results across different buildings, vendors and even other cyber
physical systems.
Although our control perturbations were simple and effective for extracting information we were interested in, we
observe that design of perturbation signatures can affect the
results significantly depending on external disturbances and
behavior of the control system. We will focus on automating
the perturbation signals in future work, borrowing ideas from the
system identification literature [43]. We have also forced our
control experiments to be done on either weekends or nights.
Our work needs to be extended to perform these experiments
even during work hours with perturbation signals that conform
to comfort constraints.
To fully exploit control capability for both perturbation
applications and control optimizations, we need to provide
support to third party developers for direct control of systems.
Initial work in both academia [50] and industry [51] have
extended traditional vendor specific controls to common APIs.
Our control framework, Quiver, allows safe control by a single
user. Further work needs to be done to provide safety guarantees so that multiple users can access the control system at the
same time and developers are provided sandboxes or emulation
tools to experiment with different control applications.
VII. CONCLUSION
Traditional Building Management Systems are vendor specific vertically integrated solutions. To enable third party
building applications that exploit sensor information, recent
works have created standardized APIs and data management
solutions which have led to a spurt of innovative applications.
We extend these works by enabling control of building systems
and demonstrate applications that exploit control to extract
contextual information. We design Quiver, a control framework that allows safe experimentation of HVAC VAV units by
integrating domain knowledge gained through experience and
empirical experiments. Using Quiver, we demonstrate three
control perturbation applications that extract context information about VAV units – co-location of points, identification of
point types and mapping the dependency between points. We
co-locate 63% of VAV points with 98.4% accuracy, we identify
point types with 85.3 % accuracy across 8 zones, and we
identify dependencies between VAV actuator points with 73.5
% accuracy across 5 zones.
REFERENCES
[1] C. Berger and B. Rumpe, “Autonomous driving-5 years after the urban
challenge: The anticipatory vehicle as a cyber-physical system,” arXiv
preprint arXiv:1409.0413, 2014.
[2] H. Abid, L. T. T. Phuong, J. Wang, S. Lee, and S. Qaisar, “V-cloud:
vehicular cyber-physical systems and cloud computing,” in Proceedings
of the 4th International Symposium on Applied Sciences in Biomedical
and Communication Technologies. ACM, 2011, p. 165.
[3] S. Karnouskos, “Cyber-physical systems in the smartgrid,” in Industrial
Informatics (INDIN), 2011 9th IEEE International Conference on.
IEEE, 2011, pp. 20–23.
[4] P. Derler, E. Lee, A. S. Vincentelli et al., “Modeling cyber–physical
systems,” Proceedings of the IEEE, vol. 100, no. 1, pp. 13–28, 2012.
[5] S. Karnouskos, “Stuxnet worm impact on industrial cyber-physical
system security,” in IECON 2011-37th Annual Conference on IEEE
Industrial Electronics Society. IEEE, 2011, pp. 4490–4494.
[6] B. Balaji, A. Faruque, M. Abdullah, N. Dutt, R. Gupta, and Y. Agarwal,
“Models, abstractions, and architectures: the missing links in cyberphysical systems,” in Proceedings of the 52nd Annual Design Automation Conference. ACM, 2015, p. 82.
[7] M. D. Symans and M. C. Constantinou, “Semi-active control systems for
seismic protection of structures: a state-of-the-art review,” Engineering
structures, vol. 21, no. 6, pp. 469–487, 1999.
[8] A. Epstein, J. Ffowcs Williams, and E. Greitzer, “Active suppression of
aerodynamic instabilities in turbomachines,” Journal of Propulsion and
Power, vol. 5, no. 2, pp. 204–211, 1989.
[9] M. Blanke and J. Schröder, Diagnosis and fault-tolerant control.
Springer, 2006, vol. 2.
[10] S. Dawson-Haggerty, X. Jiang, G. Tolle, J. Ortiz, and D. Culler, “sMAP:
A simple measurement and actuation profile for physical information,”
in Proceedings of the 8th ACM Conference on Embedded Networked
Sensor Systems. ACM, 2010, pp. 197–210.
[11] P. Arjunan, N. Batra, H. Choi, A. Singh, P. Singh, and M. B. Srivastava,
“SensorAct: A privacy and security aware federated middleware for
building management,” in Proceedings of the Fourth ACM Workshop on
Embedded Sensing Systems for Energy-Efficiency in Buildings. ACM,
2012, pp. 80–87.
[12] A. Krioukov, G. Fierro, N. Kitaev, and D. Culler, “Building application
stack (bas),” in Proceedings of the Fourth ACM Workshop on Embedded
Sensing Systems for Energy-Efficiency in Buildings. ACM, 2012, pp.
72–79.
[13] Project Haystack. http://project-haystack.org/.
[14] B. Balaji, J. Xu, A. Nwokafor, R. Gupta, and Y. Agarwal, “Sentinel:
occupancy based hvac actuation using existing wifi infrastructure within
commercial buildings,” in Proceedings of the 11th ACM Conference on
Embedded Networked Sensor Systems. ACM, 2013, p. 17.
[15] P. X. Gao and S. Keshav, “Spot: a smart personalized office thermal
control system,” in Proceedings of the fourth international conference
on Future energy systems. ACM, 2013, pp. 237–246.
[16] V. L. Erickson and A. E. Cerpa, “Thermovote: participatory sensing
for efficient building hvac conditioning,” in Proceedings of the Fourth
ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in
Buildings. ACM, 2012, pp. 9–16.
[17] A. Krioukov, S. Dawson-Haggerty, L. Lee, O. Rehmane, and D. Culler,
“A living laboratory study in personalized automated lighting controls,”
in Proceedings of the third ACM workshop on embedded sensing systems
for energy-efficiency in buildings. ACM, 2011, pp. 1–6.
[18] X. Jiang, M. Van Ly, J. Taneja, P. Dutta, and D. Culler, “Experiences
with a high-fidelity wireless building energy auditing network,” in
Proceedings of the 7th ACM Conference on Embedded Networked
Sensor Systems. ACM, 2009, pp. 113–126.
[19] B. Balaji, C. Verma, B. Narayanaswamy, and Y. Agarwal, “Zodiac: Organizing Large Deployment of Sensors to Create Reusable Applications
for Buildings,” in Proceedings of the ACM Conference on Embedded
Systems For Energy-Efficient Built Environments. ACM, 2015.
[20] A. Bhattacharya, D. Culler, D. Hong, K. Whitehouse, J. Ortiz, and
E. Wu, “Automated Metadata Construction To Support Portable Building
Applications,” in Proceedings of the ACM Conference on Embedded
Systems For Energy-Efficient Built Environments. ACM, 2015.
[21] J. Gao, J. Ploennigs, and M. Berges, “A Data-driven Meta-data Inference
Framework for Building Automation Systems,” in Proceedings of the
ACM Conference on Embedded Systems For Energy-Efficient Built
Environments. ACM, 2015.
[22] D. Hong, H. Wang, J. Ortiz, and K. Whitehouse, “The Building Adapter:
Towards Quickly Applying Building Analytics at Scale,” in Proceedings
of the ACM Conference on Embedded Systems For Energy-Efficient Built
Environments. ACM, 2015.
[23] M. Pritoni, A. Bhattacharya, D. Culler, and M. Modera, “Short Paper: A
Method for Discovering Functional Relationships Between Air Handling
Units and Variable-Air-Volume Boxes From Sensor Data,” in Proceedings of the ACM Conference on Embedded Systems For Energy-Efficient
Built Environments. ACM, 2015.
[24] J. Weimer, S. A. Ahmadi, J. Araujo, F. M. Mele, D. Papale, I. Shames,
H. Sandberg, and K. H. Johansson, “Active actuator fault detection and
diagnostics in hvac systems,” in Proceedings of the Fourth ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in Buildings.
ACM, 2012, pp. 107–114.
[25] M. Padilla and D. Choinière, “A combined passive-active sensor fault
detection and isolation approach for air handling units,” Energy and
Buildings, vol. 99, pp. 214–219, 2015.
[26] N. Fernandez, M. R. Brambley, and S. Katipamula, Self-correcting
HVAC Controls: Algorithms for Sensors and Dampers in Air-handling
Units. Pacific Northwest National Laboratory, 2009.
[27] M. Padilla, D. Choinière, and J. A. Candanedo, “A model-based strategy
for self-correction of sensor faults in vav air handling units,” Science
and Technology for the Built Environment, no. just-accepted, pp. 00–00,
2015.
[28] D. Hong, J. Ortiz, K. Whitehouse, and D. Culler, “Towards automatic
spatial verification of sensor placement in buildings,” in Proceedings
of the 5th ACM Workshop on Embedded Systems For Energy-Efficient
Buildings. ACM, 2013, pp. 1–8.
[29] B. Balaji, H. Teraoka, R. Gupta, and Y. Agarwal, “Zonepac: Zonal
power estimation and control via hvac metering and occupant feedback,”
in Proceedings of the 5th ACM Workshop on Embedded Systems For
Energy-Efficient Buildings. ACM, 2013, pp. 1–8.
[30] M. Hydeman, Advanced Variable Air Volume: System Design Guide:
Design Guidelines. California Energy Commission, 2003.
[31] “VAV Terminal Control Applications Application Note,” Johnson Controls Technical Bulletin, 2003.
[32] S. T. Bushby, “BACnetTM : A standard communication infrastructure for
intelligent buildings,” Automation in Construction, vol. 6, no. 5, pp.
529–540, 1997.
[33] T. Weng, A. Nwokafor, and Y. Agarwal, “Buildingdepot 2.0: An
integrated management system for building analysis and control,” in
Proceedings of the 5th ACM Workshop on Embedded Systems For
Energy-Efficient Buildings. ACM, 2013, pp. 1–8.
[34] “scikit-learn Machine Learning in Python,” 2015, http://scikit-learn.org/.
[35] D. J. Berndt and J. Clifford, “Using dynamic time warping to find
patterns in time series.” in KDD workshop, vol. 10, no. 16. Seattle,
WA, 1994, pp. 359–370.
[36] R. S. Tsay, Analysis of financial time series. John Wiley & Sons, 2005,
vol. 543.
[37] A. Foucquier, S. Robert, F. Suard, L. Stéphan, and A. Jay, “State of
the art in building modelling and energy performances prediction: A
review,” Renewable and Sustainable Energy Reviews, vol. 23, pp. 272–
288, 2013.
[38] F. V. Jensen, An introduction to Bayesian networks. UCL press London,
1996, vol. 210.
[39] J. Pearl, Causality. Cambridge university press, 2009.
[40] L. Ljung, System identification. Springer, 1998.
[41] K. Godfrey, Perturbation signals for system identification. Prentice
Hall International (UK) Ltd., 1993.
[42] K. Godfrey, H. Barker, and A. Tucker, “Comparison of perturbation
signals for linear system identification in the frequency domain,” in
Control Theory and Applications, IEE Proceedings-, vol. 146, no. 6.
IET, 1999, pp. 535–548.
[43] K. Godfrey, A. Tan, H. Barker, and B. Chong, “A survey of readily
accessible perturbation signals for system identification in the frequency
domain,” Control Engineering Practice, vol. 13, no. 11, pp. 1391–1402,
2005.
[44] J. Teeter and M.-Y. Chow, “Application of functional link neural network
to hvac thermal dynamic system identification,” Industrial Electronics,
IEEE Transactions on, vol. 45, no. 1, pp. 170–176, 1998.
[45] J. Sousa, R. Babuška, and H. Verbruggen, “Fuzzy predictive control
applied to an air-conditioning system,” Control Engineering Practice,
vol. 5, no. 10, pp. 1395–1406, 1997.
[46] G. Virk, J. Cheung, and D. Loveday, “Practical stochastic multivariable
identification for buildings,” Applied mathematical modelling, vol. 19,
no. 10, pp. 621–636, 1995.
[47] X.-F. Liu and A. Dexter, “Fault-tolerant supervisory control of vav airconditioning systems,” Energy and Buildings, vol. 33, no. 4, pp. 379–
389, 2001.
[48] S. Wang and Y. Chen, “Fault-tolerant control for outdoor ventilation
air flow rate in buildings based on neural network,” Building and
Environment, vol. 37, no. 7, pp. 691–704, 2002.
[49] R. Fontugne, J. Ortiz, D. Culler, and H. Esaki, “Empirical mode
decomposition for intrinsic-relationship extraction in large sensor deployments,” in Workshop on Internet of Things Applications, IoT-App,
vol. 12, 2012.
[50] S. Dawson-Haggerty, A. Krioukov, J. Taneja, S. Karandikar, G. Fierro,
N. Kitaev, and D. E. Culler, “Boss: Building operating system services.”
in NSDI, vol. 13, 2013, pp. 443–458.
[51] “NiagaraAX by Tridium,” 2015, http://www.niagaraax.com/.
SuperMinHash – A New Minwise Hashing Algorithm
for Jaccard Similarity Estimation
Otmar Ertl
Linz, Austria
[email protected]
arXiv:1706.05698v1 [] 18 Jun 2017
ABSTRACT
This paper presents a new algorithm for calculating hash signatures
of sets which can be directly used for Jaccard similarity estimation.
The new approach is an improvement over the MinHash algorithm,
because it has a better runtime behavior and the resulting signatures
allow a more precise estimation of the Jaccard index.
1 INTRODUCTION
The Jaccard index

    J = |A ∩ B| / |A ∪ B|

is a measure for the similarity of two sets A and B. If one is interested in pairwise similarities of many sets the direct calculation is often computationally too expensive. Therefore, different algorithms [1, 3, 5, 7–9] have been proposed, which first calculate hash signatures of individual sets. The Jaccard index can then be quickly determined given only the signatures of the corresponding two sets. Each signature contains condensed information about its corresponding set which is relevant for Jaccard index estimation.

1.1 MinHash Algorithm
The MinHash algorithm [1] was the first approach to calculate signatures suitable for Jaccard index estimation. The signature consists of m values (h_0, h_1, ..., h_{m−1}) which are defined for a given data set D by

    h_j(D) := min_{d ∈ D} (r_j(d)).    (1)

The functions r_j are independent and uniform hash functions with value range [0, 1). The signature size m is a free parameter and allows trading space and computation time for more precise estimates.

The probability that signature values are equal for two different sets A and B corresponds to the Jaccard index

    P(h_j(A) = h_j(B)) = P(h_j(A ∩ B) = h_j(A ∪ B)) = |A ∩ B| / |A ∪ B| = J.    (2)

Here we used the equivalence h_j(A) = h_j(B) ⇔ h_j(A ∩ B) = h_j(A ∪ B). Therefore,

    Ĵ = (1/m) Σ_{j=0}^{m−1} I(h_j(A) = h_j(B))    (3)

is an unbiased estimator for the Jaccard index. I denotes the indicator function. Since all signature values are independent and identically distributed, the sum of indicators corresponds to a binomial distribution with sample size m and success probability J. Hence, the variance of the estimator is given by

    Var(Ĵ) = J(1 − J) / m.    (4)

Algorithm 1 demonstrates the calculation of the MinHash signature for a given input data sequence d_0, d_1, ..., d_{n−1} of length n. Since the input data may contain duplicates we generally have |D| ≤ n for the cardinality of the set D = {d_0, d_1, ..., d_{n−1}}. For simplicity Algorithm 1 and also the algorithms that are presented later are expressed in terms of a pseudo-random number generator. Assuming independent and uniform hash functions r_j, the sequence r_0(d), r_1(d), ... behaves statistically like the output of an ideal pseudo-random generator with seed d. By chaining the hash values of different hash functions, random bit sequences of arbitrary length can be realized. In practice, the next hash function is evaluated only if all bits of the previous hash value have been consumed.

The runtime complexity of MinHash is O(mn), because the inner loop is executed mn times. Since m is large for many applications, more efficient algorithms are desirable.

Algorithm 1 MinHash algorithm.
    Input: (d_0, d_1, ..., d_{n−1})
    Output: (h_0, h_1, ..., h_{m−1}) ∈ [0, 1)^m
    (h_0, h_1, ..., h_{m−1}) ← (∞, ∞, ..., ∞)
    for i ← 0, 1, ..., n − 1 do
        initialize pseudo-random generator with seed d_i
        for j ← 0, 1, ..., m − 1 do
            r ← uniform random number from [0, 1)
            h_j ← min(h_j, r)
        end for
    end for
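For concreteness, the following Python sketch mirrors Algorithm 1 and the estimator (3). The use of a seeded random generator in place of the hash functions r_j, and the helper names, are illustrative assumptions rather than part of the original algorithm description.

```python
import random

def minhash_signature(data, m):
    """MinHash signature (Algorithm 1): one pass over the input, m pseudo-random
    values per element, keeping the minimum per signature component."""
    h = [float("inf")] * m
    for d in data:
        rng = random.Random(d)          # plays the role of r_0(d), r_1(d), ...
        for j in range(m):
            h[j] = min(h[j], rng.random())
    return h

def estimate_jaccard(sig_a, sig_b):
    """Estimator (3): fraction of signature components that agree."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

# Example: two overlapping sets with true Jaccard index 50/150
a = {f"item{i}" for i in range(100)}
b = {f"item{i}" for i in range(50, 150)}
print(estimate_jaccard(minhash_signature(a, 256), minhash_signature(b, 256)))
```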
1.2 One Permutation Hashing
The first approach that significantly reduced the calculation time
was one permutation hashing [5]. The idea is to divide the input
set D randomly into m disjoint subsets D 0 , D 1 , . . . , Dm−1 . The hash
signature is calculated using a single hash function r
    h_j(D) := min_{d ∈ D_j} (r(d)).
This procedure results in an optimal runtime complexity of O(m+n).
Unfortunately, for small input sets, especially if |D| < m, many subsets are empty and corresponding signature values are undefined.
Various densification algorithms have been proposed to resolve this
problem [7–9], which fill undefined positions in the signature by
copying defined values in such a way that estimator (3) remains
unbiased. However, all densified hash signatures lead to less precise
Jaccard index estimates compared to MinHash for small data sets with |D| ≪ m. In addition, the best densification scheme in terms of precision presented in [7] has a runtime that scales quadratically with the signature size m for very small data sets [3]. Another disadvantage is that signatures of different sets can no longer be merged after densification to construct the signature for the corresponding union set.
1.3 Fast Similarity Sketching
Recently, a new algorithm called fast similarity sketching has been presented [3] that achieves a runtime complexity of O(n + m log m) for the case that the input does not contain duplicates (n = |D|). It was also shown that the variance of the Jaccard index estimator is significantly improved for small data sets. However, in contrast to MinHash it cannot be directly used as a streaming algorithm, because multiple passes over the input data are needed. Moreover, the computation time is approximately twice that of MinHash for small data sets with |D| ≪ m.

1.4 Outline
In the following we present a new algorithm for the calculation of signatures appropriate for Jaccard index estimation. We call the new algorithm SuperMinHash, because it generally supersedes MinHash. We will prove that the variance of the Jaccard index estimator (3) is strictly smaller for the same signature size. In addition, we will show that the runtime for calculating the signatures is comparable for small data sets while it is significantly better for larger data sets as it follows an O(n + m log² m) scaling law for n = |D|. Furthermore, like MinHash, the new algorithm requires only a single pass over the input data, which allows a straightforward application to data streams or big data sets that do not fit into memory as a whole.

2 SUPERMINHASH ALGORITHM
The new algorithm is based on a hash signature defined by

    h_j(D) := min_{d ∈ D} (r_j(d) + π_j(d)).    (5)

Here we extended (1) by adding elements of a random permutation

    π(d) = ( 0       1       ···  m−1
             π_0(d)  π_1(d)  ···  π_{m−1}(d) )

that is generated for each input element d. Since the values r_j(d_0) + π_j(d_0), ..., r_j(d_{n−1}) + π_j(d_{n−1}) are still mutually independent and uniformly distributed over [0, m), (2) also holds here and the Jaccard index estimator (3) will give unbiased results. However, in contrast to MinHash, the signature values h_0, h_1, ..., h_{m−1} are no longer independent. As we will see, this is the reason for the improved precision when estimating the Jaccard index for small sets.

The new approach requires the generation of random permutations for each input data element. Fisher–Yates shuffling is the standard algorithm for this purpose [4]. The shuffling algorithm uses uniformly distributed integer numbers. An algorithm for the generation of strict uniform random integers that is efficient regarding random bit consumption is described in [6].

A straightforward implementation of (5) would look like Algorithm 2. Obviously, the runtime complexity is still O(nm). However, in the following we describe a couple of algorithmic optimizations which finally end up in the new SuperMinHash algorithm.

Algorithm 2 Straightforward calculation of the new signature defined by (5) using Fisher–Yates shuffling.
    Input: (d_0, d_1, ..., d_{n−1})
    Output: (h_0, h_1, ..., h_{m−1}) ∈ [0, m)^m
    (h_0, h_1, ..., h_{m−1}) ← (∞, ∞, ..., ∞)
    for i ← 0, 1, ..., n − 1 do
        initialize pseudo-random generator with seed d_i
        (p_0, p_1, ..., p_{m−1}) ← (0, 1, ..., m − 1)
        for j ← 0, 1, ..., m − 1 do
            k ← uniform random number from {j, ..., m − 1}
            swap p_j and p_k
        end for
        for j ← 0, 1, ..., m − 1 do
            r ← uniform random number from [0, 1)
            h_j ← min(h_j, r + p_j)
        end for
    end for

2.1 Optimization
As a first step towards our final algorithm we merge both inner loops in Algorithm 2 and eliminate the initialization of array (p_0, p_1, ..., p_{m−1}) as demonstrated by Algorithm 3. The trick is to introduce a second array (q_0, q_1, ..., q_{m−1}) which is used to mark corresponding entries in (p_0, p_1, ..., p_{m−1}) as initialized during the i-th outer loop cycle. p_k is regarded as initialized if and only if q_k = i. Otherwise, p_k is set equal to k when accessed first and q_k is simultaneously set equal to i to flag the entry as initialized.

A second modification compared to Algorithm 2 is that the signature value update h_j ← min(h_j, r + p_j) has been replaced by h_{p_j} ← min(h_{p_j}, r + j). Both variants are statistically equivalent, because it does not make any difference whether we interpret the randomly generated permutation as π(d) or as its inverse π^{−1}(d).

Algorithm 3 Transformed version of Algorithm 2.
    Input: (d_0, d_1, ..., d_{n−1})
    Output: (h_0, h_1, ..., h_{m−1}) ∈ [0, m)^m
    (h_0, h_1, ..., h_{m−1}) ← (∞, ∞, ..., ∞)
    allocate array (p_0, p_1, ..., p_{m−1})
    (q_0, q_1, ..., q_{m−1}) ← (−1, −1, ..., −1)
    for i ← 0, 1, ..., n − 1 do
        initialize pseudo-random generator with seed d_i
        for j ← 0, 1, ..., m − 1 do
            r ← uniform random number from [0, 1)
            k ← uniform random number from {j, ..., m − 1}
            if q_j ≠ i then
                q_j ← i; p_j ← j
            end if
            if q_k ≠ i then
                q_k ← i; p_k ← k
            end if
            swap p_j and p_k
            h_{p_j} ← min(h_{p_j}, r + j)
        end for
    end for

Algorithm 3 shows potential for further improvement. We see that the signature value updates r + j are strictly increasing within the inner loop. Therefore, if we knew the current maximum of all current signature values, we would be able to leave the inner loop early. The solution is to maintain a histogram over the integral parts of the current signature values

    b_k := Σ_{j=0}^{m−1} I(⌊h_j⌋ = k)      for k ∈ {0, 1, ..., m − 2},
    b_{m−1} := Σ_{j=0}^{m−1} I(h_j ≥ m − 1),

and also to keep track of the maximum non-zero histogram entry

    a := max({j | b_j > 0}).

Knowing a allows escaping the inner loop as soon as j > a, because further signature value updates are not possible in this case. The result of all these optimizations is the new SuperMinHash algorithm as shown in Algorithm 4.

Algorithm 4 SuperMinHash algorithm which is an optimized version of Algorithm 3.
    Input: (d_0, d_1, ..., d_{n−1})
    Output: (h_0, h_1, ..., h_{m−1}) ∈ [0, m)^m
    (h_0, h_1, ..., h_{m−1}) ← (∞, ∞, ..., ∞)
    allocate array (p_0, p_1, ..., p_{m−1})
    (q_0, q_1, ..., q_{m−1}) ← (−1, −1, ..., −1)
    (b_0, b_1, ..., b_{m−2}, b_{m−1}) ← (0, 0, ..., 0, m)
    a ← m − 1
    for i ← 0, 1, ..., n − 1 do
        initialize pseudo-random generator with seed d_i
        j ← 0
        while j ≤ a do
            r ← uniform random number from [0, 1)
            k ← uniform random number from {j, ..., m − 1}
            if q_j ≠ i then
                q_j ← i; p_j ← j
            end if
            if q_k ≠ i then
                q_k ← i; p_k ← k
            end if
            swap p_j and p_k
            if r + j < h_{p_j} then
                j′ ← min(⌊h_{p_j}⌋, m − 1)
                h_{p_j} ← r + j
                if j < j′ then
                    b_{j′} ← b_{j′} − 1
                    b_j ← b_j + 1
                    while b_a = 0 do
                        a ← a − 1
                    end while
                end if
            end if
            j ← j + 1
        end while
    end for

Figure 1: The function α(m, u) over u for different signature sizes m. The crosses represent values obtained through simulations.
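As a concrete illustration, the following Python sketch follows the structure of Algorithm 4. The seeded random generator standing in for the hash functions and the chosen container types are assumptions made for readability, not part of the original specification.

```python
import random

def superminhash_signature(data, m):
    """One-pass SuperMinHash signature in the spirit of Algorithm 4."""
    h = [float("inf")] * m
    p = [0] * m                  # partially built Fisher-Yates permutation
    q = [-1] * m                 # marks which element last initialized p[k]
    b = [0] * (m - 1) + [m]      # histogram of floor(h_j); last bin counts h_j >= m-1
    a = m - 1                    # largest index with a non-zero histogram entry
    for i, d in enumerate(data):
        rng = random.Random(d)   # stands in for r_j(d) and the shuffle randomness
        j = 0
        while j <= a:
            r = rng.random()
            k = rng.randint(j, m - 1)
            if q[j] != i:
                q[j], p[j] = i, j
            if q[k] != i:
                q[k], p[k] = i, k
            p[j], p[k] = p[k], p[j]
            if r + j < h[p[j]]:
                old = h[p[j]]
                j2 = m - 1 if old >= m - 1 else int(old)   # bin of the old value
                h[p[j]] = r + j
                if j < j2:
                    b[j2] -= 1
                    b[j] += 1
                    while b[a] == 0:
                        a -= 1
            j += 1
    return h

def estimate_jaccard(sig_a, sig_b):
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
```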
2.2 Precision
As proven in the appendix the variance of estimator (3) for the new signature is

    Var(Ĵ) = (J(1 − J)/m) · α(m, u)    (6)

where u := |A ∪ B| is the union cardinality. The function α(m, u) is defined as

    α(m, u) := 1 − [Σ_{l=1}^{m−1} l^u ((l + 1)^u + (l − 1)^u − 2 l^u)] / [(m − 1)^{u−1} m^u (u − 1)].    (7)
The function is always in the range [0, 1), because the term (l + 1)^u + (l − 1)^u − 2 l^u is positive for u > 1. α(m, u) corresponds to the reduction factor of the variance relative to that of MinHash signatures (4). Fig. 1 shows the function for different values of m. Interestingly, α(m, u) depends only on the union cardinality u and the signature size m and not on the Jaccard index J. Compared to MinHash, the variance is smaller by approximately a factor of two in case u < m.
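A direct transcription of (7) makes this variance reduction easy to check numerically; the following small Python helper is only an illustrative sketch of that formula.

```python
def alpha(m, u):
    """Variance reduction factor alpha(m, u) from Eq. (7); defined for u > 1."""
    if u < 2:
        raise ValueError("alpha(m, u) requires union cardinality u > 1")
    num = sum(l**u * ((l + 1)**u + (l - 1)**u - 2 * l**u) for l in range(1, m))
    den = (m - 1)**(u - 1) * m**u * (u - 1)
    return 1.0 - num / den

# For u much smaller than m the variance is roughly halved compared to MinHash:
print(alpha(256, 8))   # approximately 0.5
```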
To verify (6) we have conducted some simulations to determine
the variance of the Jaccard index estimator for two random sets A
and B experimentally. We considered the cases |A \ B| = |B \ A| = |A ∩ B| = 2^k with u = 3 · 2^k and the cases |A \ B|/2 = |B \ A| = |A ∩ B| = 2^k with u = 4 · 2^k, both for k ∈ {0, 1, ..., 11}. For each case 100 000 different triples of disjoint sets S_{A\B}, S_{B\A}, and S_{A∩B} have been randomly generated with cardinalities |A \ B|, |B \ A|, and |A ∩ B|, respectively. Then the sets A and B are constructed using A = S_{A\B} ∪ S_{A∩B} and B = S_{B\A} ∪ S_{A∩B}. After calculating the
corresponding hash signatures, their common Jaccard index has
been estimated. The estimates of all 100 000 simulation runs have
been used to calculate the variance and also α(m, u) by dividing by
the theoretical MinHash variance (4). The experimental results are
shown as crosses in Fig. 1 and confirm the theoretically derived
formula (7).
For all simulation runs we used the 128-bit version of the MurmurHash3 algorithm, which also allows specifying a seed. We used a
predefined sequence of seed values to generate an arbitrary number
of hash values for a given data element, which are used as bit source
for pseudo-random number generation.
2.3 Runtime
To analyze the runtime of Algorithm 4 we first consider the case that all inserted elements are distinct (n = |D|). The expected runtime is given by the expected total number of inner (while) loop iterations, denoted by T = T(n, m), that are needed when inserting n elements. If t_s denotes the average number of element insertions until a becomes smaller than s, we can write

    T(n, m) = n + Σ_{s=1}^{m−1} min(t_s, n).

Since a is smaller than s as soon as each signature value is less than s, t_s can be regarded as the average number of random permutations that are necessary until any value of {0, 1, ..., s − 1} was mapped to each signature index. This corresponds to the coupon collector's problem with collection size m and group drawings of size s, where each drawing gives s distinct coupons [10]. In our case the complete collection corresponds to the m signature indices. Drawing a group of coupons corresponds to selecting the first s indices after permuting a list with all m of them.

For the classical coupon collector's problem with group size s = 1 we have the well known solution [2]

    t_1 = m H_m.

Here H_m := 1 + 1/2 + ... + 1/m denotes the m-th harmonic number. Unfortunately, there is no simple expression for s ≥ 2 [10]. However, it is easy to find an upper bound for t_s. Let ρ_l be the probability that l drawings are necessary to complete the coupon collection for the classical case with group size 1. By definition, we have Σ_{l=1}^∞ ρ_l l = t_1 = m H_m with Σ_{l=1}^∞ ρ_l = 1. If l drawings are necessary to complete the collection for the case s = 1, it is obvious that not more than ⌈l/s⌉ drawings will be necessary for the general case with group size s. Therefore, we can find the upper bound

    t_s ≤ Σ_{l=1}^∞ ρ_l ⌈l/s⌉ ≤ Σ_{l=1}^∞ ρ_l (l + s − 1)/s = (m H_m + s − 1)/s.

Using this inequality together with min(t_s, n) ≤ t_s we get

    T(n, m) ≤ n + Σ_{s=1}^{m−1} t_s ≤ n + (m H_m − 1) H_{m−1} + m − 1 = n + O(m log² m) = O(n + m log² m).    (8)

Here we used the relationship H_m = O(log m). In any case the worst case runtime is limited by the maximum number of inner loop iterations, which is equal to nm, if the shortcut introduced in Algorithm 4 never comes into play. Thus, the new algorithm never needs more inner loop cycles than the MinHash algorithm.

To better understand the runtime of Algorithm 4 compared to the MinHash algorithm, we investigated the average number of inner loop cycles per inserted data element T(n, m)/n. For the new algorithm we expect that this number starts at m and decreases to 1 as n → ∞ because of (8). In contrast, the MinHash algorithm always needs m inner loop iterations regardless of the input data size n.

Fig. 2 shows the experimentally determined average number of inner loop cycles for set sizes n = 2^k with k ∈ {0, 1, ..., 20} and n = 3 · 2^k with k ∈ {0, 1, ..., 19}, based on 1000 randomly generated data sets, respectively. The results indeed show that the amortized costs for a single data element insertion correspond to a single inner loop execution as n → ∞. Thus, the runtime performance is improved up to a factor of m relative to MinHash.

Figure 2: The average number of inner loop iterations in Algorithm 4 per inserted data element over the set size n for different signature sizes m.

If the input data d_0, d_1, ..., d_{n−1} contains duplicates, hence |D| < n, the runtime will be longer, because repeated insertions of identical values will not change any state in Algorithm 4. If we assume that the order of input elements is entirely random, the average number of inner loop iterations will be T(|D|, m) · n/|D|, which gives T(|D|, m)/|D| per inserted element. In the worst case, if many input elements are equal and sorted, the number of inner loop iterations per input element is still limited by m and thus equal to that of MinHash.

3 CONCLUSION
We have presented the SuperMinHash algorithm which can be used as a full replacement for the MinHash algorithm as it has similar or even better properties than the MinHash algorithm. The new algorithm has comparable runtime for small input sizes, is significantly faster for larger data sizes, can also be used as a streaming algorithm as it requires only a single pass over the data, and significantly improves the precision of Jaccard index estimates for small sets.
APPENDIX
In order to derive formula (6) for the variance of estimator (3) when applied to the new signature we first consider the conditional probability P(h_j(A) = h_j(B) | h_k(A) = h_k(B)). For the trivial case j = k this probability is equal to 1. Therefore, we consider the case j ≠ k in the following.

If we introduce

    d′ := arg min_{d ∈ A ∪ B} h_k(d)    (9)

and use the equivalences h_j(A) = h_j(B) ⇔ h_j(A ∩ B) = h_j(A ∪ B) ⇔ h_j(A ∩ B) < h_j(A △ B) and h_k(A) = h_k(B) ⇔ d′ ∈ A ∩ B, where A △ B denotes the symmetric difference of sets A and B, we can write

    P(h_j(A) = h_j(B) | h_k(A) = h_k(B)) = P(h_j(A ∩ B) < h_j(A △ B) | d′ ∈ A ∩ B) = ∫_0^m R′(z) S(z) dz    (10)

with functions R(z) and S(z) defined as

    R(z) := P(h_k(d′) < z | d′ ∈ A ∩ B) = P(h_k(d′) < z)

and

    S(z) := P(h_j(A ∩ B) < h_j(A △ B) | d′ ∈ A ∩ B ∧ h_k(d′) = z),

respectively.

Since h_k(d′) is the minimum of u := |A ∪ B| independent uniformly distributed values from [0, m), its cumulative distribution function is

    R(z) = 1 − (1 − z/m)^u

and its first derivative is

    R′(z) = (u/m^u) (m − z)^{u−1}.    (11)

To get S(z) we first consider the distribution of h_j(d) for any input element d conditioned on h_k(d) = z. The distribution of h_j(d) is uniform over [0, ⌊z⌋) ∪ [⌊z⌋ + 1, m), because the integral part must be different from ⌊z⌋ due to the permutation in (5). The corresponding complementary cumulative distribution function is

    V(x, z) := P(h_j(d) > x | h_k(d) = z)
             = (m − 1 − x)/(m − 1)      for x ∈ [0, ⌊z⌋),
             = (m − 1 − ⌊z⌋)/(m − 1)    for x ∈ [⌊z⌋, ⌊z⌋ + 1),
             = (m − x)/(m − 1)          for x ∈ [⌊z⌋ + 1, m).    (12)

Next we consider the distribution of h_j(d), if d ≠ d′ and h_k(d′) = z. Due to (9), d ≠ d′ is equivalent to h_k(d) > h_k(d′). Furthermore, h_k(d) is uniformly distributed over [z, m). Therefore, we get for the complementary cumulative distribution function

    W(x, z) := P(h_j(d) > x | d ≠ d′ ∧ h_k(d′) = z)
             = P(h_j(d) > x | h_k(d) > z)
             = (1/(m − z)) ∫_z^m V(x, z′) dz′
             = (m − 1 − x)/(m − 1)                                      for x ∈ [0, ⌊z⌋),
             = (m − 1 − ⌊z⌋)(m − (x − ⌊z⌋) − z)/((m − 1)(m − z))        for x ∈ [⌊z⌋, ⌊z⌋ + 1),
             = (m − x)(m − 1 − z)/((m − 1)(m − z))                      for x ∈ [⌊z⌋ + 1, m).    (13)

Now we are able to determine the complementary cumulative distribution function for h_j(A △ B), which is the minimum of |A △ B| = u(1 − J) identically distributed random variables obeying (13), conditioned on d′ ∈ A ∩ B and h_k(d′) = z:

    P(h_j(A △ B) > x | d′ ∈ A ∩ B ∧ h_k(d′) = z) = (W(x, z))^{u(1−J)}.    (14)

Here we have used the fact that the complementary cumulative distribution function of the minimum of independent random variables is identical to the product of the individual complementary cumulative distribution functions.

Analogously, h_j(A ∩ B) conditioned on d′ ∈ A ∩ B and h_k(d′) = z is distributed like the minimum of |A ∩ B| − 1 = uJ − 1 identically distributed random variables following (13) and h_j(d′) which is described by (12):

    P(h_j(A ∩ B) > x | d′ ∈ A ∩ B ∧ h_k(d′) = z)
      = P(h_j(d′) > x | h_k(d′) = z) · P(h_j(d) > x | d ≠ d′ ∧ h_k(d′) = z)^{|(A ∩ B) \ {d′}|}
      = V(x, z) (W(x, z))^{uJ−1}.    (15)

Using (14) and (15) we can derive S(z):

    S(z) = P(h_j(A ∩ B) < h_j(A △ B) | d′ ∈ A ∩ B ∧ h_k(d′) = z)
         = 1 − P(h_j(A ∩ B) > h_j(A △ B) | d′ ∈ A ∩ B ∧ h_k(d′) = z)
         = 1 + ∫_0^m P(h_j(A ∩ B) > x | d′ ∈ A ∩ B ∧ h_k(d′) = z) · ∂/∂x P(h_j(A △ B) > x | d′ ∈ A ∩ B ∧ h_k(d′) = z) dx
         = 1 + (u(1 − J)/(u − 1)) ∫_0^m V(x, z) ∂/∂x (W(x, z))^{u−1} dx
         = 1 − (u(1 − J)/(u − 1)) (1 + ∫_0^m (∂V(x, z)/∂x) (W(x, z))^{u−1} dx)
         = 1 − (u(1 − J)/(u − 1)) (1 − (1/(m − 1)) ∫_0^{⌊z⌋} ((m − 1 − x)/(m − 1))^{u−1} dx
                                     − (1/(m − 1)) ∫_{⌊z⌋+1}^m ((m − x)(m − 1 − z)/((m − 1)(m − z)))^{u−1} dx)
         = 1 − (u(1 − J)/(u − 1)) ((u − 1)/u + (1/u) ((m − 1 − ⌊z⌋)/(m − 1))^u (1 − ((m − 1 − z)/(m − z))^{u−1}))
         = J − ((1 − J)/(u − 1)) ((m − 1 − ⌊z⌋)/(m − 1))^u (1 − ((m − 1 − z)/(m − z))^{u−1}).    (16)

Now we can insert (11) and (16) into (10) which gives

    P(h_j(A) = h_j(B) | h_k(A) = h_k(B))
      = ∫_0^m R′(z) S(z) dz
      = J − ∫_0^m R′(z) (J − S(z)) dz
      = J − Σ_{l=0}^{m−1} ∫_l^{l+1} R′(z) (J − S(z)) dz
      = J − (u/m^u) Σ_{l=0}^{m−1} ∫_l^{l+1} (m − z)^{u−1} (J − S(z)) dz
      = J − (u(1 − J)/((m − 1)^u m^u (u − 1))) Σ_{l=0}^{m−2} (m − 1 − l)^u ∫_l^{l+1} ((m − z)^{u−1} − (m − 1 − z)^{u−1}) dz
      = J − ((1 − J)/((m − 1)^u m^u (u − 1))) Σ_{l=0}^{m−2} (m − 1 − l)^u ((m − l)^u + (m − 2 − l)^u − 2(m − 1 − l)^u)
      = J − ((1 − J)/((m − 1)^u m^u (u − 1))) Σ_{l=1}^{m−1} l^u ((l + 1)^u + (l − 1)^u − 2 l^u)
      = J − (1 − J)(1 − α(m, u))/(m − 1).    (17)

Here we introduced α(m, u) as defined in (7).

To calculate the variance of the Jaccard index estimator we need the covariance of indicators I(h_j(A) = h_j(B)) and I(h_k(A) = h_k(B)):

    Cov(I(h_j(A) = h_j(B)), I(h_k(A) = h_k(B)))
      = E(I(h_j(A) = h_j(B)) I(h_k(A) = h_k(B))) − E(I(h_j(A) = h_j(B))) E(I(h_k(A) = h_k(B)))
      = P(h_j(A) = h_j(B) ∧ h_k(A) = h_k(B)) − P(h_j(A) = h_j(B)) P(h_k(A) = h_k(B))
      = P(h_j(A) = h_j(B) | h_k(A) = h_k(B)) P(h_k(A) = h_k(B)) − J²
      = P(h_j(A) = h_j(B) | h_k(A) = h_k(B)) J − J²
      = J(1 − J) · { 1 for j = k,  −(1 − α(m, u))/(m − 1) for j ≠ k }.

The last step used (17) for the case j ≠ k.

Now we are finally able to derive the variance of the Jaccard index estimator (3):

    Var(Ĵ) = (1/m²) Var(Σ_{j=0}^{m−1} I(h_j(A) = h_j(B)))
           = (1/m²) Σ_{j=0}^{m−1} Σ_{k=0}^{m−1} Cov(I(h_j(A) = h_j(B)), I(h_k(A) = h_k(B)))
           = (J(1 − J)/m²) (m − m(m − 1)(1 − α(m, u))/(m − 1))
           = (J(1 − J)/m) α(m, u).
REFERENCES
[1] A. Z. Broder. 1997. On the Resemblance and Containment of Documents. In
Proceedings of the Compression and Complexity of Sequences. 21–29.
[2] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. 2009. Introduction to
Algorithms. MIT Press.
[3] S. Dahlgaard, M. B. T. Knudsen, and M. Thorup. 2017. Fast Similarity Sketching.
(2017). arXiv:1704.04370
[4] R. A. Fisher and F. Yates. 1938. Statistical Tables for Biological, Agricultural and
Medical Research. Oliver & Boyd, London.
[5] P. Li, A. Owen, and C. Zhang. 2012. One Permutation Hashing. Advances in
Neural Information Processing Systems 25 (2012), 3113–3121.
[6] J. Lumbroso. 2013. Optimal Discrete Uniform Generation from Coin Flips, and
Applications. (2013). arXiv:1304.1916
[7] A. Shrivastava. 2017. Optimal Densification for Fast and Accurate Minwise
Hashing. (2017). arXiv:1703.04664
[8] A. Shrivastava and P. Li. 2014. Densifying One Permutation Hashing via Rotation
for Fast Near Neighbor Search. In Proceedings of the 31st International Conference
on Machine Learning. 557–565.
[9] A. Shrivastava and P. Li. 2014. Improved Densification of One Permutation
Hashing. (2014). arXiv:1406.4784
[10] W. Stadje. 1990. The Collector’s Problem with Group Drawings. Advances in
Applied Probability 22, 4 (1990), 866–882.
Bidirectional PageRank Estimation: From
Average-Case to Worst-Case
Peter Lofgren¹, Siddhartha Banerjee², and Ashish Goel¹
¹ Stanford University, Stanford CA 94305, USA
² Cornell University, Ithaca NY 14850, USA
arXiv:1507.08705v3, 15 Dec 2015
Abstract. We present a new algorithm for estimating the Personalized PageRank (PPR) between a source and target node on undirected
graphs, with sublinear running-time guarantees over the worst-case choice
of source and target nodes. Our work builds on a recent line of work on
bidirectional estimators for PPR, which obtained sublinear running-time
guarantees but in an average-case sense, for a uniformly random choice of
target node. Crucially, we show how the reversibility of random walks on
undirected networks can be exploited to convert average-case to worstcase guarantees. While past bidirectional methods combine forward random walks with reverse local pushes, our algorithm combines forward
local pushes with reverse random walks. We also discuss how to modify
our methods to estimate random-walk probabilities for any length distribution, thereby obtaining fast algorithms for estimating general graph
diffusions, including the heat kernel, on undirected networks.
1 Introduction
Ever since their introduction in the seminal work of Page et al. [23], PageRank
and Personalized PageRank (PPR) have become some of the most important
and widely used network centrality metrics (a recent survey [13] lists several
examples). At a high level, for any graph G, given ‘teleport’ probability α and a
‘personalization distribution’ σ over the nodes of G, PPR models the importance
of every node from the point of view of σ in terms of the stationary probabilities
of ‘short’ random walks that periodically restart from σ with probability α. It
can be defined recursively as giving importance α to σ, and in addition giving
every node importance based on the importance of its in-neighbors.
Formally, given normalized adjacency matrix W = D−1 A, the Personalized
PageRank vector πσ with respect to source distribution σ is the solution to
πσ = ασ + (1 − α)πσ W.
(1)
An equivalent definition is in terms of the terminal node of a random-walk
starting from σ. Let {X0 , X1 , X2 , . . .} be a random-walk starting from X0 ∼ σ,
and L ∼ Geometric(α). Then the PPR of any node t is given by [4]:
πσ (t) = P[XL = t]
(2)
The equivalence of these definitions can be seen using a power series expansion.
In this work, we focus on developing PPR-estimators with worst-case sublinear guarantees for undirected graphs. Apart from their technical importance, our
results are of practical relevance as several large-scale applications of PPR are
based on undirected networks. For example, Facebook (which is an undirected
social network) used Personalized PageRank for friend recommendation [5]. The
social network Twitter is directed, but Twitter’s friend recommendation algorithm (Who to Follow) [16] uses an algorithm called personalized SALSA [19,6],
which first converts the directed network into an expanded undirected graph 3 ,
and then computes PPR on this new graph. Random walks have also been used
for collaborative filtering by the YouTube team [7] (on the undirected user-item
bipartite graph), to predict future items a user will view. Applications like this
motivate fast algorithms for PPR estimation on undirected graphs.
Equations (1) and (2) suggest two natural estimation algorithms for PPR
– via linear-algebraic iterative techniques, and using Monte Carlo. The linear
algebraic characterization of PageRank in Eqn. (1) suggests the use of power
iteration (or other localized iterations; cf Section 1.2 for details), while Eqn. (2)
is the basis for a Monte-Carlo algorithm, wherein we estimate πσ [t] by sampling
independent L-step paths, each starting from a random state sampled from σ.
For studying PageRank estimation algorithms, smaller probabilities are more
difficult to estimate than large ones, so a natural parametrization is in terms
of the minimum PageRank we want to detect. Formally, given any source σ,
target node t ∈ V and a desired minimum probability threshold δ, we want
algorithms that give accurate estimates whenever πσ [t] ≥ δ. Improved algorithms
are motivated by the slow convergence of these algorithms: both Monte Carlo
and linear algebraic techniques have a running time of Ω(1/δ) for PageRank
estimation. Furthermore this is true not only for worst case choices of target state
t, but on average Monte-Carlo requires Ω(1/δ) time to estimate a probability of
size δ. Power iteration takes Θ(m) time, where m is the number of edges, and
the work [21] shows empirically that the local version of power-iteration scales
with 1/δ for δ > 1/m.
In a recent line of work, linear-algebraic and Monte-Carlo techniques were
combined to develop new bidirectional PageRank estimators FAST-PPR [22] and
Bidirectional-PPR [20], which gave the first significant improvement in the
running-time of PageRank estimation since the development of Monte-Carlo
techniques. Given an arbitrary source distribution σ and a uniform random
target node t, these estimators were shown to return an accurate PageRank estimate with an average running-time of Õ(√(d̄/δ)), where d̄ = m/n is the average degree of the graph. Given Õ(n√(d̄/δ)) precomputation and storage,
3 Specifically, for each node u in the original graph, SALSA creates two virtual nodes, a “consumer-node” u′ and a “producer-node” u″, which are linked by an undirected edge. Any directed edge (u, v) is then converted into an undirected edge (u′, v″) from u’s consumer node to v’s producer node.
the authors prove worst case guarantees for this bidirectional estimator but in
practice that is a large precomputation requirement. This raised the challenge
of designing an algorithm with similar running-time guarantees over a worstcase choice of target node t. Inspired by the bidirectional estimators in [22,20],
we propose a new PageRank estimator for undirected graphs with worst-case
running time guarantees.
1.1 Our Contribution
We present the first estimator for personalized PageRank with sublinear running time in the worst case on undirected graphs. We formally present our
Undirected-BiPPR algorithm in Section 2, and prove that it has the following
accuracy and running-time guarantees:
Result 1 (See Theorem 1 in Section 2) Given any undirected graph G, teleport probability α, source node s, target node t, threshold δ and relative error ϵ, the Undirected-BiPPR estimator (Algorithm 2) returns an unbiased estimate π̂_s[t] for π_s[t], which, with probability greater than 1 − p_fail, satisfies:

    |π̂_s[t] − π_s[t]| < max{ϵ π_s[t], 2eδ}.
Result 2 (See Theorem 2 in Section 2) Let any undirected graph G, teleport probability α, threshold δ and desired relative error ϵ be given. For any source, target pair (s, t), the Undirected-BiPPR algorithm has a running-time of O((√(ln(1/p_fail))/ϵ) √(d_t/δ)), where d_t is the degree of the target node t.
In personalization applications, we are often only interested in personalized importance scores if they are greater than global importance scores, so it is natural
to set δ based on the global importance of t. Assuming G is connected, in the limit
α → 0, the PPR vector for any start node s converges to the stationary distribution of infinite-length random-walks on G – that is limα→0 πs [t] = dt /m. This
suggests that a natural PPR significance-test is to check whether πs (t) ≥ dt /m.
To this end, we have the following corollary:
Result 3 (See Corollary 1 in Section 2) For any graph G and any (s, t) pair such that π_s(t) ≥ d_t/m, then with high probability⁴, Undirected-BiPPR returns an estimate of π_s(t) with relative error ϵ with a worst-case running-time of O(√m log n/ϵ).
Finally, in Section 3, using ideas from [8], we extend our technique to estimating more general random-walk transition-probabilities on undirected graphs,
including graph diffusions and the heat kernel [11,18].
4 Following convention, we use w.h.p. to mean with probability greater than 1 − 1/n.
1.2 Existing Approaches for PageRank Estimation
We first summarize the existing methods for PageRank estimation:
Monte Carlo Methods: A standard method [4,9] for estimating πσ [t] is by
using the terminal node of independently generated random walks of length
L ∼ Geometric(α) starting from a random node sampled from σ. Simple concentration arguments show that we need Θ(1/δ)
samples to get an accurate
estimate of πσ [t], irrespective of the choice of t and graph G.
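As a minimal illustration of this Monte-Carlo estimator, the sketch below samples Geometric(α)-length walks and counts how often they terminate at t; the adjacency-list representation and function name are assumptions for the example, not part of the cited methods.

```python
import random

def monte_carlo_ppr(neighbors, s, t, alpha, num_walks):
    """Estimate pi_s[t] by sampling walks that stop with probability alpha per step."""
    hits = 0
    for _ in range(num_walks):
        v = s
        while random.random() > alpha:      # walk length ~ Geometric(alpha)
            if not neighbors[v]:
                break                        # dangling node: stop the walk
            v = random.choice(neighbors[v])
        if v == t:
            hits += 1
    return hits / num_walks
```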
Linear-Algebraic Iterations: Since the PageRank vector is the stationary
distribution of a Markov chain, it can also be estimated via forward or reverse
power iterations. A direct power iteration is often infeasible for large graphs; in
such cases, it is preferable to use localized power iterations [2,1]. These localupdate methods can also be used for other transition probability estimation
problems such as heat kernel estimation [18]. Local update algorithms are often
fast in practice, as unlike full power iteration methods they exploit the local
structure of the chain. However even in sparse Markov chains and for a large
fraction of target states, their running time can be Ω(1/δ). For example, consider
a random walk on a random d-regular graph and let δ = o(1/n). Then for
ℓ ∼ log_d(1/δ), verifying π_s[t] > δ is equivalent to uncovering the entire log_d(1/δ)
neighborhood of s. However since a large random d-regular graph is (w.h.p.) an
expander, this neighborhood has Ω(1/δ) distinct nodes.
Bidirectional Techniques: Bidirectional methods are based on simultaneously
working forward from the source node s and backward from the target node
t in order to improve the running-time. One example of such a bidirectional
technique is the use of colliding random-walks to estimate length-2ℓ random-walk transition probabilities in regular undirected graphs [14,17] – the main idea here is to exploit the reversibility by using two independent random walks of length ℓ starting from s and t respectively, and detecting if they collide. This
results in reducing the number of walks required by a square-root factor, based
on an argument similar to the birthday-paradox.
The FAST-PPR algorithm of Lofgren et al. [22] was the first bidirectional
algorithm for estimating PPR in general graphs; this was subsequently refined
and improved by the Bidirectional-PPR algorithm [20], and also generalized
to other Markov chain estimation problems [8]. These algorithms are based on
using a reverse local-update iteration from the target t (adapted from Andersen
et al. [1]) to smear the mass over a larger target set, and then using randomwalks from the source s to detect this target set. From a theoretical perspective, a
significant breakthrough was in showing that for arbitrary choice of source node s these bidirectional algorithms achieved an average running-time of Õ(√(d̄/δ)) over uniform-random choice of target node t – in contrast, both local-update and
over uniform-random choice of target node t – in contrast, both local-update and
Monte Carlo has a running-time of Ω(1/δ) for uniform-random targets. More
recently, [10] showed that a similar bidirectional technique achieved a sublinear
query-complexity for global PageRank computation, under a modified query
model, in which all neighbors of a given node could be found in O(1) time.
2 PageRank Estimation in Undirected Graphs
We now present our new bidirectional algorithm for PageRank estimation in
undirected graphs.
2.1 Preliminaries
We consider an undirected graph G(V, E), with n nodes and m edges. For ease
of notation, we henceforth consider unweighted graphs, and focus on the simple
case where σ = es for some single node s. We note however that all our results
extend to weighted graphs and any source distribution σ in a straightforward
manner.
2.2 A Symmetry for PPR in Undirected Graphs
The Undirected-BiPPR Algorithm critically depends on an underlying reversibility property exhibited by PPR vectors in undirected graphs. This property, stated
before in several earlier works [3,15], is a direct consequence of the reversibility
of random walks on undirected graphs. To keep our presentation self-contained,
we present this property, along with a simple probabilistic proof, in the form of
the following lemma:
Lemma 1. Given any undirected graph G, for any teleport probability α ∈ (0, 1) and for any node-pair (s, t) ∈ V², we have:

    π_s[t] = (d_t/d_s) π_t[s].
Proof. For path P = {s, v_1, v_2, ..., v_k, t} in G, we denote its length as ℓ(P) (here ℓ(P) = k + 1), and define its reverse path to be P̄ = {t, v_k, ..., v_2, v_1, s} – note that ℓ(P) = ℓ(P̄). Moreover, we know that a random-walk starting from s traverses path P with probability P[P] = (1/d_s) · (1/d_{v_1}) · ... · (1/d_{v_k}), and thus, it is easy to see that we have:

    P[P] · d_s = P[P̄] · d_t    (3)
Now let P_st denote the set of paths in G starting at s and terminating at t. Then we can re-write Eqn. (2) as:

    π_s[t] = Σ_{P ∈ P_st} α(1 − α)^{ℓ(P)} P[P] = (d_t/d_s) Σ_{P̄ ∈ P_ts} α(1 − α)^{ℓ(P̄)} P[P̄] = (d_t/d_s) π_t[s].

2.3 The Undirected-BiPPR Algorithm
At a high level, the Undirected-BiPPR algorithm has two components:
– Forward-work: Starting from source s, we first use a forward local-update
algorithm, the ApproximatePageRank(G, α, s, rmax ) algorithm of Andersen et
al. [2] (shown here as Algorithm 1). This procedure begins by placing one
unit of “residual” probability-mass on s, then repeatedly selecting some node
u, converting an α-fraction of the residual mass at u into probability mass,
and pushing the remaining residual mass to u’s neighbors. For any node u, it
returns an estimate ps [u] of its PPR πs [u] from s as well as a residual rs [u]
which represents un-pushed mass at u.
– Reverse-work: We next sample random walks of length L ∼ Geometric(α)
starting from t, and use the residual at the terminal nodes of these walks
to compute our desired PPR estimate. Our use of random walks backwards
from t depends critically on the symmetry in undirected graphs presented in
Lemma 1.
Note that this is in contrast to FAST-PPR and Bidirectional-PPR, which performs the local-update step in reverse from the target t, and generates randomwalks forwards from the source s.
Algorithm 1 ApproximatePageRank(G, α, s, r_max) [2]
Inputs: graph G, teleport probability α, start node s, maximum residual r_max
    1: Initialize (sparse) estimate-vector p_s = 0 and (sparse) residual-vector r_s = e_s (i.e. r_s[v] = 1 if v = s; else 0)
    2: while ∃ u ∈ V s.t. r_s[u]/d_u > r_max do
    3:     for v ∈ N[u] do
    4:         r_s[v] += (1 − α) r_s[u]/d_u
    5:     end for
    6:     p_s[u] += α r_s[u]
    7:     r_s[u] = 0
    8: end while
    9: return (p_s, r_s)
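A compact Python rendering of Algorithm 1 may help; the dictionary-based sparse vectors and the worklist are implementation choices assumed here, not prescribed by [2].

```python
from collections import defaultdict

def approximate_pagerank(neighbors, alpha, s, r_max):
    """Forward local push (Algorithm 1): returns estimates p_s and residuals r_s
    with r_s[v]/deg(v) <= r_max for every node v on termination."""
    p = defaultdict(float)
    r = defaultdict(float)
    r[s] = 1.0
    queue = [s]                                # nodes that may violate the threshold
    while queue:
        u = queue.pop()
        deg_u = len(neighbors[u])
        if deg_u == 0 or r[u] / deg_u <= r_max:
            continue                           # already satisfies the invariant
        mass = r[u]
        p[u] += alpha * mass                   # convert an alpha-fraction into estimate
        r[u] = 0.0
        share = (1.0 - alpha) * mass / deg_u
        for v in neighbors[u]:
            r[v] += share                      # push the rest to the neighbors
            if r[v] / len(neighbors[v]) > r_max:
                queue.append(v)
    return p, r
```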
In more detail, our algorithm will choose a maximum residual parameter r_max, and apply the local push operation in Algorithm 1 until for all v, r_s[v]/d_v < r_max. Andersen et al. [2] prove that their local-push operation preserves the following invariant for vectors (p_s, r_s):

    π_s[t] = p_s[t] + Σ_{v∈V} r_s[v] π_v[t],    ∀ t ∈ V.    (4)

Since we ensure that ∀v, r_s[v]/d_v < r_max, it is natural at this point to use the symmetry Lemma 1 and re-write this as:

    π_s[t] = p_s[t] + d_t Σ_{v∈V} (r_s[v]/d_v) π_t[v].

Now using the fact that Σ_{v∈V} π_v[t] = nπ[t], we get that ∀ t ∈ V, |π_s[t] − p_s[t]| ≤ r_max d_t nπ[t].
However, we can get a more accurate estimate by using the residuals. The
key idea of our algorithm is to re-interpret this as an expectation:
rs [v]
πs [t] = ps [t] + dt EV ∼πt
.
(5)
dV
We estimate the expectation using standard Monte-Carlo. Let V_i ∼ π_t and X_i = r_s(V_i) d_t/d_{V_i}, so we have π_s[t] = p_s[t] + E[X]. Moreover, each sample X_i is bounded by d_t r_max (this is the stopping condition for ApproximatePageRank), which allows us to efficiently estimate its expectation. To this end, we generate w random walks, where

    w = c r_max / (ϵ² δ/d_t).
The choice of c is specified in Theorem 1. Finally, we return the estimate:

    π̂_s[t] = p_s[t] + (1/w) Σ_{i=1}^{w} X_i.
The complete pseudocode is given in Algorithm 2.
Algorithm 2 Undirected-BiPPR(s, t, δ)
Inputs: graph G, teleport probability α, start node s, target node t, minimum probability δ, accuracy parameter c = 3 ln(2/p_fail) (cf. Theorem 1)
    1: (p_s, r_s) = ApproximatePageRank(s, r_max)
    2: Set number of walks w = c d_t r_max/(ϵ² δ)
    3: for index i ∈ [w] do
    4:     Sample a random walk starting from t, stopping after each step with probability α; let V_i be the endpoint
    5:     Set X_i = d_t r_s(V_i)/d_{V_i}
    6: end for
    7: return π̂_s[t] = p_s[t] + (1/w) Σ_{i∈[w]} X_i
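Putting the two stages together, the following sketch mirrors Algorithm 2 on top of the push routine shown earlier; the parameter defaults and the inline random-walk loop are illustrative assumptions.

```python
import math
import random

def undirected_bippr(neighbors, s, t, alpha, delta, eps=0.5, p_fail=1e-3):
    """Bidirectional PPR estimate of pi_s[t]: forward push from s, reverse walks from t."""
    c = 3.0 * math.log(2.0 / p_fail)
    d_t = len(neighbors[t])
    r_max = eps * math.sqrt(delta / d_t) / math.sqrt(math.log(1.0 / p_fail))
    p, r = approximate_pagerank(neighbors, alpha, s, r_max)   # sketch shown above
    w = max(1, int(math.ceil(c * d_t * r_max / (eps * eps * delta))))
    total = 0.0
    for _ in range(w):
        v = t
        while random.random() > alpha:         # Geometric(alpha)-length walk from t
            v = random.choice(neighbors[v])
        total += d_t * r[v] / len(neighbors[v])  # X_i = d_t * r_s(V_i) / d_{V_i}
    return p[t] + total / w
```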
2.4 Analyzing the Performance of Undirected-BiPPR
Accuracy Analysis: We first prove that Undirected-BiPPR returns an unbiased estimate with the desired accuracy:
Theorem 1. In an undirected graph G, for any source node s, minimum threshold δ, maximum residual r_max, relative error ϵ, and failure probability p_fail, Algorithm 2 outputs an estimate π̂_s[t] such that with probability at least 1 − p_fail we have:

    |π_s[t] − π̂_s[t]| ≤ max{ϵ π_s[t], 2eδ}.
The proof follows a similar outline as the proof of Theorem 1 in [20]. For
completeness, we sketch the proof here:
Proof. As stated in Algorithm 2, we average over w = c d_t r_max/(ϵ²δ) walks, where c is a parameter we choose later. Each walk is of length Geometric(α), and we denote V_i as the last node visited by the i-th walk; note that V_i ∼ π_t. As defined above, let X_i = r_s(V_i) d_t/d_{V_i}; the estimate returned by Undirected-BiPPR is:

    π̂_s[t] = p_s[t] + (1/w) Σ_{i=1}^{w} X_i.

First, from Eqn. (5), we have that E[π̂_s[t]] = π_s[t]. Also, ApproximatePageRank guarantees that for all v, r_s[v] < d_v r_max, and so each X_i is bounded in [0, d_t r_max]; for convenience, we rescale X_i by defining Y_i = X_i/(d_t r_max).

We now show concentration of the estimates via the following Chernoff bounds (see Theorem 1.1 in [12]):

    1. P[|Y − E[Y]| > ϵ E[Y]] < 2 exp(−(ϵ²/3) E[Y])
    2. For any b > 2e E[Y], P[Y > b] ≤ 2^{−b}

We perform a case analysis based on whether E[X_i] ≥ δ or E[X_i] < δ. First, if E[X_i] ≥ δ, then we have E[Y] = (w/(d_t r_max)) E[X_i] = (c/(ϵ²δ)) E[X_i] ≥ c/ϵ², and thus:

    P[|π̂_s[t] − π_s[t]| > ϵ π_s[t]] ≤ P[|X̄ − E[X_i]| > ϵ E[X_i]] = P[|Y − E[Y]| > ϵ E[Y]]
      ≤ 2 exp(−(ϵ²/3) E[Y]) ≤ 2 exp(−c/3) ≤ p_fail,

where the last line holds as long as we choose c ≥ 3 ln(2/p_fail).

Suppose alternatively that E[X_i] < δ. Then:

    P[|π̂_s[t] − π_s[t]| > 2eδ] = P[|X̄ − E[X_i]| > 2eδ] = P[|Y − E[Y]| > (w/(d_t r_max)) 2eδ]
      ≤ P[Y > (w/(d_t r_max)) 2eδ].

At this point we set b = 2eδw/(d_t r_max) = 2ec/ϵ² and apply the second Chernoff bound. Note that E[Y] = c E[X_i]/(ϵ²δ) < c/ϵ², and hence we satisfy b > 2e E[Y]. We conclude that:

    P[|π̂_s[t] − π_s[t]| > 2eδ] ≤ 2^{−b} ≤ p_fail

as long as we choose c such that c ≥ (ϵ²/(2e)) log₂(1/p_fail). The proof is completed by combining both cases and choosing c = 3 ln(2/p_fail).
Running Time Analysis: The more interesting analysis is that of the running-time of Undirected-BiPPR – we now prove a worst-case running-time bound:

Theorem 2. In an undirected graph, for any source node (or distribution) s, target t with degree d_t, threshold δ, maximum residual r_max, relative error ϵ, and failure probability p_fail, Undirected-BiPPR has a worst-case running-time of:

    O( (√(log(1/p_fail))/ϵ) √(d_t/δ) ).
Before proving this result, we first state and prove a crucial lemma from [2]:
Lemma 2 (Lemma 2 in [2]). Let T be the total number of push operations performed by ApproximatePageRank, and let d_k be the degree of the vertex involved in the k-th push. Then:

    Σ_{k=1}^{T} d_k ≤ 1/(α r_max).

Proof. Let v_k be the vertex pushed in the k-th step – then by definition, we have that r_s(v_k) > r_max d_k. Now after the local-push operation, the sum residual ||r_s||₁ decreases by at least α r_max d_k. However, we started with ||r_s||₁ = 1, and thus we have Σ_{k=1}^{T} α r_max d_k ≤ 1.
Note also that the amount of work done while pushing from a node v is dv .
Proof (of Theorem 2). As proven in Lemma 2, the push forward step takes total time O(1/(α r_max)) in the worst-case. The random walks take O(w) = O((ln(1/p_fail)/ϵ²) · r_max/(δ/d_t)) time. Thus our total time is

    O( 1/r_max + (ln(1/p_fail)/ϵ²) · r_max/(δ/d_t) ).

Balancing this by choosing r_max = (ϵ/√(ln(1/p_fail))) √(δ/d_t), we get total running-time:

    O( (√(ln(1/p_fail))/ϵ) √(d_t/δ) ).
We can get a cleaner worst-case running time bound if we make a natural assumption on π_s[t]. In an undirected graph, if we let α = 0 and take infinitely long walks, the stationary probability of being at any node t is d_t/m. Thus if π_s[t] < d_t/m, then s actually has a lower PPR to t than the non-personalized stationary probability of t, so it is natural to say t is not significant for s. If we set a significance threshold of δ = d_t/m, and apply the previous theorem, we immediately get the following:

Corollary 1. If π_s[t] ≥ d_t/m, we can estimate π_s[t] within relative error ϵ with probability greater than 1 − 1/n in worst-case time:

    O( (√m log n)/ϵ ).
In contrast, the running time for Monte-Carlo to achieve the same accuracy guarantee is O((1/δ) · log(1/p_fail)/(α ϵ²)), and the running time for ApproximatePageRank is O(d̄/(δα)). The FAST-PPR algorithm of [22] has an average case running time of O((1/(αϵ²)) √(d̄/δ) √(log(1/p_fail) log(1/δ)/log(1/(1−α)))) for uniformly chosen targets, but has no clean worst-case running time bound because its running time depends on the degree of nodes pushed from in the linear-algebraic part of the algorithm.
3 Extension to Graph Diffusions
PageRank and Personalized PageRank are a special case of a more general set of network-centrality metrics referred to as graph diffusions [11,18]. In a graph diffusion we assign a weight α_i to walks of length i. The score is then a polynomial function of the random-walk transition probabilities of the form:

    f(W, σ) := Σ_{i=0}^{∞} α_i σ W^i,

where α_i ≥ 0 and Σ_i α_i = 1. To see that PageRank has this form, we can expand Eqn. (1) via a Taylor series to get:

    π_σ = Σ_{i=0}^{∞} α(1 − α)^i σ W^i.

Another important graph diffusion is the heat kernel h_σ, which corresponds to the scaled matrix exponent of (I − W):

    h_{σ,γ} = σ e^{−γ(I−W)} = Σ_{i=0}^{∞} (e^{−γ} γ^i / i!) σ W^i.
In [8], Banerjee and Lofgren extended Bidirectional-PPR to get bidirectional
estimators for graph diffusions and other general Markov chain transition-probability
estimation problems. These algorithms inherited similar performance guarantees
to Bidirectional-PPR – in particular, they had good expected running-time
bounds for uniform-random choice of target node t. We now briefly discuss how
we can modify Undirected-BiPPR to get an estimator for graph diffusions in
undirected graphs with worst-case running-time bounds.
First, we observe that Lemma 1 extends to all graph diffusions, as follows:
Corollary 2. Let any undirected graph G with random-walk matrix W, and any set of non-negative length weights (α_i) with Σ_{i=0}^{∞} α_i = 1 be given. Define f(W, σ) = Σ_{i=0}^{∞} α_i σ W^i. Then for any node-pair (s, t) ∈ V², we have:

    f(W, e_s)[t] = (d_t/d_s) f(W, e_t)[s].
As before, the above result is stated for unweighted graphs, but it also extends to random-walks on weighted undirected graphs, if we define d_i = Σ_j w_ij.
Next, observe that for any graph diffusion $f(\cdot)$, the truncated sum $f^{\ell_{\max}} = \sum_{i=0}^{\ell_{\max}} \alpha_i\, \sigma W^i$ obeys $\|f - f^{\ell_{\max}}\|_\infty \le \sum_{k=\ell_{\max}+1}^{\infty} \alpha_k$. Thus a guarantee on an estimate for the truncated sum directly translates to a guarantee on the estimate for the diffusion.
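As a small numeric illustration (ours, not taken from [8] or [18]), the heat-kernel weights defined above show how quickly this truncation error can vanish:

import math

gamma, l_max = 5.0, 20
weights = [math.exp(-gamma) * gamma ** i / math.factorial(i) for i in range(l_max + 1)]
tail = 1.0 - sum(weights)   # equals the sum of alpha_k for k > l_max
print("truncation error bound:", tail)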
The main idea in [8] is to generalize the bidirectional estimators for PageRank to estimating multi-step transition probabilities (MSTP, for short). Given a source node $s$, a target node $t$, and length $\ell \le \ell_{\max}$, we define:
$$p_s^\ell[t] = \Pr[\text{a random walk of length } \ell \text{ starting from } s \text{ terminates at } t].$$
Note from Corollary 2, we have for any pair $(s, t)$ and any $\ell$: $p_s^\ell[t]\, d_s = p_t^\ell[s]\, d_t$.
Now in order to develop a bidirectional estimator for $p_s^\ell[t]$, we need to define a local-update step similar to ApproximatePageRank. For this, we can modify the REVERSE-PUSH algorithm from [8], as follows. Similar to ApproximatePageRank, given a source node $s$ and maximum length $\ell_{\max}$, we associate with each length $\ell \le \ell_{\max}$ an estimate vector $q_s^\ell$ and a residual vector $r_s^\ell$. These are updated via the following ApproximateMSTP algorithm:
Algorithm 3 ApproximateMSTP(G, s, ℓ_max, r_max)
Inputs: Graph G, source s, maximum steps ℓ_max, maximum residual r_max
1: Initialize: Estimate-vectors q_s^k = 0, ∀ k ∈ {0, 1, 2, ..., ℓ_max};
   Residual-vectors r_s^0 = e_s and r_s^k = 0, ∀ k ∈ {1, 2, 3, ..., ℓ_max}
2: for i ∈ {0, 1, ..., ℓ_max} do
3:   while ∃ v ∈ V s.t. r_s^i[v]/d_v > r_max do
4:     for w ∈ N(v) do
5:       r_s^{i+1}[w] += r_s^i[v]/d_v
6:     end for
7:     q_s^i[v] += r_s^i[v]
8:     r_s^i[v] = 0
9:   end while
10: end for
11: return {q_s^ℓ, r_s^ℓ} for ℓ = 0, ..., ℓ_max
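The following minimal Python sketch (ours, for illustration; the adjacency-list input and variable names are assumptions) mirrors the update above. Pushes at level i only write to level i+1, so a single pass over each level suffices, and mass that would flow beyond ℓ_max is simply not needed for estimating walks of length at most ℓ_max.

from collections import defaultdict

def approximate_mstp(neighbors, s, l_max, r_max):
    q = [defaultdict(float) for _ in range(l_max + 1)]   # estimate vectors q_s^k
    r = [defaultdict(float) for _ in range(l_max + 1)]   # residual vectors r_s^k
    r[0][s] = 1.0                                        # r_s^0 = e_s
    for i in range(l_max + 1):
        for v in list(r[i].keys()):                      # one pass per level is enough
            val, dv = r[i][v], len(neighbors[v])
            if val / dv <= r_max:
                continue
            if i < l_max:                                # mass beyond l_max is not needed
                for w in neighbors[v]:
                    r[i + 1][w] += val / dv
            q[i][v] += val
            r[i][v] = 0.0
    return q, r

Estimates for p_s^ℓ[t] then combine q_s^ℓ[t] with random walks from t, exactly as in the PPR case.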
The main observation now is that for any source $s$, target $t$, and length $\ell$, after executing the ApproximateMSTP algorithm, the vectors $\{q_s^\ell, r_s^\ell\}_{\ell=0}^{\ell_{\max}}$ satisfy the following invariant (via a similar argument as in [8], Lemma 1):
$$p_s^\ell[t] = q_s^\ell[t] + \sum_{k=0}^{\ell}\sum_{v\in V} r_s^k[v]\, p_v^{\ell-k}[t] = q_s^\ell[t] + d_t \sum_{k=0}^{\ell}\sum_{v\in V} \frac{r_s^k[v]}{d_v}\, p_t^{\ell-k}[v].$$
As before, note now that the last term can be written as an expectation over
random-walks originating from t. The remaining algorithm, accuracy analysis,
and runtime analysis follow the same lines as those in Section 2.
4 Lower Bound
In [22], the authors prove an average case lower bound for PPR-Estimation. In
particular they prove that there exists a family of undirected 3-regular graphs
for which any algorithm that can distinguish between pairs $(s, t)$ with $\pi_s[t] > \delta$ and pairs $(s, t)$ with $\pi_s[t] < \delta/2$ (distinguishing correctly with constant probability 8/9) must access $\Omega(1/\sqrt{\delta})$ edges of the graph. Since the algorithms in [22,20] solve this problem in time $O(\sqrt{d/\delta})$, where $d$ is the average degree of the given graph, there remains a $\sqrt{d}$ gap between the lower bound and the best
algorithm for the average case. For the worst case (possibly parameterized by
some property of the graph or target node), the authors are unaware of any lower
bound stronger than this average case bound, and an interesting open question
is to prove a lower bound for the worst case.
5 Conclusion
We have developed Undirected-BiPPR, a new bidirectional PPR-estimator for
undirected graphs, which for any $(s, t)$ pair such that $\pi_s[t] > d_t/m$, returns an estimate with relative error $\epsilon$ in worst-case running time of $O(\sqrt{m}/\epsilon)$. This
thus extends the average-case running-time improvements achieved in [22,20] to
worst-case bounds on undirected graphs, using the reversibility of random-walks
on undirected graphs. Whether such worst-case running-time results extend to
general graphs, or if PageRank computation is fundamentally easier on undirected graphs as opposed to directed graphs, remains an open question.
6 Acknowledgments
Research supported by the DARPA GRAPHS program via grant FA9550-121-0411, and by NSF grant 1447697. One author was supported by an NPSC
fellowship. Thanks to Aaron Sidford for a helpful discussion.
References
1. R. Andersen, C. Borgs, J. Chayes, J. Hopcroft, V. S. Mirrokni, and S.-H. Teng.
Local computation of pagerank contributions. In Algorithms and Models for the
Web-Graph. Springer, 2007.
2. R. Andersen, F. Chung, and K. Lang. Local graph partitioning using pagerank
vectors. In Foundations of Computer Science, 2006. FOCS’06. 47th Annual IEEE
Symposium on, 2006.
3. K. Avrachenkov, P. Gonçalves, and M. Sokol. On the choice of kernel and labelled
data in semi-supervised learning methods. In Algorithms and Models for the Web
Graph, pages 56–67. Springer, 2013.
4. K. Avrachenkov, N. Litvak, D. Nemirovsky, and N. Osipova. Monte carlo methods in pagerank computation: When one iteration is sufficient. SIAM Journal on
Numerical Analysis, 2007.
5. L. Backstrom and J. Leskovec. Supervised random walks: predicting and recommending links in social networks. In Proceedings of the fourth ACM international
conference on Web search and data mining. ACM, 2011.
6. B. Bahmani, A. Chowdhury, and A. Goel. Fast incremental and personalized
pagerank. Proceedings of the VLDB Endowment, 4(3):173–184, 2010.
7. S. Baluja, R. Seth, D. Sivakumar, Y. Jing, J. Yagnik, S. Kumar, D. Ravichandran,
and M. Aly. Video suggestion and discovery for youtube: taking random walks
through the view graph. In Proceedings of the 17th international conference on
World Wide Web. ACM, 2008.
8. S. Banerjee and P. Lofgren. Fast bidirectional probability estimation in markov
models. In NIPS, 2015.
9. C. Borgs, M. Brautbar, J. Chayes, and S.-H. Teng. A sublinear time algorithm
for pagerank computations. In Algorithms and Models for the Web Graph, pages
41–53. Springer, 2012.
10. M. Bressan, E. Peserico, and L. Pretto. Approximating pagerank locally with
sublinear query complexity. arXiv preprint arXiv:1404.1864, 2014.
11. F. Chung. The heat kernel as the pagerank of a graph. Proceedings of the National
Academy of Sciences, 2007.
12. D. Dubhashi and A. Panconesi. Concentration of Measure for the Analysis of
Randomized Algorithms. Cambridge University Press, 2009.
13. D. F. Gleich. PageRank beyond the web. arXiv, cs.SI:1407.5107, 2014. Accepted
for publication in SIAM Review.
14. O. Goldreich and D. Ron. On testing expansion in bounded-degree graphs. In
Studies in Complexity and Cryptography. Miscellanea on the Interplay between
Randomness and Computation. Springer, 2011.
15. V. Grolmusz. A note on the pagerank of undirected graphs. Information Processing
Letters, 2015.
16. P. Gupta, A. Goel, J. Lin, A. Sharma, D. Wang, and R. Zadeh. Wtf: The who
to follow service at twitter. In Proceedings of the 22nd international conference
on World Wide Web, pages 505–514. International World Wide Web Conferences
Steering Committee, 2013.
17. S. Kale, Y. Peres, and C. Seshadhri. Noise tolerance of expanders and sublinear
expander reconstruction. In IEEE FOCS’08. IEEE, 2008.
18. K. Kloster and D. F. Gleich. Heat kernel based community detection. In ACM
SIGKDD’14, 2014.
19. R. Lempel and S. Moran. The stochastic approach for link-structure analysis
(salsa) and the tkc effect. Computer Networks, 33(1):387–401, 2000.
20. P. Lofgren, S. Banerjee, and A. Goel. Personalized pagerank estimation and search:
A bidirectional approach. In WSDM, 2016.
21. P. Lofgren and A. Goel. Personalized pagerank to a target node. arXiv preprint
arXiv:1304.4658, 2013.
22. P. A. Lofgren, S. Banerjee, A. Goel, and C. Seshadhri. FAST-PPR: Scaling Personalized PageRank estimation for large graphs. In ACM SIGKDD’14. ACM, 2014.
23. L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank citation ranking:
bringing order to the web. 1999.
DCS
21 November 2017
A New Oscillating-Error Technique for Classifiers
Kieran Greer, Distributed Computing Systems, Belfast, UK.
[email protected].
Version 1.5c
Abstract – This paper describes a new method for reducing the error in a classifier. It uses
an error correction update that includes the very simple rule of either adding or subtracting
the error adjustment, based on whether the variable value is currently larger or smaller than
the desired value. While a traditional neuron would sum the inputs together and then apply
a function to the total, this new method can change the function decision for each input
value. This gives added flexibility to the convergence procedure, where through a series of
transpositions, variables that are far away can continue towards the desired value, whereas
variables that are originally much closer can oscillate from one side to the other. Tests show
that the method can successfully classify some benchmark datasets. It can also work in a
batch mode, with reduced training times and can be used as part of a neural network
architecture. Some comparisons with an earlier wave shape paper are also made.
Keywords: classifier, oscillating error, transposition, matrix, neural network, cellular
automata.
1 Introduction
Neural networks and classifiers in general are statistical processors. They all work by trying
to reduce the error in the system through an error correction method that includes
transposition through a function. Neural networks in particular, are based loosely on the
human brain, with a distributed architecture of relatively simple processing units. Each
neural unit solves a small part of the problem, where collectively, they are able to solve the
whole problem. Being statistical classifiers, they try to converge to some solution without
any level of intelligence outside of the pre-defined function. This works very well for a
statistical system, but the simulation of a brain-like neuron could include a little bit more. It
does get involved in different kinds of biochemical reaction [4][29] and may even have a
type of memory [26]. For this paper, the neuron is able to react to its input and apply a very
simple rule of either adding or subtracting the error adjustment, based on whether the
variable value is currently larger or smaller than the desired value, and on a variable by
variable basis. The decision is based on the most basic of reactions and so it could be part of
an automatic theory. It is also well known that resonance is a feature of real brain
operations and other simulation models [3][14]. The idea of resonance would be to use the
data shape to determine what values go together, where earlier research [13] and this
paper suggest that the data shape can be represented by a single averaged value. The
procedure is shown to work surprisingly well and be very flexible and so it should be taken
seriously as a general mechanism.
The rest of this paper is organised as follows: section 2 briefly outlines the reasons for the
new method. Section 3 introduces some related work and section 4 describes the theory
behind the new classifier. Section 5 runs through a very simple test example, while section 6
gives the result of some tests on real datasets. Finally, section 7 gives some conclusions to
the work.
2 Reasons for the New Method
The proposed method would give the component slightly more flexibility, or if arguing for a
neural component, then a small amount of intelligence, but still keep it at a most basic and
automatic level. Each variable can reduce its error in a way that best suits it, with a
dampening effect that is independent of the other variables. Basically, if the data point
(variable value) is less than the desired value, the weight adjustment is added to it and if it is
larger than the desired value, the weight adjustment is subtracted from it. This means that
variables of the same input set to the neuron could be treated differently when the neuron
applies the function, which gives added flexibility to the convergence procedure. Through a
series of transpositions or levels in the classifier, a variable that is far from the correct value
can be adjusted by the full amount in the same direction each time. A variable that is at the
correct value can oscillate around it and therefore some of the adjustment size can even be
removed. The method is implemented here in matrix form, but as it uses a neuron-like
architecture, it can be compared more closely with neural networks, or simply as a general
update mechanism. The weight correction can also be added or subtracted and not
multiplied, where the data works best with some form of normalisation, but considering a
binary-style of reduction, it does not take many steps for the error to reduce. The error
correction is also calculated by using the input and desired output values only and not any
intermediary error value sets. Although, this maybe considers the whole matrix to be a
single hidden unit. One other advantage of the method is the fact that it is not necessary to
fine-tune the classifier, with appropriate random weight sets, for example. The weight
correction procedure will always be the same and only a stopping criterion is required, along
with the dataset pre-processing.
3 Related Work
Related work would therefore include neural networks [27][31] and the resonance type in
particular [3][14]. The Adaptive Resonance Theory is an example of trying to use resonance,
created by a matching agreement, as part of a neural network model. It is also categorical in
nature, but can learn category patterns and includes a long-term memory component that is
a matrix of weight updates. The primary intuition behind the ART model is that object
identification and recognition generally occur as a result of the interaction of 'top-down'
observer expectations with 'bottom-up' sensory information and the idea of resonance is
the agreement between these two processes. Resonance suggests a repeating value or
state, which then suggests an averaged value, which is why it may be possible to represent a
wave shape that way. The Fuzzy-ART system uses what is called a one-shot learning process,
where each input item can be categorised after just one presentation. Cellular automata
possibly have some relation as well [32][5], because the new neural component is at a
similar level of complexity. It is not usual for a neural component to make a decision, but the
decision is so simple that it might be compared to a reaction. The paper [15] is also
interesting in this respect, with their Gauss-Newton gradient descent Marquardt algorithm.
It uses batch processing to compute the average sum of squares over the dataset error, and
can add or subtract a value from the step value, which is also a feature of the related
Marquardt-Levenberg algorithm. So in fact, these algorithms do make a similar decision,
although it applies to the weight rather than the value itself. The rule that the new neuron
uses can probably make the best fit result non-linear, even if it is linear with respect to time.
Attempts to optimise the learning process have been made since the early days of neural
networks. Kolmogorov’s theorem [2][22] is often used to support the idea that a neural
network can successfully define any arbitrary function using just one hidden layer [17].
While Deep Learning has improved on this, it would be an idea of the model of this paper.
The theorem states that each multivariate continuous real-valued function can be
represented as a superposition and composition of continuous functions of only one
variable. The paper [10] gives a summary of some early attempts, including batch processing
and even the inclusion of rules, but as part of different types of learning frameworks. It is
interesting that rules and discrete categories or activations, are all quite old ideas. More
recently, the deep learning neural network models [18] adopt a policy of many more levels
than the earlier backpropagation ones. These new networks include a feedback from one
level to previous ones, as well as continuously refining the function, to learn mid-level
structures or features. Some Convolutional Neural Networks can also be trained in a one-shot mode. The paper [19], for example, can train the network using only one labelled
example per category, as part of a data reduction or transformation process. One-shot
learning therefore appears to be the term that was originally used. The paper [12] also uses
batch processing or averaging of the input dataset, and uses the term single-pass to mean a
similar thing.
Resonance is mentioned because an earlier neural network paper [13] tried to encapsulate
the dataset shape into a single averaged value and these papers [3][12] that are interested
in resonance also try to condense the input data rows into vectors of single averaged values.
In that case, a relative size of a scalar becomes important, but discriminating comparisons
must still be made. To help with this, the dataset is separated for each output category, so
that the averaged value applies to one category only. The justification is that each neuron
always has to accommodate all of the data that passes through it and so it has to produce
an average evaluation for that. Thus, averaging the input data could become a very cheap
way of describing the data shape. While the closest classifier might be a neural network, this
new model uses a matrix-like structure that contains a number of transitions from one layer
to the next. These are however relatively simple transformations of adding or subtracting a
value and are really just steps in the same error reduction procedure.
4 Background Theory and Method Description
The theory of the new mechanism started with looking at the wave shape paper [13], which
is described first with some new details. After that, the new oscillating error mechanism is
described.
4.1 Wave Shape Algorithm
This was proposed in [13] as an alternative way of looking at the relative input and output
value sets. The idea was that the value differences would describe a type of wave shape and
similar shapes could be combined in the synapses, as they would produce the same type of
resonance. That design also uses average values, where both the input and the output can
be summed and averaged over each column (all data rows), to represent each variable field
with the average value. Tests do in fact show a substantial reduction in the error of the
average input to the average output using this method and even on established datasets,
such as the Wine dataset [7][28]. The problem was that while the error could be reduced, it
was reduced to an average output value that is not very accurate for each specific instance.
For example, if the output values are 1, 2 and 3, then the input dataset could be averaged to
produce a value close to 2, but this is not very helpful when trying to achieve an exact
output value of 1 or 3. That procedure, based strongly on shape, could be more useful for
modelling the synapses, whereas the neuron needs to compare with the desired result.
Therefore, using actual values instead of differences is probably more appropriate. For
example, if the input dataset is 2, 8, 4, 5, 10; then you can measure the average of these
values, or the average of their differences: 6, -4, 1, 5. As part of a theory, the synapses could
consider shape more than an actual value, as they try to sync with each other, while the
neuron compares with the actual result. So possibly, modelling the network can consider
that neurons and synapses are measuring a different type of quantity over the same value
set and for a different purpose – one to reinforce a type of signal (synapse) and one to
produce a more exact output (neuron). As stated however, averaging over the whole
dataset makes the network too general and so possibly the ideas of the next section can be
tried.
4.2 Oscillating-Error Method
This is the new algorithm of the paper and resulted from trying to make the input to output
mapping of the last section more accurate. The new neuron can take an input from each
variable or column and adjust it by either adding or subtracting the weight update, on a
variable by variable basis. As the error oscillates from one side to the other, a bit of it gets
removed, as the current difference and so it will necessarily reduce in size. The new neuron
is therefore the same as a traditional one, except for the inclusion of the rule as part of the
calculation and separate weight sets for each category, during training. The new mechanism
has been tried using batch values, as for section 4.1, but the learning procedure is different
to the earlier models mentioned in section 3. It has been implemented in a matrix form of
levels that pass each input to the next level and is not as a flexible neural network, but the
units that are used would be suitable for neural networks in general. The calculations are
really only the ones described later and the equations suggest that time would be linear
with increasing dataset size or number of levels. The tested datasets required only a second
or less to be classified, where additional time to create the initial category groupings might
be the only consideration. The pre-processing however creates the batch rows, only 1 for
each category and so much fewer row numbers are subsequently used for training.
This paper only considers categorical data, where each input row belongs to a single
category. If represented by a single output neuron however, this can still produce a range of
output values, but they represent a discrete set instead of a continuous one. In the case of
the Wine dataset [7], the 3 output categories can be represented by the values 0, 0.5 and
1.0, for example. As described in section 4.1, the current wave shape method is not accurate
enough, as it averages over all categories. The new method therefore sums and averages
over each category group separately. In effect, it divides the dataset into batches,
representing the rows in each category and produces an averaged data row for each
category group. For the Wine dataset, there are therefore three sets of input data, one for
each category, represented by 3 averaged data rows. These then update the classifier
separately, which stores different sets of weight or error correction values for each category
group. The weight value sets can then be combined into a single weight value set after they
are learned, to be used over any new input. For the Wine dataset, during training for
example, the structure would store 3 sets of 13 weight or error correction values, relating to
the 3 output categories and the 13 input variables. After the error corrections have been
determined, the 3 values for each variable are summed and averaged to produce the value
to be used by the classifier on any classification task. This also becomes the starting set of
weight update values for the next network layer. The method also vertically adjusts the
error, instead of using a multiplication factor.
4.3 Training Algorithm
The following algorithm helps to describe the process:
1. Group all data rows for each output category. Each group is then processed separately
during training.
a. For each category group, sum and average all input points for each variable (or
data column) to produce an averaged data row for that category.
2. To train the classifier:
a. Pass each data row of group values through the layers and update for the new
layer.
i. For the input layer, present each averaged data row to the classifier.
ii. For other layers, present the last set of weight adjusted inputs.
b. For the current layer, create the new weight correction set as follows:
i. If the value is smaller than the desired output value, then add the
previous layer’s averaged weight correction value to it.
ii. If the value is larger than the desired output value then subtract the
previous layer’s averaged weight correction value from it.
iii. Measure the difference between the new weight-corrected value and the
desired category output. Take the absolute value of that as the weight
error correction value for the data point in the category group.
iv. The error value can also be summed and compared with earlier layers, to
evaluate the stopping criterion.
c. The weight update method is essentially a single event that sets the value for the
category group in the layer.
d. After evaluating the weight sets for each category group separately, average over
them and store the averaged list as a new transposition layer in the matrix.
3. The transposed values can also be stored as each new layer is added, to make the next
learning phase quicker. It can continue from the last layer, instead of running the values
through the whole matrix again.
4. Go to step 2 to create the next matrix layer in the structure, and repeat the process until
a stopping criterion is met.
5. A stopping criterion can be number of iterations, or if the total error does not reduce by
a substantial amount anymore.
During training, each layer creates a set of error correction weights for each of the output
categories. After training, these weight sets are then summed and averaged to produce a
final set for that layer. At the end of the process, there is then a matrix-like structure of
layers, each with a single set of error correction values, one for each input variable. Any new
input data row can be passed through each layer and the related correction value added or
subtracted from it using the simple rule. This produces an output value for each variable
(column) in the data row. The final layer is a single neuron that represents the discrete
output categories. All of the input values can be summed and averaged to produce an exact
output value. If a margin of error is allowed, then the closest category group can be
selected.
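As a concrete illustration, the following Python sketch implements the training loop and a simple classification routine along the lines just described. The function names, the fixed number of layers, the NumPy representation, and the dictionary mapping each category to its representative output value are our choices; and, anticipating the Addendum below, classification here tries every candidate category value and keeps the one with the smallest final error, rather than assuming the true category is known.

import numpy as np

def train(rows, labels, category_values, n_layers=10):
    # rows: (N, n) array of normalised inputs; labels: N category ids;
    # category_values: dict mapping category id -> representative output value.
    rows, labels = np.asarray(rows, dtype=float), np.asarray(labels)
    cats = sorted(category_values)
    X = {c: rows[labels == c].mean(axis=0) for c in cats}   # one averaged row per group
    layers = []
    for _ in range(n_layers):
        if layers:   # transpose each group through the newest layer first
            for c in cats:
                t = category_values[c]
                X[c] = np.where(X[c] <= t, X[c] + layers[-1], X[c] - layers[-1])
        per_group = [np.abs(category_values[c] - X[c]) for c in cats]
        layers.append(np.mean(per_group, axis=0))            # averaged error-correction set
    return layers

def classify(row, layers, category_values):
    best, best_err = None, float("inf")
    for c, t in category_values.items():    # try each candidate output value
        y = np.asarray(row, dtype=float)
        for ec in layers:
            y = np.where(y <= t, y + ec, y - ec)
        err = abs(y.mean() - t)              # final output is the average over variables
        if err < best_err:
            best, best_err = c, err
    return best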
The strength of the process lies in the fact that input values that are very far from the
desired one can continue to move towards it, while ones that are closer can start to oscillate
around it and do not need to be moved away by the same error correction1. This gives
added flexibility to the learning process and makes the variables a bit independent of each
other. This is therefore a very simple idea, with a minimum of disturbance to the mechanical
and automatic nature of the traditional neuron. The following equation Equ. 1 can be used
to determine the variable value at a level in the classifier. This is used by the classifier after
it has learned the transposition layers’ weights and therefore only needs to adjust the input
values using these weights. Equation Equ. 2 describes the error correction rule and fits into
Equ. 1 as the Xij or the network value for variable j at level i.
1
For example, in a standard neural network: if point 1 has an error of 10 and point 2 has an error of 0, then if
you subtract 10 from both to correct point 1, the point 2 error actually increases to 10.
X = (∑_{i=1}^{m} ∑_{j=1}^{n} X_ij) / n        Equ. 1.

Where:

X_ij = X_{i-1,j} + EC_ij if X_j <= O,  and  X_ij = X_{i-1,j} – EC_ij if X_j > O.        Equ. 2.
Where:
O = desired output value.
X = final output value.
Xij = input value for variable (column) j after transposition in matrix layer i.
ECij = error correction for variable j in layer i.
n = total number of variables.
m = total number of matrix layers.
5 Example Trace of a Scenario
The following scenario traces through the process for a dataset with 5 variables. The
example assumes that they have already been grouped for the output category and is
intended to demonstrate the error correction procedure only. The desired output category
value is ‘4’. The following steps show how the variables can converge to that value at each
iterative step2.
Averaged Input row values to layer 1: 3, 8, 5, 10, 2
Output category value: 4
Input-Output Differences = Abs(4 – 3), Abs(4 – 8) , Abs(4 – 5) , Abs(4 – 10) , Abs(4 – 2)
Absolute error = 1, 4, 1, 6, 2
•
Next iteration: take the input values and adjust, by adding or subtracting the error
correction.
2
If there is more than one output category value, then the weight values for each group can conflict and the
error might not automatically reduce to 0, as is this example. That is also why the categories are grouped
separately for training.
•
For variable 1, for example: 3 is less than 4, so add 1 to it. For variable 2: 8 is larger than
4, so subtract 4 from it, and so on.
•
Determine the new difference from the desired output to get the new weight set.
Input plus/minus error correction to layer 2: 4, 4, 4, 4, 4
Input-Output Differences = Abs(4 – 4), Abs(4 – 4) , Abs(4 – 4) , Abs(4 – 4) , Abs(4 – 4)
Absolute error = 0, 0, 0, 0, 0
Continue until the stopping criterion is met. In this case, the error is now 0. It is interesting
that with a single output category, this method reduces the error to 0 in 1 step. If there are
several output categories and their weights sets are averaged, then the weight update will
not necessarily reduce the error to 0. Also, if there was another layer, then it would adjust
input values that are ‘0, 0, 0, 0, 0’ and not the original input value set.
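A few lines of Python (ours, purely illustrative) reproduce this single step of the trace:

values, target = [3, 8, 5, 10, 2], 4
errors = [abs(target - v) for v in values]                      # [1, 4, 1, 6, 2]
updated = [v + e if v <= target else v - e for v, e in zip(values, errors)]
print(errors, updated)                                          # -> [1, 4, 1, 6, 2] [4, 4, 4, 4, 4]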
6 Test Results
A test program has been written in the C# .Net language. It can read in a data file, normalise
it, generate the classifier from it and measure how many categories it subsequently
evaluates correctly. The classifier was designed with only one output node, as described in
section 4.2. The input values were also normalised. Therefore, 3 categories would produce
desired output values of 0, 0.5 and 1. The conversion from a category to a real number is
not implicit in the data and so it is possible to use a value range to represent each category,
just as easily as a single value. It might be interesting however for numerical data, if specific
output values can be learned accurately. The error margin that is discussed as part of the
result does not relate to distributions, but relates to the smallest margin around the output
value representing the category that will give the best percentage of correct classifications.
The representative value is still what the classifier tries to learn, but then a value range
round that can only reduce the number of errors. For example, consider 3 categories again.
These are represented by the output values 0 (category 1), 0.5 (category 2) and 1.0
(category 3), which gives a gap of ‘0.5’ between each value. It would therefore be possible
to measure up to 49% of that gap, either side of a category value and still be 100% reliable
with respect to the category classification. A 20% error margin, for example, would be
calculated as 0.5 * 20 / 100 = 0.1. This would mean that a range of 0.4 – 0.6 would be
classified as the category 2 and anything outside of this range could be classified as
incorrect. A 15% margin of error would mean that the range would have to be 0.425 – 0.575,
and so on. So a smaller error margin would simply indicate that the classifier could be more
accurate to an exact real value and there is no ambiguity over the results presented in this
paper. Binary data could also be handled equally easily.
The process is completely deterministic. There are no random variables and so a dataset
with the same parameter set will always produce the same result. Two types of result were
measured. The first was an average error for each row in the dataset, after the classifier was
trained, calculated as the average difference between actual output and the desired output
value. The second measurement was how many categories were correctly classified, but
also with a consideration of the value range (error margin) just discussed. If increasing the
margin around a category value did not substantially increase the number of correct
classifications, then maybe it would not be worthwhile.
6.1 Benchmark Datasets with Train Versions Only
The classifier was first tested on 3 datasets from the UCI Machine Learning Repository [28].
Recent work [12] has tested some benchmark categorical datasets, including the Wine
Recognition database [7], Iris Plants database [6] and the Zoo database [33]. Wine
Recognition and Iris Plants have 3 categories, while the Zoo database has 7. These do not
have a separate training dataset and are benchmark tests for classifiers. A stopping criterion
of 10 iterations was used to terminate the tests. For the Wine dataset, the UCI [28] web
page states that the classes are separable, but only RDA [9] has achieved 100% correct
classification. Other classifiers achieved: RDA 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data) and all results used the leave-one-out technique. So that is the current
state-of-the-art. As shown by Table 1, the new classifier can classify to the accuracy required
by these benchmark tests. The final column ‘Selected Best %’ lists the best results found by
some other researchers.
Dataset      Average Error   Best % Error Margin   Correctly Classified   % Correct   Selected Best %
Wine         0.004           25%                   178 from 178           100%        100%
Iris         0.005           45%                   149 from 150           99%         95.7%
Zoo          -0.004          45%                   101 from 101           100%        94.5%
Abalone      0.007           49%                   3410 from 4177         81%         73%
Hayes-Roth   -0.007          25%                   131 from 132           99%         50%
Liver        0.02            35%                   345 from 345           100%        74%
Table 1. Classifier Test results. Average output error and minimum error margin for the
specified number of correct classifications. All datasets points normalised to be in the range
0 to 1. Error margin stopped at 49%.
Three other datasets were tested. These were: the Abalone shellfish dataset [1] with 28
categories and was trained with 20 iterations, or weight transpositions. The Hayes-Roth
concept learning dataset [16] with 3 categories, trained to 10 iterations and the BUPA Liver
dataset [24], with 2 categories that could be trained in 2 iterations. With the Abalone
shellfish dataset, they tried to classify using a decision tree C4.5, a k-NN nearest neighbour
and a 1R classifier, from the Weka [30] package. While they reported maybe 73% correct
classification, this new method can achieve 81% correct classification.
The paper [20] tested a number of datasets, including Iris, Wine and Zoo, using k-NN and
neural network classifiers, with maybe 95.67%, 96% or 94.5% as the best results from one of
the classifiers respectively. The values presented here are therefore probably better than
that. It also tested the Hayes-Roth dataset, but to only 50% accuracy. Other papers have
quoted better results and there is a test dataset available, but without any specified
categories. None of the other quoted results are close to 100% however. The paper [11]
tested the Liver dataset [24] to 74% accuracy using a sparse grid method, but the new
method achieves 100% accuracy in only 2 iterations. The table shows that for all datasets,
the error between the desired and the actual output values has reduced to practically zero,
but different margins of error are required for the number of correct classifications to be
optimised. The percentages still compare favourably with the other researchers’ results.
6.2 Separate Train and Test Datasets
Four datasets were tried here, where two of them – User Modelling [21] and Bank Notes
[25] - were also tested in [12]. They have separate test datasets to the train datasets. This is
typically what a supervised neural network should be able to do and the results of this
section, given in Table 2, are again favourable. A stopping criterion of 10 iterations was used
to terminate the tests.
Dataset      Average Error   Best % Error Margin   Correctly Classified   % Correct   Selected Best %
UM           0.02            49%                   143 from 145           98.5%       97.9%
Bank notes   -0.05           35%                   100 from 100           100%        61%
Heart        0.13            35%                   187 from 187           100%        84%
Letters      0.002           49%                   3692 from 4000         92%         82%
Table 2. Classifier Test results. The same criteria as for Table 1, but a separate test dataset
to the train dataset.
The User Modelling dataset [21] was used as part of a knowledge-modelling project that
produced a new type of classifier in that paper. Their classifier was shown to be much better
than the standard ones for the particular problem of web page use, classifying to 97.9%
accuracy. This was compared to 85% accuracy for a k-NN classifier and 73.8% for a Bayes
classifier. This new model however appears to classify even better, at 98.5% accuracy.
Another test tried to classify the bank notes dataset [25]. These were scanned variable
values from ‘real’ or ‘fake’ banknotes, where the output was therefore binary. This is
another different type of problem, where a Wavelet transform might typically be used. The
dataset again contained a train and a test dataset, where the best classification realised
100% accuracy. In that paper they quote maybe only 61% correct classification, but other
papers have quoted close to 100% correct for similar problems.
A third dataset was a heart classifier from SPECT images [23]. While they noted 84%
accuracy on the test dataset using a sparse grid method, the new method can achieve 100%
accuracy. A fourth dataset was a letter recognition task [8]. Letters were categorised into
one of 26 alphabet types, where there were 20000 instances in total, with 16000 instances
in the train set and 4000 instances in the test set. They used a fuzzy exemplar-based rule
creation method, but achieved 82% accuracy as compared to 92% accuracy here.
7 Conclusions
This paper describes a new type of weight adjustment method that can be used as part of a
classifier, or a neural network in particular. It is basically a neural unit with the addition of a
very simple rule. The inclusion of the comparison rule however gives the mechanism much
more control over weight updates and the unit could still operate in an almost automatic
manner. The classifier does not need to learn any complex data rules, but for best results,
data normalisation would be required. Another feature is the fact that the weight value can
be added or subtracted, and not multiplied, which is the usual mechanism. Another
potential advantage is the fact that it can be calculated using only the input and the output
values. It is not therefore necessary to fine-tune the classifier with initial weights, or
increment/decrement factor amounts, to start with. A stopping criterion should be added
however, where each iteration adds a new transposition layer to the matrix. Looking at
related work, the learning algorithm is possibly more similar to the Gauss or Pseudo-Newton
gradient descent ones [15]. So again, while the method appears to be new, there are
similarities with older models. The test results are very surprising. The new classifier appears
to work best of all classifiers and across a range of problems. It is also very fast, requiring
only a second or less and the setup is really minimal.
Each learning iteration produces a new set of error correction values and so when used, any
input value goes through a series of transformations, which is separate for each variable or
column value. It is thought that the weight adjustment performs a type of dampening on
the error, and so it should reduce for each transposition stage. The orthogonal nature allows
the variables to behave slightly differently to each other, where a variable that is close to
the desired output value can oscillate around it, while one that is still far away can make
larger corrections towards it. There are probably several examples of this type of
phenomenon in nature. Another paper that uses an even more orthogonal design is [12],
although the results for this paper are maybe slightly better.
Acknowledgement
The author wishes to acknowledge an email discussion with Charles Sauerbier of the US
Navy, mainly because of its timing. He pointed out a belief that neural networks were a form
of cellular automata and several other points, which the author did not fully appreciate, but
the simple rule of this paper would push a neural element in that direction. The research
itself however derived from a different place, looking at wave shapes and possibly some
earlier ideas.
Addendum
It has not been made clear in the paper that the classifier actually used the correct output
category value to converge to a result when classifying any of the datasets. So even if the
classifier had not seen the dataset before, it still used its output category as part of the
classification process. This is a major constraint that might be resolved by testing with each
output category and selecting the category with the smallest error. However, the average
error is also incorrect as it did not take account of negative totals, but it can still be in the
hundredths or thousandths after being corrected. So, the results are correct for what is
described, apart from the error, but that can still be from a similar scale. A new paper ‘An
Improved Oscillating-Error Classifier with Branching’ has solved the other problems and
should also be read.
References
[1] Asim, A., Li, Y., Xie, Y. and Zhu, Y. (2002). Data Mining For Abalone, Computer Science
4TF3 Project, Supervised by Dr. Jiming Peng, Department of Computing and Software,
McMaster University, Hamilton, Ontario.
[2] Brattka, V. (2003). A computable Kolmogorov superposition theorem. Computability
and Complexity in Analysis. Informatik Berichte, Vol. 272, pp.7-22.
[3] Carpenter, G., Grossberg, S., and Rosen, D. (1991). Fuzzy ART: Fast stable learning and
categorization of analog patterns by an adaptive resonance system. Neural Networks,
Vol. 4, pp. 759-771.
[4] Chen, S., Cai, D., Pearce, K., Sun, P.Y-W, Roberts, A.C. and Glanzman, D.L. (2014).
Reinstatement of long-term memory following erasure of its behavioral and synaptic
expression in Aplysia, eLife 2014;3:e03896, pp. 1 - 21. DOI: 10.7554/eLife.03896.
[5] Dershowitz, N. and Falkovich, E. (2015). Cellular Automata are Generic, U. Dal Lago and
R. Harmer (Eds.): Developments in Computational Models 2014 (DCM 2014). EPTCS
179, pp. 17-32, doi:10.4204/EPTCS.179.2.
[6] Fisher, R.A. (1936). The use of multiple measurements in taxonomic problems, Annual
Eugenics, 7, Part II, pp. 179-188, also in 'Contributions to Mathematical Statistics' (John
Wiley, NY, 1950).
[7] Forina, M. et al. (1991). PARVUS - An Extendible Package for Data Exploration,
Classification and Correlation. Institute of Pharmaceutical and Food Analysis and
Technologies, Via Brigata Salerno, 16147 Genoa, Italy.
[8] Frey, P.W. and Slate, D.J. (1991). Letter recognition using Holland-style adaptive
classifiers, Machine learning, Vol. 6, No. 2, pp. 161-182.
[9] Friedman, J.H. (1989). Regularized Discriminant Analysis, Journal of the American
Statistical Association, Vol. 84, No. 405, pp. 165-175.
[10] Gallant, S.I. (1990). Perceptron-Based Learning Algorithms, IEEE Transactions on
Neural Networks, Vol. 1, No. 2.
[11] Garcke, J. and Griebel, M., 2002. Classification with sparse grids using simplicial basis
functions. Intelligent data analysis, Vol. 6, No. 6, pp. 483-502.
[12] Greer, K. (2015). A Single-Pass Classifier for Categorical Data, available on arXiv at
http://arxiv.org/abs/1503.02521.
[13] Greer, K. (2013). Artificial Neuron Modelling Based on Wave Shape, BRAIN. Broad
Research in Artificial Intelligence and Neuroscience, Vol. 4, Issues 1-4, pp. 20-25, ISSN
2067-3957 (online), ISSN 2068-0473 (print).
[14] Grossberg, S. (2013). Adaptive resonance theory. Scholarpedia, Vol. 8, No. 5, pp.
1569.
[15] Hagan, M.T. and Menhaj, M.B. (1994). Training Feedforward Networks with the
Marquardt Algorithm, IEEE Transactions on Neural Networks, Vol. 5, No. 6, pp. 989-993.
[16] Hayes-Roth, B. and Hayes-Roth, F. (1977). Concept Learning and the Recognition and
Classification of Exemplars, Journal of Verbal Learning and Verbal Behavior, Vol. 16, No.
3, pp. 321-338.
[17] Hect-Nielsen, R., Neurocomputing, Addison-Wesley, 1990.
[18] Hinton, G.E., Osindero, S. and Teh, Y.W. (2006). A fast learning algorithm for deep
belief nets. Neural computation, Vol. 18, No. 7, pp. 1527-1554.
[19] Hoffman, J., Tzeng, E., Donahue, J., Jia, Y., Saenko, K. and Darrell, T. (2014). One-Shot
Adaptation of Supervised Deep Convolutional Models, arXiv:1312.6204v2 [].
[20] Jiang, Y. and Zhi-Hua Zhou, Z-H. (2004). Editing training data for knn classifiers with
neural network ensemble, In Lecture Notes in Computer Science, Vol. 3173, pp. 356361.
[21] Kahraman, H.T., Sagiroglu, S. and Colak, I. (2013). The development of intuitive
knowledge classifier and the modeling of domain dependent data, Knowledge-Based
Systems, Vol. 37, pp. 283-295.
[22] Kolmogorov, A.N. (1963). On the representation of continuous functions of many
variables by superposition of continuous functions of one variable and addition.
American Mathematical Society Translation, Vol. 28, No. 2, pp.55-59.
[23] Kurgan, L.A., Cios, K.J., Tadeusiewicz, R., Ogiela, M. and Goodenday, L.S. (2001).
Knowledge Discovery Approach to Automated Cardiac SPECT Diagnosis, Artificial
Intelligence in Medicine, Vol. 23, No. 2, pp 149-169.
[24] Liver dataset (2016). https://archive.ics.uci.edu/ml/datasets/Liver+Disorders.
[25] Lohweg, V., Dörksen, H., Hoffmann, J. L., Hildebrand, R., Gillich, E., Schaede, J., and
Hofmann, J. (2013). Banknote authentication with mobile devices. In IS&T/SPIE
Electronic Imaging (pp. 866507-866507). International Society for Optics and Photonics.
[26]Pershin, Y.V., La Fontaine, S. and Di Ventra, M. (2008). Memristive model of amoeba’s learning,
E-print arXiv:0810.4179, 22 Oct 2008.
[27] Rojas, R. (1996). Neural Networks: A Systematic Introduction. Springer-Verlag, Berlin
and online at books.google.com.
[28] UCI Machine Learning Repository (2016). http://archive.ics.uci.edu/ml/.
[29] Waxman, S.G. (2012). Sodium channels, the electrogenisome and the electrogenistat:
lessons and questions from the clinic, The Journal of Physiology, pp. 2601 – 2612.
[30] Weka (2015). http://www.cs.waikato.ac.nz/ml/weka/index.html.
[31] Widrow, B. and Lehr, M. (1990). 30 Years of adaptive neural networks: perceptron,
Madaline and backpropagation, Proc IEEE, Vol. 78, No. 9, pp. 1415-1442.
[32] Wolfram, S. (1983). Cellular Automata, Los Alamos science.
[33] Zoo database (2016). https://archive.ics.uci.edu/ml/datasets/Zoo.
Matrix Balancing in Lp Norms:
A New Analysis of Osborne’s Iteration
arXiv:1606.08083v1 [] 26 Jun 2016
Rafail Ostrovsky
UCLA
[email protected]∗
Yuval Rabani
The Hebrew University of Jerusalem
[email protected]†
Arman Yousefi
UCLA
[email protected]∗
June 28, 2016
Abstract
We study an iterative matrix conditioning algorithm due to Osborne (1960). The goal of the algorithm
is to convert a square matrix into a balanced matrix where every row and corresponding column have
the same norm. The original algorithm was proposed for balancing rows and columns in the L2 norm,
and it works by iterating over balancing a row-column pair in fixed round-robin order. Variants of the
algorithm for other norms have been heavily studied and are implemented as standard preconditioners
in many numerical linear algebra packages. Recently, Schulman and Sinclair (2015), in a first result of
its kind for any norm, analyzed the rate of convergence of a variant of Osborne’s algorithm that uses the
L∞ norm and a different order of choosing row-column pairs. In this paper we study matrix balancing in
the L1 norm and other Lp norms. We show the following results for any matrix A = (aij )ni,j=1 , resolving
in particular a main open problem mentioned by Schulman and Sinclair.
1. We analyze the iteration for the L1 norm under a greedy order of balancing. We show that it
converges to an ǫ-balanced matrix in K = O(min{ǫ−2 log w, ǫ−1 n3/2 log(w/ǫ)}) iterations that cost
a total of O(m + Kn log n) arithmetic operations over O(n log w)-bit numbers. Here m is the number of non-zero entries of A, and $w = \sum_{i,j} |a_{ij}|/a_{\min}$ with $a_{\min} = \min\{|a_{ij}| : a_{ij} \neq 0\}$.
2. We show that the original round-robin implementation converges to an ǫ-balanced matrix in
O(ǫ−2 n2 log w) iterations totalling O(ǫ−2 mn log w) arithmetic operations over O(n log w)-bit numbers.
3. We show that a random implementation of the iteration converges to an ǫ-balanced matrix in
O(ǫ−2 log w) iterations using O(m+ǫ−2 n log w) arithmetric operations over O(log(wn/ǫ))-bit numbers.
4. We demonstrate a lower bound of $\Omega(1/\sqrt{\epsilon})$ on the convergence rate of any implementation of the
iteration.
5. We observe, through a known trivial reduction, that our results for L1 balancing apply to any Lp
norm for all finite p, at the cost of increasing the number of iterations by only a factor of p.
We note that our techniques are very different from those used by Schulman and Sinclair.
∗
Research supported in part by NSF grants 1065276, 1118126 and 1136174, US-Israel BSF grants, OKAWA Foundation
Research Award, IBM Faculty Research Award, Xerox Faculty Research Award, B. John Garrick Foundation Award, Teradata
Research Award, and Lockheed-Martin Corporation Research Award. This material is also based upon work supported in
part by DARPA Safeware program. The views expressed are those of the authors and do not reflect the official policy or
position of the Department of Defense or the U.S. Government.
†
Research supported in part by ISF grant 956-15, by BSF grant 2012333, and by I-CORE Algo.
1 Introduction
Let A = (aij )n×n be a square matrix with real entries, and let k·k be a given norm. For an index i ∈ [n], let
kai,. k and ka.,i k, respectively, denote the norms of the ith row and the ith column of A, respectively. The
matrix A is balanced in k · k iff ka.,i k = kai,. k for all i. An invertible diagonal matrix D = diag(d1 , . . . , dn )
is said to balance a matrix A iff DAD −1 is balanced. A matrix A is balanceable in k · k iff there exists a
diagonal matrix D that balances it.
Osborne [8] studied the above problem in the L2 norm and considered its application in preconditioning
a given matrix in order to increase the accuracy of the computation of its eigenvalues. The motivation is
that standard linear algebra algorithms that are used to compute eigenvalues are numerically unstable for
unbalanced matrices; diagonal balancing addresses this issue by obtaining a balanced matrix that has the
same eigenvalues as the original matrix, as DAD −1 and A have the same eigenvalues. Osborne suggested
an iterative algorithm for finding a diagonal matrix D that balances a matrix A, and also proved that his
algorithm converges in the limit. He also observed that if a diagonal matrix D = diag(d1 , . . . , dn ) balances
a matrix A, then the diagonal vector d = (d1 , . . . , dn ) minimizes the Frobenius norm of the matrix DAD −1 .
Osborne’s classic algorithm is an iteration that at each step balances a row and its corresponding column
by scaling them appropriately. More specifically, the algorithm balances row-column pairs in a fixed cyclic order. In order to balance row and column i, the algorithm scales the ith row by $\sqrt{\|a_{\cdot,i}\|/\|a_{i,\cdot}\|}$ and the ith column by $\sqrt{\|a_{i,\cdot}\|/\|a_{\cdot,i}\|}$. Osborne's algorithm converges to a unique balanced matrix, but there have been no upper bounds on the convergence rate of Osborne's algorithm for the L2 norm prior to our work.
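For illustration, a minimal sketch of this classic L2 round-robin iteration (ours, not the reference implementation found in numerical packages) is given below; it ignores diagonal entries, as is conventional, and stops after a fixed number of sweeps rather than testing a convergence criterion.

import numpy as np

def osborne_l2(A, sweeps=50):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    np.fill_diagonal(A, 0.0)          # diagonal entries play no role in balancing
    d = np.ones(n)                    # accumulated diagonal scaling D
    for _ in range(sweeps):
        for i in range(n):            # fixed round-robin order over the indices
            row = np.linalg.norm(A[i, :])
            col = np.linalg.norm(A[:, i])
            if row == 0.0 or col == 0.0:
                continue
            f = np.sqrt(col / row)    # scale row i by f and column i by 1/f
            A[i, :] *= f
            A[:, i] /= f
            d[i] *= f
    return A, d                       # A is now (approximately) D A_0 D^{-1}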
Parlett and Reinsch [9] generalized Osborne’s algorithm to other norms without proving convergence.
The L1 version of the algorithm has been studied extensively. The convergence in the limit of the L1
version was proved by Grad [4], uniqueness of the balanced matrix by Hartfiel [5], and a characterization
of balanceable matrices was given by Eaves et al. [3]. Again, there have been no upper bounds on the
running time of the L1 version of the iteration. The first polynomial time algorithm for balancing a
matrix in the L1 norm was given by Kalantari, Khachiyan, and Shokoufandeh [6]. Their approach is
different from the iterative algorithm of Osborne-Parlett-Reinsch. They reduce the balancing problem to
a convex optimization problem and then solve that problem approximately using the ellipsoid algorithm. Their algorithm runs in $O(n^4 \log(n \log w/\epsilon))$ arithmetic operations, where $w = \sum_{i,j} |a_{i,j}|/a_{\min}$ for $a_{\min} = \min\{|a_{ij}| : a_{ij} \neq 0\}$ and $\epsilon$ is the relative imbalance of the output matrix (see Definition 1).
For matrix balancing in the L∞ norm, Schneider and Schneider [11] gave an O(n4 )-time non-iterative
algorithm. This running time was improved to O(mn + n2 log n) by Young, Tarjan, and Orlin [14]. Despite
the existence of polynomial time algorithms for balancing in the L1 and L∞ norms, and the lack of any
theoretical bounds on the running time of the Osborne-Parlett-Reinsch (OPR) iterative algorithm, the
latter is favored in practice, and the Parlett and Reinsch variant [9] is implemented as a standard in almost
all linear algebra packages (see Chen [2, Section 3.1], also the book [10, Chapter 11] and the code in [1]).
One reason is that iterative methods usually perform well in practice and run for far fewer iterations than
are needed in the worst case. Another advantage of iterative algorithms is that they are simple, they
provide steady partial progress, and they can always generate a matrix that is sufficiently balanced for the
subsequent linear algebra computation.
Motivated by the impact of the OPR algorithm and the lack of any theoretical bounds on its running
time, Schulman and Sinclair [12] recently showed the first bound on the convergence rate of a modified version of this algorithm in the L∞ norm. They prove that their modified algorithm converges in
O(n3 log(ρn/ǫ)) balancing steps where ρ measures the initial imbalance of A and ǫ is the target imbalance
of the output matrix. Their algorithm differs from the original algorithm only in the order of choosing
row-column pairs to balance (we will use the term variant to indicate a deviation from the original roundrobin order). Schulman and Sinclair do not prove any bounds on the running time of the algorithm for
other Lp norms; this was explicitly mentioned as an open problem. Notice that when changing the norm,
not only the target balancing condition changes but also the iteration itself, so we cannot deduce an upper
bound on the rate of convergence in the Lp norm from the rate of convergence in the L∞ norm.
In this paper we resolve the open question of [12], and upper bound the convergence rate of the OPR
iteration in any Lp norm.1 Specifically, we show the following bounds for the L1 norm. They imply the
same bounds with an extra factor of p for the Lp norm, by using them on the matrix with entries raised
to the power of p. (Below, the Õ(·) notation hides factors that are logarithmic in various parameters of
the problem. Exact bounds await the statements of the theorems in the following sections.) We show that
the original algorithm (with no modification) converges to an ǫ-balanced matrix in Õ(n2 /ǫ2 ) balancing
steps, using Õ(mn/ǫ2 ) arithmetic operations. We also show that a greedy variant converges in Õ(1/ǫ2 )
balancing steps, using O(m) + Õ(n/ǫ2 ) arithmetic operations; or alternatively in Õ(n3/2 /ǫ) iterations,
using Õ(n5/2 /ǫ) arithmetic operations. Thus, the number of arithmetic operations needed by our greedy
variant is nearly linear in m or nearly linear in 1/ǫ. The near linear dependence on m is significantly
better than the Kalantari-Khachiyan-Shokoufandeh algorithm that uses O(n4 log(n log w/ǫ)) arithmetic
operations (and also the Schulman and Sinclair version with a stricter, yet L∞ , guarantee). For an accurate
comparison we should note that we may need to maintain Õ(n) bits of precision, so the running time is
actually O(m + n2 log n log w/ǫ2 ) (the Kalantari et al. algorithm maintains O(log(wn/ǫ))-bit numbers).
We improve this with yet another, randomized, variant that has similar convergence rate (nearly linear in
m), but needs only O(log(wn/ǫ)) bits of precision. Finally, we show that the dependence on ǫ given by our
√
analyses is within the right ballpark—we demonstrate a lower bound of Ω(1/ ǫ) on the convergence rate
of any variant of the algorithm to an ǫ-balanced matrix. Notice the contrast with the Schulman-Sinclair
upper bound for balancing in the L∞ norm that has O(log(1/ǫ)) dependence on ǫ (this lower bound is for
the Kalantari et al. notion of balancing so it naturally applies also to strict balancing).
Osborne observed that a diagonal matrix D = diag(d1 , . . . , dn ) that balances a matrix A in the L2 norm
also minimizes the Frobenius norm of the matrix DAD −1 . Thus, the balancing problem can be reduced to
minimizing a convex function. Kalantari et al. [6] gave a convex program for balancing in the L1 norm. Our
analysis is based on their convex program. We relate the OPR balancing step to the coordinate descent
method in convex programming. We show that each step reduces the value of the objective function. Our
various bounds are derived through analyzing the progress made in each step. In particular, one of the main
tools in our analysis is an upper bound on the distance to optimality (measured by the convex objective
function) in terms of the the L1 norm of the gradient, which we prove using network flow arguments.
For lack of space, many proofs are missing inline. They appear in Section 7.
2 Preliminaries
In this section we introduce notation and definitions, we discuss some previously known facts and results,
and we prove a couple of useful lemmas.
The problem. Let $A = (a_{ij})_{n\times n}$ be a square real matrix, and let $\|\cdot\|$ be a norm on $\mathbb{R}^n$. For an index $i\in[n]$, let $\|a_{i,\cdot}\|$ and $\|a_{\cdot,i}\|$ denote the norms of the $i$th row and the $i$th column of $A$, respectively. A matrix $A$ is balanced in $\|\cdot\|$ iff $\|a_{\cdot,i}\| = \|a_{i,\cdot}\|$ for all $i$. An invertible diagonal matrix $D = \mathrm{diag}(d_1,\dots,d_n)$ is said to balance a matrix $A$ iff $DAD^{-1}$ is balanced. A matrix $A$ is balanceable in $\|\cdot\|$ iff there exists a diagonal matrix $D$ that balances it.
For balancing a matrix A in the Lp norm only the absolute values of the entries of A matter, so we
may assume without loss of generality that A is non-negative. Furthermore, balancing a matrix does not
Footnote 1: It should be noted that the definition of target imbalance ǫ in [12] is stricter than the definition used by [6]. We use the
definition in [6]. This is justified by the fact that the numerical stability of eigenvalue calculations depends on the Frobenius
norm of the balanced matrix, see [9].
change its diagonal entries, so if a diagonal matrix D balances A with its diagonal entries replaced by
zeroes, then D balances A too. Thus, for the rest of the paper, we assume without loss of generality that
the given n × n matrix A = (aij ) is non-negative and its diagonal entries are all 0.
A diagonal matrix $D = \mathrm{diag}(d_1,\dots,d_n)$ balances $A = (a_{ij})$ in the $L_p$ norm if and only if $D^p = \mathrm{diag}(d_1^p,\dots,d_n^p)$ balances the matrix $A' = (a_{ij}^p)$ in the $L_1$ norm. Thus, the problem of balancing matrices in the $L_p$ norm (for any finite $p$) reduces to the problem of balancing matrices in the $L_1$ norm; for the rest of the paper we focus on balancing matrices in the $L_1$ norm.
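As a concrete illustration (ours, not part of the original paper), the reduction is just an entrywise power: balancing $A$ in $L_p$ corresponds to balancing the matrix of $p$-th powers of the absolute values in $L_1$. A minimal sketch, assuming NumPy:

```python
import numpy as np

def reduce_to_l1(A: np.ndarray, p: float) -> np.ndarray:
    """Entrywise |a_ij|^p: balancing A in L_p corresponds to balancing this matrix in L_1."""
    return np.abs(A) ** p

def recover_scaling(d_l1: np.ndarray, p: float) -> np.ndarray:
    """If diag(d_l1) balances |A|^p in L_1, then diag(d_l1 ** (1/p)) balances A in L_p."""
    return d_l1 ** (1.0 / p)
```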
For an n × n matrix A, we use GA = (V, E, w) to denote the weighted directed graph whose adjacency
matrix is $A$. More formally, $G_A$ is defined as follows. Put $V = \{1,\dots,n\}$, put $E = \{(i,j) : a_{ij}\ne 0\}$, and
put w(i, j) = aij for every (i, j) ∈ E. We use an index i ∈ [n] to refer to both the ith row or column of A,
and to the node i of the digraph GA . Thus, the non-zero entries of the ith column (the ith row, respectively)
correspond to the arcs into (out of, respectively) node i. In the L1 norm it is useful to think of the weight of
an arc as a flow being carried by that arc. Thus, $\|a_{\cdot,i}\|_1$ is the total flow into vertex $i$ and $\|a_{i,\cdot}\|_1$ is the total flow out of it. Note that if a matrix $A$ is not balanced then for some node $i$, $\|a_{\cdot,i}\|_1 \ne \|a_{i,\cdot}\|_1$, and thus the
flow on the arcs does not constitute a valid circulation because flow conservation is not maintained. Thus,
the goal of balancing in the L1 norm can be stated as applying diagonal scaling to find a flow function on
the arcs of the graph GA that forms a valid circulation. We use both views of the graph (with arc weights
or flow), and also the matrix terminology, throughout this paper, as convenient.
Without loss of generality we may assume that the undirected graph underlying GA is connected.
Otherwise, after permuting V = {1, . . . , n}, the given matrix A can be replaced by diag(A1 , . . . , Ar ) where
each of A1 , . . . , Ar is a square matrix whose corresponding directed graph is connected. Thus, balancing
A is equivalent to balancing each of A1 , . . . , Ar .
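For completeness, here is a hedged sketch of this preprocessing step, splitting the matrix along the connected components of the underlying undirected graph; it is our own illustration (not from the paper) and assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def split_by_components(A: np.ndarray):
    """Return the principal submatrices of A corresponding to the connected
    components of the undirected graph underlying G_A; each block can be
    balanced independently."""
    support = csr_matrix((A != 0) | (A.T != 0))          # symmetrized support
    k, labels = connected_components(support, directed=False)
    return [A[np.ix_(labels == c, labels == c)] for c in range(k)]
```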
The goal of the iterative algorithm is to balance approximately a matrix A, up to an error term ǫ. We
define the error here.
Definition 1 (approximate balancing). Let ǫ > 0.
1. A matrix $A$ is $\epsilon$-balanced iff
$$\frac{\sqrt{\sum_{i=1}^n \left(\|a_{\cdot,i}\|_1 - \|a_{i,\cdot}\|_1\right)^2}}{\sum_{i,j} a_{i,j}} \le \epsilon.$$
2. A diagonal matrix D with positive diagonal entries is said to ǫ-balance A iff DAD −1 is ǫ-balanced.
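The following small sketch (our own illustration, assuming NumPy) evaluates the quantity in Definition 1 for a non-negative matrix with zero diagonal:

```python
import numpy as np

def imbalance(A: np.ndarray) -> float:
    """Left-hand side of Definition 1: sqrt(sum_i (||a_.,i||_1 - ||a_i,.||_1)^2) / sum_ij a_ij."""
    col = A.sum(axis=0)   # ||a_.,i||_1 for non-negative A
    row = A.sum(axis=1)   # ||a_i,.||_1
    return np.sqrt(((col - row) ** 2).sum()) / A.sum()

def is_eps_balanced(A: np.ndarray, eps: float) -> bool:
    return imbalance(A) <= eps
```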
The algorithms. Kalantari et al. [6] introduced the above definition of ǫ-balancing, and showed that
their algorithm for ǫ-balancing a matrix in the L1 norm uses O(n4 ln((n/ǫ) ln w)) arithmetic operations. In
their recent work, Schulman and Sinclair [12] use, in the context of balancing in the L∞ norm, a stronger
notion of strict balancing (that requires even very low weight row-column pairs to be nearly balanced).
Their iterative algorithm strictly ǫ-balances a matrix in the $L_\infty$ norm in $O(n^3\log(n\rho/\epsilon))$ iterations, where ρ measures the initial imbalance of the matrix. In this paper, we prove upper bounds on the convergence
rate of the Osborne-Parlett-Reinsch (OPR) balancing.
The OPR iterative algorithm balances indices in a fixed round-robin order. Schulman and Sinclair
considered a variant that uses a different rule to choose the next index to balance. We consider in this paper
several alternative implementations of OPR balancing (including the original round-robin implementation)
that differ only in the rule by which an index to balance is chosen at each step. For all rules that we
consider, the iteration generates a sequence $A = A^{(1)}, A^{(2)},\dots,A^{(t)},\dots$ of $n\times n$ matrices that converges to a unique balanced matrix $A^*$ (see Grad [4] and Hartfiel [5]). The matrix $A^{(t+1)}$ is obtained by balancing an index of $A^{(t)}$. If the $i$th index of $A^{(t)}$ is chosen, we get that $A^{(t+1)} = D^{(t)} A^{(t)} (D^{(t)})^{-1}$, where $D^{(t)}$ is a diagonal matrix with $d^{(t)}_{ii} = \sqrt{\|a^{(t)}_{\cdot,i}\|_1 / \|a^{(t)}_{i,\cdot}\|_1}$ and $d^{(t)}_{jj} = 1$ for $j\ne i$. Note that $a^{(t)}_{i,\cdot}$ ($a^{(t)}_{\cdot,i}$, respectively) denotes the $i$th row ($i$th column, respectively) of $A^{(t)}$. Also, putting $\bar D^{(1)} = I_{n\times n}$ and $\bar D^{(t)} = D^{(t-1)}\cdots D^{(1)}$ for $t > 1$, we get that $A^{(t)} = \bar D^{(t)} A (\bar D^{(t)})^{-1}$.
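To make the iteration concrete, here is a hedged Python sketch of a single balancing step and of the round-robin (original OPR) loop; it is a simplified illustration with exact floating-point arithmetic, not the authors' implementation.

```python
import numpy as np

def balance_index(A: np.ndarray, i: int) -> None:
    """One OPR balancing step: scale row i by sqrt(c/r) and column i by sqrt(r/c),
    where r and c are the L1 norms of the i-th row and column (zero diagonal assumed)."""
    r, c = A[i, :].sum(), A[:, i].sum()
    if r == 0 or c == 0:
        return
    s = np.sqrt(c / r)
    A[i, :] *= s
    A[:, i] /= s

def osborne_round_robin(A: np.ndarray, eps: float, max_rounds: int = 10_000) -> np.ndarray:
    """Round-robin OPR iteration, run until the matrix is eps-balanced (Definition 1)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for _ in range(max_rounds):
        for i in range(n):
            balance_index(A, i)
        col, row = A.sum(axis=0), A.sum(axis=1)
        if np.sqrt(((col - row) ** 2).sum()) <= eps * A.sum():
            break
    return A
```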
The following lemma shows that each balancing step reduces the sum of entries of the matrix.
Lemma 1. Balancing the $i$th index of a non-negative matrix $B = (b_{ij})_{n\times n}$ (with $b_{ii}=0$) decreases the total sum of the entries of $B$ by $\left(\sqrt{\|b_{\cdot,i}\|_1} - \sqrt{\|b_{i,\cdot}\|_1}\right)^2$.

Proof. Before balancing, the total sum of entries in the $i$th row and in the $i$th column is $\|b_{i,\cdot}\|_1 + \|b_{\cdot,i}\|_1$. Balancing scales the entries of the $i$th column by $\sqrt{\|b_{i,\cdot}\|_1/\|b_{\cdot,i}\|_1}$ and the entries of the $i$th row by $\sqrt{\|b_{\cdot,i}\|_1/\|b_{i,\cdot}\|_1}$. Thus, after balancing, the sum of entries in the $i$th column, which equals the sum of entries in the $i$th row, is equal to $\sqrt{\|b_{i,\cdot}\|_1\cdot\|b_{\cdot,i}\|_1}$. The entries that are not in the balanced row and column are not changed. Therefore, keeping in mind that $b_{ii}=0$, balancing decreases $\sum_{i,j} b_{ij}$ by $\|b_{\cdot,i}\|_1 + \|b_{i,\cdot}\|_1 - 2\sqrt{\|b_{i,\cdot}\|_1\cdot\|b_{\cdot,i}\|_1} = \left(\sqrt{\|b_{\cdot,i}\|_1} - \sqrt{\|b_{i,\cdot}\|_1}\right)^2$.
A reduction to convex optimization. Kalantari et al. [6], as part of their algorithm, reduce matrix
balancing to a convex optimization problem. We overview their reduction here. Our starting point is
Osborne’s observation that if a diagonal matrix D = diag(d1 , . . . , dn ) balances a matrix A in the L2 norm,
then the diagonal vector d = (d1 , . . . , dn ) minimizes the Frobenius norm of the matrix DAD −1 . The
analogous claim for the $L_1$ norm is that if a diagonal matrix $D = \mathrm{diag}(d_1,\dots,d_n)$ balances a matrix $A$ in the $L_1$ norm, then the diagonal vector $d = (d_1,\dots,d_n)$ minimizes the function $F(d) = \sum_{i,j} a_{ij}\, d_i/d_j$. On
the other hand, Eaves et al. [3] observed that a matrix A can be balanced if and only if the digraph GA is
strongly connected. The following theorem [6, Theorem 1] summarizes the above discussion.
Theorem 1 (Kalantari et al.). Let A = (aij )n×n be a real non-negative matrix, aii = 0, for all i = 1, . . . n,
such that the undirected graph underlying GA is connected. Then, the following statements are equivalent.
(i) A is balanceable (i.e., there exists a diagonal matrix D such that DAD −1 is balanced).
(ii) GA is strongly connected.
(iii) Let $F(d) = \sum_{(i,j)\in E} a_{ij}\, d_i/d_j$. There is a point $d^* \in \Omega = \{d\in\mathbb{R}^n : d_i > 0,\ i=1,\dots,n\}$ such that $F(d^*) = \inf\{F(d) : d\in\Omega\}$.
We refer the reader to [6, Theorem 1] for a proof. We have the following corollary.
Corollary 1. d∗ minimizes F over Ω if and only if D ∗ = diag(d∗1 , . . . , d∗n ) balances A.
Proof. As $F$ attains its infimum at $d^*\in\Omega$, its gradient $\nabla F$ satisfies $\nabla F(d^*) = 0$. Also, $\frac{\partial F(d^*)}{\partial d_i} = 0$ if and only if $\sum_{j=1}^n a_{ij}\,(d_i^*/d_j^*) = \sum_{j=1}^n a_{ji}\,(d_j^*/d_i^*)$ for all $i\in[n]$. In other words, $\nabla F(d^*) = 0$ if and only if the matrix $D^* A D^{*-1}$ is balanced, where $D^* = \mathrm{diag}(d_1^*,\dots,d_n^*)$. Thus, $d^*$ minimizes $F$ over $\Omega$ if and only if $D^* = \mathrm{diag}(d_1^*,\dots,d_n^*)$ balances $A$.
It can also be shown that under the assumption of Theorem 1, the balancing matrix D ∗ is unique up to
a scalar factor (see Osborne [8] and Eaves et al. [3]). Therefore, the problem of balancing matrix A can be
reduced to optimizing the function F . Since we are optimizing over the set Ω of strictly positive vectors,
we can apply a change of variables d = (ex1 , . . . , exn ) ∈ Rn to obtain a convex objective function:
$$f(x) = f_A(x) = \sum_{i,j=1}^{n} a_{ij}\, e^{x_i - x_j}. \qquad (1)$$
Kalantari et al. [6] use the convex function f because it can be minimized using the ellipsoid algorithm.
We do not need the convexity of f , and use f instead of F only because it is more convenient to work
with, and it adds some intuition. Notice that the partial derivative of f with respect to xi is
$$\frac{\partial f(x)}{\partial x_i} = \sum_{j=1}^{n} a_{ij}\, e^{x_i - x_j} - \sum_{j=1}^{n} a_{ji}\, e^{x_j - x_i}, \qquad (2)$$
which is precisely the difference between the L1 norms of the ith row and the ith column of the matrix
$DAD^{-1}$, where $D = \mathrm{diag}(e^{x_1},\dots,e^{x_n})$. Also, by definition, the diagonal matrix $\mathrm{diag}(e^{x_1},\dots,e^{x_n})$ $\epsilon$-balances $A$ iff
$$\frac{\|\nabla f(x)\|_2}{f(x)} = \frac{\sqrt{\sum_{i=1}^{n}\left(\sum_{j=1}^{n} a_{ij} e^{x_i-x_j} - \sum_{j=1}^{n} a_{ji} e^{x_j-x_i}\right)^2}}{\sum_{i,j=1}^{n} a_{ij}\, e^{x_i - x_j}} \le \epsilon. \qquad (3)$$
We now state and prove a key lemma that our analysis uses. The lemma uses combinatorial flow and
circulation arguments to measure progress by bounding $f(x) - f(x^*)$ in terms of $\|\nabla f(x)\|_1$, which is a global measure of the imbalances of all vertices.

Lemma 2. Let $f$ be the function defined in Equation (1), and let $x^*$ be a global minimum of $f$. Then, for all $x\in\mathbb{R}^n$, $f(x) - f(x^*) \le \frac{n}{2}\cdot\|\nabla f(x)\|_1$.
Proof. Recall that $f(x) = f_A(x)$ is the sum of entries of a matrix $B = (b_{ij})$ defined by $b_{ij} = a_{ij}\cdot e^{x_i - x_j}$. Notice that $f(x) = f_B(\vec 0)$ and $f(x^*) = f_B(x^{**})$, where $x^{**} = x^* - x$. Alternatively, $f(x)$ is the sum of flows (or weights) of the arcs of $G_B$, and $f(x^*)$ is the sum of flows of the arcs of a graph $G^*$ (an arc $ij$ of $G^*$ carries a flow of $a_{ij}\cdot e^{x^*_i - x^*_j}$). Notice that $G_B$ and $G^*$ have the same set of arcs, but with different weights. By Equation (2), $\|\nabla f_A(x)\|_1 = \sum_{i=1}^n \left|\,\|b_{\cdot,i}\|_1 - \|b_{i,\cdot}\|_1\right|$, i.e., it is the sum over all the nodes of $G_B$ of the difference between the flow into the node and the flow out of it. Also notice that $G_B$ is unbalanced (else the statement of the lemma is trivial), whereas $G^*$ is balanced. Therefore, the arc flows in $G^*$, but not those in $G_B$, form a valid circulation.
Our proof now proceeds in two main steps. In the first step we show a way of reducing the flow on
some arcs of GB , such that the revised flows make every node balanced (and thus form a valid circulation).
We also make sure that the total flow reduction is at most $\frac{n}{2}\cdot\|\nabla f_A(x)\|_1$. In the second step we show that
sum of revised flows of all the arcs is a lower bound on f (x∗ ). These two steps together prove the lemma.
We start with the first step. The nodes of GB are not balanced. Let S and T be a partition of the
unbalanced nodes of $G_B$, with $S = \{i\in[n] : \|b_{\cdot,i}\|_1 > \|b_{i,\cdot}\|_1\}$ and $T = \{i\in[n] : \|b_{\cdot,i}\|_1 < \|b_{i,\cdot}\|_1\}$. That
is, the flow into a node in S exceeds the flow out of it, and the flow into a node in T is less than the flow
out of it. We have that
$$\sum_{i\in S} \left(\|b_{\cdot,i}\|_1 - \|b_{i,\cdot}\|_1\right) - \sum_{i\in T} \left(\|b_{i,\cdot}\|_1 - \|b_{\cdot,i}\|_1\right) = \sum_{i\in[n]} \left(\|b_{\cdot,i}\|_1 - \|b_{i,\cdot}\|_1\right) = 0.$$
Thus, we can view each node $i\in S$ as a source with supply $\|b_{\cdot,i}\|_1 - \|b_{i,\cdot}\|_1$, and each node $i\in T$ as a sink with demand $\|b_{i,\cdot}\|_1 - \|b_{\cdot,i}\|_1$, and the total supply equals the total demand. We now add some weighted
arcs connecting the nodes in S to the nodes in T . These arcs carry the supply at the nodes in S to the
demand at the nodes in T . Note that we may add arcs that are parallel to some existing arcs in GB . Such
arcs can be replaced by adding flow to the parallel existing arcs of GB . In more detail, to compute the
flows of the added arcs (or the added flow to existing arcs), we add arcs inductively as follows. We start
with any pair of nodes i ∈ S and j ∈ T , and add an arc from i to j carrying flow equal to the minimum
between the supply at i and the demand at j. Adding this arc will balance one of its endpoints, but in the
new graph the sum of supplies at the nodes of S is still equal to the sum of demands at the nodes of T , so
we can repeat the process. (Notice that either S or T or both lose one node.) Each additional arc balances
at least one unbalanced node, so $G_B$ gets balanced by adding at most $n$ additional arcs from nodes in $S$ to nodes in $T$. The total flow on the added arcs is exactly $\sum_{i\in S}\left(\|b_{\cdot,i}\|_1 - \|b_{i,\cdot}\|_1\right) = \frac{1}{2}\cdot\|\nabla f(x)\|_1$.
Let E ′ be the set of newly added arcs, and let GB ′ be the new graph with arc weights given by B ′ = (b′ij ).
Since GB ′ is balanced, the arc flows form a valid circulation. We next decompose the total flow of arcs into
cycles. Consider a cycle C in GB ′ that contains at least one arc from E ′ (i.e., C ∩ E ′ 6= ∅). Reduce the
flow on all arcs in C by α = minij∈C b′ij . This can be viewed as peeling off from GB ′ a circulation carrying
flow α. This reduces the flow on at least one arc to zero, and the remaining flow on arcs is still a valid
circulation, so we can repeat the process. It can be repeated as long as there is positive flow on some arc
in $E'$. Eliminating the flow on all arcs in $E'$ using cycles reduces the total flow on the arcs by at most $n$ times the total initial flow on the arcs in $E'$ (i.e., by at most $\frac{n}{2}\cdot\|\nabla f(x)\|_1$), because each cycle contains at most $n$ arcs and its flow $\alpha$ that is peeled off reduces the flow on at least one arc in $E'$ by $\alpha$. After peeling off all the flow on all arcs in $E'$, all the arcs with positive flow are original arcs of $G_B$. Let $G_{B''}$ be the graph with the remaining arcs and their flows, which are given by $B'' = (b''_{ij})$. The total flow on the arcs of $G_{B''}$ is at least $f(x) + \frac{1}{2}\cdot\|\nabla f(x)\|_1 - \frac{n}{2}\cdot\|\nabla f(x)\|_1 \ge f(x) - \frac{n}{2}\cdot\|\nabla f(x)\|_1$.
Next we show that the total flow on the arcs of $G_{B''}$ is a lower bound on $f(x^*)$. Our key tool for this is the fact that balancing operations preserve the product of arc flows on any cycle in the original graph $G_B$, because balancing a node $i$ multiplies the flow on the arcs into $i$ by some factor $r$ and the flow on the arcs out of $i$ by $\frac{1}{r}$. Thus, the geometric mean of the flows of the arcs on any cycle is not changed by a balancing operation. The arc flows in $G_{B''}$ form a valid circulation, and thus can be decomposed into flow cycles $C_1,\dots,C_q$ by a peeling-off process similar to the one described earlier. Let $n_1,\dots,n_q$ be the lengths of the cycles, and let $\alpha_1,\dots,\alpha_q$ be their flows. The total flow on arcs in $G_{B''}$ is, therefore, $\sum_{k=1}^q n_k\alpha_k$. Notice that, by construction, $b''_{ij} \le b_{ij}$, and the decomposition into cycles gives that $b''_{ij} = \sum_{k:\, ij\in C_k} \alpha_k$. Thus,
$$f(x^*) = \sum_{i,j=1}^n b_{ij}\, e^{x^{**}_i - x^{**}_j} \ge \sum_{i,j=1}^n b''_{ij}\, e^{x^{**}_i - x^{**}_j} = \sum_{k=1}^q \alpha_k \sum_{ij\in C_k} e^{x^{**}_i - x^{**}_j} \ge \sum_{k=1}^q n_k\alpha_k \left(\prod_{ij\in C_k} e^{x^{**}_i - x^{**}_j}\right)^{1/n_k} = \sum_{k=1}^q n_k\alpha_k = \sum_{i,j=1}^n b''_{ij},$$
where the last inequality uses the arithmetic-geometric mean inequality (and the product of the factors $e^{x^{**}_i - x^{**}_j}$ around a cycle equals 1). Notice that the right-hand side is the total flow on the arcs of $G_{B''}$, which is at least $f(x) - \frac{n}{2}\cdot\|\nabla f(x)\|_1$. Thus, $f(x^*) \ge f(x) - \frac{n}{2}\cdot\|\nabla f(x)\|_1$, and this completes the proof of the lemma.
3 Greedy Balancing
Here we present and analyze a greedy variant of the OPR iteration. Instead of balancing indices in a fixed
round-robin order, the greedy modification chooses at iteration t an index it of A(t) such that balancing
the chosen index results in the largest decrease in the sum of entries of A(t) . In other words, we pick it
such that the following equation holds.
$$i_t = \arg\max_{i\in[n]} \left(\sqrt{\|a^{(t)}_{\cdot,i}\|_1} - \sqrt{\|a^{(t)}_{i,\cdot}\|_1}\right)^2 \qquad (4)$$
We give two analyses of this variant, one that shows that the number of operations is nearly linear
in the size of GA , and another that shows that the number of operations is nearly linear in 1/ǫ. More
specifically, we prove the following theorem.
Theorem 2. Given an $n\times n$ matrix $A$, let $m = |E(G_A)|$. The greedy implementation of the OPR iterative algorithm outputs an $\epsilon$-balanced matrix in $K$ iterations, which cost a total of $O(m + Kn\log n)$ arithmetic operations over $O(n\log w)$-bit numbers, where $K = O\!\left(\min\{\epsilon^{-2}\log w,\ \epsilon^{-1} n^{3/2}\log(w/\epsilon)\}\right)$.
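For intuition, the following is a simplified sketch of the greedy rule of Equation (4); to keep it short it recomputes all scores with NumPy at every step rather than maintaining the priority queue used in the running-time analysis, and it assumes $G_A$ is strongly connected (Theorem 1).

```python
import numpy as np

def greedy_balance(A: np.ndarray, eps: float, max_iters: int = 100_000) -> np.ndarray:
    """Greedy OPR: at each step balance the index maximizing (sqrt(col)-sqrt(row))^2,
    i.e., the index whose balancing gives the largest drop in the matrix sum."""
    A = A.astype(float).copy()
    for _ in range(max_iters):
        row, col = A.sum(axis=1), A.sum(axis=0)
        if np.sqrt(((col - row) ** 2).sum()) <= eps * A.sum():
            break                                   # eps-balanced (Definition 1)
        i = int(np.argmax((np.sqrt(col) - np.sqrt(row)) ** 2))
        if row[i] == 0 or col[i] == 0:
            break                                   # not balanceable; see Theorem 1
        s = np.sqrt(col[i] / row[i])
        A[i, :] *= s
        A[:, i] /= s
    return A
```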
The proof uses the convex optimization framework introduced in Section 2. Recall that $A^{(t)} = \bar D^{(t)} A (\bar D^{(t)})^{-1}$. If we let $\bar D^{(t)} = \mathrm{diag}(e^{x_1^{(t)}},\dots,e^{x_n^{(t)}})$, the iterative sequence can be viewed as generating a sequence of points $x^{(1)}, x^{(2)},\dots,x^{(t)},\dots$ in $\mathbb{R}^n$, where $x^{(t)} = (x_1^{(t)},\dots,x_n^{(t)})$ and $A^{(t)} = \bar D^{(t)} A (\bar D^{(t)})^{-1} = (a_{ij}\, e^{x_i^{(t)} - x_j^{(t)}})_{n\times n}$. Initially, $x^{(1)} = (0,\dots,0)$, and $x^{(t+1)} = x^{(t)} + \alpha_t e_i$, where $\alpha_t = \ln(d_{ii}^{(t)})$ and $e_i$ is the $i$th vector of the standard basis for $\mathbb{R}^n$. By Equation (1), the value $f(x^{(t)})$ is the sum of the entries of the matrix $A^{(t)}$. The following key lemma allows us to lower bound the decrease in the value of $f(x^{(t)})$ in terms of a
value that can be later related to the stopping condition.
Lemma 3. If the index $i_t$ defined in Equation (4) is picked to balance $A^{(t)}$, then $f(x^{(t)}) - f(x^{(t+1)}) \ge \frac{\|\nabla f(x^{(t)})\|_2^2}{4 f(x^{(t)})}$.

Corollary 2. If the matrix $A^{(t)}$ is not $\epsilon$-balanced, by balancing index $i_t$ at iteration $t$ we have $f(x^{(t)}) - f(x^{(t+1)}) \ge \frac{\epsilon^2}{4}\cdot f(x^{(t)})$.
Proof of Theorem 2. By Corollary 2, while $A^{(t)}$ is not $\epsilon$-balanced, there exists an index $i_t$ to balance such that $f(x^{(t)}) - f(x^{(t+1)}) \ge \frac{\epsilon^2}{4}\cdot f(x^{(t)})$. Thus, $f(x^{(t+1)}) \le \left(1 - \frac{\epsilon^2}{4}\right)\cdot f(x^{(t)})$. Iterating for $t$ steps yields $f(x^{(t+1)}) \le \left(1 - \frac{\epsilon^2}{4}\right)^t\cdot f(x^{(1)})$. So, on the one hand, $f(x^{(1)}) = \sum_{i,j=1}^n a_{ij}$, since $f(x^{(1)})$ is the sum of entries of $A^{(1)}$. On the other hand, we argue that the value of $f(x^{(t+1)})$ is at least $\min_{(i,j)\in E} a_{ij}$. To see this, consider a directed cycle in the graph $G_A$. It is easy to see that balancing operations preserve the product of weights of the arcs on any cycle. Thus, the weight of at least one arc in the cycle is at least its weight in the input matrix $A$. Therefore, $a_{\min} \le f(x^{(t+1)}) \le \left(1 - \frac{\epsilon^2}{4}\right)^t f(x^{(1)}) = \left(1 - \frac{\epsilon^2}{4}\right)^t \sum_{i,j=1}^n a_{ij}$. Thus, $t \le \frac{4}{\epsilon^2}\cdot\ln w$, and this is an upper bound on the number of balancing operations before an $\epsilon$-balanced matrix is obtained. The algorithm initially computes $\|a_{\cdot,i}\|_1$ and $\|a_{i,\cdot}\|_1$ for all $i\in[n]$ in $O(m)$ time. Also, the algorithm initially computes the value of $\left(\sqrt{\|a_{i,\cdot}\|_1} - \sqrt{\|a_{\cdot,i}\|_1}\right)^2$ for all $i$ in $O(m)$ time and inserts the values in a priority queue in $O(n\log n)$ time. The values of $\|a^{(t)}_{i,\cdot}\|_1$ and $\|a^{(t)}_{\cdot,i}\|_1$ for all $i$, and the values $\left(\sqrt{\|a^{(t)}_{i,\cdot}\|_1} - \sqrt{\|a^{(t)}_{\cdot,i}\|_1}\right)^2$, are updated after each balancing operation. In each iteration the weights of at most $n$ arcs change. Updating the values of $\|a^{(t)}_{i,\cdot}\|_1$ and $\|a^{(t)}_{\cdot,i}\|_1$ takes $O(n)$ time, and updating the values of $\left(\sqrt{\|a^{(t)}_{i,\cdot}\|_1} - \sqrt{\|a^{(t)}_{\cdot,i}\|_1}\right)^2$ involves at most $n$ updates of values in the priority queue, each taking time $O(\log n)$. Thus, the first iteration takes $O(m)$ operations and each iteration after that takes $O(n\log n)$ operations, so the total running time of the algorithm in terms of arithmetic operations is $O(m + (n\log n\log w)/\epsilon^2)$.
An alternative analysis completes the proof. Notice that $\|\nabla f(x^{(t)})\|_2 \le \|\nabla f(x^{(t)})\|_1 \le \sqrt{n}\cdot\|\nabla f(x^{(t)})\|_2$. Therefore,
$$f(x^{(t)}) - f(x^{(t+1)}) \ge \frac{\|\nabla f(x^{(t)})\|_2^2}{4 f(x^{(t)})} \ge \frac{\|\nabla f(x^{(t)})\|_2}{4\sqrt{n}\, f(x^{(t)})}\cdot\|\nabla f(x^{(t)})\|_1 \ge \frac{1}{2n^{3/2}}\cdot\frac{\|\nabla f(x^{(t)})\|_2}{f(x^{(t)})}\cdot\left(f(x^{(t)}) - f(x^*)\right),$$
where the first inequality follows from Lemma 3, and the last inequality follows from Lemma 2. Therefore, while $A^{(t)}$ is not $\epsilon$-balanced (so $\frac{\|\nabla f(x^{(t)})\|_2}{f(x^{(t)})} > \epsilon$), we have that $f(x^{(t)}) - f(x^{(t+1)}) \ge \frac{\epsilon}{2n^{3/2}}\cdot\left(f(x^{(t)}) - f(x^*)\right)$.
Rearranging the terms, we get $f(x^{(t+1)}) - f(x^*) \le \left(1 - \frac{\epsilon}{2n^{3/2}}\right)\cdot\left(f(x^{(t)}) - f(x^*)\right)$. Therefore, $f(x^{(t+1)}) - f(x^*) \le \left(1 - \frac{\epsilon}{2n^{3/2}}\right)^t\cdot\left(f(x^{(1)}) - f(x^*)\right)$. Notice that by Lemma 3, $f(x^{(t+1)}) - f(x^*) \ge f(x^{(t+1)}) - f(x^{(t+2)}) \ge \frac{\|\nabla f(x^{(t+1)})\|_2^2}{4 f(x^{(t+1)})} \ge \frac{1}{4}\left(\frac{\|\nabla f(x^{(t+1)})\|_2}{f(x^{(t+1)})}\right)^2\cdot a_{\min}$. On the other hand, $f(x^{(1)}) - f(x^*) \le f(x^{(1)}) \le \sum_{i,j=1}^n a_{ij}$. Thus, for $t = 2\epsilon^{-1}\cdot n^{3/2}\ln(4w/\epsilon^2)$, we have that $\frac{\|\nabla f(x^{(t+1)})\|_2}{f(x^{(t+1)})} \le \epsilon$, so the matrix is $\epsilon$-balanced.
4 Round-Robin Balancing (the original algorithm)

Recall that the original Osborne-Parlett-Reinsch algorithm balances indices in a fixed round-robin order. Although the greedy variant of the OPR iteration is a simple modification of the implementation, the convergence rate of the original algorithm (with no change) is interesting. This is important because the original algorithm has a slightly simpler implementation, and also because this is the implementation used in almost all numerical linear algebra software, including MATLAB, LAPACK and EISPACK (refer to [13, 7] for further background). We give an answer to this question in the following theorem.

Theorem 3. Given an $n\times n$ matrix $A$, the original implementation of the OPR iteration outputs an $\epsilon$-balanced matrix in $O(\epsilon^{-2} n^2\log w)$ iterations, totalling $O(\epsilon^{-2} mn\log w)$ arithmetic operations over $O(n\log w)$-bit numbers ($m$ is the number of non-zero entries of $A$).
5 Randomized Balancing

In Theorem 2 the arithmetic operations were applied to $O(n\ln w)$-bit numbers. This will cause an additional factor of $O(n\ln w)$ in the running time of the algorithm. In this section we fix this issue by presenting a randomized variant of the algorithm that applies arithmetic operations to numbers of $O(\ln(wn/\epsilon))$ bits. Thus, we obtain an algorithm for balancing that runs in nearly linear time. While the greedy algorithm works by picking the node $i$ that maximizes $\left(\sqrt{\|a_{i,\cdot}\|} - \sqrt{\|a_{\cdot,i}\|}\right)^2$, the key idea of the randomized algorithm is to sample a node for balancing using sampling probabilities that do not depend on the difference in arc weights (the algorithm uses low-precision rounded weights, so rounding can affect this difference significantly). Instead, our sampling probabilities depend on the sum of weights of the arcs incident on a node.
We first introduce some notation. We use $O(\ln(wn/\epsilon))$ bits of precision to approximate the $x_i$'s with $\hat x_i$'s. Thus, $x_i - 2^{-O(\ln(nw/\epsilon))} \le \hat x_i \le x_i$. In addition to maintaining $\hat x^{(t)} = (\hat x_1^{(t)}, \hat x_2^{(t)},\dots,\hat x_n^{(t)})$ at every time $t$, the algorithm also maintains for every $i$ and $j$ the value of $\hat a_{ij}^{(t)}$, which is $a_{ij}^{(t)} = a_{ij}\, e^{\hat x_i^{(t)} - \hat x_j^{(t)}}$ truncated to $O(\ln(wn/\epsilon))$ bits of precision. We set the hidden constant to give a truncation error of $r = (\epsilon/wn)^{10} a_{\min}$, so $a_{ij}^{(t)} - r \le \hat a_{ij}^{(t)} \le a_{ij}^{(t)}$. The algorithm also maintains for every $i$ the values $\|\hat a_{i,\cdot}^{(t)}\| = \sum_{j=1}^n \hat a_{ij}^{(t)}$ and $\|\hat a_{\cdot,i}^{(t)}\| = \sum_{j=1}^n \hat a_{ji}^{(t)}$. For every $i$, we use the notation $\|a_{i,\cdot}^{(t)}\| = \sum_{j=1}^n a_{ij}^{(t)}$ and $\|a_{\cdot,i}^{(t)}\| = \sum_{j=1}^n a_{ji}^{(t)}$. Note that the algorithm does not maintain the values $a_{ij}^{(t)}$, $\|a_{\cdot,i}^{(t)}\|$ or $\|a_{i,\cdot}^{(t)}\|$.
The algorithm works as follows (see the pseudo-code of Algorithm 1 that appears in Section 7). In each iteration it samples an index $i$ with probability $p_i = \frac{\|\hat a_{i,\cdot}^{(t)}\| + \|\hat a_{\cdot,i}^{(t)}\|}{2\sum_{i,j} \hat a_{ij}^{(t)}}$. If $i$ is sampled, a balancing operation is applied to index $i$ only if the arcs incident on $i$ have significant weight and $i$'s imbalance is sufficiently large. Put $\hat M_i = \max\{\|\hat a_{i,\cdot}^{(t)}\|, \|\hat a_{\cdot,i}^{(t)}\|\}$ and put $\hat m_i = \min\{\|\hat a_{i,\cdot}^{(t)}\|, \|\hat a_{\cdot,i}^{(t)}\|\}$. The imbalance is considered large if $\hat m_i = 0$ (this can happen because of the low precision), or if $\hat m_i \ne 0$ and $\hat M_i/\hat m_i \ge 1 + \frac{\epsilon}{n}$. A balancing operation is done by adding $\alpha$ to $\hat x_i$, where $\alpha = \frac{1}{2}\ln\!\left(\|\hat a_{\cdot,i}^{(t)}\|/\|\hat a_{i,\cdot}^{(t)}\|\right)$, unless $\hat m_i = 0$, in which case we replace the 0 value by $nr$. This updates the weights of the arcs incident on $i$. Also, the $L_1$ norms of changed rows and columns are updated. (For convenience, we use in this section $\|\cdot\|$ instead of $\|\cdot\|_1$ to denote the $L_1$ norm.)
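The following hedged sketch (ours, assuming NumPy) illustrates one step of this sampling rule using ordinary floating-point arithmetic; the truncation to $O(\ln(wn/\epsilon))$ bits and the $\hat m_i = 0$ special case of Algorithm 1 are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_balance_step(A: np.ndarray, eps: float) -> None:
    """One step of the randomized variant (simplified: exact floats, no truncation).
    Sample i with probability proportional to the total weight of arcs at i, and
    balance it only if its imbalance ratio is at least 1 + eps/n."""
    n = A.shape[0]
    row, col = A.sum(axis=1), A.sum(axis=0)
    p = (row + col) / (2.0 * A.sum())          # sampling distribution p_i
    i = rng.choice(n, p=p)
    lo, hi = min(row[i], col[i]), max(row[i], col[i])
    if lo > 0 and hi / lo >= 1.0 + eps / n:
        s = np.sqrt(col[i] / row[i])
        A[i, :] *= s
        A[:, i] /= s
```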
Note that in the pseudo-code, ← indicates an assignment where the value on the right-hand side is computed to $O(\ln(wn/\epsilon))$ bits of precision. Thus, we have
$$\alpha - (\epsilon/wn)^{10} \le \hat x_i^{(t+1)} - \hat x_i^{(t)} \le \alpha \qquad (5)$$
and
$$a_{ij}\, e^{\hat x_i^{(t+1)} - \hat x_j^{(t+1)}} - r \le \hat a_{ij}^{(t+1)} \le a_{ij}\, e^{\hat x_i^{(t+1)} - \hat x_j^{(t+1)}}, \qquad a_{ji}\, e^{\hat x_j^{(t+1)} - \hat x_i^{(t+1)}} - r \le \hat a_{ji}^{(t+1)} \le a_{ji}\, e^{\hat x_j^{(t+1)} - \hat x_i^{(t+1)}}.$$

Theorem 4. With probability at least $\frac{9}{10}$, Algorithm 1 returns an $\epsilon$-balanced matrix in time $O\!\left(m\ln\sum_{ij} a_{ij} + \epsilon^{-2} n\ln(wn/\epsilon)\ln w\right)$.
The idea of the proof is to show that in every iteration of the algorithm we reduce $f(\cdot)$ by at least a factor of $1 - \Omega(\epsilon^2)$. Before we prove the theorem, we state and prove a couple of useful lemmas.

Fix an iteration $t$, and define three sets of indices as follows: $A = \{i : \|\hat a_{i,\cdot}^{(t)}\| + \|\hat a_{\cdot,i}^{(t)}\| \ge \epsilon a_{\min}/10wn\}$, $B = \{i : \hat m_i \ne 0 \wedge \hat M_i/\hat m_i \ge 1 + \epsilon/n\}$, and $C = \{i : \hat m_i = 0\}$. If the random index $i$ satisfies $i\notin A$ or $i\in A\setminus(B\cup C)$, the algorithm does not perform any balancing operation on $i$. The following lemma states that the expected decrease due to balancing such indices is small, and thus skipping them does not affect the speed of convergence substantially.

Lemma 4. For every iteration $t$, $\sum_{i\notin A\cap(B\cup C)} p_i\cdot\left(\sqrt{\|a_{i,\cdot}^{(t)}\|} - \sqrt{\|a_{\cdot,i}^{(t)}\|}\right)^2 < \frac{2\epsilon^2}{n}\cdot f(x^{(t)})$, where $p$ is the probability distribution over indices at time $t$.
We now show a lower bound on the decrease in $f(\cdot)$ if a node $i\in A\cap(B\cup C)$ is balanced.

Lemma 5. If $i\in A\cap(B\cup C)$ is balanced in iteration $t$, then $f(\hat x^{(t)}) - f(\hat x^{(t+1)}) \ge \frac{1}{10}\cdot\left(\sqrt{\|a_{i,\cdot}^{(t)}\|} - \sqrt{\|a_{\cdot,i}^{(t)}\|}\right)^2$.
Proof of Theorem 4. By Lemma 5, the expected decrease in $f(\cdot)$ in iteration $t$ is lower bounded as follows:
$$\mathbb{E}\!\left[f(\hat x^{(t)}) - f(\hat x^{(t+1)})\right] \ge \sum_{i\in A\cap(B\cup C)} p_i\cdot\frac{1}{10}\left(\sqrt{\|a_{i,\cdot}^{(t)}\|} - \sqrt{\|a_{\cdot,i}^{(t)}\|}\right)^2 = \frac{1}{10}\left[\sum_{i=1}^n p_i\left(\sqrt{\|a_{i,\cdot}^{(t)}\|} - \sqrt{\|a_{\cdot,i}^{(t)}\|}\right)^2 - \sum_{i\notin A\cap(B\cup C)} p_i\left(\sqrt{\|a_{i,\cdot}^{(t)}\|} - \sqrt{\|a_{\cdot,i}^{(t)}\|}\right)^2\right].$$
The second term can be bounded, using Lemma 4, by $\frac{2\epsilon^2}{n}\cdot f(\hat x^{(t)})$. For the first term, we can write
$$\sum_{i=1}^n p_i\left(\sqrt{\|a_{i,\cdot}^{(t)}\|} - \sqrt{\|a_{\cdot,i}^{(t)}\|}\right)^2 \ge \sum_{i=1}^n p_i\cdot\frac{\left(\|a_{i,\cdot}^{(t)}\| - \|a_{\cdot,i}^{(t)}\|\right)^2}{2\left(\|a_{i,\cdot}^{(t)}\| + \|a_{\cdot,i}^{(t)}\|\right)} = \sum_{i=1}^n \frac{\|\hat a_{i,\cdot}^{(t)}\| + \|\hat a_{\cdot,i}^{(t)}\|}{2\sum_{ij}\hat a_{ij}^{(t)}}\cdot\frac{\left(\|a_{i,\cdot}^{(t)}\| - \|a_{\cdot,i}^{(t)}\|\right)^2}{2\left(\|a_{i,\cdot}^{(t)}\| + \|a_{\cdot,i}^{(t)}\|\right)} \ge \frac{1}{16}\sum_{i=1}^n \frac{\left(\|a_{i,\cdot}^{(t)}\| - \|a_{\cdot,i}^{(t)}\|\right)^2}{\sum_{ij} a_{ij}^{(t)}} = \frac{\|\nabla f(\hat x^{(t)})\|_2^2}{16 f(\hat x^{(t)})} \ge \frac{\epsilon^2}{16}\cdot f(\hat x^{(t)}),$$
where the penultimate inequality holds because $\frac{\|\hat a_{i,\cdot}^{(t)}\| + \|\hat a_{\cdot,i}^{(t)}\|}{\|a_{i,\cdot}^{(t)}\| + \|a_{\cdot,i}^{(t)}\|} \ge \frac{\hat M_i}{2M_i} \ge \frac{1}{4}$, and the last inequality holds as long as the matrix is not $\epsilon$-balanced, so $\frac{\|\nabla f(\hat x^{(t)})\|_2}{f(\hat x^{(t)})} \ge \epsilon$. Combining everything together, we get
$$\mathbb{E}\!\left[f(\hat x^{(t)}) - f(\hat x^{(t+1)})\right] \ge \frac{1}{10}\cdot\left(\frac{\epsilon^2}{16}\cdot f(\hat x^{(t)}) - \frac{2\epsilon^2}{n}\cdot f(\hat x^{(t)})\right) \ge \frac{\epsilon^2}{320}\cdot f(\hat x^{(t)}),$$
where the last inequality assumes $n \ge 64$. This implies that the expected number of iterations to obtain an $\epsilon$-balanced matrix is $O(\epsilon^{-2}\ln w)$. Markov's inequality implies that with probability $\frac{9}{10}$ an $\epsilon$-balanced matrix is obtained in $O(\epsilon^{-2}\ln w)$ iterations. It is easy to see that each iteration of the algorithm takes $O(n\ln(wn/\epsilon))$ time. Initializations take $O(m\ln\sum_{ij} a_{ij})$ time. So the total running time of the algorithm is $O(m\ln\sum_{ij} a_{ij} + \epsilon^{-2} n\ln(wn/\epsilon)\ln w)$.
6 A Lower Bound on the Rate of Convergence

In this section we prove the following lower bound.

Theorem 5. There are matrices for which all variants of the Osborne-Parlett-Reinsch iteration (i.e., regardless of the order of indices chosen to balance) require $\Omega(1/\sqrt{\epsilon})$ iterations to balance the matrix to the relative error of $\epsilon$.
Before proving this theorem, we present the claimed construction. Let A be the following 4 × 4 matrix,
and let A∗ denote the corresponding fully-balanced matrix.
$$A = \begin{pmatrix} 0 & 1 & 0 & 0\\ 1 & 0 & \beta+\epsilon & 0\\ 0 & \epsilon & 0 & 1\\ 0 & 0 & 1 & 0 \end{pmatrix}, \qquad A^* = \begin{pmatrix} 0 & 1 & 0 & 0\\ 1 & 0 & \sqrt{\epsilon(\beta+\epsilon)} & 0\\ 0 & \sqrt{\epsilon(\beta+\epsilon)} & 0 & 1\\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
Here $\epsilon > 0$ is arbitrarily small, and $\beta = 100\epsilon$. It is easy to see that $A^* = D^* A D^{*-1}$, where
$$D^* = \mathrm{diag}\!\left(1,\ 1,\ \sqrt{\tfrac{\beta+\epsilon}{\epsilon}},\ \sqrt{\tfrac{\beta+\epsilon}{\epsilon}}\right).$$
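As a quick numerical sanity check of this construction (our own, not from the paper):

```python
import numpy as np

eps = 1e-4
beta = 100 * eps
A = np.array([[0, 1,          0, 0],
              [1, 0, beta + eps, 0],
              [0, eps,        0, 1],
              [0, 0,          1, 0]])
d = np.array([1, 1, np.sqrt((beta + eps) / eps), np.sqrt((beta + eps) / eps)])
A_star = np.diag(d) @ A @ np.diag(1 / d)
# Row and column sums of A_star agree, so D* A D*^{-1} is balanced, and its
# (2,3) and (3,2) entries both equal sqrt(eps*(beta+eps)).
print(np.allclose(A_star.sum(axis=0), A_star.sum(axis=1)))   # True
print(np.isclose(A_star[1, 2], np.sqrt(eps * (beta + eps))))  # True
```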
To prove Theorem 5, we show that balancing $A$ to the relative error of $\epsilon$ requires $\Omega(1/\sqrt{\epsilon})$ iterations, regardless of the order of balancing operations. Notice that in order to fully balance $A$, we simply need to replace $a_{23}$ and $a_{32}$ by their geometric mean. We measure the rate of convergence using the ratio $a_{32}/a_{23}$. This ratio is initially $\frac{\epsilon}{\beta+\epsilon} = \frac{1}{101}$. When the matrix is fully balanced, the ratio becomes 1. We show that this ratio increases by a small factor in each iteration, and that it has to increase sufficiently for the matrix to be $\epsilon$-balanced. This is summarized in the following two lemmas.
Lemma 6 (change in ratio). $\dfrac{a_{32}^{(t+1)}}{a_{23}^{(t+1)}} \le \dfrac{1+7\sqrt{\beta}}{1+\epsilon}\cdot\dfrac{a_{32}^{(t)}}{a_{23}^{(t)}}$.

Lemma 7 (stopping condition). If $A^{(t)}$ is $\epsilon$-balanced, then $\dfrac{a_{32}^{(t)}}{a_{23}^{(t)}} > \dfrac{1}{100}$.
Before proving the two lemmas we show how they lead to the proof of Theorem 5.
Proof of Theorem 5. By Lemma 6, $\frac{a_{32}^{(t+1)}}{a_{23}^{(t+1)}} \le \left(\frac{1+7\sqrt\beta}{1+\epsilon}\right)^t\cdot\frac{a_{32}^{(1)}}{a_{23}^{(1)}} = \left(\frac{1+7\sqrt\beta}{1+\epsilon}\right)^t\cdot\frac{\epsilon}{\beta+\epsilon}$. By Lemma 7, if $A^{(t+1)}$ is $\epsilon$-balanced, then $\frac{1}{100} < \frac{a_{32}^{(t+1)}}{a_{23}^{(t+1)}} \le \left(1+7\sqrt\beta\right)^t\cdot\frac{\epsilon}{\beta+\epsilon}$. Using $\beta = 100\epsilon$, we get the condition that $(1+7\sqrt\beta)^t > \frac{101}{100}$, which implies that $t = \Omega(1/\sqrt\epsilon)$.
Proof of Lemma 6. Using the notation we defined earlier, we have that $f(x^{(1)}) = \sum_{i,j=1}^4 a_{ij} = 4 + 2\epsilon + \beta$ and $f(x^*) = \sum_{i,j=1}^4 a^*_{ij} = 4 + 2\sqrt{\epsilon(\beta+\epsilon)}$, so $f(x^{(1)}) - f(x^*) < \beta$. We observe that at each iteration $t$, $a_{12}^{(t)} a_{21}^{(t)} = a_{34}^{(t)} a_{43}^{(t)} = 1$ and $a_{23}^{(t)} a_{32}^{(t)} = \epsilon(\beta+\epsilon)$, because the product of weights of arcs on any cycle in $G_A$ is preserved (for instance, arcs $(1,2)$ and $(2,1)$ form a cycle and initially $a_{12} a_{21} = 1$).
The ratio $a_{32}^{(t)}/a_{23}^{(t)}$ is only affected in iterations that balance index 2 or 3. Let us assume a balancing operation at index 2; a similar analysis applies to balancing at index 3. By balancing at index 2 at time $t$ we have
$$\frac{a_{32}^{(t+1)}}{a_{32}^{(t)}} = \frac{a_{23}^{(t)}}{a_{23}^{(t+1)}} = \sqrt{\frac{a_{21}^{(t)}+a_{23}^{(t)}}{a_{12}^{(t)}+a_{32}^{(t)}}}. \qquad (6)$$
Thus, to prove Lemma 6, it suffices to show that
$$\frac{a_{32}^{(t+1)}}{a_{32}^{(t)}}\cdot\frac{a_{23}^{(t)}}{a_{23}^{(t+1)}} = \frac{a_{21}^{(t)}+a_{23}^{(t)}}{a_{12}^{(t)}+a_{32}^{(t)}} \le \frac{1+7\sqrt\beta}{1+\epsilon}. \qquad (7)$$
By our previous observation, $a_{12}^{(t)} a_{21}^{(t)} = 1$, so if $a_{21}^{(t)} = y$, then $a_{12}^{(t)} = 1/y$. Similarly, $a_{23}^{(t)} a_{32}^{(t)} = \epsilon(\beta+\epsilon)$ implies that there exists $z$ such that $a_{23}^{(t)} = (\beta+\epsilon)z$ and $a_{32}^{(t)} = \epsilon/z$. Therefore:
$$\frac{a_{21}^{(t)}+a_{23}^{(t)}}{a_{12}^{(t)}+a_{32}^{(t)}} = \frac{y+(\beta+\epsilon)z}{(1/y)+(\epsilon/z)}. \qquad (8)$$
We bound the right-hand side of Equation (8) by proving upper bounds on $y$ and $z$. We first show that $y < 1 + 2\sqrt\beta$. To see this, notice that on the one hand,
$$f(x^{(t)}) = \sum_{i,j=1}^4 a_{ij}^{(t)} = a_{12}^{(t)} + a_{21}^{(t)} + a_{23}^{(t)} + a_{32}^{(t)} + a_{34}^{(t)} + a_{43}^{(t)} \ge y + \frac{1}{y} + 2\sqrt{\epsilon(\beta+\epsilon)} + 2, \qquad (9)$$
where we used $a_{34}^{(t)} + a_{43}^{(t)} \ge 2$ and $a_{23}^{(t)} + a_{32}^{(t)} \ge 2\sqrt{\epsilon(\beta+\epsilon)}$, both implied by the arithmetic-geometric mean inequality. On the other hand,
$$f(x^{(t)}) \le f(x^{(1)}) \le f(x^*) + \beta = 4 + 2\sqrt{\epsilon(\beta+\epsilon)} + \beta. \qquad (10)$$
Combining Equations (9) and (10) together, we have $y + (1/y) - 2 \le \beta$. For sufficiently small $\epsilon$, the last inequality implies, in particular, that $y < 2$. Thus, we have $(y-1)^2 \le y\beta < 2\beta$, and this implies that $y < 1 + 2\sqrt\beta$.
Next we show that $z \le 1$. Assume for contradiction that $z > 1$. By the arithmetic-geometric mean inequality, $a_{12}^{(t)} + a_{21}^{(t)} \ge 2$ and $a_{34}^{(t)} + a_{43}^{(t)} \ge 2$. Thus,
$$f(x^{(t)}) = \sum_{i,j=1}^4 a_{ij}^{(t)} \ge 2 + (\beta+\epsilon)z + \frac{\epsilon}{z} + 2 = 4 + \beta z + \epsilon\left(z + \frac{1}{z}\right) > 4 + \beta + 2\epsilon = f(x^{(1)}),$$
where the last inequality follows because $z > 1$ and $z + 1/z > 2$. But this is a contradiction, because each balancing iteration reduces the value of $f$, so $f(x^{(t)}) \le f(x^{(1)})$.
(t)
(t)
(t)
(t)
We can now bound (a21 + a23 )/(a12 + a32 ). By Equation (8), and using our bounds for y and z,
√
√
√
(t)
(t)
1+4 β
1+7 β
a21 + a23
y + (β + ǫ)z
(1 + 2 β) + (β + ǫ)
≤
≤
=
≤
.
(t)
(t)
ǫ
1
1
(1/y) + (ǫ/z)
1+ǫ
a12 + a32
√ +ǫ
√ +
√
1+2 β
1+2 β 1+2 β
√
The last line uses the fact that β ≫ β = 100ǫ ≥ ǫ, which holds if ǫ is sufficiently small.
Proof of Lemma 7. Let $t-1$ be the last iteration before an $\epsilon$-balanced matrix is obtained. We argued that there is $z \le 1$ such that $a_{23}^{(t)} = (\beta+\epsilon)z$ and $a_{32}^{(t)} = \epsilon/z$. Assume for the sake of contradiction that $a_{32}^{(t)}/a_{23}^{(t)} < 1/100$. This implies that $(\epsilon/z)/((\beta+\epsilon)z) < 1/100$, and thus $z^2 > 100/101$. So, we get
$$f(x^{(t)}) - f(x^*) \ge a_{23}^{(t)} + a_{32}^{(t)} - 2\sqrt{a_{23}^{(t)} a_{32}^{(t)}} = \left(\sqrt{a_{23}^{(t)}} - \sqrt{a_{32}^{(t)}}\right)^2 \ge a_{23}^{(t)}\left(1 - \frac{1}{10}\right)^2 = 0.81\cdot(\beta+\epsilon)z \ge 0.81\cdot(\beta+\epsilon)\cdot\sqrt{\frac{100}{101}} \ge 81\cdot\epsilon. \qquad (11)$$
By Lemma 2, the left-hand side of the above can be bounded as follows:
$$f(x^{(t)}) - f(x^*) \le n\,\|\nabla f(x^{(t)})\|_1 \le n^2\,\|\nabla f(x^{(t)})\|_2. \qquad (12)$$
Note that for sufficiently small $\epsilon$, $f(x^{(t)}) \le f(x^{(1)}) \le 5$. Combining Equations (11) and (12), and using $n = 4$ and $f(x^{(t)}) \le 5$, we get that
$$\frac{\|\nabla f(x^{(t)})\|_2}{f(x^{(t)})} > \frac{81}{80}\cdot\epsilon > \epsilon. \qquad (13)$$
By Equation (3), this contradicts our assumption that $t-1$ is the last iteration.
7 Proofs
Proof of Lemma 3. The value $f(x^{(t)})$ is the sum of the entries of $A^{(t)}$. By Lemma 1, balancing the $i$-th index of $A^{(t)}$ reduces the value of $f(x^{(t)})$ by $\left(\sqrt{\|a^{(t)}_{\cdot,i}\|_1} - \sqrt{\|a^{(t)}_{i,\cdot}\|_1}\right)^2$. To simplify notation, we drop the superscript $t$ in the following equations. We have
$$\left(\sqrt{\|a_{\cdot,i}\|_1} - \sqrt{\|a_{i,\cdot}\|_1}\right)^2 = \frac{\left(\|a_{\cdot,i}\|_1 - \|a_{i,\cdot}\|_1\right)^2}{\left(\sqrt{\|a_{\cdot,i}\|_1} + \sqrt{\|a_{i,\cdot}\|_1}\right)^2} \ge \frac{\left(\|a_{\cdot,i}\|_1 - \|a_{i,\cdot}\|_1\right)^2}{2\left(\|a_{\cdot,i}\|_1 + \|a_{i,\cdot}\|_1\right)}. \qquad (14)$$
It is easy to see that
$$\max_{i\in[n]}\ \frac{\left(\|a_{\cdot,i}\|_1 - \|a_{i,\cdot}\|_1\right)^2}{\|a_{\cdot,i}\|_1 + \|a_{i,\cdot}\|_1} \ge \frac{\sum_{i=1}^n \left(\|a_{\cdot,i}\|_1 - \|a_{i,\cdot}\|_1\right)^2}{\sum_{i=1}^n \left(\|a_{\cdot,i}\|_1 + \|a_{i,\cdot}\|_1\right)}. \qquad (15)$$
But the right-hand side of the above inequality (after resuming the use of the superscript $t$) equals $\frac{\|\nabla f(x^{(t)})\|_2^2}{2 f(x^{(t)})}$. This is because for all $i$, $\|a^{(t)}_{i,\cdot}\|_1 - \|a^{(t)}_{\cdot,i}\|_1$ is by Equation (2) the $i$-th coordinate of $\nabla f(x^{(t)})$, and the denominator is $\sum_{i=1}^n \left(\|a^{(t)}_{i,\cdot}\|_1 + \|a^{(t)}_{\cdot,i}\|_1\right) = 2 f(x^{(t)})$. Together with Equations (14) and (15), this implies that balancing $i_t = \arg\max_{i\in[n]}\left\{\left(\sqrt{\|a^{(t)}_{\cdot,i}\|_1} - \sqrt{\|a^{(t)}_{i,\cdot}\|_1}\right)^2\right\}$ decreases $f(x^{(t)})$ by the claimed value.
Proof of Corollary 2. From Equation (3), we know that the diagonal matrix $\mathrm{diag}(e^{x_1},\dots,e^{x_n})$ balances $A$ with relative error $\epsilon$ if and only if $\frac{\|\nabla f(x)\|_2}{f(x)} \le \epsilon$. Thus, if $A^{(t)}$ is not $\epsilon$-balanced, $\frac{\|\nabla f(x^{(t)})\|_2}{f(x^{(t)})} > \epsilon$. By Lemma 3, $f(x^{(t)}) - f(x^{(t+1)}) \ge \frac{\|\nabla f(x^{(t)})\|_2^2}{4 f(x^{(t)})} = \frac{1}{4}\cdot\left(\frac{\|\nabla f(x^{(t)})\|_2}{f(x^{(t)})}\right)^2\cdot f(x^{(t)}) \ge \frac{\epsilon^2}{4}\cdot f(x^{(t)})$.
Proof of Theorem 3. In the original Osborne-Parlett-Reinsch algorithm, the indices are balanced in a fixed
round-robin order. A round of balancing is a sequence of n balancing operations where each index is
balanced exactly once. Thus, in the OPR algorithm all n indices are balanced in the same order every
round. We prove a more general statement that any algorithm that balances indices in rounds (even if
the indices are not balanced in the same order every round) obtains an ǫ-balanced matrix in at most
O((n log w)/ǫ2 ) rounds. To this end, we show that applying a round of balancing to a matrix that is not
ǫ-balanced reduces the value of function f at least by a factor of 1 − ǫ2 /16n.
To simplify notation, we consider applying a round of balancing to the initial matrix A(1) = A. The
argument clearly holds for any time-t matrix A(t) . If A is not ǫ-balanced, by Lemma 3 and Corollary 2,
there exists an index i such that by balancing i the value of f is reduced by:
$$f(x^{(1)}) - f(x^{(2)}) = \left(\sqrt{\|a_{\cdot,i}\|_1} - \sqrt{\|a_{i,\cdot}\|_1}\right)^2 \ge \frac{\epsilon^2}{4}\, f(x^{(1)}). \qquad (16)$$
If i is the first index to balance in the next round of balancing, then in that round the value of f is
reduced at least by a factor of 1−ǫ2 /4 ≥ 1−ǫ2 /16n, and we are done. Consider the graph GA corresponding
to the matrix A. If node i is not the first node in GA to be balanced, then some of its neighbors in the
graph GA might be balanced before i. The main problem is that balancing neighbors of i before i may
reduce the imbalance of i significantly, so we cannot argue that when we reach i and balance it the value of
f reduces significantly. Nevertheless, we show that balancing i and its neighbors in this round will reduce
the value of $f$ by at least the desired amount. Let $t$ denote the time that $i$ is balanced in the round. For every arc $(j,i)$ into $i$, let $\delta_j = |a_{ji}^{(t)} - a_{ji}|$, and for every arc $(i,j)$ out of $i$ let $\sigma_j = |a_{ij}^{(t)} - a_{ij}|$. These values
measure the weight change of these arcs due to balancing a neighbor of i at any time since the beginning
of the round. The next lemma shows if the weight of an arc incident on i has changed since the beginning
of the round, it must have reduced the value of f .
Claim 1. If balancing node $j$ changes $a_{ji}$ to $a_{ji}+\delta$, then the balancing reduces the value of $f$ by at least $\delta^2/a_{ji}$. Similarly, if balancing node $j$ changes $a_{ij}$ to $a_{ij}+\delta$, then the balancing reduces the value of $f$ by at least $\delta^2/a_{ij}$.

Proof. To simplify notation, we assume that $j$ is balanced in the first iteration of the round. If balancing $j$ changes $a_{ji}$ to $a_{ji}+\delta$, then by the definition of balancing,
$$\frac{a_{ji}+\delta}{a_{ji}} = \sqrt{\frac{\|a_{\cdot,j}\|_1}{\|a_{j,\cdot}\|_1}}. \qquad (17)$$
Thus, by Lemma 1 the value of $f$ reduces by
$$\left(\sqrt{\|a_{\cdot,j}\|_1} - \sqrt{\|a_{j,\cdot}\|_1}\right)^2 = \left(\sqrt{\frac{\|a_{\cdot,j}\|_1}{\|a_{j,\cdot}\|_1}} - 1\right)^2 \|a_{j,\cdot}\|_1 = \left(\frac{a_{ji}+\delta}{a_{ji}} - 1\right)^2 \|a_{j,\cdot}\|_1 = \frac{\delta^2}{a_{ji}^2}\,\|a_{j,\cdot}\|_1 \ge \frac{\delta^2}{a_{ji}}.$$
The proof for the second part of the claim is similar.
Going back to the proof of Theorem 3, let t denote the iteration in the round that i is balanced. By
Claim 1, balancing neighbors of i has already reduced the value of f by
δj2
X
j:(j,i)∈E
Balancing i reduces value of f by an additional
round is reduced by at least:
R=
X
j:(j,i)∈E
δj2
+
aji
X
aji
j:(i,j)∈E
q
j:(i,j)∈E
σj2
X
+
(t)
ka.,i k1
−
aij
q
.
(18)
(t)
kai,. k1
2
, so the value of f in the current
2
q
q
σj2
(t)
(t)
ka.,i k1 − kai,. k1
+
aij
Assume without loss of generality that kai,. k1 > ka.,i k1 . To lower bound R, we consider two cases:
X
X
1
case (i)
δj +
σj ≥ (kai,. k1 − ka.,i k1 ). In this case,
2
j:(j,i)∈E
j:(i,j)∈E
R≥
X
j:(j,i)∈E
δj2
+
aji
X
j:(i,j)∈E
σj2
1
≥
aij
ka.,i k1
X
δj2 +
j:(j,i)∈E
X
1
kai,. k1
σj2
j:(i,j)∈E
X
X
1
1
≥
(
δj )2 +
(
nka.,i k1
nkai,. k1
j:(j,i)∈E
σj )2 ,
j:(i,j)∈E
where the last inequality follows by Cauchy-Schwarz inequality. By assumption of case (i),
X
X
1
max(
δj ,
σj ) ≥ (kai,. k1 − ka.,i k1 )
4
j:(j,i)∈E
(19)
(20)
j:(i,j)∈E
Equations (19) and (20) together imply that
P
P
( j:(j,i)∈E δj )2 + ( j:(i,j)∈E σj )2
1 (kai,. k1 − ka.,i k1 )2
R≥
≥
n max(ka.,i k1 , kai,. k1 )
16n max(ka.,i k1 , kai,. k1 )
p
p
p
2 p
2
2
q
q
ka.,i k1 − kai,. k1
ka.,i k1 + kai,. k1
1
≥
=
ka.,i k1 − kai,. k1 .
16n max(ka.,i k1 , kai,. k1 )
16n
case (ii)
X
δj +
j:(j,i)∈E
X
j:(i,j)∈E
1
σj < (kai,. k1 − ka.,i k1 ). By definition of δj ’s and σj ’s:
2
ka.,i k1 −
kai,. k1 −
X
j:(j,i)∈E
X
j:(i,j)∈E
X
(t)
δj ≤ ka.,i k1 ≤ ka.,i k1 +
(t)
σj ≤ kai,. k1 ≤ kai,. k1 +
δj
(21)
σj .
(22)
j:(j,i)∈E
X
j:(i,j)∈E
Combining Equations (21) and (22), and the assumption of case (ii) gives:
X
X
(t)
(t)
kai,. k1 + ka.,i k1 ≤ kai,. k1 + ka.,i k1 +
σj +
δj ≤ 2 (kai,. k1 + ka.,i k1 )
(t)
(t)
kai,. k1 − ka.,i k1 ≥ kai,. k1 − ka.,i k1 −
j:(i,j)∈E
j:(j,i)∈E
X
X
j:(i,j)∈E
14
σj −
j:(j,i)∈E
δj ≥
1
(kai,. k1 − ka.,i k1 ) .
2
(23)
(24)
Using Equations (23) and (24), we can write:
R≥
q
(t)
kai,. k1 −
q
(t)
ka.,i k1
2
(t)
(t)
ka.,i k1 − kai,. k1
(kai,. k1 − ka.,i k1 )2
= q
2 ≥ (t)
q
(t)
(t)
(t)
k
k
+
ka
8
ka
.,i 1
i,. 1
ka.,i k1 + kai,. k1
2
(kai,. k1 − ka.,i k1 )2
1
≥
≥
16 (kai,. k1 + ka.,i k1 )
16
q
kai,. k1 −
q
ka.,i k1
2
.
Thus, we have shown in both cases that in one round the balancing operations on node i and its neighbors
reduces the value of f by at least
q
2
q
1
ka.,i k1 − kai,. k1 ,
(25)
16n
2
which in turn is at least Ω( ǫn f (x(1) )) by Equation (16). Thus, we have shown that if A is not ǫ-balanced,
one round of balancing (where
index is balanced exactly once) reduces the objective function f by
2 each
ǫ
(1)
a factor of at least 1 − Ω n f (x ) . By an argument similar to the one in the proof of Theorem 2,
we get that the algorithm obtains an ǫ-balanced matrix in at most O(ǫ−2 n log w) rounds. The number of
balancing iterations in each round is n, and the number of arithmetic operations in each round is O(m), so
the original OPR algorithm obtains an ǫ-balanced matrix using O(ǫ−2 mn log w) arithmetic operations.
Proof of Lemma 4. Notice that for every i,
(t)
(t)
(kb
ai,. k + kb
a.,i k) ·
(t)
q
(t)
kai,. k −
(t)
q
(t)
(t)
ka.,i k
2
(t)
(t)
≤ (kb
ai,. k + kb
a.,i k) ·
2
(t)
(t)
kai,. k − ka.,i k
(t)
(t)
kai,. k + ka.,i k
2
(t)
(t)
≤ kai,. k − ka.,i k ,
(t)
because kb
ai,. k + kb
a.,i k ≤ kai,. k + ka.,i k. We first bound the sum over i ∈
/ A.
X
i∈A
/
pi ·
q
(t)
kai,. k
−
q
2
(t)
ka.,i k
2
q
(t)
(t)
q
X kb
ai,. k + kb
a.,i k
(t)
(t)
kai,. k − ka.,i k
·
P
(t)
2 i,j b
aij
i∈A
/
2
X (t)
1
(t)
·
ka
k
−
ka
k
≤
P
i,.
.,i
(t)
2 i,j b
aij i∈A
/
2
X (t)
1
(t)
≤
·
kb
a
k
+
kb
a
k
+
2nr
P
i,.
.,i
(t)
2 i,j b
aij i∈A
/
X
1
·
(2ǫamin /10wn)2
≤
P
(t)
2 i,j b
aij i∈A
/
=
≤
2
P
1
(t)
i,j
b
aij
· n · (ǫamin /5wn)2 ≤
(t)
ǫ2
ǫ2
· amin ≤
· f (x(t) )
25n
25n
(t)
(t)
(t)
where the second inequality follows because, for every j, aij ≤ b
aij + r and aji ≤ b
aji + r, and the third
(t)
(t)
inequality follows because kb
ai,. k + kb
a.,i k < ǫamin /10wn and nr < ǫamin /10wn.
(t)
(t)
(t)
(t)
Next, we bound the sum over i ∈ A\(B∪C). Recall M̂i = max{kb
ai,. k, kb
a.,i k} and m̂i = min{kb
ai,. k, kb
a.,i k}.
(t)
(t)
(t)
(t)
Put Mi = max{kai,. k, ka.,i k} and mi = min{kai,. k, ka.,i k}. Let k = arg maxi∈A\(B∪C) (Mi − mi )2 . We
Algorithm 1 RandomBalance(A, ǫ)
Input: Matrix $A\in\mathbb{R}^{n\times n}$, ǫ
Output: An ǫ-balanced matrix
1:  $r = a_{\min}\cdot(\epsilon/wn)^{10}$
2:  Let $\hat a_{ij}^{(1)} = a_{ij}$ for all $i$ and $j$
3:  Let $\|\hat a_{i,\cdot}^{(1)}\| = \|a_{i,\cdot}^{(1)}\|$ and $\|\hat a_{\cdot,i}^{(1)}\| = \|a_{\cdot,i}^{(1)}\|$ for all $i$
4:  for $t = 1$ to $O(\epsilon^{-2}\ln w)$ do
5:      Pick $i$ randomly with probability $p_i = \frac{\|\hat a_{i,\cdot}^{(t)}\| + \|\hat a_{\cdot,i}^{(t)}\|}{2\sum_{i,j}\hat a_{ij}^{(t)}}$
6:      if $\|\hat a_{i,\cdot}^{(t)}\| + \|\hat a_{\cdot,i}^{(t)}\| \ge \epsilon a_{\min}/10wn$ then
7:          $\hat M_i = \max\{\|\hat a_{i,\cdot}^{(t)}\|, \|\hat a_{\cdot,i}^{(t)}\|\}$, $\hat m_i = \min\{\|\hat a_{i,\cdot}^{(t)}\|, \|\hat a_{\cdot,i}^{(t)}\|\}$
8:          if $\hat m_i = 0$ or $\hat M_i/\hat m_i \ge 1 + \epsilon/n$ then
9:              if $\hat m_i \ne 0$ then $\alpha = \frac{1}{2}\ln(\|\hat a_{\cdot,i}^{(t)}\|/\|\hat a_{i,\cdot}^{(t)}\|)$
10:             else if $\hat m_i = \|\hat a_{i,\cdot}^{(t)}\| = 0$ then $\alpha = \frac{1}{2}\ln(\|\hat a_{\cdot,i}^{(t)}\|/nr)$
11:             else if $\hat m_i = \|\hat a_{\cdot,i}^{(t)}\| = 0$ then $\alpha = \frac{1}{2}\ln(nr/\|\hat a_{i,\cdot}^{(t)}\|)$
12:             end if
13:             Let $\hat x^{(t+1)} \leftarrow \hat x^{(t)} + \alpha e_i$ (truncated to $O(\ln(wn/\epsilon))$ bits of precision)
14:             for $j = 1$ to $n$ do
15:                 if $j$ is a neighbor of $i$ then
16:                     $\hat a_{ij}^{(t+1)} \leftarrow a_{ij}\, e^{\hat x_i^{(t+1)} - \hat x_j^{(t+1)}}$ and $\hat a_{ji}^{(t+1)} \leftarrow a_{ji}\, e^{\hat x_j^{(t+1)} - \hat x_i^{(t+1)}}$ (truncated to $O(\ln(wn/\epsilon))$ bits)
17:                     $\|\hat a_{j,\cdot}^{(t+1)}\| = \|\hat a_{j,\cdot}^{(t)}\| - \hat a_{ji}^{(t)} + \hat a_{ji}^{(t+1)}$ and $\|\hat a_{\cdot,j}^{(t+1)}\| = \|\hat a_{\cdot,j}^{(t)}\| - \hat a_{ij}^{(t)} + \hat a_{ij}^{(t+1)}$
18:                 end if
19:             end for
20:             $\|\hat a_{i,\cdot}^{(t+1)}\| = \sum_{j=1}^n \hat a_{ij}^{(t+1)}$ and $\|\hat a_{\cdot,i}^{(t+1)}\| = \sum_{j=1}^n \hat a_{ji}^{(t+1)}$
21:         end if
22:     end if
23: end for
24: return the resulting matrix
have
X
i∈A\(B∪C)
pi ·
q
(t)
kai,. k −
q
2
(t)
≤
ka.,i k
≤
≤
2
2
2
P
P
P
To bound the last quantity, we prove an upper bound on
(t)
(t)
we have M̂k + m̂k = kb
ak,. k + kb
a.,k k ≥
ǫamin
10wn .
Thus, M̂k ≥
16
1
(t)
i,j
1
b
aij
(t)
aij
i,j b
1
(t)
i,j
b
aij
·
·
X
i∈A\(B∪C)
X
i∈A\(B∪C)
·n
· m2k
2
(t)
(t)
kai,. k − ka.,i k
(Mi − mi )2
2
Mk
−1 .
mk
M̂k
Mk
ǫ
mk using the fact that m̂k < 1 + n . As k ∈ A,
M̂k
ǫamin
ǫ
20wn . Combining this with m̂k < 1 + n implies
ǫamin
40wn .
that m̂k > 12 M̂k ≥
Hence,
ǫ 9 M̂
M̂k + nr
M̂k
M̂k
Mk
nr
40ǫ9
2ǫ
Mk
k
≤
≤
≤
≤
+
=
+ 40n ·
+ 8 ≤1+ .
mk
m̂k
m̂k
m̂k
ǫamin /40wn
m̂k
wn
m̂k
n
n
(Notice that w ≥ 1.) Using the upper bound on
X
i∈A\(B∪C)
pi ·
q
(t)
kai,. k −
q
Mk
mk ,
(t)
ka.,i k
we obtain
2
≤
2
P
1
(t)
i,j
1
b
aij
2
Mk
−1
·n
mk
2
2ǫ
2
· n · mk
n
· m2k
≤
2
≤
ǫ2
2ǫ2
· mk ≤
· f (x(t) ),
n
n
P
(t)
i,j
b
aij
where the penultimate inequality uses the fact that mk ≤ m̂k + nr ≤ m̂k +
P
(t)
aij .
i,j b
Proof of Lemma 5. We will assume that ǫ <
1
10 .
ǫamin
40wn
< 2m̂k ≤ M̂k + m̂k ≤
We first consider the case that i ∈ A ∩ B (notice that
(t)
(t+1)
B ∩ C = ∅). The update using O(ln(wn/ǫ)) bits of precision gives x
bi + α − (ǫ/nw)10 ≤ x
bi
so
v
v
u (t)
u (t)
u kb
u kb
k
a
(t+1)
(t)
10
.,i
x
b
x
b
−(ǫ/wn)
t a.,i k · exb(t)
t
i
i
i .
≤
≤
e
·
e
(t)
(t)
kb
ai,. k
kb
ai,. k
(t)
≤x
bi + α,
Therefore,
(t+1)
kai,.
k=
n
X
(t+1)
aij exbi
(t+1)
−b
xj
j=1
and
(t+1)
ka.,i
k=
n
X
j=1
(t+1)
(t+1)
x
bj
−b
xi
aji e
v
v
v
u (t)
u (t)
u (t)
n
X
u
u a.,i k
u kb
(t)
(t)
a.,i k
kb
a.,i k
(t) t kb
x
bi −b
xj
t
t
a
e
=
·
·
ka
k
,
≤
ij
i,.
(t)
(t)
(t)
kb
ai,. k j=1
kb
ai,. k
kb
ai,. k
10
≤ e(ǫ/wn)
v
v
u (t)
u (t)
n
X
u kb
u kb
(t)
(t)
a
k
ai,. k
i,.
(t)
x
b
−b
x
10
i
· t (t) ·
aji e j
≤ (1 + 2(ǫ/wn) )) · t (t) · ka.,i k.
kb
a.,i k j=1
kb
a.,i k
We used the fact that ex ≤ 1 + 2x for x ≤ 21 . We will now use the notation M̂i , m̂i , Mi , and mi
(the reader can recall the definitions from the proof of Lemma 4). We also put δ = 2(ǫ/wn)10 , and
(t+1)
M̂i /m̂i
Mi /mi .
Thus, decrease of function f (·) due to balancing i is f (b
x(t) ) − f (b
x(t+1) ) = Mi + mi − ka.,i k −
q
q
p
√ √
(t+1)
1/σ + σ · Mi mi =
kai,. k ≥ Mi + mi − (1 + δ) Mi m̂i /M̂i + mi M̂i /m̂i = Mi + mi − (1 + δ)
√
√
√
√
√ 2
Mi − mi − ((1 + δ)/ σ + (1 + δ) σ − 2) · Mi mi . We now consider three cases, and in each case
show that
p
√
√
√ 2
9 p
Mi − mi .
·
(1 + δ)/ σ + (1 + δ) σ − 2 · Mi mi ≤
10
σ=
17
4
case (i): 1 ≤ σ < 1 + nǫ 2 . We first note that Mi ≥ M̂i ≥
mi
m̂i +nr
1
ǫ
1
≤ 1+ǫ/n
. Since ǫ < 10
, we have
+ nr ≤ 1 − 2n
Mi ≤
M̂i
M̂i +m̂i
2
>
ǫamin
20wn .
Also, mi ≤ m̂i + nr, so
M̂i
9
10
1−
r
mi
Mi
2
r
2
ǫ
9
≥
1− 1−
10
2n
4
4ǫ
≥
2
n
ǫ4
≥
(1 + δ) + (1 + δ) · 1 + 2 − 2
n
r
√
1+δ
mi
√ + (1 + δ) σ − 2 ·
,
≥
Mi
σ
where the third inequality holds by definition of δ, and the last inequality holds because mi /Mi ≤ 1 and
σ ∈ [1, 1 + ǫ4 /n2 ]. By multiplying both sides of the inequality by Mi we obtain the desired bound.
i
σ < 1. We first prove a lower bound on the value of σ, as follows: M̂
m̂i ≥
20ǫ9
9
Mi
Mi
Mi
nr
nr
(1
−
)
≥
·
1
−
·
1
−
≥
, and thus σ ≥ 1 − 20ǫ
8 . So we have
mi
Mi
mi
Mi
mi
n
8
n
r
√
1+δ
mi
1+δ
√ + (1 + δ) σ − 2 ·
≤ q
+ (1 + δ) − 2
9
Mi
σ
1 − 20ǫ
n8
20ǫ9
+ (1 + δ) − 2
≤ (1 + δ) 1 + 8
n
2
r
mi
4ǫ4
9
24ǫ9
,
· 1−
< 2 ≤
≤
n8
n
10
Mi
case (ii):
proving the desired inequality in this case. The first inequality holds because
know that
≥
Mi −nr
mi
≥
9
≤ 1 and 1 − 20ǫ
n8 ≤ σ ≤ 1.
ǫ4
.
n2
The idea is to show that Mi /mi is large so the desired inequality follows. We
M̂i
Mi
nr
= m̂i ≤ m̂i and therefore m̂i ≤ mσi . On the other hand, m̂i ≥ mi − nr, so mi ≤ 1−1/σ
.
ǫamin /20wn
Mi
nr
n6
ǫ4
1 − 2n2 , so mi < ǫ4 /2n2 . Also, Mi ≥ ǫamin /20wn. Therefore, mi ≥ 2n3 r/ǫ4 ≥ 40ǫ5 . Next,
case (iii): σ > 1 +
σMi
mi
mi
Mi
M̂i
mi
Clearly, 1/σ <
notice that since m̂i > 0 it must be that m̂i ≥ r. Therefore, mi ≤ m̂i + nr ≤ 2nm̂i . This implies that
M̂i
Mi
Mi
m̂i ≤ m̂i ≤ 2n · mi , so σ ≤ 2n. Finally,
r
√
√
Mi
1
1+δ
√ + (1 + δ) σ − 2 ≤ (1 + δ) · 2n ≤≤
·
,
σ
10
mi
q
Mi
i
with room to spare (using the lower bound on M
).
Multiplying
both
sides
by
mi
mi gives
r
√
Mi
1+δ
1 Mi
9
√ + (1 + δ) σ − 2 ·
≤
≤
mi
10 mi
10
σ
with more room to spare. This completes proof of the case i ∈ A ∩ B.
r
!2
Mi
−1 ,
mi
1
2
We now move on to the case i ∈ A ∩ C, so M̂i + m̂i ≥
(t)
ln(nr/kb
ai,. k)
or α =
(t)
1
a.,i k/nr).
2 ln(kb
ǫamin
10wn
and m̂i = 0. In the algorithm, α =
The idea is that we therefore replace m̂i (which is 0) by nr in some
q
q
nr
(t)
(t+1)
i
+ mi M̂
of the equations. In particular, f (b
x ) − f (b
x
) ≥ Mi + mi − (1 + δ) Mi
nr . Note that
M̂i
since m̂i = 0 then mi ≤ nr. Therefore,
so
M̂i
nr
≥
ǫamin /20wn
n(ǫ/wn)10 amin
≥
n8
.
20ǫ9
M̂i
nr
Thus we get
≤
Mi
nr
≤
Mi
mi .
On the other hand, since i ∈ A, M̂i ≥ ǫamin /20wn,
≥
≥
where the third inequality holds because
nr
s
M̂i
nr
M̂i
r
r !
20ǫ9
Mi
+ mi
Mi + mi − (1 + δ) Mi
8
n
mi
r
20ǫ9
Mi + mi − 2(1 + δ)Mi
n8
4
√
1 p
1
20ǫ
≥ Mi ≥ ( Mi − mi )2 ,
Mi 1 − 4
n
10
10
q
q
q
mi
nr
i
=
M
≤
M
mi M
.
i
i
mi
Mi
f (b
x(t) ) − f (b
x(t+1) ) ≥ Mi + mi − (1 + δ) Mi
≥
r
+ mi
M̂i
References
[1] EISPACK implementation. http://www.netlib.org/eispack/balanc.f.
[2] T.-Y. Chen. Balancing sparse matrices for computing eigenvalues. Master’s thesis, UC Berkeley, May
1998.
[3] B. C. Eaves, A. J. Hoffman, U. G. Rothblum, and H. Schneider. Line-sum-symmetric scalings of
square nonnegative matrices. In Mathematical Programming Essays in Honor of George B. Dantzig
Part II, pages 124–141. Springer, 1985.
[4] J. Grad. Matrix balancing. The Computer Journal, 14(3):280–284, 1971.
[5] D. J. Hartfiel. Concerning diagonal similarity of irreducible matrices. In Proceedings of the American
Mathematical Society, pages 419–425, 1971.
[6] B. Kalantari, L. Khachiyan, and A. Shokoufandeh. On the complexity of matrix balancing. SIAM
Journal on Matrix Analysis and Applications, 118(2):450–463, 1997.
[7] D. Kressner. Numerical methods for general and structured eigenvalue problems. Princeton University
Press, 2005.
[8] E. E. Osborne. On pre-conditioning of matrices. Journal of the ACM (JACM), 7(4):338–345, 1960.
[9] B. N. Parlett and C. Reinsch. Balancing a matrix for calculation of eigenvalues and eigenvectors.
Numerische Mathematik, 13(4):293–304, 1969.
[10] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes: The Art of
Scientific Computing, 3rd Edition. Cambridge University Press, 2007.
[11] H. Schneider and M. H. Schneider. Max-balancing weighted directed graphs and matrix scaling.
Mathematics of Operations Research, 16(1):208–222, February 1991.
[12] L. J. Schulman and A. Sinclair. Analysis of a classical matrix preconditioning algorithm. In Proceedings
of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, pages 831–840, 2015.
[13] L. N. Trefethen and M. Embree. Spectra and pseudospectra: The behavior of nonnormal matrices and
operators. Springer, 2005.
[14] N. E. Young, R. E. Tarjan, and J. B. Orlin. Faster parametric shortest path and minimum-balance
algorithms. Networks, 21(2):205–221, 1991.
Dynamic Interference Steering in Heterogeneous
Cellular Networks
Zhao Li∗ , Canyu Shu∗ , Fengjuan Guo∗ , Kang G. Shin† , Jia Liu‡
∗
arXiv:1801.00145v1 [] 30 Dec 2017
State Key Laboratory of Integrated Service Networks, Xidian University
†
The University of Michigan, USA
‡
National Institute of Informatics, Japan
[email protected], [email protected], [email protected], [email protected], [email protected]
Abstract—With the development of diverse wireless communication technologies, interference has become a key impediment
in network performance, thus making effective interference
management (IM) essential to accommodate a rapidly increasing
number of subscribers with diverse services. Although there have
been numerous IM schemes proposed thus far, none of them
are free of some form of cost. It is, therefore, important to
balance the benefit brought by and cost of each adopted IM
scheme by adapting its operating parameters to various network
deployments and dynamic channel conditions.
We propose a novel IM scheme, called dynamic interference
steering (DIS), by recognizing the fact that interference can
be not only suppressed or mitigated but also steered in a
particular direction. Specifically, DIS exploits both channel state
information (CSI) and the data contained in the interfering signal
to generate a signal that modifies the spatial feature of the
original interference to partially or fully cancel the interference
appearing at the victim receiver. By intelligently determining the
strength of the steering signal, DIS can steer the interference
in an optimal direction to balance the transmitter’s power used
for IS and the desired signal’s transmission. DIS is shown via
simulation to be able to make better use of the transmit power,
hence enhancing users’ spectral efficiency (SE) effectively.
I. I NTRODUCTION
Due to the broadcast nature of wireless communications,
concurrent transmissions from multiple source nodes may
cause interferences to each other, thus degrading subscribers’
data rate. Interference management (IM) is, therefore, crucial
to meet the ever-increasing demand of diverse users’ Quality-of-Service (QoS).
Interference alignment (IA) is a powerful means of controlling interference and has thus been under development
in recent years. By preprocessing signals at the transmitter,
multiple interfering signals are mapped into a certain signal
subspace, i.e., the overall interference space at the receiver is
minimized, leaving the remaining subspace interference-free
[1-2]. IA is shown to be able to achieve the information-theoretic maximum DoF (Degree of Freedom) in some interference networks [3-4]. However, to achieve such a promising
gain, it is required to use either infinite symbol extensions
over time/frequency [4] or a large number of antennas at each
receiver [5], both of which are not realistic. That is, IA emerges
as a promising IM scheme, but its applicability is severely
limited by the high DoF requirement.
Some researchers have attempted to circumvent the stringent
DoF requirement by proposing other IM schemes, such as
interference neutralization (IN). IN refers to the distributed
zero-forcing of interference when the interfering signal traverses multiple nodes before arriving at the undesired receivers/destinations. The basic idea of IN has been applied to
deterministic channels and interference networks [6-10], both
of which employ relays. IM was studied in the context of
a deterministic wireless interaction model [6-7]. In [6], IN
was proposed and an admissible rate region of the Gaussian
ZS and ZZ interference-relay networks was obtained, where
ZS and ZZ denote two special configurations of a two-stage
interference-relay network in which some of the cross-links are
weak. The authors of [7] further translated the exact capacity
region obtained in [6] into a universal characterization for the
Gaussian network. The key idea used in the above interference
networks with relays is to control the precoder at the relay
so that the sum of the channel gains of the newly-created
signal path via the relay and the direct path to the destination
becomes zero. A new scheme called aligned interference
neutralization was proposed in [8] by combining IA and IN.
It provides a way to align interference terms over each hop
so as to cancel them over the air at the last hop. However,
this conventional relay incurs a processing delay — while
the direct path does not — between a source-destination pair,
limiting the DoF gain in a wireless interference network. To
remedy this problem, an instantaneous relay (or relay-without-delay) was introduced in [9-10] to obtain a larger capacity than
the conventional relay, and a higher DoF gain was achieved
without requiring any memory at the relay. Although the
studies based on this instantaneous relay provide some useful
theoretical results, this type of relay is not practical. On
the other hand, IN can mitigate interference, but the power
overhead of generating neutralization signals also affects the
system’s performance. To the best of our knowledge, the power
overhead has not been considered in any of the existing studies
related to IN — either more recent interference neutralization [8-10] or the same ideas known via other names for
many years, such as distributed orthogonalization, distributed
zero-forcing, multiuser zero-forcing, and orthogonalize-and-forward [11-12]. In practice, higher transmit power will be
used by IN when the interference is strong, thus making less
power available for the desired data transmission. Furthermore,
IN may not even be available for mobile terminals due to their
limited power budget.
By recognizing the fact that interference can be not only
neutralized but also steered in a particular direction, one
can “steer” interference, which we call interference steering
(IS) [13]. That is, a steering signal is generated to modify
the interference’s spatial feature so as to steer the original
interference in the direction orthogonal to that of the desired
signal perceived at the victim receiver. Note, however, that
IS simply steers the original interference in the direction
orthogonal to the desired signal, which we call Orthogonal-IS
(OIS) in the following discussion, regardless of the underlying
channel conditions, as well as the strength and spatial feature
of interference(s) and the intended transmission. Therefore, the
tradeoff between the benefit of IS (i.e., interference suppression) and its power cost was not considered there. Since the
more transmit power is spent on interference steering, the less
power for the desired signal’s transmission will be available,
one can naturally raise a question: “Is it always necessary to
steer interference in the direction orthogonal to the desired
signal?”
To answer the above question, we propose a new IM
scheme, called dynamic interference steering (DIS). With DIS,
the spatial feature of the steered interference at the intended
receiver is intelligently determined so as to achieve a balance
between the transmit power consumed by IS and the residual
interference due to the imperfect interference suppression, thus
improving the user’s SE.
The contributions of this paper are two-fold:
●
Proposal of a novel IM scheme called dynamic interference steering (DIS). By intelligently determining the
strength of steering signal, we balance the transmit power
used for IS and that for the desired signal’s transmission.
DIS can also subsume orthogonal-IS as a special case,
making it more general.
Extension of DIS to general cases where the number of
interferences from macro base station (MBS), the number
of desired signals from a pico base station (PBS) to its
intended pico user equipment (PUE), and the number of
PBSs and PUEs are all variable.
II. S YSTEM M ODEL
We consider the downlink1 transmission in heterogeneous
cellular networks (HCNs) composed of overlapping macro and
pico cells [14]. As shown in Fig. 1, macro and pico base
stations (MBSs and PBSs) are equipped with NT1 and NT0
antennas, whereas macro user equipment (MUE) and PUE are
equipped with NR1 and NR0 > 1 antennas, respectively. Since
mobile stations/devices are subject to more severe restrictions in cost and hardware than a base station (BS), the BS is assumed to have no fewer antennas than a UE, i.e., NTi ≥ NRi where
i = 0, 1. The radio range, d, of a picocell is known to be
300m or less, whereas the radius, D, of a macrocell is around
3000m [14]. Let x1 and x0 denote the transmit data vectors
from MBS and PBS to their serving subscribers, respectively.
E(∥x1 ∥2 ) = E(∥x0 ∥2 ) = 1 holds. For clarity of exposition, our
design begins with the assumption of beamforming (BF), i.e.,
only one data stream is sent from MBS to MUE (or from PBS
to PUE). Then, x1 and x0 become scalars x1 and x0 . We will
generalize this to multiple data streams sent from MBS and
PBS in Section IV. We use P1 and P0 to denote the transmit
power of MBS and PBS, respectively. Let H0 ∈ CNR0 ×NT0 and
H1 ∈ CNR1 ×NT1 be the channel matrices from MBS to MUE
and from PBS to PUE, respectively, whereas that from MBS
to PUE is denoted by H10 ∈ CNR0 ×NT1 . We adopt a spatially
uncorrelated Rayleigh flat fading channel model to model the
elements of the above matrices as independent and identically
distributed zero-mean unit-variance complex Gaussian random
variables. We assume that all users experience block fading,
i.e., channel parameters remain constant in a block consisting
of several successive time slots and vary randomly between
successive blocks. Each user can accurately estimate CSI
w.r.t. its intended and unintended Txs and feed it back to
the associated BS via a low-rate error-free link. We assume
reliable links for the delivery of CSI and signaling. The
delivery delay is negligible relative to the time scale on which
the channel state varies.
Fig. 1. System model.
The rest of this paper is organized as follows. Section II describes the system model, while Section III details the dynamic interference steering. Section IV presents the generalization of DIS and Section V evaluates its performance and overhead. Finally, Section VI concludes the paper.
Throughout this paper, we use the following notations. The set of complex numbers is denoted as C, while vectors and matrices are represented by bold lower-case and upper-case letters, respectively. Let XT, XH and X−1 denote the transpose, Hermitian, and inverse of matrix X. ∥·∥ and |·| indicate the Euclidean norm and the absolute value. E(·) denotes statistical expectation and ⟨a, b⟩ represents the inner product of two vectors.
As mobile data traffic has increased significantly in recent
years, network operators have a strong preference for open access
to offload users’ traffic from heavily loaded macrocells to
other infrastructures such as picocells [14-15]. Following this
trend, we assume each PBS operates in an open mode, i.e.,
users in the coverage of a PBS are allowed to access it.
The transmission from MBS to MUE will interfere with the intended transmission from PBS at PUE. Nevertheless, due to the limited coverage of a picocell, PBS will not cause too much interference to MUE, and this interference is thus omitted in this paper. As a result, the interference shown in Fig. 1 is asymmetric.
1 Neither IS nor DIS is applicable for uplink transmission due to their requirement of Tx's cooperation.
Since picocells are deployed to improve the capacity and
coverage of existing cellular systems, each picocell is subordinate to the macrocell, and hence
the macrocell transmission is given priority over the picocell’s
transmission. Specifically, MBS will not adjust its transmission
for pico-users. However, we assume that PBS can acquire the
information of x1 via inter-BS collaboration; this is easy to
achieve because PBS and MBS are deployed by the same
operator [16]. With such information, DIS can be implemented
to adjust the disturbance in a proper direction at PUE. Since
the transmission from MBS to MUE depends only on H1 and
is free from interference, we only focus on the pico-users’
transmission performance.
Although we take an HCN as an example to design our scheme, it should be noted that our scheme is applicable to other types of networks as long as 1) collaboration between the interfering Tx and the victim Tx is available, and 2) the interference topology is asymmetric.
III. DYNAMIC I NTERFERENCE S TEERING
As mentioned earlier, by generating a duplicate of the
interference and sending it along with the desired signal,
the interference could be steered in the direction orthogonal
to the desired signal at the intended PUE with orthogonal-IS. However, the tradeoff between the benefit of interference
steering and its power cost has not been considered before.
That is, under a transmit power constraint, the more power
consumed for IS, the less power will be available for the
intended signal’s transmission. To remedy this deficiency,
we propose a novel IM scheme called dynamic interference
steering (DIS). By intelligently determining the strength of
steering signal, the original interference is adjusted in an
appropriate direction. DIS balances the transmit power used
for generating the steering signal and that for the desired
signal’s transmission.
A. Signal Processing of DIS
As mentioned above, since the macrocell receives higher
priority than picocells, MBS will not adjust its transmission
for pico-users. In what follows, we use NTi = NRi ≥ 2 where
i = 0, 1 as an example, but our scheme can be easily extended
to the case of NTi ≥ NRi .
Due to path loss, the mixed signal received at PUE can be
expressed as:
r0 = √(P0 10−0.1L0) H0 p0 x0 + √(P1 10−0.1L10) H10 p1 x1 + n0,   (1)
where the column vectors p0 and p1 represent the precoders for data symbols x0 and x1 sent from PBS and MBS, respectively. The first term on the right-hand side (RHS) of Eq. (1) is the desired signal, the second term denotes the interference from MBS, and n0 represents the additive white Gaussian noise (AWGN) with zero mean and
variance σn2 . The path loss from MBS and PBS to a PUE
is modeled as L10 = 128.1 + 37.6 log10 (η10 /103 ) dB and
L0 = 38 + 30 log10 (η0 ) dB, respectively [17], where the
variable η(⋅) , measured in meters (m), is the distance from
the transmitter to the receiver.
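To make the path-loss model concrete, the following Python sketch (our own illustration, not part of the paper's evaluation code) converts the transmit powers and the two path-loss formulas above into the effective powers perceived at the PUE; the distances are arbitrary example values.

```python
import numpy as np

def loss_macro_to_pue_dB(eta10_m):
    # L10 = 128.1 + 37.6*log10(eta10/1e3) dB, eta10 in meters (MBS -> PUE)
    return 128.1 + 37.6 * np.log10(eta10_m / 1e3)

def loss_pbs_to_pue_dB(eta0_m):
    # L0 = 38 + 30*log10(eta0) dB, eta0 in meters (PBS -> PUE)
    return 38.0 + 30.0 * np.log10(eta0_m)

def dbm_to_watt(p_dbm):
    return 10 ** ((p_dbm - 30) / 10)

# Transmit powers used later in Section V; distances below are assumed for illustration.
P0_dBm, P1_dBm = 23.0, 46.0
eta0, eta10 = 100.0, 1000.0          # hypothetical PBS->PUE and MBS->PUE distances (m)

L0 = loss_pbs_to_pue_dB(eta0)
L10 = loss_macro_to_pue_dB(eta10)

# Effective powers incorporating path loss: P^e = P * 10^{-0.1 L}
P0e = dbm_to_watt(P0_dBm) * 10 ** (-0.1 * L0)
P1e = dbm_to_watt(P1_dBm) * 10 ** (-0.1 * L10)
print(P0e, P1e, P0e / P1e)           # the last value is xi = P0e / P1e
```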
The estimated signal at PUE after post-processing can be
written as r̃0 = f0H r0 where f0 denotes the receive filter. Recall
that the picocell operates in an open mode, and a MUE in the
area covered by PBS will become a PUE and then be served
by the PBS. The interference model shown in Fig. 1 has an
asymmetric feature in which only the interference from MBS
to PUE is considered. Moreover, since the macrocell is given
priority over the picocells, MBS will not adjust its transmission
for the PUEs, and hence transmit packets to MUE based only
on H1 . Here we adopt the singular value decomposition (SVD)
based BF transmission, but can also use other types of pre- and post-processing. Applying SVD to Hi (i = 0, 1), we get Hi = Ui Σi ViH. We then employ pi = vi(1) and fi = ui(1), where vi(1) and ui(1) are the first column vectors of the right and left singular matrices (Vi and Ui), respectively, both of which correspond to the principal eigen-mode of Hi.
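A minimal sketch of this SVD-based beamforming step (illustrative only; the randomly drawn channel is a stand-in for the Rayleigh fading model described above): the precoder and receive filter are the principal right/left singular vectors of the direct channel.

```python
import numpy as np

def principal_mode(H):
    """Return (precoder, receive filter, largest singular value) of channel H."""
    U, s, Vh = np.linalg.svd(H)
    p = Vh.conj().T[:, 0]   # first column of V  -> transmit precoder
    f = U[:, 0]             # first column of U  -> receive filter
    return p, f, s[0]

rng = np.random.default_rng(0)
NT0 = NR0 = 2
# i.i.d. zero-mean unit-variance complex Gaussian entries (Rayleigh flat fading)
H0 = (rng.standard_normal((NR0, NT0)) + 1j * rng.standard_normal((NR0, NT0))) / np.sqrt(2)
p0, f0, lam0 = principal_mode(H0)
# f0^H H0 p0 equals the principal singular value lam0 (up to numerical error)
print(np.allclose(f0.conj() @ H0 @ p0, lam0))
```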
From Fig. 1 one can see that the strengths of desired signal
and interference at PUE depend on the network topology,
differences of transmit power at PBS and MBS, as well as
channel conditions. All of these factors affect the effectiveness
of IM. For clarity of presentation, we define P0e = P0 10−0.1L0 ,
P1e = P1 10−0.1L10 , where P0e and P1e indicate the transmit
power of PBS and MBS incorporated with the path loss perceived by PUE. With this definition, consideration of various
network topologies and transmit power differences can be
simplified to P0e and P1e . In what follows, we first present
the basic principle of orthogonal-IS (OIS), and then elaborate
on the design of dynamic-IS (DIS) where we provide the
existence and calculation of the optimal steering signal.
Fig. 2. Illustrations of OIS and DIS: (a) an illustration of OIS; (b) an illustration of DIS.
With OIS, PBS acquires interference information, including
data and CSI, from MBS via inter-BS collaboration and
by PUE’s estimation and feedback, respectively. PBS then
generates a duplicate of the interference and sends it along
with the desired signal. The former is used for interference
steering at PUE, whereas the latter carries the payload. The
received signal at PUE then becomes:
r0 = √(P0e − POISe) H0 p0 x0 + √(P1e) H10 p1 x1 + √(POISe) H0 pOIS x1 + n0,   (2)
where POISe = POIS 10−0.1L0. POIS represents the power overhead of OIS, and pOIS is the precoder for the steering signal. We first define the directions of the desired signal and of the original interference combined with the steering signal as ds = H0 p0 / ∥H0 p0∥ and di+st = (√(P1e) H10 p1 + √(POISe) H0 pOIS) / ∥√(P1e) H10 p1 + √(POISe) H0 pOIS∥,
respectively. Then, the original interference should be steered
in the direction orthogonal to the desired signal by letting
⟨ds, di+st⟩ = 0. As shown in Fig. 2(a), both the disturbance (i) and the steering signal (st) can be decomposed into an in-phase component and a quadrature component, denoted by the superscripts In and Q, respectively, w.r.t. the intended transmission s, i.e., i = iIn + iQ and st = stIn + stQ. When stIn = −iIn, OIS is realized. Furthermore, since the length of a vector indicates the signal's strength, OIS with the minimum power overhead is achieved when st = stIn, i.e., stQ = 0. Hence, in order to reduce the power cost, we let stQ = 0. It can be easily seen that iIn = √(P1e) P H10 p1, where P = ds(dsT ds)−1 dsT denotes the projection matrix. To implement OIS, the steering signal should satisfy √(POISe) H0 pOIS = −√(P1e) P H10 p1. This equation can be decomposed into H0 pOIS = −α P H10 p1 and POISe = β P1e, where αβ = 1, so that we can get pOIS = −α H0−1 P H10 p1. Note that ∥pOIS∥ = 1 is not guaranteed, i.e., pOIS affects the power cost of OIS.
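The projection matrix P and the OIS steering precoder described above can be computed as in the following sketch (our own illustration: α is fixed to 1, the channels are randomly drawn, and we use the Hermitian transpose in P so that it projects correctly for complex vectors).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
def rayleigh(m, k):
    return (rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))) / np.sqrt(2)

H0, H1, H10 = rayleigh(n, n), rayleigh(n, n), rayleigh(n, n)

def principal_right(H):
    return np.linalg.svd(H)[2].conj().T[:, 0]

p0, p1 = principal_right(H0), principal_right(H1)

# Direction of the desired signal and projection onto it.
ds = H0 @ p0 / np.linalg.norm(H0 @ p0)
# The paper writes P = ds (ds^T ds)^{-1} ds^T; for complex vectors the Hermitian
# transpose is used here so that P actually projects onto span(ds).
P = np.outer(ds, ds.conj())

# In-phase component of the interference w.r.t. the desired signal (up to power scaling).
i_in = P @ (H10 @ p1)

# OIS steering precoder (alpha = 1 chosen for illustration); it is not unit-norm in general.
p_ois = -np.linalg.inv(H0) @ P @ H10 @ p1
print(np.linalg.norm(p_ois))
```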
When NTi > NRi (i = 0, 1), the inverse of H0 should be replaced by its Moore-Penrose pseudo-inverse. The mechanism
can then be generalized. In addition, when the interference is
too strong, P0 may not be sufficient for OIS. In such a case,
we can simply switch to the non-interference management
(non-IM) mode, e.g., matched filtering (MF) at the victim
receiver, or other IM schemes with less or no transmit power
consumption such as zero-forcing reception.
By adopting f0 = u0(1) as the receive filter, the SE of PUE with OIS can be computed as:
c0OIS = log2 {1 + (P0e − POISe)[λ0(1)]² / σn²},   (3)
where λ0(1) is the largest singular value of H0, indicating the amplitude gain of the principal spatial sub-channel.
From Eq. (3), we can see that although the original interference is steered into the direction orthogonal to the desired signal and the disturbance to the intended transmission is completely eliminated, this comes at a transmit power loss, POISe, degrading the received desired signal strength. One can then raise a question: “Is orthogonal-IS always necessary/worthwhile?” To answer this question, we may adjust both the direction and strength of the steering signal adaptively to implement dynamic IS. Note, however, that in order to minimize the transmit power overhead, the steering signal should be opposite to the spatial feature of the desired signal. Thus, only the strength of the steering signal needs to be adjusted. In what follows, we generalize OIS to DIS by introducing a coefficient ρ ∈ (0, 1], called the steering factor, representing the portion of the in-phase component of the disturbance w.r.t. the desired signal to be mitigated. When ρ = 1, orthogonal-IS is realized, while DIS approaches non-IM as ρ → 0.
Without ambiguity, we adopt st to represent the dynamic steering signal. Then, we have st = −ρ iIn = −ρ √(P1e) P H10 p1. As illustrated by Fig. 2(b), when ρ < 1, the interference i is steered into a direction not orthogonal to ds, i.e., iIn is not completely eliminated. Then, the steered interference becomes i + st = (1 − ρ)iIn + iQ, whose projection on ds is non-zero, i.e., provided that ρ < 1, a residual interference, expressed as iR = iIn + st = (1 − ρ)iIn, exists.
Similarly to the discussion about OIS, to implement DIS,
the following equation should hold:
√(PDISe) H0 pDIS = −ρ √(P1e) P H10 p1.   (4)
For clarity of exposition, we normalize the precoder so
that the direction and strength requirements for DIS could
be decoupled from each other. Then, the expression of DIS
implementation is given as:
pDIS = −H0−1 P H10 p1 / ∥H0−1 P H10 p1∥,
PDISe = ρ² P1e ∥H0−1 P H10 p1∥²,   (5)
where pDIS is the precoder for the steering signal and PDISe denotes the power overhead for DIS at PBS, i.e., PDIS, incorporated with the path loss 10−0.1L0.
The received signal at PUE with DIS is then
r0 = √(P0e − PDISe) H0 p0 x0 + √(P1e) (1 − ρ) P H10 p1 x1 + n0,   (6)
where the second term on the RHS of Eq. (6) indicates the residual interference, which is the in-phase component of the original interference √(P1e) H10 p1 x1 combined with the steering signal √(PDISe) H0 pDIS x1 w.r.t. the desired transmission.
By employing f0 = u0(1) as the receive filter, the achievable SE of PUE employing DIS can then be calculated as:
c0DIS = log2 {1 + (P0e − PDISe)[λ0(1)]² / (σn² + IR)},   (7)
where IR = P1e ∥f0H (1 − ρ) P H10 p1∥² denotes the strength of the residual interference after post-processing at PUE.
Based on the above discussion, it can be easily seen that IR = 0 when ρ = 1, i.e., DIS becomes OIS. So, DIS includes OIS as a special case, making it more general.
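A minimal sketch of Eqs. (5)–(7), assuming arbitrary effective powers and noise variance (all values below are illustrative, not from the paper): given a steering factor ρ, it forms the DIS power overhead, the residual interference, and the resulting SE.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
ray = lambda: (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
H0, H10, H1 = ray(), ray(), ray()

U0, s0, V0h = np.linalg.svd(H0)
p0, f0, lam0 = V0h.conj().T[:, 0], U0[:, 0], s0[0]
p1 = np.linalg.svd(H1)[2].conj().T[:, 0]

ds = H0 @ p0 / np.linalg.norm(H0 @ p0)
P = np.outer(ds, ds.conj())                  # projection onto the desired direction

P0e, P1e, sigma2 = 1.0, 1.0, 0.1             # assumed effective powers and noise power

g = np.linalg.inv(H0) @ P @ H10 @ p1         # pDIS = -g / ||g||  (Eq. (5))
rho_max = min(1.0, np.sqrt(P0e / (P1e * np.linalg.norm(g) ** 2)))

def dis_se(rho):
    P_dis_e = rho ** 2 * P1e * np.linalg.norm(g) ** 2              # Eq. (5): power overhead
    IR = P1e * abs(f0.conj() @ ((1 - rho) * P @ H10 @ p1)) ** 2    # residual interference
    return np.log2(1 + (P0e - P_dis_e) * lam0 ** 2 / (sigma2 + IR))  # Eq. (7)

print(dis_se(rho_max), dis_se(0.5 * rho_max))  # rho_max = 1 corresponds to OIS
```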
B. Optimization of Steering Factor ρ
In what follows, we will discuss the existence of the
optimal ρ, denoted by ρ∗ , with which PUE’s SE can be
maximized with limited P0. Based on Shannon's capacity formula,
we can instead optimize the signal-to-interference-plus-noise
ratio (SINR) of PUE, denoted by ϕ0 .
Substituting Eq. (5) into Eq. (7), we can obtain ϕ0 as:
ϕ0 = (P0e − ρ² P1e ∥H0−1 P H10 p1∥²)[λ0(1)]² / ((1 − ρ)² P1e ∥f0H P H10 p1∥² + σn²)
   = (P0e [λ0(1)]² − ρ² P1e ∥g∥² [λ0(1)]²) / (ρ² P1e |χ|² − (2ρ − 1) P1e |χ|² + σn²),   (8)
where g = H0−1 P H10 p1 and χ = f0H P H10 p1.
Eq. (8) can be simplified as:
ϕ0 = (A − ρ² B) / (C − ρD + ρ² E),   (9)
where A = P0e [λ0(1)]², B = P1e ∥g∥² [λ0(1)]², C = P1e |χ|² + σn², D = 2P1e |χ|², and E = P1e |χ|². Note that all of these coefficients are positive.
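The following numerical sketch evaluates Eq. (9) for arbitrary but internally consistent positive coefficients (our own example values) and checks that a grid search over ρ agrees with the closed-form root ρ−∗ derived in the remainder of this subsection.

```python
import numpy as np

# Example positive coefficients standing in for A, B, C, D, E of Eq. (9)
# (chosen so that D = 2E and C = E + sigma_n^2, matching their definitions).
A, B, C, D, E = 4.0, 1.5, 1.2, 2.0, 1.0
rho_max = 1.0                                     # here A/B = P0e/(P1e*||g||^2) > 1

phi0 = lambda rho: (A - rho**2 * B) / (C - rho * D + rho**2 * E)

# Closed-form candidate from the quadratic (BD/2) rho^2 - (BC+AE) rho + AD/2 = 0.
delta = (B * C + A * E) ** 2 - A * B * D ** 2
rho_minus = ((B * C + A * E) - np.sqrt(delta)) / (B * D)

grid = np.linspace(1e-3, rho_max, 10_000)
rho_grid = grid[np.argmax(phi0(grid))]
print(rho_minus, rho_grid)                        # the two should (approximately) agree
```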
Next, we elaborate on the existence of ρ∗ under the P0 constraint, with which ϕ0 is maximized. By substituting g into Eq. (5), we can see that when P0e > P1e ∥g∥², PBS has enough power to steer the interference into the direction orthogonal to the desired signal, i.e., OIS is achievable. Otherwise, when P0e ≤ P1e ∥g∥², the maximum of ρ, denoted by ρmax, is limited by the PBS's transmit power. In such a case, √(P0e / (P1e ∥g∥²)) is the maximum portion of the in-phase component of the original interference w.r.t. the desired signal that can be mitigated with P0. Based on the above discussion, we have ρ∗ ∈ (0, ρmax], where ρmax = min(1, √(P0e / (P1e ∥g∥²))). In what follows, we first prove the solvability of ρ∗, and then show the quality of the resulting solution(s).
By computing the derivative of ϕ0 with respect to ρ and setting it to 0, we get:
[(BD/2) ρ² − (BC + AE) ρ + AD/2] / (C − Dρ + ρ² E)² = 0.   (10)
Since the denominator cannot be 0, we only need to solve
(BD/2) ρ² − (BC + AE) ρ + AD/2 = 0,   (11)
which is a quadratic equation with one unknown. Let us define ∆ = (BC + AE)² − ABD². Since ∆ = (BC + AE + √(AB) D)(BC + AE − √(AB) D), and BC + AE + √(AB) D is positive, we only need to show that BC + AE − √(AB) D > 0. The proof is given by Eq. (12):
BC + AE − √(AB) D = P1e ∥g∥² [λ0(1)]² (σn² + P1e |χ|²) + P0e [λ0(1)]² P1e |χ|² − 2 √(P0e [λ0(1)]² P1e ∥g∥² [λ0(1)]²) P1e |χ|²
> (P1e)² ∥g∥² [λ0(1)]² |χ|² + P0e P1e [λ0(1)]² |χ|² − 2 (P0e)^(1/2) (P1e)^(3/2) [λ0(1)]² ∥g∥ |χ|²
= P1e [λ0(1)]² |χ|² [(P1e)^(1/2) ∥g∥ − (P0e)^(1/2)]² ≥ 0.   (12)
Based on the above discussion, we can obtain two solutions ρ±∗ = ((BC + AE) ± √∆) / (BD). We then need to verify the qualification of the two solutions ρ±∗. We first investigate the feasibility of the larger solution ρ+∗ = ((BC + AE) + √∆) / (BD). The first term of ρ+∗ can be rewritten as:
(BC + AE) / (BD) > [P1e ∥g∥² [λ0(1)]² P1e |χ|² + P0e [λ0(1)]² P1e |χ|²] / [2 P1e ∥g∥² [λ0(1)]² P1e |χ|²] = 1/2 + P0e / (2 P1e ∥g∥²).   (13)
The second term of ρ+∗ is:
√∆ / (BD) = √((BC + AE)² / (B² D²) − A/B) > √((1/2 + P0e / (2 P1e ∥g∥²))² − P0e / (P1e ∥g∥²)) = |1/2 − P0e / (2 P1e ∥g∥²)|.   (14)
Then, we can get:
ρ+∗ > (1/2 + P0e / (2 P1e ∥g∥²)) + |1/2 − P0e / (2 P1e ∥g∥²)| = { 1, if P0e / (2 P1e ∥g∥²) ≤ 1/2;  P0e / (P1e ∥g∥²), if P0e / (2 P1e ∥g∥²) > 1/2 }.   (15)
Note that ρ+∗ ≤ 1 must hold, and hence P0e / (2 P1e ∥g∥²) should not be less than or equal to 1/2. However, when P0e / (2 P1e ∥g∥²) > 1/2, Eq. (15) gives ρ+∗ > P0e / (P1e ∥g∥²) > 1. As a result, ρ+∗ ∉ (0, ρmax], where ρmax = min(1, √(P0e / (P1e ∥g∥²))), i.e., ρ+∗ is not acceptable.
As for ρ−∗ = ((BC + AE) − √∆) / (BD), since ABD² > 0, BC + AE > √∆ holds, thus justifying ρ−∗ > 0. Then, we prove ρ−∗ < ρmax as follows. First, we define a function g(C) = (BC + AE) − √∆. Since the derivative of g(C) with respect to C, g′(C) = (B √∆ − B (BC + AE)) / √∆ < 0, g(C) is a monotonically decreasing function of the variable C. Let C′ = P1e |χ|²; then C = σn² + P1e |χ|² > C′, thus leading to g(C) < g(C′). Similarly to the derivations of Eqs. (13)–(15), we get:
ρ−∗ = g(C) / (BD) < g(C′) / (BD) = { 1, if P0e / (2 P1e ∥g∥²) ≥ 1/2;  P0e / (P1e ∥g∥²), if P0e / (2 P1e ∥g∥²) < 1/2 }.   (16)
Eq. (16) is equivalent to ρ−∗ < ρmax² = min(1, P0e / (P1e ∥g∥²)). Since 0 < ρmax ≤ 1, ρmax² ≤ ρmax holds, thus proving ρ−∗ < ρmax.
Finally, we prove that ρ−∗ achieves the maximum ϕ0. Since it can be proved that h(ρ) = (BD/2) ρ² − (BC + AE) ρ + AD/2 is a monotonically decreasing function of the variable ρ ∈ (0, ρmax], when 0 < ρ < ρ−∗ we get h(ρ) > h(ρ−∗) = 0. Similarly, when ρ−∗ < ρ < ρmax, h(ρ) < 0 can be derived. Thus, ρ−∗ corresponds to the maximum ϕ0. The optimal steering factor is calculated as ρ∗ = ρ−∗.
IV. GENERALIZATION OF DIS
A. Generalized Number of Interferences
So far, we have assumed that the MBS sends a single data stream to MUE, i.e., only one interference is imposed on the PUE. When multiple desired signals are sent from a MBS, the proposed DIS can be extended as follows. Since picocells are deployed within the coverage of a macrocell, interferences from the other MBSs are negligible. For clarity of presentation, Fig. 3 shows a two-interference situation as an example, where i1 and i2 are the interferences. Only one desired signal is considered.
Fig. 3. Generalization of the number of interferences.
As shown in this figure, each interference can be decomposed into an in-phase component and a quadrature component w.r.t. the desired signal. Then, DIS can be applied to each interference separately, following the processing as described in Section III.
For N > 1 interferences, the received signal at the victim
PUE with DIS can be expressed as:
r0 = √(P0e − ∑_{n=1}^{N} PDIS,ne) H0 p0 x0 + ∑_{n=1}^{N} √(P1,ne) H10 p1,n x1,n + ∑_{n=1}^{N} √(PDIS,ne) H0 pDIS,n x1,n + n0,   (17)
where x1 = [x1,1, ⋯, x1,n, ⋯, x1,N] is the transmit data vector of MBS. The transmission of x1,n causes the interference term √(P1,ne) H10 p1,n x1,n to the PUE, in which P1,ne is the transmit power for x1,n, i.e., P1,n, incorporated with the path loss 10−0.1L10, and p1,n denotes the precoder for x1,n. ∑_{n=1}^{N} P1,ne = P1e holds. PBS generates a duplicate of this interference with the power overhead PDIS,n, where PDIS,ne = PDIS,n 10−0.1L0, and the precoder pDIS,n so as to adjust the interference to an appropriate direction at the victim PUE.
Similarly to the derivation of Eq. (5), we can obtain the DIS design for the n-th interfering component as:
pDIS,n = −H0−1 P H10 p1,n / ∥H0−1 P H10 p1,n∥,
PDIS,ne = ρn² P1,ne ∥H0−1 P H10 p1,n∥²,   (18)
where P represents the projection matrix, which depends only on the spatial feature of the desired transmission and with which we can calculate the in-phase component of the interference caused by the transmission of x1,n w.r.t. the intended signal. ρn is the steering factor for the steering signal carrying x1,n.
One should note that when there are multiple interferences, it is difficult to determine the optimal steering factors for all the interfering components. However, we can allocate a power budget P0,n, satisfying ∑_{n=1}^{N} P0,n < P0, to each interference, and then, by applying DIS to each disturbance under its power budget constraint, a vector of N sub-optimal steering factors is achieved. P0,n can be assigned the same value for each interference or be set based on the strength of the interferences. The achievable SE of the PUE can then be calculated as:
c0DIS = log2 {1 + (P0e − ∑_{n=1}^{N} PDIS,ne)[λ0(1)]² / (σn² + ∑_{n=1}^{N} IR,n)},   (19)
where IR,n = P1e ∥f0H (1 − ρn) P H10 p1,n∥² (n = 1, ⋯, N) is the strength of the n-th residual interference to the intended transmission, and f0 = u0(1) is the receive filter for data x0.
To further elaborate on the extension of the proposed scheme, we provide below an algorithm for N = 2, with which the optimal ρn can be determined, maximizing the system SE. For simplicity, we use the function f(ρ1, ρ2) to denote c0DIS under N = 2 in the following description.
Algorithm 1
1: Take the derivative of f(ρ1, ρ2) with respect to ρ1 and ρ2, respectively, to obtain f′ρn(ρ1, ρ2) = ∂f(ρ1, ρ2)/∂ρn, where n = 1, 2.
2: Compute the stationary points (ρ̃1, ρ̃2) of f(ρ1, ρ2) by solving the equations f′ρn(ρ1, ρ2) = 0. We define the set Φ consisting of all (ρ̃1, ρ̃2).
3: Calculate the second-order derivatives of f(ρ1, ρ2) at each stationary point (ρ̃1, ρ̃2), i.e., f″ρ1,ρ2(ρ̃1, ρ̃2) = ∂²f(ρ1, ρ2)/∂ρ1∂ρ2 evaluated at ρ1 = ρ̃1, ρ2 = ρ̃2. For clarity of exposition, we define A = f″ρ1,ρ2(ρ̃1, ρ̃2). Similarly, we define the variables B = f″ρ1,ρ1(ρ̃1, ρ̃2) and C = f″ρ2,ρ2(ρ̃1, ρ̃2).
4: Check whether each stationary point is an extreme point or not. If A² − BC < 0 and B < 0, (ρ̃1, ρ̃2) is an extreme point; otherwise it is not. We can obtain the set of extreme points and the value of f(ρ1, ρ2) at each extreme point correspondingly. We define the extreme value set as Ω.
5: Since both ρ1 and ρ2 range from 0 to 1, i.e., ρn ∈ (0, 1], the maximum value of f(ρ1, ρ2) may exist at the boundary points. We define the set F = {f(ρ1, ρ2) : ρn ∈ {0, 1}, n = 1, 2}.
6: Determine the optimal (ρ1∗, ρ2∗) yielding the maximum value of f(ρ1, ρ2) by searching all the elements in the sets Ω and F.
The above algorithm can be extended to the case of N > 2. Due to space limitation, we do not elaborate on this any further in this paper.
B. Generalized Number of Desired Data Streams
We now generalize the number of desired signals, denoted
by M , sent from PBS to its PUE. For clarity of exposition,
we take M = 2 and the number of interferences N = 1 as an
example as shown in Fig. 4. However, this discussion can be
readily extended to more general parameter settings. As can be
seen from the figure, the interference i forms a plane with each
of the desired signals sm where m = 1, 2. The projection of i
onto sm is denoted by iIn
m (in-phase component), whereas the
quadrature component is iQ
m . By applying DIS to each (i, sm )
pair, a set of steering signals can be determined.
Fig. 4. Generalization of the number of desired signals.
Since multiple data streams are sent from PBS to PUE via
mutually orthogonal eigenmodes/subchannels, and the steering
signal is opposite to the spatial feature of the desired transmission it intends to protect, an arbitrary steering signal, say st,m ,
is orthogonal to any of the other desired signals sm′ where
m′ ≠ m. Hence, no additional interference will be created by
the steering signal st,m .
Based on the above discussion, the number of steering
signals is equal to that of the desired data streams M . Thus,
the mixed signal received at PUE can be expressed as:
r0 = ∑_{m=1}^{M} √(P0,me − PDIS,me) H0 p0,m x0,m + √(P1e) H10 p1 x1 + ∑_{m=1}^{M} √(PDIS,me) H0 pDIS,m x1 + n0,   (20)
where x0 = [x0,1, ⋯, x0,m, ⋯, x0,M] is the transmit data vector of PBS. P0,me denotes the transmit power budget for x0,m, i.e., P0,m, incorporated with the path loss 10−0.1L0. PDIS,m is the power cost for steering the interference away from the m-th desired signal, and PDIS,me = PDIS,m 10−0.1L0. p0,m and pDIS,m represent the precoders for x0,m and its steering signal, respectively. Similarly to the derivation of Eqs. (5) and (18), the DIS solution for the multi-desired-signal situation can be readily obtained. In such a case, the achievable SE of PUE can be expressed as:
c0DIS = ∑_{m=1}^{M} log2 {1 + (P0,me − PDIS,me)[λ0(m)]² / (σn² + IR,m)}.   (21)
λ0(m) denotes the amplitude gain of the m-th desired transmission from PBS to PUE. The residual interference is IR,m = P1e ∥f0,mH (1 − ρm) Pm H10 p1∥², where f0,m = u0(m) is the receive filter for data x0,m. The projection matrix is Pm = ds,m (ds,mT ds,m)−1 ds,mT, where ds,m = H0 p0,m / ∥H0 p0,m∥.
It should be noted that the optimal PDIS,me, or equivalently ρm, depends on P0,me, where ∑_{m=1}^{M} P0,me = P0e holds. Thus, different power allocations will yield different DIS solutions. For example, P0 can be equally allocated to the M intended data transmissions, or allocated in terms of the quality of the subchannels and/or the strength of the interference imposed on each desired signal. Then, suboptimal performance w.r.t. the transmission from PBS to PUE is achieved. How to jointly determine the optimal P0,m and PDIS,m is our future work.
C. Generalized Number of PBSs and PUEs
We discuss the generalization of the number of PBSs deployed in the coverage of a macrocell and the number of PUEs
served by each PBS. As mentioned before, PBSs are installed
by the network operator. Inter-picocell interference could
therefore be effectively avoided by the operator’s planned
deployment or resource allocation. Even when inter-picocell
interference exists, our scheme can be directly applied by
treating the interfering PBS as the MBS in this paper, and DIS
is implemented at the PBS associated with the victim PUE.
It should be noted that the proposed DIS is applicable to the
scenario with asymmetric interferences. Otherwise, concurrent
data transmissions should be scheduled or other schemes
should be adopted to address the interference problem, which
is beyond the scope of this paper. As for the multi-PUE case,
each PUE can be assigned an exclusive channel so as to
avoid co-channel interference, which is consistent with various
types of wireless communication systems, such as WLANs. In
summary, with an appropriate system design, the proposed DIS
can be applied to the system with multiple PBSs and PUEs.
V. E VALUATION
We evaluate the performance of the proposed mechanism
using MATLAB. We set d = 300m, D = 3000m, P0 = 23dBm
and P1 = 46dBm [14]. The path loss is set to L10 =
128.1+37.6 log10 (η10 /103 ) dB and L0 = 38+30 log10 (η0 ) dB
where η0 ≤ d and η10 ≤ D. Since L0 and L10 are dependent on
the network topology, P0e ranges from −89dBm to 23dBm,
whereas P1e varies between −100dBm and 46dBm. For clarity
of presentation, we adopt γ̄ = 10lg(γ) where γ = P1e /σn2 . We
also define ξ = P0e /P1e . Then, based on the above parameter
settings, ξ ∈ [−135, 123] dB. Note, however, that we obtained
this result for extreme boundary situations, making its range
too wide to be useful. Unless otherwise specified, the simulation is done under the NT0 = NT1 = NR0 = 2 antenna configuration; the same conclusions, however, can be drawn with various parameter settings. In practice, a PBS should not be deployed
close to MBS and mobile users may select an access point
based on the strength of reference signals from multiple access
points. Considering this practice, we set ξ ∈ [0.1, 100] in our
simulation. There are M desired signals and N interferences.
In the following simulation, when the power overhead of an
IM scheme exceeds P0 at the victim Tx, we simply switch
to non-IM mode, i.e., matched filtering (MF) is employed by letting f0,m = u0(m), while the interference remains unchanged.
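A rough re-creation of one simulation drop under the setup above (our own Python sketch, not the authors' MATLAB code): it normalizes P0e, derives P1e and σn² from ξ and γ̄, limits ρ to ρmax, and keeps matched filtering as the non-IM fallback.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_drop(xi, gamma_bar_dB, P0=1.0):
    """One channel drop: SE of DIS (grid search over rho), with an MF fallback."""
    n = 2
    ray = lambda: (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    H0, H10, H1 = ray(), ray(), ray()
    U0, s0, V0h = np.linalg.svd(H0)
    p0, f0, lam0 = V0h.conj().T[:, 0], U0[:, 0], s0[0]
    p1 = np.linalg.svd(H1)[2].conj().T[:, 0]

    P0e = P0                                     # normalize the effective desired power
    P1e = P0e / xi                               # xi = P0e / P1e
    sigma2 = P1e / 10 ** (gamma_bar_dB / 10)     # gamma_bar = 10*lg(P1e / sigma_n^2)

    ds = H0 @ p0 / np.linalg.norm(H0 @ p0)
    P = np.outer(ds, ds.conj())
    g = np.linalg.inv(H0) @ P @ H10 @ p1
    rho_max = min(1.0, np.sqrt(P0e / (P1e * np.linalg.norm(g) ** 2)))

    best = 0.0
    for rho in np.linspace(1e-3, rho_max, 200):  # grid stand-in for the closed-form rho*
        P_dis = rho ** 2 * P1e * np.linalg.norm(g) ** 2
        IR = P1e * abs(f0.conj() @ ((1 - rho) * P @ H10 @ p1)) ** 2
        best = max(best, np.log2(1 + (P0e - P_dis) * lam0 ** 2 / (sigma2 + IR)))
    # Matched-filter (non-IM) baseline: interference left untouched.
    I_mf = P1e * abs(f0.conj() @ (H10 @ p1)) ** 2
    mf = np.log2(1 + P0e * lam0 ** 2 / (sigma2 + I_mf))
    return max(best, mf)

print(np.mean([run_drop(xi=1.0, gamma_bar_dB=5.0) for _ in range(100)]))
```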
Fig. 5 shows two samples of the relationship between ρ
and PUE’s achievable SE. The interference shown in Fig. 5(a)
is relatively weak, and hence the transmit power of PBS is
sufficient for OIS, i.e., ρmax can be as large as 1. In Fig. 5(b),
since the interference is strong, and hence, when ρ > ρmax
where ρmax < 1, there won’t be enough power for PBS to
realize OIS. In such a case, we simply switch off IS and
adopt non-IM. In both figures, the optimal ρ, denoted by
ρ∗ , is computed as in Section III, which corresponds to the
maximum SE. We can conclude from Fig. 5 that in order
to better utilize the transmit power for both IS and data
transmission, it is necessary to intelligently determine the
appropriate strength of the steering signal, or equivalently, the
direction into which the interference is steered.
Fig. 6 plots the PUE’s average SE versus ρ for different
ξ. The average ρ∗ , marked by pentagram, which corresponds
to the PUE’s maximum SE, grows as ξ increases. When ξ
gets too high, a large portion of interference is preferred to be
mitigated. As shown in the figure, given ξ = 100, the average
ρ∗ is approximately 0.9. In addition, since the strength of
the desired signal relative to the interference grows with an
increase of ξ, the PUE’s SE performance improves with ρ.
Fig. 7 shows the average ρ∗ versus γ̄ under M = N = 1 and
different ξ. As the figure shows, with fixed ξ, the average ρ∗
grows with an increase of γ̄. This is because the interference
gets stronger with increasing γ̄, and hence, to achieve the
maximum SE, ρ∗ should increase, i.e., more interference
imposed on the desired signal should be mitigated. Given the same γ̄, the average ρ∗ grows with an increase of ξ, which is consistent with Fig. 6.
Fig. 5. SE of PUE vs. ρ under γ̄ = 0dB, M = N = 1, and ξ = 1: (a) weak interference; (b) strong interference.
Fig. 6. SE of PUE vs. ρ under γ̄ = 5dB, M = N = 1, and different ξ.
Fig. 7. Average ρ∗ vs. γ̄ under M = N = 1, and different ξ.
Fig. 8 plots the PUE's average SE along with ρ under different antenna settings. We use a general form [NT0 NT1 NR0] to express the antenna configuration. In Fig. 8(a), NT1 and NR0 are fixed, and NT0 varies from 2 to 6. Since the transmit array gain of the desired signal grows with an increase of NT0, meaning that the desired signal becomes stronger relative to the interference, and more interference can be eliminated by DIS with the same power overhead as NT0 grows, both the average ρ∗ and the achievable SE improve as NT0 increases. In Fig. 8(b), NT0 and NR0 are fixed while NT1 ranges from 2 to 6. Although NT1 varies, MBS causes random interference to PUE, as the PUE adopts f0 to decode x0 regardless of the interference channel H10. Hence, both ρ∗ and the PUE's average SE remain similar under different NT1. In Fig. 8(c), NT0 and NT1 are fixed while NR0 ranges from 2 to 4. With such antenna settings, since NT0 and NT1 are fixed, the processing gain of the transmit antenna array does not change for the desired signal or the interference. However, as NR0 increases, the receive gain for the intended signal grows, since the filter vector f0, an NR0 × 1 vector, is designed to match H0. As a result, the desired signal, relative to the interference, becomes stronger after receive filtering as NR0 increases, thus enhancing the PUE's SE.
Fig. 8. SE of PUE vs. ρ under γ̄ = 5dB, ξ = 1, M = N = 1, and different antenna settings: (a) NT0 is variable; (b) NT1 is variable; (c) NR0 is variable.
Fig. 9. P rob(PM > P0) vs. γ̄ under M = 1, and different ξ and N.
Fig. 10. P rob(PM > P̄) vs. P̄ under ξ = 1, M = 1, and different N.
Fig. 11. SE of PUE vs. γ̄ with various IM schemes under M = N = 1, and ξ = 1.
Fig. 12. SE of PUE vs. γ̄ with various IM schemes under ξ = 1 and different M and N.
Fig. 9 shows the probabilities that the power overheads of DIS, OIS and IN are greater than P0, i.e., P rob(PM > P0), where PM denotes the power cost at PBS with IM scheme M. We use a general form [N ξ M] to denote the parameter settings for different mechanisms, where N is the number of interferences, ξ = P0e/P1e, and P1e is the total power of the interferer, i.e., P1, incorporated with path loss. P rob(PM > P0 ) is
shown to increase as ξ decreases since a small ξ results in a
strong interference, incurring higher PM . P rob(PIN > P0 ) is
notably higher than P rob(POIS > P0 ) and P rob(PDIS > P0 ),
and P rob(PDIS > P0 ) is less than P rob(POIS > P0 ). This is
because IN consumes more power than DIS and OIS. For DIS,
the power overhead increases as γ̄ grows and approaches OIS
when γ̄ becomes too large. One may note in Fig. 9 that with
fixed ξ and P1e , P rob(PIN > P0 ) with N = 2 interferences
is higher than that with a single interference. However, as for
DIS and OIS, a larger N produces lower P rob(PM > P0 ).
This phenomenon can be explained by the results illustrated
in Fig. 10.
Fig. 10 plots the distribution of P̄M for an arbitrary γ̄
where P̄M and P̄ represent PM and the power value P normalized by P0, respectively. Since P rob(P̄DIS > P̄ ) varies with γ̄
whereas P rob(P̄IN > P̄ ) and P rob(P̄OIS > P̄ ) do not, for
simplicity, we only study IN and OIS. As shown in the figure,
P rob(P̄IN > P̄ ) with N = 2 is no less than that with N = 1.
As for OIS, when P̄ < 0.5, P rob(P̄OIS > P̄ ) with N = 2 is
larger than that with N = 1. However, as P̄ grows larger than
0.5, 2 interferences incur statistically less power overhead.
When P̄ = 1, P rob(P̄M > P̄ ) with different schemes shown
in Fig. 10 is consistent with the results given in Fig. 9.
Fig. 11 shows the PUE’s average SE with different IM
schemes. Besides IN, OIS and DIS, zero-forcing (ZF) reception and zero-forcing beamforming (ZFBF) are also simulated.
With ZF reception, a receive filter being orthogonal to the
unintended signal is adopted so as to nullify the interference
at PUE, but an attenuation w.r.t. the desired signal results.
As for ZFBF, we let PBS adjust its beam so that the desired
signal is orthogonal to the interference at the intended receiver.
It should be noted that for either IN, IS with fixed ρ, OIS or
DIS, when the power overhead at PBS exceeds P0 , i.e., the IM
scheme is unavailable, we simply switch to ZF reception. As
shown in the figure, DIS yields the best SE performance. When
γ̄ is low, noise is the dominant factor affecting the PUE’s
SE. Therefore, IS with fixed ρ (ρ < 1) yields similar SE to
OIS. Moreover, although DIS can achieve the highest SE, its
benefit is limited in the low γ̄ region. As γ̄ grows large, ρ∗
increases accordingly, and hence IS with large ρ exceeds that
with small ρ in SE. In addition, OIS gradually outperforms
those IS schemes with fixed ρ as γ̄ increases. Moreover, by
intelligently determining ρ∗ , the advantage of DIS becomes
more pronounced with an increase of γ̄. Although IN yields
more power overhead than IS with fixed ρ, DIS and OIS,
with the help of ZF, IN yields slightly higher SE than ZF
reception. As for ZFBF, more desired signal power loss results
as compared to ZF, thus yielding inferior SE performance.
Fig. 12 plots the PUE’s average SE with various mechanisms under different numbers of interferences and desired
signals. We use a general form [M N M] to denote the
parameter settings, where M represents the number of desired
signals, N is the number of interferences, and M denotes the
IM schemes. When M > 1, equal power allocation is adopted, i.e., P0,me (m = 1, ⋯, M) in Eq. (21) is P0e/M. As shown in the
(m = 1, ⋯, M ) in Eq. (21) is P0e /M . As shown in the
figure, OIS achieves the highest SE among the three schemes,
whereas IN yields the lowest SE. Since ρ∗ approaches 1 as γ̄
increases, with the same M and N , DIS becomes OIS when γ̄
grows too large. Given fixed ξ, IN yields better SE when there
are 2 interfering signals than the single interference case. As
for DIS and OIS, SE with 2 interferences is lower than that
with one disturbance. This is consistent with the results shown
in Figs. 9 and 10.
VI. C ONCLUSION
In this paper, we proposed a new interference management scheme, called Dynamic Interference Steering (DIS), for
heterogeneous cellular networks. By intelligently determining
the strength of the steering signal, the original interference
is steered into an appropriate direction. DIS can balance the
transmit power used for generating the steering signal and that
for the desired signal’s transmission. Our in-depth simulation
results show that the proposed scheme makes better use of the
transmit power, and enhances users’ spectral efficiency.
ACKNOWLEDGMENT
This work was supported in part by NSFC (61173135, U1405255); the 111 Project (B08038); the Fundamental Research Funds for the Central Universities (JB171503). It was also
Funds for the Central Universities (JB171503). It was also
supported in part by the US National Science Foundation
under Grant 1317411.
R EFERENCES
[1] C. M. Yetis, T. Gou, S. A. Jafar, et al., “On feasibility of interference
alignment in MIMO interference networks,” IEEE Trans. Sig. Process.,
vol. 58, no. 9, pp. 4771-4782, 2010.
[2] S. Jafar and S. Shamai, “Degrees of freedom region of the MIMO X channel,” IEEE Trans. Inf. Theory, vol. 54, no. 1, pp. 151-170, 2008.
[3] M. Maddah-Ali, A. Motahari, and A. Khandani, “Communication over
MIMO X channels: Interference alignment, decomposition, and performance analysis,” IEEE Trans. Inf. Theory, vol. 54, no. 8, pp. 3457-3470,
2008.
[4] V. R. Cadambe and S. A. Jafar, “Interference alignment and degrees of
freedom of the K-user interference channel,” IEEE Trans. Inf. Theory,
vol. 54, no. 8, pp. 3425-3441, 2008.
[5] C. Suh, M. Ho, and D. Tse, “Downlink interference alignment,” IEEE
Trans. Commun., vol. 59, no. 9, pp. 2616-2626, 2011.
[6] S. Mohajer, S. N. Diggavi, C. Fragouli, and D. N. C. Tse, “Transmission
techniques for relay-interference networks,” in Proc. 46th Annu. Allerton
Conf. Commun., Control, Comput., pp. 467-474, 2008.
[7] S. Mohajer, S. N. Diggavi, and D. N. C. Tse, “Approximate capacity of a
class of Gaussian relay-interference networks,” in Proc. IEEE Int. Symp.
Inf. Theory, vol. 57, no. 5, pp. 31-35, 2009.
[8] T. Gou, S. A. Jafar, S. W. Jeon, and S. Y. Chung, “Aligned interference
neutralization and the degrees of freedom of the 2×2×2 interference
channel,” IEEE Trans. Inf. Theory, vol. 58, no. 7, pp. 4381-4395, 2012.
[9] Z. Ho and E. A. Jorswieck, “Instantaneous relaying: optimal strategies
and interference neutralization,” IEEE Trans. Sig. Process., vol. 60, no.
12, pp. 6655-6668, 2012.
[10] N. Lee and C. Wang, “Aligned interference neutralization and the degrees of freedom of the two-user wireless networks with an instantaneous
relay,” IEEE Trans. Commun., vol. 61, no. 9, pp. 3611-3619, 2013.
[11] S. Berger, T. Unger, M. Kuhn, A. Klein, and A. Wittneben, “Recent
advances in amplify-and-forward two-hop relaying,” IEEE Commun.
Mag., vol. 47, no. 7, pp. 50-56, 2009.
[12] K. Gomadam and S. Jafar, “The effect of noise correlation in amplifyand-forward relay networks,” IEEE Trans. Inf. Theory, vol. 55, no. 2, pp.
731-745, 2009.
[13] Z. Li, F. Guo, Kang G. Shin, et al., “Interference steering to manage
interference,” 2017. [Online]. Available: https://arxiv.org/abs/1712.07810.
[Accessed Dec. 23, 2017].
[14] T. Q. S. Quek, G. de la Roche, and M. Kountouris, “Small cell networks:
deployment, PHY techniques, and resource management,” Cambridge
University Press, 2013.
[15] Cisco, “Cisco Visual Networking Index: Global Mobile Data Traffic
Forecast Update, 2015-2020,” 2016.
[16] F. Pantisano, M. Bennis, W. Saad, M. Debbah, and M. Latva-aho,
“Interference alignment for cooperative femtocell networks: A gametheoretic approach,” IEEE Trans. Mobile Comput., vol. 12, no. 11, pp.
2233-2246, 2013.
[17] 3GPP TR 36.931 Release 13, “LTE; Evolved Universal Terrestrial Radio
Access (E-UTRA); Radio Frequency (RF) requirements for LTE Pico
Node B,” 2016.
| 7 |
Estimating Weighted Matchings in o(n) Space
Elena Grigorescu∗
Morteza Monemizadeh†
Samson Zhou‡
arXiv:1604.07467v3 [] 5 Sep 2016
September 6, 2016
Abstract
We consider the problem of estimating the weight of a maximum weighted matching of a weighted
graph G(V, E) whose edges are revealed in a streaming fashion. Extending the framework from Crouch
and Stubbs (APPROX 2014), we develop a reduction from the maximum weighted matching problem to
the maximum cardinality matching problem that only doubles the approximation factor of a streaming
algorithm developed for the maximum cardinality matching problem. Our results hold for the insertion-only and the dynamic (i.e., insertion and deletion) edge-arrival streaming models. The previous best-known reduction is due to Bury and Schwiegelshohn (ESA 2015) who develop an algorithm whose
approximation guarantee scales by a polynomial factor.
As an application, we obtain improved estimators for weighted planar graphs and, more generally, for
weighted bounded-arboricity graphs, by feeding into our reduction the recent estimators due to Esfandiari
et al. (SODA 2015) and to Chitnis et al. (SODA 2016). In particular, we obtain a (48+ ǫ)-approximation
estimator for the weight of a maximum weighted matching in planar graphs.
1 Introduction
We study the problem of estimating the weight of a maximum weighted matching in a weighted graph
G(V, E) whose edges arrive in a streaming fashion. Computing a maximum cardinality matching (MCM)
in an unweighted graph and a maximum weighted matching (MWM) of a weighted graph are fundamental
problems in computational graph theory (e.g., [25], [13]).
Recently, the MCM and MWM problems have attracted a lot of attention in modern big data models
such as streaming (e.g., [12, 24, 23, 11, 1, 16, 2, 17, 3]), online (e.g., [5, 21, 6]), MapReduce (e.g., [22]) and
sublinear-time (e.g., [4, 27]) models.
Formally, the Maximum Weighted Matching problem is defined as follows.
Definition 1 (Maximum Weighted Matching (MWM)) Let G(V, E) be an undirected weighted graph with
edge weights w : E → R+ . A matching M in G is a set of pairwise non-adjacent edges; that is, no two
edges share a common
vertex. A matching M is called a maximum weighted matching of graph G if its
weight w(M) = ∑_{e∈M} w(e) is maximum.
∗ Department of Computer Science, Purdue University, West Lafayette, IN. Email: [email protected].
† Rutgers University, Piscataway, NJ 08854, USA. Supported by NSF CCF 1535878, IIS 1447793 and CCF 1161151. Email: [email protected].
‡ Department of Computer Science, Purdue University, West Lafayette, IN. Email: [email protected].
If the graph G is unweighted (i.e., w : E → {1}), the maximum weighted matching problem becomes the
Maximum Cardinality Matching (MCM) problem.
In streaming models, the input graph is massive and the algorithm can only use a small amount of
working space to solve a computational task. In particular, the algorithm cannot store the entire graph
G = (V, E) in memory, but can only operate with a sublinear amount of space, preferably o(n), where
|V| = n. However, many tasks are not solvable in this amount of space, and in order to deal with such
problems, the semi-streaming model [12, 26] was proposed, which allows O(n polylog(n)) amount of
working space. Both these settings have been studied in the adversarial model, where the edge order may be
worst-case, and in the random order model, where the order of the edges is a uniformly random permutation
of the set of edges.
For matching problems, if the goal is to output a set of edges that approximates the optimum matching,
algorithms that maintain only Õ(n) edges cannot achieve better than an e/(e − 1)-approximation ratio ([14],
[19]). Showing upper bounds has drawn a lot of recent interest (e.g., [12], [20], [23], [28], [10]), including a
recent result [15] showing a 3.5-approximation, which improves upon the previous 4-approximation of [9].
If, on the other hand, the goal is to output only an estimate of the size of the matching, and not a
matching itself, algorithms that use only o(n) space are both desirable and possible. Surprisingly, very little
is known about MWM/MCM in this model. Recent work by Kapralov et al. [18] shows the first polylog(n)
approximate estimator using only polylog(n) space for the MCM problem. Further, if Õ(n2/3 ) space is
allowed, then constant factor approximation algorithms are possible [11].
In a recent work, Bury and Schwiegelshohn [7] consider the MWM problem in o(n) space, showing a
reduction to the MCM problem, that scales the approximation factor polynomially. In particular, they are the
first to show a constant factor estimator for weighted graphs with bounded arboricity. Their results hold in
the adversarial insertion-only model (where the updates are only edge insertion), and in the dynamic models
(where the updates are both edge insertion and deletion). They also provide an Ω(n1−ǫ ) space lower bound
to estimate the matching within 1 + O(ǫ). Our results significantly improve the current best-known upper
bounds of [7], as detailed in the next section.
2 Our Contribution
We extend the framework of [9] to show a reduction from MWM to MCM that preserves the approximation
within a factor of 2(1 + ǫ). Specifically, given a λ-approximation estimation for the size of a maximum
cardinality matching, the reduction provides a (2(1 + ǫ) · λ)-approximation estimation of the weight of a
maximum weighted matching. Our algorithm works both in the insertion-only streaming model, and in the
dynamic setting. In both these models the edges appear in adversarial order.
We next state our main theorem. As it is typical for sublinear space algorithms, we assume that the
edge-weights of G = (V, E) are bounded by poly(n).
Theorem 2 Suppose there exists a streaming algorithm (in insertion-only, or dynamic
streaming model) that estimates the size of a maximum cardinality matching of an unweighted graph within a factor of λ, with probability at least (1−δ), using S(n, δ) space.
Then, for every ǫ > 0, there exists a streaming algorithm that estimates the weight of
a maximum weighted matching of a weighted graph within a factor of 2λ(1 + ǫ), with probability at least (1 − δ), using O(S(n, δ/(c log n)) · log n) space.
We remark that if the estimator for MCM is specific to a monotone graph property (a property of graphs
that is closed under edge removal), then our algorithm can use it as a subroutine to obtain an estimator for
MWM in the weighted versions of the graphs with such properties (instead of using a subroutine for general
graphs, which may require more space, or provide worse approximation guarantees).
Our result improves the result of [7], who show a reduction from MWM to MCM that achieves an O(λ4)-approximation estimator for MWM, given a λ-approximation estimator for MCM. Their reduction also
allows extending MCM estimators to MWM estimators in monotone graph properties.
In particular, using specialized estimators for graphs of bounded arboricity, we obtain improved approximation guarantees compared with the previous best results of [7], as explained in Section 2.1, e.g., Table
2.1. In addition, our algorithm is natural and allows for a clean analysis.
2.1 Applications
Theorem 2 has immediate consequences for computing MWM in graphs with bounded arboricity. A graph
G = (V, E) has arboricity ν if
ν = max_{U⊆V} |E(U)| / (|U| − 1),
where E(U) is the subset of edges with both endpoints in U. The class of graphs with bounded arboricity
includes several important families of graphs, such as planar graphs, or more generally, graphs with bounded
degree, genus, or treewidth. Note that these families of graphs are monotone.
Graphs with Bounded Arboricity in the Insert-only Model Esfandiari et al. [11] provide an estimator
for the size of a maximum cardinality matching of an unweighted graph in the insertion-only streaming
model (for completeness we state their result as Theorem 5 in the Preliminaries). Theorem 2, together with
Theorem 5 (due to [11]) implies the following result.
Theorem 3 Let G be a weighted graph with arboricity ν and n = ω(ν2 ) vertices. Let ǫ, δ ∈ (0, 1).
Then, there exists an algorithm that estimates the weight of a MWM in G within a 2λ-approximation factor,
where λ = (5ν + 9)(1 + ǫ), in the insertion-only streaming model, with probability at least (1 − δ), using
Õ(νǫ−2 log(δ−1 )n2/3 )1 space. Both the update time and final processing time are O(log(δ−1 ) log n).
In particular, for planar graphs, ν = 3 and by choosing δ = n−1 in Theorem 3, and ǫ as a small constant,
the output of our algorithm is within (48 + ǫ)-approximation factor of a MWM, with probability at least
1 − 1/n, using Õ(n2/3 ) space. The previous result of [7] gave an approximation factor of > 3 · 10^6 for planar
graphs.
Table 2.1 summarizes the state of the art for MWM.
Graphs with Bounded Arboricity in the Dynamic Model Our results also apply to the dynamic model.
Here we make use of the recent result of Chitnis et al. [8] that provides an estimator for MCM in the dynamic
model (See Theorem 6 in the Preliminaries).
Again, Theorem 6 satisfies the conditions of Theorem 2 with λ = (5ν + 9)(1 + ǫ), and consequently,
we have the following application.
1 Õ(f) = O(f · (log n)^c) for a large enough constant c.
                                              [7]              Here
Approximation for Planar Graphs               > 3 · 10^6       48 + ǫ
Approximation for Graphs with Arboricity ν    12(5ν + 9)^4     2(5ν + 9) + ǫ
Table 2.1: The insertion-only streaming model requires Õ(νǫ−2 log(δ−1 )n2/3 ) space for all graph classes, while the dynamic streaming model requires Õ(νǫ−2 log(δ−1 )n4/5 ) space for all graph classes.
Theorem 4 Let G be a weighted graph with arboricity ν and n = ω(ν2 ) vertices. Let ǫ, δ ∈ (0, 1). Then,
there exists an algorithm that estimates the weight of a maximum weighted matching in G within a 2(5ν +
9)(1+ǫ)-factor in the dynamic streaming model with probability at least (1−δ), using Õ(νǫ−2 log(δ−1 )n4/5 )
space. Both the update time and final processing time are O(log(δ−1 ) log n).
In particular, for planar graphs, ν = 3, and by choosing δ = n−1 and ǫ as a small constant, the output of
our algorithm is a (48 + ǫ)-approximation of the weight of a maximum weighted matching with probability
at least 1 − 1/n using at most Õ(n4/5 ) space.
We further remark that if 2 passes over the stream are allowed, then we may use the recent results of [8] to obtain a (2(5ν + 9)(1 + ǫ))-approximation algorithm for MWM using only Õ(√n) space.
2.2 Overview
We start by splitting the input stream into O(log n) substreams S1 , S2 , · · · , such that substream Si contains
every edge e ∈ E whose weight is at least (1 + ǫ)i , that is, w(e) ≥ (1 + ǫ)i . Splitting the stream into sets of
edges of weight only bounded below was used by Crouch and Stubbs in [9], leading to better approximation
algorithms for MWM in the semi-streaming model.
The construction from [9] explicitly saves maximal matchings in multiple substreams by employing
a greedy strategy for each substream. Once the stream completes, the algorithm from [9] again uses a
greedy strategy, by starting from the substream of highest weight and proceeding downward to streams of
lower weight. In each substream, the algorithm from [9] adds as many edges as possible, while retaining a
matching. However, with o(n) space, we cannot store maximal matchings in memory and so we no longer
have access to an oracle that explicitly returns edges from these matchings.
Instead, for each substream Si , we treat its edges as unweighted edges and apply a MCM estimator. We
then implicitly apply a greedy strategy, where we iteratively add as many edges possible from the remaining
substreams of highest weight, tracking an estimate for both the weight of a maximum weighted matching,
and the number of edges in the corresponding matching. The details of the algorithm appear in Section 4.
In our analysis, we use the simple but critical fact that, at any point, edges in our MWM estimator can
conflict with at most two edges in the MCM estimator, similar to an idea used in [9]. Therefore, if the MCM
estimator for a certain substream is greater than double the number of edges in the associated matching, we
add the remaining edges to our estimator, as shown below in Figure 2.2. Note that in some cases, we may
discard many edges that the algorithm of [9] chooses to output, but without being able to keep a maximal
matching, this may be unavoidable.
More formally, for each i, let U∗i be a maximum cardinality matching for Si . Then each edge of U∗i
intersects with either one, or two edges of U∗j , for all j < i. Thus, if |U∗i−1 | > 2|U∗i |, then at least |U∗i−1 | −
2|U∗i | edges from U∗i−1 can be added to U∗i while remaining a matching. We use a variable Bi to serve as an
estimator for this lower bound on the number of edges in a maximum weighted matching, including edges
from U∗j , for j ≥ i. We then use the estimator for MCM in each substream i as a proxy for U∗i .
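The combination rule sketched above can be written in a few lines of Python; the MCM estimates are passed in as a list (in an actual streaming implementation they would come from the MCM estimator run on each substream), and the returned pair is the weight estimate and the number of edges it implicitly accounts for.

```python
def estimate_mwm(mcm_estimates, eps):
    """Combine per-substream MCM estimates into an MWM weight estimate.

    mcm_estimates[i] estimates the maximum-matching size of the substream
    S_i = {e : w(e) >= (1+eps)^i}, for i = 0..T.  Returns (A_0, B_0): the
    weight estimate and the number of edges it implicitly accounts for.
    """
    T = len(mcm_estimates) - 1
    B = mcm_estimates[T]                 # take all edges of the heaviest substream
    A = B * (1 + eps) ** T
    for i in range(T, 0, -1):            # move from S_i to the lighter S_{i-1}
        surplus = mcm_estimates[i - 1] - 2 * B
        if surplus > 0:                  # each kept edge conflicts with at most 2 new ones
            B += surplus
            A += surplus * (1 + eps) ** (i - 1)
    return A, B

# Toy example with hypothetical estimates for T = 3:
print(estimate_mwm([10, 7, 4, 1], eps=0.5))
```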
Figure 2.2: If |U∗i | > 2|U∗i−1 |, then some edge(s) from U∗i−1 can be added while maintaining a matching.
Our algorithm differs from the algorithm of [7] in several points. They consider substreams Si containing
the edges with weight in [2i , 2i+1 ), and their algorithm estimates the number of edges in each stream, and
chooses to include the edges if both the number of the edges and their combined weight exceed certain
thresholds, deemed to contribute a significant value to the estimate. However, this approach may not capture
a small number of edges which nonetheless contribute a significant weight.
Our greedy approach is able to handle both these facets of a MWM problem. Namely, by greedily
taking as many edges as possible from the heavier substreams, and then accounting for edges that may be
conflicting with these in the next smaller substream, we are able to account for most of the weight. The
formal analysis appears in Section 5.
3 Preliminaries
Let S be a stream of insertions of edges of an underlying undirected weighted graph G(V, E) with weights
w : E → R. We assume that vertex set V is fixed and given, and the size of V is |V| = n. Observe that the
size of stream S is |S| ≤ (n choose 2) = n(n − 1)/2 ≤ n², so that we may assume that O(log |S|) = O(log n). Without loss of generality we assume that at time i of stream S, edge ei arrives (or is revealed). Let Ei denote those edges which are inserted (revealed) up to time i, i.e., Ei = {e1 , e2 , e3 , · · · , ei }. Observe that at every time i ∈ [|S|] we have |Ei | ≤ (n choose 2) ≤ n², where [x] = {1, 2, 3, · · · , x} for some natural number x. We assume that
at the end of stream S all edges of graph G(V, E) arrived, that is, E = E|S| .
We assume that there is a unique numbering for the vertices in V so that we can treat v ∈ V as a unique
number v for 1 ≤ v ≤ n = |V|. We
denote an undirected edge in E with two endpoints u, v ∈ V by (u, v).
n
The graph G can have at most 2 = n(n
− 1)/2 edges. Thus, each edge can also be thought of as referring
to a unique number between 1 and n2 .
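As a concrete illustration of this numbering (the helper function and its name are ours, not part of the paper), the following sketch maps an undirected edge (u, v) with 1 ≤ u < v ≤ n to a unique index in {1, ..., n(n−1)/2}.

    def edge_index(u: int, v: int, n: int) -> int:
        """Map an undirected edge (u, v), 1 <= u < v <= n, to a unique
        index in {1, ..., n(n-1)/2} (illustrative helper only)."""
        if u > v:
            u, v = v, u
        # Edges whose smaller endpoint is < u come first; then offset by (v - u).
        return (u - 1) * n - u * (u - 1) // 2 + (v - u)

    # Example: with n = 4 the six edges are numbered 1..6,
    # e.g. edge_index(1, 2, 4) == 1 and edge_index(3, 4, 4) == 6.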
The next theorems imply our results for graphs with bounded arboricity in the insert-only and dynamic
models.
Theorem 5 [11] Let G be an unweighted graph with arboricity ν and n = ω(ν²) vertices. Let ε, δ ∈ (0, 1) be two arbitrary positive values less than one. There exists an algorithm that estimates the size of a maximum matching in G within a (5ν + 9)(1 + ε)-factor in the insertion-only streaming model with probability at least (1 − δ), using Õ(νε^{-2} log(δ^{-1}) n^{2/3}) space. Both the update time and final processing time are O(log(δ^{-1})). In particular, for planar graphs, we can (24 + ε)-approximate the size of a maximum matching with probability at least 1 − δ using Õ(n^{2/3}) space.
Theorem 6 [8] Let G be an unweighted graph with arboricity ν and n = ω(ν²) vertices. Let ε, δ ∈ (0, 1) be two arbitrary positive values less than one. There exists an algorithm that estimates the size of a maximum matching in G within a (5ν + 9)(1 + ε)-factor in the dynamic streaming model with probability at least (1 − δ), using Õ(νε^{-2} log(δ^{-1}) n^{4/5}) space. Both the update time and final processing time are O(log(δ^{-1})). In particular, for planar graphs, we can (24 + ε)-approximate the size of a maximum matching with probability at least 1 − δ using Õ(n^{4/5}) space.
4 Algorithm
For a weighted graph G(V, E) with weights w : E → R such that the minimum weight of an edge is at least 1 and the maximum weight W of an edge is polynomially bounded in n, i.e., W = n^c for some constant c, for T = ⌈log_{1+ε} W⌉ we create T + 1 substreams such that substream S_i = {e ∈ S : w(e) ≥ (1 + ε)^i}.
Given access to a streaming algorithm MCM Estimator which estimates the size of a maximum cardinality matching of an unweighted graph G within a factor of λ with probability at least (1 − δ), we use
MCM Estimator as a black box algorithm on each Si and record the estimates. In general, for a substream
Si , we track an estimate Ai , of the weight of a maximum weighted matching of the subgraph whose edges
are in the substream Si , along with an estimate, Bi , which represents the number of edges in our estimate
Ai . The estimator Bi also serves as a running lower bound estimator for the number of edges in a maximum
matching. We greedily add edges to our estimation of the weight of a maximum weighted matching of graph G. Therefore, if the estimator M̂_{i−1} for the maximum cardinality matching of the substream S_{i−1} is more than double B_i, the number of edges represented by our estimate A_i of the substream S_i, we let B_{i−1} be B_i plus the difference M̂_{i−1} − 2B_i, and let A_{i−1} be A_i plus (M̂_{i−1} − 2B_i) · (1 + ε)^{i−1}. We iterate through the substream estimators, starting from the substream S_T of largest weight, and proceeding downward to substreams of lower weight. We initialize our greedy approach by setting B_T = M̂_T, equivalent to taking all edges in M̂_T.
Algorithm 1 Estimating Weighted Matching in Data Streams

Input: A stream S of edges of an underlying graph G(V, E) with weights w : E → R+ such that the maximum weight W of an edge is polynomially bounded in n, i.e., W = n^c for some constant c.
Output: An estimator Â of w(M*), the weight of a maximum weighted matching M*, in G.

 1: Let A_i be a running estimate for the weight of a maximum weighted matching.
 2: Let B_i be a running lower bound estimate for the number of edges in a maximum weighted matching.
 3: Initialize A_{T+1} = 0, B_{T+1} = 0, and M̂_{T+1} = 0.
 4: for i = T to i = 0 do
 5:     Let S_i = {e ∈ S : w(e) ≥ (1 + ε)^i} be the substream of S of edges whose weights are at least (1 + ε)^i.
 6:     Let S'_i be the unweighted version of the edges in S_i.
 7:     Let Ŝ'_i be the output of MCM Estimator on S'_i with parameter δ' = δ/T.
 8:     Let M̂_i = max(M̂_{i+1}, Ŝ'_i).
 9:     Set ∆_i = max(0, ⌈M̂_i − 2B_{i+1}⌉).
10:     Update B_i = B_{i+1} + ∆_i.
11:     Update A_i = A_{i+1} + (1 + ε)^i ∆_i.
12: Output estimate Â = A_0.
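To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch. It assumes a black-box mcm_estimator factory with insert and estimate methods (our naming, not the paper's), and it materializes the stream only for readability; a faithful implementation processes edges in a single pass.

    import math

    def estimate_weighted_matching(stream, eps, mcm_estimator):
        """Sketch of Algorithm 1 (greedy layered estimator)."""
        edges = list(stream)                      # for exposition only; a real
        W = max(w for _, _, w in edges)           # implementation is one-pass
        T = math.ceil(math.log(W, 1 + eps)) if W > 1 else 0

        # One unweighted MCM estimator per substream S_i = {e : w(e) >= (1+eps)^i}.
        estimators = [mcm_estimator() for _ in range(T + 1)]
        for u, v, w in edges:
            for i in range(T + 1):
                if w >= (1 + eps) ** i:
                    estimators[i].insert(u, v)

        A, B, M_prev = 0.0, 0.0, 0.0              # A_{T+1} = B_{T+1} = M_{T+1} = 0
        for i in range(T, -1, -1):                # from the heaviest substream down
            M_hat = max(M_prev, estimators[i].estimate())
            delta = max(0, math.ceil(M_hat - 2 * B))
            B += delta
            A += ((1 + eps) ** i) * delta
            M_prev = M_hat
        return A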
We note that the quantities Ai and Bi satisfy the following properties, which will be useful in the analysis.
Observation 7  A_j = Σ_{i=j}^{T} (1 + ε)^i ∆_i.

Observation 8  B_j = Σ_{i=j}^{T} ∆_i.
5 Analysis
Lemma 9 For all i, B_i ≤ M̂_i ≤ 2B_i.

Proof: We prove the statement by induction on i, starting from i = T down to i = 0. For the base case i = T, we initialize B_{i+1} = 0. In particular, ∆_i = M̂_i, so B_i = B_{i+1} + ∆_i = M̂_i, and the desired inequality follows.
Now, we suppose the claim is true for i + 1, that is, B_{i+1} ≤ M̂_{i+1} ≤ 2B_{i+1}. Next, we prove B_i ≤ M̂_i ≤ 2B_i. To prove the claim for i we consider two cases. The first case is when 2B_{i+1} < M̂_i. Then

    B_i = B_{i+1} + ∆_i                         (by definition)
        = B_{i+1} + M̂_i − 2B_{i+1}              (∆_i = M̂_i − 2B_{i+1})
        = M̂_i − B_{i+1}
        ≤ M̂_i.

Additionally,

    M̂_i < M̂_i + (M̂_i − 2B_{i+1})               (2B_{i+1} < M̂_i)
        = 2(B_{i+1} + (M̂_i − 2B_{i+1}))         (∆_i = M̂_i − 2B_{i+1})
        = 2(B_{i+1} + ∆_i)
        = 2B_i                                  (by definition)

and so B_i ≤ M̂_i ≤ 2B_i.
The second case is when M̂_i ≤ 2B_{i+1}. Then, by definition, B_i = B_{i+1}. Since S'_{i+1} is a subset of S'_i, then

    B_i = B_{i+1} ≤ M̂_{i+1}                     (inductive hypothesis)
                  ≤ M̂_i                         (M̂_i = max(M̂_{i+1}, Ŝ'_i))
                  ≤ 2B_{i+1} = 2B_i              (M̂_i ≤ 2B_{i+1})

and again B_i ≤ M̂_i ≤ 2B_i, which completes the proof. ✷
Corollary 10 Suppose that for all i, the estimator M̂_i satisfies M̂_i ≤ |U*_i| ≤ λM̂_i, where |U*_i| is the size of a maximum cardinality matching of S'_i. Then B_i ≤ |U*_i| ≤ 2λB_i.

Proof: By Lemma 9, M̂_i ≤ 2B_i, so λM̂_i ≤ 2λB_i. Similarly, by Lemma 9, B_i ≤ M̂_i. But by assumption, M̂_i ≤ |U*_i| ≤ λM̂_i, and so

    B_i ≤ M̂_i ≤ |U*_i| ≤ λM̂_i ≤ 2λB_i.    ✷
Lemma 11 Suppose that for all i, the estimator M̂_i satisfies M̂_i ≤ |U*_i| ≤ λM̂_i, where |U*_i| is the size of a maximum cardinality matching of S'_i. Then for all j,

    Σ_{i=j}^{T} ∆_i  ≤  Σ_{i=j}^{T} |M* ∩ (S_i − S_{i+1})|  ≤  Σ_{i=j}^{T} 2λ∆_i,

where M* is a maximum weighted matching.

Proof: Since M* is a matching, the number of edges in M* with weight at least (1 + ε)^j is at most |U*_j|. Thus,

    Σ_{i=j}^{T} |M* ∩ (S_i − S_{i+1})| ≤ |U*_j|.

Note that by Observation 8, Σ_{i=j}^{T} ∆_i = B_j, so then by Corollary 10,

    Σ_{i=j}^{T} |M* ∩ (S_i − S_{i+1})| ≤ 2λB_j = 2λ Σ_{i=j}^{T} ∆_i.

On the other hand, B_j is a running estimate of a lower bound on the number of edges in M* ∩ S_j, so

    Σ_{i=j}^{T} ∆_i = B_j ≤ Σ_{i=j}^{T} |M* ∩ (S_i − S_{i+1})|,

as desired. ✷
Lemma 12 With probability at least 1 − δ, the estimator M̂_i satisfies M̂_i ≤ |U*_i| ≤ λM̂_i for all i, where U*_i is a maximum cardinality matching of S'_i.

Proof: Since M̂_i ≤ |U*_i| ≤ λM̂_i holds with probability at least 1 − δ/T for each i, the probability that it holds for i = 1, 2, . . . , T simultaneously is at least 1 − δ by a union bound. ✷
We now prove our main theorem.
Proof of Theorem 2: We complete the proof of Theorem 2 by considering the edges in a maximum
weighted matching M∗ . We partition these edges by weight and bound the number of edges in each partition.
We will show that A0 ≤ w(M∗ ) ≤ 2λ(1 + ǫ)A0 . First, we have
    w(M*) = Σ_{e ∈ M*} w(e)
          = Σ_{i=0}^{T} Σ_{e ∈ M* ∩ (S_i − S_{i+1})} w(e)                        (2)
          ≤ Σ_{i=0}^{T} Σ_{e ∈ M* ∩ (S_i − S_{i+1})} (1 + ε)^{i+1}               (3)
          ≤ Σ_{i=0}^{T} |M* ∩ (S_i − S_{i+1})| (1 + ε)^{i+1}                     (4)
          ≤ Σ_{i=0}^{T} 2λ∆_i (1 + ε)^{i+1}                                      (5)
          ≤ 2λ(1 + ε) Σ_{i=0}^{T} ∆_i (1 + ε)^i = 2λ(1 + ε)A_0,                  (6)

where the identity in line (2) results from partitioning the edges by weight, so that e ∈ M* appears in S_i − S_{i+1} if (1 + ε)^i ≤ w(e) < (1 + ε)^{i+1}. The inequality in line (3) results from each edge e in S_i − S_{i+1} having weight less than (1 + ε)^{i+1}, so an upper bound on the sum of the weights of edges in M* ∩ (S_i − S_{i+1}) is (1 + ε)^{i+1} times the number of edges |M* ∩ (S_i − S_{i+1})|, as shown in line (4). By Lemma 11, the partial sums of 2λ∆_i dominate the partial sums of |M* ∩ (S_i − S_{i+1})|, resulting in the inequality in line (5). The final identity in line (6) results from Observation 7. Similarly,
    w(M*) = Σ_{e ∈ M*} w(e)
          = Σ_{i=0}^{T} Σ_{e ∈ M* ∩ (S_i − S_{i+1})} w(e)                        (2)
          ≥ Σ_{i=0}^{T} Σ_{e ∈ M* ∩ (S_i − S_{i+1})} (1 + ε)^i                   (3)
          ≥ Σ_{i=0}^{T} |M* ∩ (S_i − S_{i+1})| (1 + ε)^i                         (4)
          ≥ Σ_{i=0}^{T} ∆_i (1 + ε)^i                                            (5)
          = A_0,                                                                 (6)

where the identity in line (2) again results from partitioning the edges by weight, so that e ∈ M* appears in S_i − S_{i+1} if (1 + ε)^i ≤ w(e) < (1 + ε)^{i+1}. The inequality in line (3) results from each edge e in S_i − S_{i+1} having weight at least (1 + ε)^i, so a lower bound on the sum of the weights of edges in M* ∩ (S_i − S_{i+1}) is (1 + ε)^i times the number of edges |M* ∩ (S_i − S_{i+1})|, as shown in line (4). By Lemma 11, the partial sums of |M* ∩ (S_i − S_{i+1})| dominate the partial sums of ∆_i, resulting in the inequality in line (5). The final identity in line (6) results from Observation 7.
Thus, Â = A_0 is a 2λ(1 + ε)-approximation for w(M*).
Note that the assumption of Lemma 11 holds with probability at least 1 − δ by Lemma 12. Since we require M̂_i ≤ |U*_i| ≤ λM̂_i with probability at least 1 − δ/T, S(n, δ/T) space is required for each estimator. Since T = ⌈log_{1+ε} W⌉ substreams are used and W ≤ n^c for some constant c, the overall space necessary is S(n, δ/(c log n)) · (c log n). This completes the proof. ✷
Acknowledgements
We would like to thank anonymous reviewers for their helpful comments regarding the presentation of the
paper.
References
[1] Kook Jin Ahn, Sudipto Guha, and Andrew McGregor. Analyzing graph structure via linear measurements. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms,
SODA, pages 459–467, 2012.
[2] Abhash Anand, Surender Baswana, Manoj Gupta, and Sandeep Sen. Maintaining approximate maximum weighted matching in fully dynamic graphs. In IARCS Annual Conference on Foundations of
Software Technology and Theoretical Computer Science, FSTTCS, pages 257–266, 2012.
[3] Sepehr Assadi, Sanjeev Khanna, Yang Li, and Grigory Yaroslavtsev. Maximum matchings in dynamic
graph streams and the simultaneous communication model. In Proceedings of the Twenty-Seventh
Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1345–1364, 2016.
[4] S. Baswana, M. Gupta, and S. Sen. Fully dynamic maximal matching in O(log n) update time. In
Proceedings of the 52nd IEEE Symposium on Foundations of Computer Science (FOCS), pages 383–
392, 2011.
[5] Benjamin E. Birnbaum and Claire Mathieu. On-line bipartite matching made simple. SIGACT News,
39(1):80–87, 2008.
[6] Bartlomiej Bosek, Dariusz Leniowski, Piotr Sankowski, and Anna Zych. Shortest augmenting paths
for online matchings on trees. In Approximation and Online Algorithms - 13th International Workshop,
WAOA. Revised Selected Papers, pages 59–71, 2015.
[7] Marc Bury and Chris Schwiegelshohn. Sublinear estimation of weighted matchings in dynamic data
streams. In Algorithms - ESA - 23rd Annual European Symposium, Proceedings, pages 263–274, 2015.
[8] Rajesh Chitnis, Graham Cormode, Hossein Esfandiari, MohammadTaghi Hajiaghayi, Andrew McGregor, Morteza Monemizadeh, and Sofya Vorotnikova. Kernelization via sampling with applications
to finding matchings and related problems in dynamic graph streams. In Proceedings of the TwentySeventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1326–1344, 2016.
[9] M. Crouch and D. S. Stubbs. Improved streaming algorithms for weighted matching, via unweighted
matching. In Proceedings of the 17th International Workshop on Randomization and Approximation
Techniques in Computer Science (RANDOM), pages 96–104, 2014.
[10] L. Epstein, A. Levin, J. Mestre, and D. Segev. Improved approximation guarantees for weighted
matching in the semi-streaming model. SIAM J. Discrete Math, 25(3):1251–1265, 2011.
[11] H. Esfandiari, M. T. Hajiaghayi, V. Liaghat, M. Monemizadeh, and K. Onak. Streaming algorithms
for estimating the matching size in planar graphs and beyond. In SODA, pages 1217–1233, 2015.
[12] J. Feigenbaum, S. Kannan, A. McGregor, S. Suri, and J. Zhang. On graph problems in a semi-streaming
model. Theoretical Computer Science, 348(2-3):207–216, 2005.
[13] H. N. Gabow. Data structures for weighted matching and nearest common ancestors with linking. In
Proceedings of the 1st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 434–
443, 1990.
[14] Ashish Goel, Michael Kapralov, and Sanjeev Khanna. On the communication and streaming complexity of maximum bipartite matching. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 468–485, 2012.
[15] Elena Grigorescu, Morteza Monemizadeh, and Samson Zhou. Streaming weighted matchings: Optimal meets greedy. CoRR, abs/1608.01487, 2016.
[16] S. Guha and A. McGregor. Graph synopses, sketches, and streams: A survey. PVLDB, 5(12):2030–
2031, 2012.
[17] Manoj Gupta and Richard Peng. Fully dynamic (1+ε)-approximate matchings. In 54th Annual IEEE
Symposium on Foundations of Computer Science, FOCS, pages 548–557, 2013.
[18] M. Kapralov, S. Khanna, and M. Sudan. Approximating matching size from random streams. In
Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages
734–751, 2014.
[19] Michael Kapralov. Better bounds for matchings in the streaming model. In Proceedings of the TwentyFourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1679–1697, 2013.
[20] C. Konrad, F. Magniez, and C. Mathieu. Maximum matching in semi-streaming with few passes.
In Proceedings of the 15th International Workshop on Approximation Algorithms for Combinatorial
Optimization Problems (APPROX), pages 231–242, 2012.
[21] Nitish Korula, Vahab S. Mirrokni, and Morteza Zadimoghaddam. Bicriteria online matching: Maximizing weight and cardinality. In Web and Internet Economics - 9th International Conference, WINE,
Proceedings, pages 305–318, 2013.
[22] Silvio Lattanzi, Benjamin Moseley, Siddharth Suri, and Sergei Vassilvitskii. Filtering: a method for
solving graph problems in mapreduce. In SPAA: Proceedings of the 23rd Annual ACM Symposium on
Parallelism in Algorithms and Architectures, pages 85–94, 2011.
[23] A. McGregor. Finding graph matchings in data streams. In Proceedings of the 8th International
Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), pages
170–181, 2005.
[24] Andrew McGregor. Graph mining on streams. In Encyclopedia of Database Systems, pages 1271–
1275. Springer, 2009.
[25] S. Micali and V. V. Vazirani. An O(√(|V| · |E|)) algorithm for finding maximum matching in general
graphs. Proceedings of the 21st IEEE Symposium on Foundations of Computer Science (FOCS), pages
17–27, 1980.
[26] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical
Computer Science, 1(2), 2005.
[27] O. Neiman and S. Solomon. Simple deterministic algorithms for fully dynamic maximal matching.
Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), 2013.
[28] M. Zelke. Weighted matching in the semi-streaming model. Algorithmica, 62(1):1–12, 2012.
Sushi Dish : Object detection and classification from real images
Yeongjin Oh1
Seoul National University
Seunghyun Son1
Seoul National University
Gyumin Sim1
Seoul National University
[email protected]
[email protected]
[email protected]
Abstract
In conveyor belt sushi restaurants, billing is a burdensome job because one has to manually count the number of dishes and identify their colors to calculate the price. In a busy situation, mistakes can be made and customers may be overcharged or undercharged. To deal with this problem, we developed a method that automatically identifies the color of dishes and calculates the total price from real images. Our method consists of ellipse fitting and a convolutional neural network. It achieves ellipse detection precision of 85%, recall of 96%, and classification accuracy of 92%.
1. Introduction
The conveyor belt sushi restaurant is a popular format of sushi restaurant. People take sushi freely from the conveyor belt and pay for what they ate when they leave. A problem occurs when the total price that a customer has to pay is calculated. Normally the color of a dish indicates the price of the sushi, and an employee in the restaurant has to count the number of dishes of each color to charge the customer. This process is time-consuming and stressful, and it can give customers an impression of unreliability: it is a manual task, and mistakes involving money can occur. In this paper, we propose a system that can automatically detect the number and colors of dishes from real images taken by cell phone cameras.
Our proposed system consists of two major parts: a detector and a classifier. We find dish objects in the real image by ellipse detection, because dishes in the image can be represented as ellipses if perspective distortion is ignored. After the detection, we apply a convolutional neural network based classifier to the dish objects to determine the colors of the dishes, which correspond to the price of the sushi. Our ellipse detector is capable of detecting dishes and approximating missing dishes, and the classifier is robust to different illumination conditions, shadows, stains such as sushi sauce on the dish, and even errors from the ellipse detector.
To get the data set, we collected data from the restaurant Niwa and took more than 200 pictures of piled dishes with various combinations of different colors, illumination conditions, and perspectives. Those images were used for tuning the detector and training the classifier. After implementing the system, we tested it with 15 images.

2. Detection

Our dish detector consists of three stages: edge processing, ellipse detection, and reconstruction. In edge processing, we extract smooth curvatures that are likely to be portions of ellipses. From the smooth curvatures, we fit candidate ellipses and filter out outliers. From the detected ellipses, we reconstruct missing dishes by predicting their parameters. The details of each stage are explained in the following subsections.
2.1 Edge Processing
After extracting an edge map from the real image, our goal is to process the edge map so that it can be used directly for ellipse detection. Using the information of connected edge pixels, or edge contours, improves the chance of generating strong cues for ellipses; for this, Prasad [1] defines the 'smooth curvature', a portion of an edge that may be part of an ellipse. We follow 4 steps to extract smooth curvatures from the edge map.
2.1.1 Edge detection
For the edge detection, we pre-process the input images. The images are converted to gray scale and resized so that the longer side is no larger than 800 pixels, because overly large images may produce too many edges, even in textured areas. Then, we apply histogram equalization to the resized images in order to detect edges correctly even if the pictures were taken in dim illumination conditions. After this, the Canny edge detector [2] is applied to the pre-processed image with the parameters: low hysteresis threshold T_L = 0.2, high hysteresis threshold T_H = 0.2, and standard deviation for the Gaussian filter σ = 1.
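A minimal preprocessing sketch of the steps described above, using scikit-image; the specific library calls and the interpretation of the hysteresis thresholds as values passed to skimage.feature.canny are our assumptions, not the authors' implementation.

    import numpy as np
    from skimage import color, exposure, feature, transform

    def edge_map(image_rgb: np.ndarray) -> np.ndarray:
        """Gray-scale conversion, resizing, histogram equalization and Canny,
        following the parameters quoted in the text (sigma = 1, thresholds 0.2)."""
        gray = color.rgb2gray(image_rgb)

        # Resize so that the longer side is at most 800 pixels.
        scale = 800.0 / max(gray.shape)
        if scale < 1.0:
            gray = transform.rescale(gray, scale, anti_aliasing=True)

        # Histogram equalization to compensate for dim illumination.
        gray = exposure.equalize_hist(gray)

        # Canny edge detector; the thresholds are passed as absolute values here,
        # which may differ from the authors' convention.
        return feature.canny(gray, sigma=1.0, low_threshold=0.2, high_threshold=0.2)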
(a) Input edge map without branch
(b) The image with edge contours
(c) The image with smooth curvatures
Figure 1. The result of edge map processing, edge contour extraction, and smooth curvature division after line segment fitting
2.1.2 Edge contour extraction
To use the information of the connected edge
contours, we have to extract continuous, nonbranching edge contours. Thus, we apply ‘thin’
morphological operator to the edge map and remove
all junctions to make sure of getting branchless edges.
We also remove all isolated pixels so that we can
assume two types of edge: an edge with distinct start
and end point, a loop.
For an edge with distinct end points, we can easily
find one of end points by following connected points.
We mark all trace of points from the start point and
follow connected point until getting stuck at the other
end point. This work gives us an edge contour which
consists of continuous points. After extracting all
edges with distinct end points, there only remain
loops. By selecting an arbitrary point as an end point
among unmarked points in the edge map, a loop can
be also extracted as an edge contour with two end
points using the same work. Figure 1 shows the result
image with edge contours.
2.1.3 Line segment fitting

Each edge consists of many continuous points, which makes the remaining processes take much more time. Since only a few points (more than five) are enough to represent an ellipse, we reduce the number of points that represent an edge contour. By representing the edge contours using piece-wise linear segments, we can do this without loss of accuracy for ellipse detection. We use the Ramer–Douglas–Peucker algorithm [3] to approximate a curve by a set of line segments.
Let us consider an edge contour e = {P_1, P_2, ..., P_n}, where e is an edge contour with end points P_1 and P_n. The line passing through the pair of end points P_1(x_1, y_1) and P_n(x_n, y_n) is given by:

    x(y_1 − y_n) + y(x_n − x_1) + y_n x_1 − y_1 x_n = 0        (1)

Then the deviation d_i of a point P_i(x_i, y_i) from the line passing through the pair is given as:

    d_i = |x_i(y_1 − y_n) + y_i(x_n − x_1) + y_n x_1 − y_1 x_n|        (2)

Using equation (2), we find the maximal deviation point P_MAX and split the edge at that point. We repeat these steps until the maximum deviation becomes small enough. Choosing 2 pixels as the deviation threshold gives us an appropriate number of points representing line segments. Figure 2 shows the result of line segment fitting.
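The splitting procedure above is essentially the Ramer–Douglas–Peucker recursion; a compact Python sketch using the deviation of equation (2) is given below (a simplified illustration, not the authors' code; note that the deviation is normalized by the length of the chord so that the 2-pixel threshold is a distance).

    def rdp(points, threshold=2.0):
        """Approximate an edge contour (list of (x, y) points) by line segments,
        splitting at the maximal-deviation point until every deviation is
        below `threshold` (2 pixels in the text)."""
        if len(points) <= 2:
            return list(points)
        (x1, y1), (xn, yn) = points[0], points[-1]

        def dev(p):
            # Deviation of P_i from the line through P_1 and P_n, as in eq. (2).
            x, y = p
            return abs(x * (y1 - yn) + y * (xn - x1) + yn * x1 - y1 * xn)

        norm = ((xn - x1) ** 2 + (yn - y1) ** 2) ** 0.5 or 1.0
        i_max = max(range(1, len(points) - 1), key=lambda i: dev(points[i]))
        if dev(points[i_max]) / norm <= threshold:
            return [points[0], points[-1]]
        left = rdp(points[: i_max + 1], threshold)
        right = rdp(points[i_max:], threshold)
        return left[:-1] + right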
2.1.4 Smooth curvature division
Since the curvature of any elliptic shape changes
continuously and smoothly, we intend to obtain edges
with smooth curvature. The term smooth curvature is
defined as a portion of an edge which does not have a
sudden change in curvature, either in terms of amount
of change or the direction of change [1].
By splitting edge contours at sharp-turn points and inflexion points, we obtain smooth curvatures. We choose 90° as the sharp-turn threshold for the amount of change. An inflexion point is defined as a point at which the direction changes.
(a) Edge contours with continuous points
(b) Edge contours with line segments
Figure 2. Edge contours represented by continuous points, and line segments. Red circles show how many points are on an edge.
2.2 Ellipse Detection
We assume that the input images have little
perspective distortion so that the border of dishes can
be counted as ellipses. An ellipse has five parameters:
the coordinate of center (𝑝, 𝑞), the major radius 𝐴,
the minor radius B, and the orientation α; it can be
represented by the equation (3):
    [x]   [p]   [cos(α)  −sin(α)] [A·cos(θ)]
    [y] = [q] + [sin(α)   cos(α)] [B·sin(θ)]        (3)

Figure 4. The result of RANSAC and double line elimination.
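For reference, equation (3) can be sampled directly to generate candidate ellipse points from a parameter vector (p, q, A, B, α); the small NumPy sketch below is illustrative only and is not part of the authors' implementation.

    import numpy as np

    def ellipse_points(p, q, A, B, alpha, num=100):
        """Sample points on the ellipse of equation (3) with center (p, q),
        radii A and B, and orientation alpha (radians)."""
        theta = np.linspace(0.0, 2.0 * np.pi, num)
        x = p + A * np.cos(theta) * np.cos(alpha) - B * np.sin(theta) * np.sin(alpha)
        y = q + A * np.cos(theta) * np.sin(alpha) + B * np.sin(theta) * np.cos(alpha)
        return np.stack([x, y], axis=1)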
2.3 Reconstruction
Some dishes have many short fitting segments instead of a long one, which gives so little evidence that those dishes are not detected. To reconstruct the missing dishes, we predict an ellipse for each dish and look for evidence of an optimal fitting ellipse by calculating the error between the ellipse and the fitting segments. For this to work, the assumption that dishes are stacked in a single tower is necessary.
We implement the ellipse fitting using the code of Brown [4]. It fits ellipses by least squares to the result segments of the edge processing, i.e., smooth curvatures with more than 5 points.
However, the result of ellipse fitting has many
false positives, which are not part of dishes but
detected as ellipses. As shown in the Figure 3, there
are many obstacles on the table and they are also
detected as ellipses.
2.3.1 Prediction
To predict missing dishes, we use ellipse
parameter matrix which consists of parameters of all
ellipses:
    E = [ p_1  q_1  A_1  B_1  α_1
           ⋮    ⋮    ⋮    ⋮    ⋮
          p_n  q_n  A_n  B_n  α_n ]        (4)
Using parameters [𝑝𝑖 𝑞𝑖 𝐴𝑖 𝐵𝑖 𝛼𝑖 ] representing an
ellipse 𝐸𝑖 , we find the bottom-most point 𝑦𝑖 according
to 𝑦 -coordinate on the ellipse 𝐸𝑖 . Under the
assumption that 𝑦𝑖 ′𝑠 are close to the linear sequence
with small distortion of perspective, we can find
index of missing dish by computing the gap between
each pair of 𝑦𝑖 and 𝑦𝑖+1.
After finding the index of a missing dish, we predict an ellipse using the tendency of each parameter for the bottom-most and the intermediate dishes. Since the top-most dish has a strong edge, it is rarely missed. Then, we find fitting segments around each prediction. Figure 5 shows prediction ellipses and fitting segments.
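The gap test described above can be sketched as follows; the median-gap heuristic, the tolerance value, and the variable names are our assumptions for illustration.

    import numpy as np

    def missing_dish_indices(bottom_ys, tol=1.5):
        """Given the bottom-most y-coordinates of the detected ellipses, flag
        positions where the gap between consecutive dishes is unusually large,
        suggesting a missed dish between them."""
        ys = np.sort(np.asarray(bottom_ys))
        gaps = np.diff(ys)
        typical = np.median(gaps)
        # A gap much larger than the typical inter-dish spacing hints at a
        # missing detection between dishes i and i + 1.
        return [i for i, g in enumerate(gaps) if g > tol * typical]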
Figure 3. The result of ellipse fitting applied to smooth
curvatures in Figure 1-(c). It has many false positives.
In order to filter them out, we make an assumption
that input images show just one tower of stacked
dishes and they are stacked vertically. Then, the
correct ellipses for dishes have similar parameters of
𝑝. Also, they should have similar parameter values
for 𝐴 and α because all the dishes have the same size
and the same orientation. We adopt the idea of
RANSAC to filter out the outliers for each of
parameters 𝑝, 𝐴, and α. As a result, we get a clear
tower of stacked ellipses.
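A simplified stand-in for the RANSAC-style outlier removal: keep only the ellipses whose center x-coordinate p, major radius A, and orientation α lie close to consensus values. The use of medians and the tolerance values are illustrative assumptions, not the authors' exact procedure.

    import numpy as np

    def filter_dish_ellipses(ellipses, tol_p=20.0, tol_A=15.0, tol_alpha=0.2):
        """`ellipses` is an array of rows (p, q, A, B, alpha).  Returns the rows
        consistent with a single vertical stack of identical dishes."""
        E = np.asarray(ellipses, dtype=float)
        p_ref, A_ref, a_ref = np.median(E[:, 0]), np.median(E[:, 2]), np.median(E[:, 4])
        keep = (
            (np.abs(E[:, 0] - p_ref) < tol_p)
            & (np.abs(E[:, 2] - A_ref) < tol_A)
            & (np.abs(E[:, 4] - a_ref) < tol_alpha)
        )
        return E[keep]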
Another problem is that the border of the dishes has a slightly thick white area, so fitted ellipses are doubly detected for most of the dishes. We set a proper threshold as the minimal gap between the y-coordinates of the bottom-most points of a pair of ellipses, and use it to identify whether the ellipses are doubly detected or not. After sorting the ellipses in descending order of the y-coordinate of the bottom-most point, if two consecutive ellipses are closer to each other than the threshold, one of them is thrown out; we choose the lower one. The result of the elimination of false positives is shown in Figure 4.
Figure 5. Prediction ellipses and fitting segments. White
ellipses represent prediction using the ellipse parameter
matrix of red ellipses.
2.3.2 Reconstruction
Before reconstructing a missing dish, we find an optimal ellipse fitting the dish and check whether it has strong evidence, in order to avoid introducing false positives. Given a prediction ellipse and fitting segments, we find an optimal ellipse with the smallest error from the fitting segments by varying the prediction parameters p_i, q_i, A_i, B_i. Then, to reconstruct the optimal ellipse, we check the following conditions:
1. The fitting segments cover more than 10% of the perimeter of the optimal ellipse.
2. The error between the fitting segments and the optimal ellipse is small enough; we choose 0.1 as the threshold value.
3. All points in the fitting segments have an error smaller than 0.2.
Figure 7. The precision and recall for the tuning data set
with/without reconstruction.
3. Classification
For dish color classification, we use a multi-layer convolutional neural network [5], [6], a variant of neural networks [7] that works well for image classification tasks. There are other methods for color detection: Baek et al. [8] proposed using a 2D histogram in HSV color space with an SVM, and Rachmadi et al. [9] proposed a deep learning approach for vehicle color classification and showed that CNNs work well on color classification. We use an end-to-end network that automatically finds useful features. This approach can deal with real images that suffer from different illumination conditions, shadows, polluted dish images, and even errors from the ellipse detector.
The classification pipeline consists of 4 parts: input definition, data augmentation, classifier architecture, and evaluation.
If the optimal ellipse meets all three conditions, we regard it as having strong evidence and reconstruct it. The result of reconstruction is shown in Figure 6.
Figure 6. The result of reconstruction. A blue ellipse
represents the optimal ellipse with strong evidence.
2.4 Evaluation
In the tuning data set, only 88 images meet our assumption, and they contain 461 dishes. Before the reconstruction stage, 361 dishes are correctly detected, with 6 false positives, i.e., detections that are not actual dishes. After the reconstruction, 391 dishes are correctly detected, with 14 false positives. The precision and recall are summarized in Figure 7. Although the reconstruction slightly increases the false positive ratio by 1.82%p, it significantly increases the precision by 6.51%p. For the testing data set, with the reconstruction, the precision was 84.52% and the recall was 95.95%, which is similar to the result for the tuning data set (84.82% and 96.54%).
3.1 Input Image

The input to the classifier is generated from the results of the ellipse detector. After ellipse detection, we transform each ellipse to a circle by a homography. For each dish, the transformed circle of the dish above it is subtracted from its own transformed circle. The results are shown in Figure 8. Each image's dimension at this point is [100x100x3]. After this, each image is cut in half and only the lower half of the image is fed into the network. This reduces dimensionality and the variance over images: the lower half of the image carries all the information needed for classification, and only the dish stacked at the top layer has its upper part fully visible. After this process, each image's dimension is [50x100x3], and all images are manually labeled with 8 classes of colors and set information.
The first convolution layer takes the [50x100x3] input. We do not use the augmented data directly: the data mean is subtracted from the data first. A special aspect of the first layer is that it uses a skewed filter to produce a square output. It is followed by a max pooling layer.
The second convolution layer is also followed by a max pooling layer. The second layer's weights are [5x5x20x50] with stride 1.
The third convolution layer's weights are [4x4x50x500] with stride 1, followed by a ReLU layer [10]. After this layer, we obtain a 500-dimensional feature vector that is fed into the fully connected layer.
The last layer is a fully connected layer with softmax. It classifies the 500-dimensional feature vector into 8 classes. The 8 classes used for classification are shown in Figure 11.
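A possible PyTorch rendering of the described architecture follows. The paper specifies the second- and third-layer weight shapes and the 500-dimensional feature vector, but not the first ("skewed") filter; the (4, 8) kernel with (2, 4) stride below is our assumption, chosen so that the spatial dimensions reduce to 1x1 before the fully connected layer.

    import torch
    import torch.nn as nn

    class SushiDishNet(nn.Module):
        """3 convolution layers + fully connected softmax head, 8 color classes."""
        def __init__(self, num_classes: int = 8):
            super().__init__()
            self.features = nn.Sequential(
                # [3, 50, 100] -> [20, 24, 24]; skewed kernel/stride is an assumption.
                nn.Conv2d(3, 20, kernel_size=(4, 8), stride=(2, 4)),
                nn.MaxPool2d(2, 2),                      # -> [20, 12, 12]
                nn.Conv2d(20, 50, kernel_size=5),        # -> [50, 8, 8]
                nn.MaxPool2d(2, 2),                      # -> [50, 4, 4]
                nn.Conv2d(50, 500, kernel_size=4),       # -> [500, 1, 1]
                nn.ReLU(),
            )
            self.classifier = nn.Linear(500, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: mean-subtracted batch of shape [N, 3, 50, 100]
            feats = self.features(x).flatten(1)          # [N, 500]
            return self.classifier(feats)                # softmax applied in the loss

    # Example: logits = SushiDishNet()(torch.randn(2, 3, 50, 100))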
(a)
(b)
Figure 8. The result of feature extraction.
(a) Features with dimension [100x100x3]
(b) Features after subtraction with dimensions of [50x100x3]
Original color of dishes is blue but it looks white or even black
because of different illumination conditions and shadows.
3.2 Data Augmentation
To train a deep network, a sufficient number of training images is needed, so we synthesize training images; and to train a network that is robust to noise, we add some noise to the images. The collected labeled data was not sufficient for a deep network, and the number of images per class was unbalanced. We use the stochastic gradient descent method for updating the weights, but the difference in the number of images between classes was too large; for example, there were 4 times as many brown dish images as red ones. To make the probability of each class appearing in a batch even, we duplicate data to balance the number of images among the 8 classes. After this process, there were 1,031 images for training and validation.
To double the existing training data, images are flipped left to right to generate new data. After that, Gaussian noise is applied with zero mean and 0.001 variance. An example of applying Gaussian noise is shown in Figure 9. After this process, we had 4,124 images for training and validation.
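The augmentation step (horizontal flipping plus Gaussian noise with variance 0.001) can be sketched as follows; the array layout and the [0, 1] value range are assumptions made for illustration.

    import numpy as np

    def augment(images: np.ndarray, rng=np.random.default_rng(0)) -> np.ndarray:
        """Double a batch of [N, H, W, 3] images in [0, 1] by left-right flipping,
        then add zero-mean Gaussian noise with variance 0.001 to every image."""
        flipped = images[:, :, ::-1, :]                  # mirror along the width axis
        augmented = np.concatenate([images, flipped], axis=0)
        noise = rng.normal(0.0, np.sqrt(0.001), size=augmented.shape)
        return np.clip(augmented + noise, 0.0, 1.0)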
Figure 10. CNN classifier architecture. 500-dimensional
feature vector is fed into fully connected layer.
Figure 11. The labels of 8 classes used for training.
3.4 Evaluation
We trained our model with the stochastic gradient descent method. Testing our model with 15 novel dish images, we obtained a test accuracy of 91.55%, with 65 correct predictions out of 71 dishes. The confusion matrix for the test images is shown in Figure 12, and the training loss graph in Figure 13. We believe our model has much room for improvement, because this result is obtained from only 447 training images before data augmentation, which is very small compared to popular datasets that have more than ten thousand raw training images.
Figure 9. Left is original image and right is image with
gaussian noise applied.
3.3 Classifier Architecture
We use a multi-layer convolutional neural network which consists of 3 convolution layers and one fully
connected layer with softmax. For pooling, 2x2 max
pooling with stride 2 is used and for activation
function, rectified linear unit is used. Whole model is
shown in Figure 10.
Figure 12. Confusion matrix for test image.
Color close to red means more instance.
Figure 13. Training loss over time; the validation error and training error decrease sharply.
[3] D. H. Douglas and T. K. Peucker, "Algorithms for the
reduction of the number of points required to represent
a digitized line or its caricature," Cartographica: The
International Journal for Geographic Information and
Geovisualization, 1973.
[4] Richard Brown, “FITELLIPSE : Least squares ellipse
fitting demonstration”, MATLAB Examples, 2007.
[5] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E.
Hinton. "Imagenet classification with deep
convolutional neural networks." Advances in neural
information processing systems. 2012.
[6] LeCun, Yann. "LeNet-5, convolutional neural
networks." URL: http://yann. lecun. com/exdb/lenet
(2015).
[7] Cybenko, George. "Approximation by superpositions of
a sigmoidal function." Mathematics of Control, Signals,
and Systems (MCSS) 2.4 (1989): 303-314.
[8] N. Baek, S.-M. Park, K.-J. Kim, and S.-B. Park, Vehicle
Color Classification Based on the Support Vector
Machine Method, International Conference on
Intelligent Computing, ICIC 2007, pp: 1133–1139.
[9] Reza Fuad Rachmadi and I Ketut Eddy Purnama ,
“Vehicle Color Recognition using Convolutional Neural
Network “, arXiv, 2015.
[10] R Hahnloser, R. Sarpeshkar, M A Mahowald, R. J.
Douglas, H.S. Seung (2000). Digital selection and
analogue amplification coexist in a cortex-inspired
silicon circuit. Nature. 405. pp. 947–951.
4. Conclusion
In this paper, we present a sushi dish recognition system based on ellipse detection and a convolutional neural network. Our dish detector achieves 85% precision and 96% recall, and the classifier achieves 92% accuracy. This performance is meaningful given how small the data set is: there were only 447 dish image instances, which is small compared to popular datasets. With a slight improvement over the current performance, our system could work in the field, since a real prototype would likely include a verification step with the user, such as asking users (both employees and customers) about suspicious images whose softmax outputs assign noticeable probability to other classes. A deployed system might also improve itself by applying reinforcement learning to the classification. Finally, applying a machine learning approach to the detector would be another interesting study and a good way to improve the detection precision on the fly without further manual tuning.
References
[1] Dilip Kumar Prasad and K. H. Leung, “Object Detection
in Real Images”, Nanyang University, 2010.
[2] Canny, J., A Computational Approach To Edge
Detection, IEEE Trans. Pattern Analysis and Machine
Intelligence, 8(6):679–698, 1986.
Appendix
The full source code of our implementation can be found
on GitHub: https://github.com/YeongjinOh/Sushi-Dish
TYPE DIRECTED SYNTHESIS OF PRODUCTS
arXiv:1510.08121v1 [] 27 Oct 2015
Jonathan Frankle
A THESIS
PRESENTED TO THE FACULTY
OF PRINCETON UNIVERSITY
IN CANDIDACY FOR THE DEGREE
OF MASTER OF SCIENCE IN ENGINEERING
RECOMMENDED FOR ACCEPTANCE
BY THE DEPARTMENT OF
COMPUTER SCIENCE
Adviser: David Walker
September 2015
© Copyright by Jonathan Frankle, 2015. All rights reserved.
Abstract
Software synthesis - the process of generating complete, general-purpose programs from specifications - has become a hot research topic in the past few years. For decades the problem was
thought to be insurmountable: the search space of possible programs is far too massive to efficiently traverse. Advances in efficient constraint solving have overcome this barrier, enabling
a new generation of effective synthesis systems. Most existing systems compile synthesis tasks
down to low-level SMT instances, sacrificing high-level semantic information while solving only
first-order problems (i.e., filling integer holes). Recent work takes an alternative approach, using the Curry-Howard isomorphism and techniques from automated theorem proving to construct
higher-order programs with algebraic datatypes.
My thesis involved extending this type-directed synthesis engine to handle product types,
which required significant modifications to both the underlying theory and the tool itself. Product types streamline other language features, eliminating variable-arity constructors among other
workarounds employed in the original synthesis system. A form of logical conjunction, products
are invertible, making it possible to equip the synthesis system with an efficient theorem-proving
technique called focusing that eliminates many of the nondeterministic choices inherent in proof
search. These theoretical enhancements informed a new version of the type-directed synthesis prototype implementation, which remained performance-competitive with the original synthesizer. A
significant advantage of the type-directed synthesis framework is its extensibility; this thesis is a
roadmap for future such efforts to increase the expressive power of the system.
1 Introduction
Since the advent of computer programming, the cycle of writing, testing, and debugging code has
remained a tedious and error-prone undertaking. There is no middle ground between a program
that is correct and one that is not, so software development demands often-frustrating levels of
precision. The task of writing code is mechanical and repetitive, with boilerplate and common
idioms consuming ever more valuable developer-time. Aside from the languages and platforms,
the process of software engineering has changed little over the past four decades.
The discipline of software synthesis responds to these deficiencies with a simple question: if
computers can automate so much of our everyday lives, why can they not do the same for the task
of developing software? Outside of a few specialized domains, this question – until recently – had
an equally simple answer: the search space of candidate programs is too large to explore efficiently.
As we increase the number of abstract syntax tree (AST) nodes that a program might require, the
search space explodes combinatorially. Synthesizing even small programs seems hopeless.
In the past decade, however, several research program analysis and synthesis systems have overcome these barriers and developed into useful programming tools. At the core of Sketch [13],
Rosette [22, 23], and Leon [9] are efficient SAT and SMT-solvers. These tools automate many
development tasks – including test case generation, verification, angelic nondeterminism, and synthesis [22] – by compiling programs into constraint-solving problems.
These synthesis techniques, while a major technological step forward, are still quite limited.
Constraint-solving tasks are fundamentally first-order: although they efficiently fill integer and
boolean holes, they cannot scale to higher-order programs. Furthermore, in the process of compiling synthesis problems into the low-level language of constraint-solvers, these methods sacrifice
high-level semantic and type information that might guide the synthesis procedure through the
search space more efficiently.
Based on these observations, recent work by Steve Zdancewic and Peter-Michael Osera at the
University of Pennsylvania [14] explores a different approach to synthesis: theorem-proving. The
Curry-Howard isomorphism permits us to treat the type of a desired program as a theorem whose
proof is the synthesized program. Translating this idea into an algorithm, we can search for the
solution to the synthesis problem using existing automated theorem-proving techniques. Since
many programs inhabit the same type, users also provide input-output examples to better specify
the desired function and constrain the synthesizer’s result.
This type-directed approach scales to higher-order programs and preserves the high-level program structures that guide the synthesis process. The synthesis algorithm can be designed to search
only for well-typed programs in normal form, drastically reducing the search space of possible
ASTs. At the time of writing, both of these features are unique to type-directed synthesis.
Not only is this technique efficient, but it also has the ability to scale to language features that
have, until now, remained beyond the reach of synthesis. Many desirable features, like file inputoutput, map to existing systems of logic that, when integrated into the synthesizer, instantly enable
it to generate the corresponding programs. For example, one might imagine synthesizing effectful computation using monads by performing proof search in lax logic [16]. When these logical
constructs are kept orthogonal to one another, features can be added and removed from the synthesis language in modular fashion. This extensibility is one of the most noteworthy benefits of
type-directed synthesis.
This thesis represents the first such extension to the type-directed synthesis system. My research
involved adding product types to the original synthesis framework, which until then captured only
the simply typed lambda calculus with recursion and algebraic datatypes. In the process, I heavily
revised and expanded both the formal judgments that govern the synthesis procedure and the code
that implements it, integrating additional theorem proving techniques that pave the way for future
language extensions.
Contributions of this thesis.
1. An analysis of the effect of searching only for typed programs or typed programs in normal
form on the size of the number of programs at a particular type.
2. A revised presentation of the judgments for the original type-directed synthesis system.
3. An extension of the original synthesis judgments to include product types and the focusing
technique.
4. Theorems about the properties of the updated synthesis judgments and focusing, including
proofs of admissibility of focusing and soundness of the full system.
5. Updates to the type-directed synthesis prototype that implement the new judgments.
6. An evaluation of the performance of the updated implementation on several canonical programs involving product types.
7. A thorough survey of related work.
8. A discussion of future research avenues for type-directed synthesis with emphasis on the
observation that examples are refinement types.
Overview. The body of this thesis is structured as follows: I begin with an in-depth look at the
original type-directed synthesis system in Section 2. In Section 3, I introduce the theory underlying
synthesis of products and extensively discuss the focusing technique, which handles product types
efficiently. I describe implementation changes made to add tuples to the synthesis framework and
evaluate the performance of the modified system in Section 4. In Section 5, I discuss related
work, including extended summaries of other synthesis systems. Finally, I outline future research
directions in Section 6, with particular emphasis on using intersection and refinement types that
integrate input-output examples into the type system. I conclude in Section 7. Proofs of theorems
presented throughout this thesis appear in Appendix A.
2 Type-Directed Synthesis

2.1 Overview
The following is a brief summary of type-directed synthesis [14], presented loosely within the
Syntax-Guided Synthesis [3] framework.
Background theory. The system synthesizes over the simply-typed lambda calculus with recursive functions and non-polymorphic algebraic datatypes. In practice, it uses a subset of OCaml
with the aforementioned features. In order to guarantee that all programs terminate, the language
permits only structural recursion. Pre-defined functions can be made available to the synthesis
process if desired (i.e., providing map and fold to a synthesis task involving list manipulation).
Synthesis problem. A user specifies the name and type signature of a function to be synthesized.
No additional structural guidance or “sketching” is provided.
Solution specification. The function’s type information combined with input-output examples
constrain the function to be synthesized. We can treat this set of examples as a partial function
that we wish to generalize into a total function. Since type-directed synthesis aims to scale to
multi-argument, higher-order functions, these examples can map multiple inputs, including other
functions, to a single output. Functions are not permissible as output examples, however, since the
synthesis procedure must be able to decidably test outputs for equality.
Optimality criterion. The synthesis problem as currently described lends itself to a very simple
algorithm: create a function that (1) on an input specified in an example, supplies the corresponding
output and (2) on all other inputs, returns an arbitrary, well-typed value. To avoid generating such
useless, over-fitted programs, type-directed synthesis requires some notion of a best result. In
practice, we create the smallest (measured in AST nodes) program that satisfies the specification,
since a more generic, recursive solution will lead to a smaller program than one that merely matches
on examples.
Search strategy. Type-directed synthesis treats the signature of the function in the synthesis
problem as a theorem to be proved and uses a modified form of reverse proof search [15] that
integrates input-output examples to generate a program. By the Curry-Howard isomorphism, a
proof of the theorem is a program with the desired type; if it satisfies the input-output examples,
then it is a solution to the overall synthesis problem.
Style. Type-directed synthesis can be characterized as a hybrid algorithm that has qualities of
both deductive and inductive synthesis. It uses a set of rules to extract the structure of the inputoutput examples into a program, which is reminiscent of deductive synthesis algorithms that use
a similar process on complete specifications. When guessing variables and function applications,
however, type-directed synthesis generates terms and checks whether they satisfy the examples, an
approach in the style of inductive strategies like CEGIS [20].
2.2 Case Study: List length
Before delving into the technical details of the theory, consider the process of synthesizing the list
length function as illustrated in Figure 1.
In Figure 1a, we begin with our synthesis problem: a function with its type signature and a list
of input-output examples. The ? preceding each example represents the hole to which the example
corresponds. Initially, the entire function to be synthesized comprises a single hole.
Enclosed within angle brackets is each example world, which we define as a pair of (1) bindings
of names to values and (2) the goal value that should be taken on by the expression chosen to fill
the hole when the names are bound to those values. For example, we can read the expression
?1 : ⟨x = 1, y = 2; 5⟩
as
When x = 1 and y = 2, the expression synthesized to fill hole ?1 should evaluate to 5.
In our initial synthesis problem in Figure 1a, no names have yet been bound to values. We could
imagine providing the synthesis instance with a library of existing functions, like fold and map, in
which case our example worlds would contain already-bound names at the start of the synthesis
process.
Each goal value is expressed as a partial function mapping input arguments to an output. For
example,
[1; 2] => inc => [2; 3]
means that, on inputs [1; 2] and the increment function, the desired output is [2; 3].
In Figure 1b, we observe that every example is a partial function mapping a natlist to a nat
and, as such, we synthesize a function with a natlist argument called ls. We must update our
examples in kind. Where before we had an example world of the form
⟨·; [4; 3] => 2⟩
we now extract the value of ls and move it to the list of names bound to values:
⟨ls = [4; 3]; 2⟩
Since we have now synthesized a function, our hole is of type nat and the goal value is the
remainder of the partial function with ls removed.
In Figure 1c, we synthesize a match statement to break down the structure of ls. This creates
two holes, one for each branch of the match statement. We partition our set of examples between
the two holes depending on the constructor of the value bound to ls. Those examples for which
ls = [] constrain the hole for the Nil branch; all other examples constrain the Cons branch.
In Figure 1d, we turn our attention to the Nil branch. Since we only have a single example, it
is safe to simply synthesize the value in the example’s goal position (namely the constructor O). In
the other branch, too, every example has the same constructor (S), which we then generate in our
Description (a) Initial synthesis problem.
(b) Synthesize a function.
Program
len :
len (ls :
Examples
?:h·; []
=> 0i
?:h·; [3]
=> 1i
?:h·; [4; 3] => 2i
natlist -> nat = ?
natlist) :
nat = ?
?:hlen = ..., ls =
[]; 0i
?:hlen = ..., ls =
[3]; 1i
?:hlen = ..., ls = [4; 3]; 2i
Judgment
IR EFINE -F IX
(c) Synthesize a match statement.
(d) Complete the Nil branch with the O constructor.
len (ls : natlist) : nat =
match ls with
| Nil
-> ?1
| Cons(hd, tl) -> ?2
len (ls : natlist) : nat =
match ls with
| Nil
-> O
| Cons(hd, tl) -> ?2
?1 :hlen = ..., ls =
[]
; 0i
?2 :hlen = ..., ls =
[3], hd = 3, tl = []; 1i
?2 :hlen = ..., ls = [4; 3], hd = 4, tl = [3]; 2i
?2 :hlen = ..., ls =
[3], hd = 3, tl = []; 1i
?2 :hlen = ..., ls = [4; 3], hd = 4, tl = [3]; 2i
IR EFINE -M ATCH , EG UESS -C TX
IR EFINE -C TOR
(e) Synthesize constructor S in the remaining branch.
(f) Synthesize an application to fill the final hole.
len (ls : natlist) : nat =
match ls with
| Nil
-> O
| Cons(hd, tl) -> S(?2 )
len (ls : natlist) : nat =
match ls with
| Nil
-> O
| Cons(hd, tl) -> S(len tl)
?2 :hlen = ..., ls =
[3], hd = 3, tl = []; 0i
?2 :hlen = ..., ls = [4; 3], hd = 4, tl = [3]; 1i
IR EFINE -C TOR
IR EFINE -G UESS , EG UESS -A PP, EG UESS -C TX
Figure 1: A step-by-step derivation of the list length function in type-directed synthesis. A ?
character refers to a hole in the program that the synthesis algorithm aims to fill. Each example
world, delimited with ⟨ and ⟩, contains variable bindings to the left of the ; and the goal value to the
right. The preceding ? indicates the hole to which the example world corresponds. For brevity,
we write unary numbers in their Arabic equivalents (S (S (O)) is abbreviated as 2) and lists in
equivalent OCaml syntax (Cons(2, Cons(1, Nil)) is [2; 1]). The names of all recursive functions in
scope (in this case len) are always available in the list of variable bindings. They are bound to
the partial functions comprising their definitions. The example for len is the initial partial function
example at the beginning of the synthesis process; it is elided from the example worlds for space.
[Figure 2 chart: number of possible ASTs at type (nat -> nat), on a logarithmic scale from 1 to 10^18, versus number of AST nodes (1–19), with three series: Untyped, Typed, and Typed Beta-Normal Eta-Long.]
Figure 2: The number of possible ASTs (at type nat → nat) with a particular number of nodes.
ASTs were generated over the type-directed synthesis language (the lambda calculus with recursive functions and algebraic datatypes). The top line (orange) includes all possible ASTs, while
the middle and bottom lines include only typed (grey) and typed beta-normal eta-long (blue) ASTs
respectively. The scale on the vertical axis is logarithmic.
program (Figure 1e). We update the value in the goal position accordingly, removing one use of
the S constructor. Finally, with a recursive call to len on tl, our function is complete.
It is important to note that every function we generate is recursive. Therefore, the name of each
function, including the top-level function of the synthesis problem, is available in every example
world for which it is in scope. Function names are bound to the partial functions comprising their
examples. Making these names available allows for recursive function calls. The examples for
len are elided from example worlds in Figure 1 for readability.
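As an informal model of the bookkeeping used in this case study (not part of the thesis's OCaml prototype), the Python sketch below shows one way example worlds can be represented and partitioned by the constructor that a match scrutinee evaluates to, mirroring the step from Figure 1b to Figure 1c; all names and the value encoding are illustrative assumptions.

    # A value is a (constructor, arguments) pair, e.g. Cons(4, Cons(3, Nil)):
    CONS = lambda hd, tl: ("Cons", (hd, tl))
    NIL = ("Nil", ())

    # An example world: bindings of names to values, plus a goal value.
    worlds = [
        ({"ls": NIL}, 0),
        ({"ls": CONS(3, NIL)}, 1),
        ({"ls": CONS(4, CONS(3, NIL))}, 2),
    ]

    def partition_by_scrutinee(worlds, scrutinee):
        """Group example worlds by the constructor the scrutinee evaluates to,
        binding the constructor's arguments to fresh names (hd, tl for Cons)."""
        branches = {"Nil": [], "Cons": []}
        for env, goal in worlds:
            ctor, args = env[scrutinee]
            new_env = dict(env)
            if ctor == "Cons":
                new_env["hd"], new_env["tl"] = args
            branches[ctor].append((new_env, goal))
        return branches

    # branches["Nil"] constrains the Nil hole; branches["Cons"] the Cons hole.
    branches = partition_by_scrutinee(worlds, "ls")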
2.3 Proof Search
Search strategy. The primary distinguishing quality of type-directed synthesis is its search strategy. The grammar of expressions e in the background theory is specified in Figure 3. Within
this grammar are myriad ASTs that are not well-typed. As Figure 2 illustrates, merely restricting
our search to well-typed terms drastically decreases the size of the search space. Even amongst
well-typed terms, there are numerous expressions that are functionally identical. For example,
(fix f (x : τ1 ) : τ1 = x) 1 beta-reduces to 1 and fix f (x : τ1 ) : τ2 = (g x) eta-reduces to g. If we confine our search to programs in eta-long, beta-normal form, we avoid considering some of these
duplicates and further reduce our search space. Figure 2 demonstrates the benefit of this further
restriction: an additional one to two orders of magnitude reduction in the size of the search space.
Based on these observations, type-directed synthesis avoids searching amongst all expressions (e),
instead generating well-typed ASTs over a more restrictive grammar: that of introduction (I) and
elimination (E) forms. Doing so guarantees that programs synthesized are in beta-normal form,
τ
v
pf
ε
e
::= B | τ1 → τ2
::= C (v1 , ..., vm )
| fix f (x : τ1 ) : τ2 = e | p f
::= vi ⇒ εi i∈m
::= C (ε1 , ..., εm ) | p f
::= x | C (e1 , ..., em ) | p f
| fix f (x : τ1 ) : τ2 = e | e1 e2
(Types: base types and function types)
(Values: constructors, functions, and partial functions)
(Partial functions: a finite map from inputs to outputs)
(Examples)
(Expressions)
| match e with Ci (x1 , ..., xm ) → ei
E
I
i∈l
::= x | E I
::= E | C (I1 , ..., Im )
| fix f (x : τ1 ) : τ2 = I
(Elimination forms)
(Introduction forms)
| match E with Ci (x1 , ..., xm ) → Ii
Σ
Γ
E
w
W
::=
::=
::=
::=
::=
i∈l
· | Σ, C : τ1 ∗ ... ∗ τn → B
· | Γ, x : τ
· | E, x . ε
hE; εi
· | W, w
(Constructor context: a map from constructor names to types)
(Type context: bindings of names to types)
(Example context: names refined by examples)
(A single world: an example context and result example)
(A set of worlds)
Figure 3: Grammars for the synthesis judgments of type-directed synthesis.
Σ|Γ|W `τ
IR EFINE -G UESS
E
Σ|Γ`B E
Σ | Γ | hEi ; εi i
`B
I
I
IR EFINE -C TOR
C : τ1 ∗ ... ∗ τm → B ∈ Σ
∀i ∈ n, Ei (E) →∗ εi
i∈n
I
i∈n
∀ j ∈ m, Σ | Γ | hEi ; ε(i, j) i
i∈n
Σ | Γ | hEi ;C (ε(i,1) , ..., I(i,m) )i
E
`B
I
` τj
I
Ij
C (I1 , ..., Im )
IR EFINE -F IX
(i,k)∈(n,mi )
Σ | Γ, f : τ1 → τ2 , x : τ1 | hEi , f . v(i, j) ⇒ ε(i, j) j∈mi , x . v(i,k) ; ε(i,k) i
i∈n
Σ | Γ | hEi ; v(i, j) ⇒ ε(i, j) j∈mi i
` τ1 → τ2
I
` τ2
I
I
fix f (x : τ1 ) : τ2 = I
IR EFINE -M ATCH
Σ|Γ`B
E
∀ j ∈ m, C j : τ( j,1) ∗ ... ∗ τ( j,l) → B ∈ Σ
E
∀ j ∈ m, Σ | Γ, x : τ( j,1) , ..., x : τ( j,l) | {hEi , x1 . v1 , ..., xl . vl ; εi i Ei (E) →∗ C j (v1 , ..., vl )} ` τ
i∈n
Σ | Γ | hEi ; εi i
`τ
I
match E with C j (x1 , ..., xl ) → I j
Σ|Γ`τ
EG UESS -C TX
x:τ ∈Γ
Σ|Γ`τ
E
E
Ij
j∈m
E
EG UESS -A PP
Σ | Γ ` τ1 → τ
E
Σ | Γ | · ` τ1
E
Σ|Γ`τ
x
I
E
I
I
EI
Figure 4: Synthesis judgments for type-directed synthesis. An expression with a bar and an index
above it represents a set of expressions sharing the same structure.
since no lambda term can appear on the left side of an application. If, in addition, we generate only
lambdas at function type, programs will also be eta-long.
We can combine these ideas into a proof search algorithm that begins with a synthesis problem as
in Figure 1 and uses type information, along with the structure of examples, to generate programs.
The synthesis judgments for this algorithm appear in Figure 4.
Refining introduction forms. The judgment for producing introduction forms, which is also the
top-level synthesis judgment, is of the form
Σ | Γ | W ⊢ τ ⇝ I
which states:
Given constructors Σ and names bound to types Γ, we synthesize introduction form I at type
τ conforming to the examples in worlds W.
The IR EFINE -F IX rule extracts the structure of a partial function example as in Figure 1b, synthesizing a fixpoint and creating a new subproblem for the fixpoint’s body that can refer to the
argument and make recursive calls. The IR EFINE -C TOR rule observes that every example shares
the same constructor and therefore synthesizes the constructor in the program, likewise creating
subproblems for each of the constructor’s arguments.
The IR EFINE -M ATCH rule generates a match statement, guessing an elimination-form scrutinee at base type on which to pattern-match. The rule creates a sub-problem for each branch of
the match statement - that is, one for every constructor of the scrutinee’s type. The existing set of
examples is partitioned among these sub-problems according to the value that the scrutinee takes
on when evaluated in each example world. Since the values of the bound variables may vary from
world to world, the scrutinee can evaluate to different constructors in different contexts. This evaluation step determines which examples correspond to which branches, allowing us to constrain the
sub-problems and continue with the synthesis process.
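To make this example-partitioning step concrete, here is a minimal OCaml sketch under assumed placeholder types; value, world, and eval_in_world are illustrative stand-ins, not the implementation's actual definitions.

(* A value is either a constructor applied to arguments or a function; only the
   constructor case matters when partitioning examples for a match. *)
type value = Ctor of string * value list | Func

(* A world pairs an environment of named bindings with a goal example. *)
type world = { env : (string * value) list; goal : value }

(* eval_in_world is an assumed evaluator for a scrutinee under a world's
   bindings. partition_worlds groups the worlds by the head constructor that
   the scrutinee evaluates to, yielding one sub-problem per match branch. *)
let partition_worlds eval_in_world scrutinee worlds =
  List.fold_left
    (fun branches w ->
      match eval_in_world w.env scrutinee with
      | Ctor (name, _args) ->
          let previous = try List.assoc name branches with Not_found -> [] in
          (name, w :: previous) :: List.remove_assoc name branches
      | Func -> branches (* a base-type scrutinee never evaluates to a function *))
    [] worlds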
Finally, at base type (B), we may also guess an elimination form, captured by the IREFINE-GUESS rule. The restriction that this only occurs at base type helps enforce the property that we only generate eta-long programs.
Guessing elimination forms. The judgment form for guessing elimination forms

    Σ | Γ ⊢ τ ⇝E E

states:

    Given constructors Σ and names bound to types Γ, we synthesize elimination form E at type τ.

Observe that this judgment form does not include examples. Unlike the introduction forms, we cannot use the structural content of the examples to inform the elimination forms we guess. Instead, we may only ensure that the elimination form that we generate obeys the examples. This requirement appears in the additional condition of the IREFINE-GUESS rule

    ∀i ∈ n, 𝓔i(E) →* εi

reading:

    For each of our n examples, the elimination form E must, when substituted with the value bindings in 𝓔i, evaluate to the corresponding goal value εi.
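A one-line OCaml sketch of this side condition, again with an assumed evaluator eval_in_world (hypothetical, not the real implementation):

(* worlds is a list of (bindings, goal) pairs; a guessed elimination form obeys
   the examples only if evaluating it under each world's bindings yields that
   world's goal example. *)
let satisfies_all_worlds eval_in_world guess worlds =
  List.for_all (fun (bindings, goal) -> eval_in_world bindings guess = goal) worlds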
The EGUESS-APP rule guesses a function application by recursively guessing a function and an argument. The function must be an elimination form, which ensures that we only guess beta-normal programs. The argument may be an introduction form, but the call back into an IREFINE rule does so without examples. Finally, the base case EGUESS-CTX guesses a name from the context Γ.
Nondeterminism. These rules are highly nondeterministic: at any point, several rules may apply. This is especially pertinent at base type, where we could refine a constructor or a match statement, or instead guess an elimination form. Figure 1 traces only one of many paths through the expansive search space that this nondeterminism creates.
Nondeterminism provides the flexibility for an optimality condition (Section 2.1) to select the best possible program. Although we could continually apply IREFINE rules until we fully extract the structure of the examples, doing so would generate a larger than necessary program that has been overfitted to the examples. Instead, the EGUESS rules allow us to make recursive calls that create smaller, more general programs.
Relationship with other systems. The synthesis judgments are closely related to two other systems: the sequent calculus and bidirectional typechecking. We can reframe the judgment

    Σ | x1 : τ1, ..., xn : τn | W ⊢ τ ⇝I I

as the sequent

    τ1, ..., τn =⇒ τ

if we elide the proof terms (x1 through xn and I) and examples. Many of our rules (particularly IREFINE-FIX and EGUESS-APP) directly mirror their counterparts in Gentzen's characterization of the sequent calculus [6]. The only modifications are the addition of proof terms and bookkeeping to manage examples. The rules that handle algebraic datatypes and pattern matching (IREFINE-CTOR and IREFINE-MATCH) loosely resemble logical disjunction, which Gentzen also describes.

The type-directed synthesis system also resembles bidirectional typechecking. Where traditional bidirectional typechecking rules are functions that synthesize or check a type given a context and an expression, however, our judgments synthesize an expression from a context and a type. The IREFINE judgments correspond to rules where a type would be checked and the EGUESS judgments to rules where a type would be synthesized.
3 Synthesis of Products
3.1 Overview of Products
In this section, I describe the process of adding k-ary (where k > 1) tuples to the type-directed
synthesis framework. The syntax of tuples is identical to that of OCaml. A tuple of k values is
written:
(v1 , ..., vk )
Tuples are eliminated using projection, which, in our rules, follows syntax similar to that of Standard ML. The jth projection of a k-ary tuple (where 1 ≤ j ≤ k)

    πj (v1, ..., vk)

evaluates to vj.
As in OCaml, a tuple’s type is written as the ordered cartesian product of the types of its constituents. For example, the type
nat ∗ list ∗ bool
denotes the type of a tuple whose members are, in order, a natural number, a list, and a boolean.
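For instance, using OCaml's built-in int and list in place of the object language's nat and list, a value of such a product type looks as follows (purely illustrative):

(* A tuple whose members are, in order, a number, a list, and a boolean. *)
let example : int * int list * bool = (1, [2; 3], true)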
3.2 Tuples as a Derived Form
It is entirely possible to encode tuples as a derived form in the original type-directed synthesis
framework. We could declare an algebraic datatype with a single variant that stores k items of the
necessary types. Projection would entail matching on a “tuple” to extract its contents. Although
this strategy will successfully integrate a functional equivalent of products into the system, first
class tuples are more desirable for several reasons.
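Concretely, the derived-form encoding just described might look like the following OCaml sketch, with nat and natpair as illustrative names; the reasons to prefer first-class tuples follow.

(* A pair of nats encoded as a single-variant algebraic datatype. *)
type nat = O | S of nat
type natpair = Pair of nat * nat

(* "Projection" must be written as a match on the lone constructor. *)
let first (p : natpair) : nat =
  match p with
  | Pair (a, _) -> a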
Convenience. Since the existing system lacks polymorphism, we would need to declare a new
algebraic type for every set of types we wish to combine into a product. Even with polymorphism,
we would still need a new type for each arity we might desire. This approach would swiftly
become tedious to a programmer and, worse, would lead the synthesizer to write incomprehensible
programs filled with computer-generated type declarations and corresponding match statements.
Orthogonality. Implementing tuples as a derived form is an inelegant and syntactically-heavy
way to add an essential language feature to the system. In contrast, first-class tuples are orthogonal,
meaning we can add, remove, or modify other features without affecting products.
Cleaner theory. First class tuples simplify a number of other language features, including constructors and match statements. In the original type-directed synthesis system, variable-arity constructors were necessary to implement fundamental recursive types like lists. First class tuples can
replace this functionality, streamlining the theory behind the entire system.
Efficient synthesis. Theorem-proving machinery can take advantage of the logical properties of
product types to produce a more efficient synthesis algorithm (see Section 3.3). Although we could
specially apply these techniques to single-variant, non-recursive algebraic datatypes (the equivalent
of tuples), we can simplify the overall synthesis algorithm by delineating a distinct product type.
Informative matching. The original synthesis implementation only generates a match statement
if doing so partitions the examples over two or more branches, a restriction known as informative
matching. The rationale for this requirement is that pattern matching is only useful if it separates
our synthesis task into smaller subproblems, each with fewer examples. A single-constructor variant would never satisfy this constraint, meaning that we would need another special case in order
to use match statements to “project” on our tuple derived form.
τ  ::= B | τ1 → τ2 | unit | τ1 ∗ ... ∗ τm            (Types: base type, function type, unit type, and product type)
v  ::= () | (v1, ..., vm) | C v
     | fix f (x : τ1) : τ2 = e | pf                  (Values: unit, tuples, single-arity constructors, functions, and partial functions)
pf ::= (vi ⇒ εi)^{i∈m}                               (Partial functions: a finite map from inputs to outputs)
ε  ::= () | (ε1, ..., εm) | C ε | pf                 (Examples)
e  ::= x | pf | () | (e1, ..., em) | πk e
     | fix f (x : τ1) : τ2 = e | e1 e2 | C e
     | match e with (Ci x → ei)^{i∈l}                (Expressions: projection is newly added)

E  ::= x | E I | πk E                                (Elimination forms: projection is newly added)
I  ::= E | () | (I1, ..., Im) | C I
     | fix f (x : τ1) : τ2 = I
     | match E with (Ci x → Ii)^{i∈l}                (Introduction forms: unit and tuples are newly added)

Σ        ::= · | Σ, C : τ → B                        (Constructor context: a map from constructor names to types)
Γ, A, F  ::= · | Γ, E : τ                            (Type context: bindings of elimination forms to types)
𝓔        ::= · | 𝓔, E . ε                            (Example context: elimination forms refined by examples)
w        ::= ⟨𝓔; ε⟩                                  (A single world: an example context and result example)
W        ::= · | W, w                                (A set of worlds)

Figure 5: Updated grammars for type-directed synthesis with products; newly added forms are noted in the annotations.
Σ1 | Γ1; A1; F1 | W1 =⇒ Σ2 | Γ2; A2; F2 | W2

FOCUS-BASE
    Σ | Γ; A; F, E : B | W =⇒ Σ | Γ, E : B; A; F | W

FOCUS-UNIT
    Σ | Γ; A; F, E : unit | W =⇒ Σ | Γ, E : unit; A; F | W

FOCUS-FUN
    Σ | Γ; A; F, E : τ1 → τ2 | W =⇒ Σ | Γ, E : τ1 → τ2; A; F | W

FOCUS-TUPLE
    Σ | Γ; A; F, E : τ1 ∗ ... ∗ τm | ⟨𝓔i, E . (I(i,1), ..., I(i,m)); εi⟩^{i∈n} =⇒
    Σ | Γ; A, E : τ1 ∗ ... ∗ τm; F, (πj E : τj)^{j∈m} | ⟨𝓔i, (πj E . I(i,j))^{j∈m}; εi⟩^{i∈n}

Figure 6: Focusing rules for type-directed synthesis with products.
Σ | Γ; A; F | W ⊢ τ ⇝I I

IREFINE-GUESS
    Σ | Γ; A; · ⊢ B ⇝E E        ∀i ∈ n, 𝓔i(E) →* εi
    ---------------------------------------------------
    Σ | Γ; A; · | ⟨𝓔i; εi⟩^{i∈n} ⊢ B ⇝I E

IREFINE-UNIT
    ---------------------------------------------
    Σ | Γ; A; · | ⟨𝓔i; ()⟩^{i∈n} ⊢ unit ⇝I ()

IREFINE-FIX
    Σ | Γ; A; f : τ1 → τ2, x : τ1 | ⟨𝓔i, f . (v(i,j) ⇒ ε(i,j))^{j∈mi}, x . v(i,k); ε(i,k)⟩^{(i,k)∈(n,mi)} ⊢ τ2 ⇝I I
    ------------------------------------------------------------------------------------------------------------------
    Σ | Γ; A; · | ⟨𝓔i; (v(i,j) ⇒ ε(i,j))^{j∈mi}⟩^{i∈n} ⊢ τ1 → τ2 ⇝I fix f (x : τ1) : τ2 = I

IREFINE-CTOR
    C : τ → B ∈ Σ        Σ | Γ; A; · | ⟨𝓔i; εi⟩^{i∈n} ⊢ τ ⇝I I
    --------------------------------------------------------------
    Σ | Γ; A; · | ⟨𝓔i; C εi⟩^{i∈n} ⊢ B ⇝I C I

IREFINE-TUPLE
    ∀i ∈ m, Σ | Γ; A; · | ⟨𝓔j; ε(i,j)⟩^{j∈n} ⊢ τi ⇝I Ii
    ----------------------------------------------------------------------------------
    Σ | Γ; A; · | ⟨𝓔j; (ε(1,j), ..., ε(m,j))⟩^{j∈n} ⊢ τ1 ∗ ... ∗ τm ⇝I (I1, ..., Im)

IREFINE-MATCH
    Σ | Γ; A; · ⊢ B ⇝E E        ∀j ∈ m, Cj : τj → B ∈ Σ
    ∀j ∈ m, Σ | Γ; A; x : τj | {⟨𝓔i, x . v; εi⟩ | 𝓔i(E) →* Cj v} ⊢ τ ⇝I Ij
    --------------------------------------------------------------------------
    Σ | Γ; A; · | ⟨𝓔i; εi⟩^{i∈n} ⊢ τ ⇝I match E with (Cj x → Ij)^{j∈m}

IREFINE-FOCUS
    Σ1 | Γ1; A1; F1 | W1 =⇒ Σ2 | Γ2; A2; F2 | W2        Σ2 | Γ2; A2; F2 | W2 ⊢ τ ⇝I I
    ------------------------------------------------------------------------------------
    Σ1 | Γ1; A1; F1 | W1 ⊢ τ ⇝I I

Σ | Γ; A; F ⊢ τ ⇝E E

EGUESS-CTX
    E : B ∈ Γ
    ------------------------
    Σ | Γ; A; · ⊢ B ⇝E E

EGUESS-APP
    Σ | Γ, E1 : τ1 → τ2; A; · | · ⊢ τ1 ⇝I I1        Σ | Γ, E1 : τ1 → τ2; A; E1 I1 : τ2 ⊢ τ ⇝E E
    -----------------------------------------------------------------------------------------------
    Σ | Γ, E1 : τ1 → τ2; A; · ⊢ τ ⇝E E

EGUESS-FOCUS
    Σ1 | Γ1; A1; F1 | · =⇒ Σ2 | Γ2; A2; F2 | ·        Σ2 | Γ2; A2; F2 ⊢ τ ⇝E E
    -------------------------------------------------------------------------------
    Σ1 | Γ1; A1; F1 ⊢ τ ⇝E E

Figure 7: The updated judgments for type-directed synthesis with products.
3.3 Focusing
Theorem proving strategy. Product types are the equivalent of logical conjunction, which itself
is left-invertible [15]. That is, when we know that A ∧ B is true, we can eagerly conclude that A
and B are individually true without loss of information. Should we later need the fact that A ∧ B is
true, we can easily re-prove it.
We can turn this invertibility into an efficient strategy for theorem-proving called focusing.
When we add a value of product type to the context (our proof-term equivalent of statements
we know to be true) we can eagerly project on it in order to break it into its non-product type
constituents. Since products are invertible, we can always reconstruct the original entity from its
projected components if the need arises. Where many parts of the type-directed synthesis strategy
require us to guess derivations that might not lead to a candidate program, focusing allows us to
advance our proof search in a way that will never need to be backtracked.
Synthesis strategy. Integrating focusing into type-directed synthesis requires several fundamental changes to the existing theory, including the addition of an entirely new focusing judgment form (Figure 6) and additional "focusing" contexts in the IREFINE and EGUESS judgments (Figure 7). The grammar of the type context Γ has subtly changed: rather than binding names to types, it now binds elimination forms E (Figure 5). This alteration allows the context to store not only a binding (x : τ1 ∗ τ2) but also the results of focusing: (π1 x : τ1) and (π2 x : τ2).

Two new contexts share the same structure as Γ: a focusing context F and an auxiliary context A. We use the contexts as follows: when a new name or, more generally, elimination form is to be added to the context, we first insert it into F. Items in F are repeatedly focused and reinserted into F until they cannot be focused any further, at which point they move into Γ. The terms that we project upon in the course of focusing are no longer useful for the synthesis process but are critical for proving many properties about our context; they are therefore permanently moved into A.
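The following OCaml sketch shows one way this workflow could be realized; typ, exp, and the triple of contexts are illustrative placeholders rather than the implementation's actual data structures.

(* Illustrative representations of object-language types and elimination forms. *)
type typ = Base of string | Arrow of typ * typ | Unit | Prod of typ list
type exp = Var of string | App of exp * exp | Proj of int * exp

(* Focus one binding drawn from F: a product splits into its projections, which
   return to F for further focusing, while the original binding retires into A;
   any other binding is fully focused and moves into Gamma. *)
let focus_binding (e, t) (gamma, aux, f) =
  match t with
  | Prod ts ->
      let projections = List.mapi (fun i ti -> (Proj (i + 1, e), ti)) ts in
      (gamma, (e, t) :: aux, projections @ f)
  | _ -> ((e, t) :: gamma, aux, f)

(* Repeatedly focus until F is empty, as the invariants below require. *)
let rec focus_all (gamma, aux, f) =
  match f with
  | [] -> (gamma, aux, [])
  | binding :: rest -> focus_all (focus_binding binding (gamma, aux, rest))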
Invariants. We can crystallize this workflow into several invariants about the contexts:
1. New names and elimination forms are added to F.
2. Elimination forms in Γ cannot be focused further.
3. No other rule can be applied until focusing is done.
4. Only elimination forms in Γ can be used to synthesize expressions.
We have already discussed the justification for the first two invariants. The third invariant ensures that we prioritize backtracking-free focusing over the nondeterministic choices inherent in applying other rules. By always focusing first, we add determinism to our algorithm and advance our proof search as far as possible before making any decisions that we might later have to backtrack. To enforce this invariant, all of the IREFINE and EGUESS rules that synthesize expressions require F to be empty.
The final invariant ensures that we never generate terms from A or F that may not have been entirely focused, preserving the property that we only synthesize normal forms. Otherwise, we could
synthesize both a term (x : τ1 ∗ τ2 ) and the eta-equivalent expression ((π1 x, π2 x) : τ1 ∗ τ2 ). This
invariant implies an eta-long normal form for tuples: among the aforementioned two expressions,
our algorithm will always synthesize the latter.
Focusing judgments. Special IREFINE-FOCUS and EGUESS-FOCUS rules allow the focusing process to advance to completion so that synthesis can continue. They make calls to a separate focusing judgment (Figure 6) that operates on contexts and example worlds. The judgment

    Σ1 | Γ1; A1; F1 | W1 =⇒ Σ2 | Γ2; A2; F2 | W2

states that

    The contexts Σ1, Γ1, A1, and F1, along with example worlds W1, can be focused into the contexts Σ2, Γ2, A2, and F2, along with example worlds W2.

The focusing rules move any elimination form of non-product type from F to Γ without modifying any other contexts. On product types, the rule FOCUS-TUPLE projects on an elimination form in F, returning the resulting expressions to F for possible additional focusing. It splits the examples of the focused expression amongst the newly-created projections and moves the now-unnecessary original expression into A.
3.4 Changes to Synthesis Judgments
The updated grammars, focusing rules, and judgments for type-directed synthesis with products
appear in Figures 5, 6, and 7 respectively. Below are summaries of the changes that were made to
the original theory in the process of adding products.
Tuple and projection expressions. Expressions for tuples and projection have been added to the
grammar of expressions (e). Likewise, the product type has been added to the grammar of types
(τ). Tuple creation is an introduction form with projection as the corresponding elimination form.
Single-argument constructors. Now that tuples exist as a separate syntactic construct, the variable-arity constructors of the original type-directed synthesis judgments are redundant: creating a constructor with an argument of product type achieves the same effect. Without products, variable-arity constructors were essential for defining recursive types that store values (e.g., lists), but this functionality can now be implemented entirely using tuples. Therefore, in the updated judgments, all constructors have a single argument, enormously simplifying the IREFINE-CTOR and IREFINE-MATCH rules.
One drawback of this choice is that pattern matching can no longer deconstruct both a constructor and the implicit tuple embedded inside it as in the old judgments. Instead, pattern matching
always produces a single variable that must be focused and projected upon in order to extract its
components. Additional pattern-matching machinery could be added to the theory (and is present
in the implementation) to resolve this shortcoming.
Unit type. Now that all constructors require a single argument, what is the base case of an inductive data structure? Previously, constructors with no arguments (O, Nil, etc.) served this purpose, but the new rules require them to have an argument as well. To resolve this problem, a unit type (unit) and unit value (written ()) have been introduced. Former no-argument constructors now take an argument of type unit. A straightforward IREFINE-UNIT rule synthesizes the unit value at unit type.
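For instance, under the new single-argument discipline, natural numbers and lists of naturals could be declared as in the following OCaml-style sketch (nat and natlist are illustrative names):

(* Every constructor now takes exactly one argument; former nullary
   constructors take the unit value instead. *)
type nat = O of unit | S of nat
type natlist = Nil of unit | Cons of (nat * natlist)

let zero : nat = O ()
let singleton : natlist = Cons (S (O ()), Nil ())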
Tuple and projection judgments. A new IREFINE-TUPLE judgment synthesizes a tuple at product type when all examples are tuple values with the same arity, inheriting much of the behavior previously included in the IREFINE-CTOR rule. It creates one subproblem for every value contained within the tuple and partitions the examples accordingly. Notably, there is no corresponding EGUESS-PROJ judgment. All projection instead occurs during focusing.
Corrected application judgment. The absence of an EGUESS-PROJ rule required changes to the EGUESS-APP judgment. As a motivating example, consider the list unzip function:
let unzip (ls : pairlist) : list * list =
  match ls with
  | [] -> ([], [])
  | (a, b) :: tl -> (a :: #1 (unzip tl), b :: #2 (unzip tl))
Synthesizing unzip requires projecting on a function application (the calls #1 (unzip tl) and #2 (unzip tl) above).
The application rule in the original type-directed synthesis judgments is reproduced below. It has
been modified slightly to include the new context structure accompanying the updated judgments
in Figure 7:
    Σ | Γ; A; · ⊢ τ1 → τ ⇝E E        Σ | Γ; A; · | · ⊢ τ1 ⇝I I
    --------------------------------------------------------------
    Σ | Γ; A; · ⊢ τ ⇝E E I
This judgment requires that we synthesize applications at the goal type and immediately use them to solve the synthesis problem. In unzip, however, we need to synthesize an application at a product type and subsequently project on it, meaning that, with this version of the EGUESS-APP judgment, we cannot synthesize unzip.
Alternatively, we could add a straightforward EGUESS-PROJ judgment that would allow us to project on applications:
    Σ | Γ; A; · ⊢ τ1 ∗ ... ∗ τm ⇝E E        1 ≤ k ≤ m
    ------------------------------------------------------
    Σ | Γ; A; · ⊢ τk ⇝E πk E
Doing so, however, undermines the invariant that we only generate normal forms. Suppose we
wished to synthesize an expression at type τ1 ∗ τ2 . We could either synthesize an application
E I : τ1 ∗ τ2 or the corresponding eta-long version (π1 (E I), π2 (E I)) : τ1 ∗ τ2 .
For a solution to this quandary, we need to return to the underlying proof theory. In the sequent calculus, the left rule for implication, which corresponds to our EGUESS-APP judgment, appears as below [6, 15]:
(1) Γ, A → B =⇒ A
(2) Γ, A → B, B =⇒ C
(3) Γ, A → B =⇒ C
That is:

    If the context contains a proof that A implies B, and (1) we can prove A, and (2) we can prove C whenever the context also contains a proof of B, then (3) we can prove C.
This rule suggests a slightly different application rule, namely the one in Figure 7:
EGUESS-APP
    Σ | Γ, E1 : τ1 → τ2; A; · | · ⊢ τ1 ⇝I I1        Σ | Γ, E1 : τ1 → τ2; A; E1 I1 : τ2 ⊢ τ ⇝E E
    -----------------------------------------------------------------------------------------------
    Σ | Γ, E1 : τ1 → τ2; A; · ⊢ τ ⇝E E
The new EGUESS-APP rule allows us to generate an application at any type, not just the goal type,
and to add it to the context. We then focus the application, thereby projecting on it as necessary.
FOCUS-CLOSURE-BASE
    --------------------------------------------------
    Σ1 | Γ1; A1; F1 | W1 =⇒* Σ1 | Γ1; A1; F1 | W1

FOCUS-CLOSURE-STEP
    Σ1 | Γ1; A1; F1 | W1 =⇒ Σ2 | Γ2; A2; F2 | W2        Σ2 | Γ2; A2; F2 | W2 =⇒* Σ3 | Γ3; A3; F3 | W3
    ----------------------------------------------------------------------------------------------------
    Σ1 | Γ1; A1; F1 | W1 =⇒* Σ3 | Γ3; A3; F3 | W3

Figure 8: Judgments for the transitive closure of focusing.
T-VAR
    x : τ ∈ Γ
    -----------
    Γ ⊢ x : τ

T-UNIT
    ---------------
    Γ ⊢ () : unit

T-ABS
    Γ, f : τ1 → τ2, x : τ1 ⊢ e : τ2
    ---------------------------------------------
    Γ ⊢ fix f (x : τ1) : τ2 = e : τ1 → τ2

T-APP
    Γ ⊢ e1 : τ1 → τ2        Γ ⊢ e2 : τ1
    -------------------------------------
    Γ ⊢ e1 e2 : τ2

T-TUPLE
    ∀i ∈ m, Γ ⊢ ei : τi
    ---------------------------------------
    Γ ⊢ (e1, ..., em) : τ1 ∗ ... ∗ τm

T-PROJ
    Γ ⊢ e : τ1 ∗ ... ∗ τm        1 ≤ k ≤ m
    -----------------------------------------
    Γ ⊢ πk e : τk

T-CTOR
    C : τ → B ∈ Σ        Γ ⊢ e : τ
    ---------------------------------
    Γ ⊢ C e : B

T-MATCH
    Γ ⊢ e : B        ∀i ∈ m, Ci : τi → B ∈ Σ        ∀i ∈ m, Γ, x : τi ⊢ ei : τ
    -------------------------------------------------------------------------------
    Γ ⊢ match e with (Ci x → ei)^{i∈m} : τ

Figure 9: Typing judgments for the simply typed lambda calculus with recursion, algebraic datatypes, and products.
This application now becomes available for use in the synthesis problem on which we are currently
working. In effect, we take a forward step in the context by guessing an application.
Note that, although the new application judgment corresponds more closely to the proof theory, the application judgment in the original type-directed synthesis rules was not incorrect. Without projection, the only possible use for a function application in the context would be to immediately use it to satisfy the synthesis goal. The application judgment therefore simply short-circuited the process of first adding an application to the context and then using it via EGUESS-CTX.
3.5 Properties

Overview
This section describes a number of useful properties of the type-directed synthesis system with
products and focusing. The accompanying proofs and lemmas appear in Appendix A. Note that,
since none of the synthesis or focusing judgments modify the constructor context (Σ), it has been
elided from the following theorems for readability; for our purposes, it is assumed to be a fixed
and globally available entity.
Theorem 2.2: Progress of Focusing

    F1 = ·    OR    Γ1; A1; F1 | W1 =⇒ Γ2; A2; F2 | W2

Whenever the focusing context is non-empty, a focusing judgment can be applied.
S-APP1
    e1 → e1′
    -----------------
    e1 e2 → e1′ e2

S-APP2
    -----------------------------------------------------------------------------------
    (fix f (x : τ1) : τ2 = e1) e2 → e1[x ↦ e2, f ↦ fix f (x : τ1) : τ2 = e1]

S-CTOR
    e → e′
    ---------------
    C e → C e′

S-TUPLE
    ek → ek′
    ---------------------------------------------------
    (v1, ..., ek, ..., en) → (v1, ..., ek′, ..., en)

S-PROJ1
    e → e′
    -----------------
    πk e → πk e′

S-PROJ2
    1 ≤ k ≤ m
    ---------------------------
    πk (e1, ..., em) → ek

S-MATCH1
    e → e′
    ---------------------------------------------------------------------
    match e with (Ci x → ei)^{i∈m} → match e′ with (Ci x → ei)^{i∈m}

S-MATCH2
    1 ≤ k ≤ m
    -------------------------------------------------------
    match Ck v with (Ci x → ei)^{i∈m} → ek[x ↦ v]

Figure 10: Small-step semantics for the simply typed lambda calculus with recursion, algebraic datatypes, and products.
S-CLOSURE-BASE
    -----------
    e →* e

S-CLOSURE-STEP
    e → e′        e′ →* e″
    -------------------------
    e →* e″

Figure 11: Judgments for the transitive closure of the small step semantics.
TYPE-CTX-EMPTY-WF
    ------------
    ⊢ · wf

TYPE-CTX-ONE-WF
    ⊢ Γ wf        ⊢ E : τ
    -------------------------
    ⊢ Γ, E : τ wf

Figure 12: Judgments for the well-formedness of a type or focusing context.
[·]vars = {}
[Γ, x : τ]vars = {x : τ} ∪ [Γ]vars
[Γ, e : τ]vars = [Γ]vars    where e is not a variable x

Figure 13: Rules for the variable extraction relation, which produces the set of all type bindings of variables from a context that might otherwise contain arbitrary elimination forms.
Theorem 2.3: Preservation of Focusing

    IF    ⊢ Γ1 wf, ⊢ A1 wf, ⊢ F1 wf
    AND   Γ1; A1; F1 | W1 =⇒ Γ2; A2; F2 | W2
    THEN  ⊢ Γ2 wf, ⊢ A2 wf, ⊢ F2 wf

The application of a focusing judgment preserves the well-formedness of Γ, A, and F. The judgments for well-formedness are in Figure 12.
Theorem 2.5: Termination of Focusing

    Γ1; A1; F1 | W1 =⇒* Γ2; A2; · | W2

The focusing process always terminates. The judgments for the transitive closure of focusing are in Figure 8.
Theorem 3.1: Focusing is Deterministic

    IF    Γ1; A1; F1 | W1 =⇒* Γ2a; A2a; · | W2
    AND   Γ1; A1; F1 | W1 =⇒* Γ2b; A2b; · | W2
    THEN  Γ2a = Γ2b  AND  A2a = A2b

The focusing process proceeds deterministically.
Theorem 3.2: Order of Focusing

Define the judgments Γ; A; F ⊢ τ ⇝E′ E and Γ; A; F | W ⊢ τ ⇝I′ I to be identical to those in Figure 7 except that F need not be empty for any synthesis rule to be applied.

    Γ; A; F | W ⊢ τ ⇝I I    IFF    Γ; A; F | W ⊢ τ ⇝I′ I
    Γ; A; F ⊢ τ ⇝E E        IFF    Γ; A; F ⊢ τ ⇝E′ E

If we make the proof search process more non-deterministic by permitting arbitrary interleavings of focusing and synthesis, we do not affect the set of expressions that can be generated. This more lenient process makes many of the other theorems simpler to prove.
Admissibility of Focusing

Define the judgments Γ ⊢ τ ⇝E E and Γ | W ⊢ τ ⇝I I to be identical to those in Figure 7 without focusing. Instead of focusing to project on tuples, we introduce the EGUESS-PROJ rule as below:

EGUESS-PROJ
    Γ, E1 : τ1 ∗ ... ∗ τm, π1 E1 : τ1, ..., πm E1 : τm ⊢ τ ⇝E E
    ----------------------------------------------------------------
    Γ, E1 : τ1 ∗ ... ∗ τm ⊢ τ ⇝E E
Theorem 4.1: Soundness of Focusing

    IF    Γ; ·; · | · ⊢ τ ⇝I I    THEN    Γ | · ⊢ τ ⇝I I
    IF    Γ; ·; · ⊢ τ ⇝E E        THEN    Γ ⊢ τ ⇝E E

Any judgment that can be proven in the system with focusing can be proven in the system without focusing. That is, the system with focusing is no more powerful than the system without focusing.
Theorem 4.2: Completeness of Focusing

    IF    Γ | · ⊢ τ ⇝I I    THEN    Γ; ·; · | · ⊢ τ ⇝I I
    IF    Γ ⊢ τ ⇝E E        THEN    Γ; ·; · ⊢ τ ⇝E E

Any judgment that can be proven in the system without focusing can be proven in the system with focusing. That is, the system with focusing is at least as powerful as the system without focusing. Together, Theorems 4.1 and 4.2 demonstrate that the system with focusing is exactly as powerful as the system without focusing.
Note 4.3: Pushing Examples Through Elimination Forms

Observe that we can push examples through the tuple focusing judgments but not the EGUESS-PROJ judgment. The sole distinction between these two judgments is where they may be used in the synthesis process: EGUESS-PROJ must be used during the elimination-guessing phase, while focusing may take place at any point in the synthesis process. This discrepancy demonstrates that it is possible, at least in some cases, to make use of example information when synthesizing elimination forms. Since guessing elimination forms requires raw term enumeration, any constraints on this process would yield significant performance improvements. I leave further investigation of this behavior to future work.
Theorem 5.1: Soundness of Synthesis Judgments

    IF    Γ; A; F | ⟨𝓔i; εi⟩^{i∈n} ⊢ τ ⇝I I
    THEN  [Γ, A, F]vars ⊢ I : τ    AND    ∀i ∈ n, 𝓔i(I) →* εi

    IF    Γ; A; F ⊢ τ ⇝E E
    THEN  [Γ, A, F]vars ⊢ E : τ

The type-directed synthesis system will always produce expressions with the desired type that obey the input-output examples. The typing judgments, small-step semantics, and variable extraction relation are defined in Figures 9, 10, and 13.
Completeness of Synthesis Judgments
Stating a completeness theorem for type-directed synthesis, let alone proving it, remains an open
question. I therefore do not attempt to prove completeness here.
Program           Examples   Time (s)
make_pair         2          0.001
make_triple       2          0.003
make_quadruple    2          0.003
fst               2          0.003
snd               2          0.003
unzip (bool)      7          0.005
unzip (nat)       7          0.006
zip (bool)        28         1.183
zip (nat)         28         2.879

(a) The number of examples and seconds necessary to synthesize various canonical programs that make use of tuples and projection.

Program                 Before (s)   After (s)
bool_band               0.006        0.007
list_append             0.016        0.024
list_fold               0.304        0.183
list_filter             2.943        0.657
list_length             0.003        0.003
list_map                0.001        0.039
list_nth                0.061        0.072
list_sorted_insert      91.427       10.600
list_take               0.197        0.083
nat_add                 0.016        0.011
nat_max                 0.037        0.065
nat_iseven              0.005        0.006
tree_binsert            0.762        0.865
tree_nodes_at_level     1.932        0.812
tree_preorder           0.081        0.076

(b) A comparison of the synthesis performance between the system as described in the original type-directed synthesis paper (Before) and the updated version of the system that includes the changes described in this thesis (After).

Figure 14: Performance evaluation of the prototype implementation.
4 Evaluation
4.1 Implementation Changes
The prototype implementation of the type-directed synthesis system consists of approximately
3,000 lines of OCaml. The structure of the implementation largely mirrors that of the synthesis
judgments, with type and constructor contexts and recursive proof search for a program that satisfies the input-output examples. The implementation deviates only when searching for elimination
forms, during which it constructs terms bottom-up instead of the less-efficient top-down guessing
suggested by the synthesis judgments.
To add tuples, I altered between 1,500 and 2,000 lines of code. Notable changes included:
• Modifying the parser and type-checker to add tuples, projection, and single-argument constructors.
• Entirely rewriting the type context to store elimination forms and perform focusing.
• Updating the synthesis process to conform to the new judgments.
• Adding a unit type and value throughout the system.
• Extending the pattern-matching language to include mechanisms for recursively matching on
the tuples embedded within constructors.
• Refactoring, simplification, modularization, dead code elimination, and other cleanup to prepare the system for future extensions.
4.2 Performance Evaluation
Raw Performance. Figure 14a summarizes the performance of the prototype implementation
on a number of canonical programs that make use of tuples and projection. Programs that did not
require recursive calls, like make_pair and fst, were synthesized instantly. The list unzip
function was likewise generated in a small fraction of a second, but required a relatively large
number of examples. The synthesizer would overfit if any example was discarded. The list zip
function took the longest to synthesize and required far more examples. The reason for this behavior is that zip must handle many more cases than unzip, such as the possibility that the two lists
are of different lengths.
The unzip and zip programs were tested twice: once each with lists comprising boolean and
natural number values. The natural number case predictably took longer, since the synthesizer
likely spent a portion of its time generating programs that manipulate the values stored in the lists
as opposed to the lists themselves. Since there are a finite number of boolean values, the search
space of these faulty programs was much smaller. With polymorphic lists, this behavior could be
eliminated completely.
Performance Regression. The changes made in the process of adding tuples affected the entire
synthesis system. Even programs that do not directly make use of tuples could take longer to
synthesize. All multi-argument constructors, including Cons, now require tuples when created
and projection when destroyed. No-argument constructors, which formerly served as the base
cases of the synthesis process, now store values of unit type. Whenever a variable is added to
the context, it goes through the process of focusing before it can be used. While small, these
modifications add overhead that impacts any program we might attempt to synthesize.
Figure 14b contains a performance comparison of the system as described in the original paper
with the updated version that includes tuples. The running times were generated using the same
computer on the same date; they are not lifted directly from the type-directed synthesis paper [14].
The selected programs are a subset of those tested in the original paper. Overall, it appears that
there has been little tangible change in the performance of the system. Most examples saw at
most slight fluctuations. Several of the longer-running examples, however, experienced dramatic
performance improvements, which might have resulted from changes made to the memoization
behavior of the system when rewriting the type context.
5 Related Work
5.1 Solver-free Synthesis
StreamBit. Among the first modern synthesis techniques was sketching [19], developed by Armando Solar-Lezama, which allows users to specify an incomplete program (a sketch) that is later
filled in by a synthesizer. Although this line of work evolved into a general-purpose, solver-aided
synthesis language (see Section 5.2), the initial prototype was designed for programs that manipulate streams of bits (bitstreaming programs), a notable example of which is a cryptographic cipher.
In the system’s language, StreamBit, users specify composable transformers, splits, and joins on
streams of bits that together form a program. The paper states that StreamBit helped programmers
design optimized ciphers that were several times faster than existing versions written in C.
The process of synthesis in StreamBit involves two steps. First, a user writes a correctness
specification in the form of a program that implements the desired functionality. A user then writes
a sketch that describes some, but not all, of the program to be synthesized. The remaining integer
and boolean holes are filled by the synthesizer (using mathematical transformations and linear
algebra) to create a program whose behavior is equivalent to the specification. The theory behind
this design is that it is easy to write the naive, unoptimized versions of bitstreaming programs
that serve as specifications but difficult to manage the low-level intricacies of optimized variants.
Sketching allows a human to describe the algorithmic insights behind possible optimizations while
leaving the synthesizer to perform the complex task of working out the details.
Sketching also permits a programmer to determine that a particular syntactic template is not a viable solution should the synthesizer fail to generate a program. (The idea of discovering an algorithm in top-down fashion by testing syntactic templates for viability with the aid of a synthesizer or, more broadly, a solver is discussed at length in [4].)
Flash Fill. Designed by Sumit Gulwani, Flash Fill [7] synthesizes string transformations using
input-output examples. The system is featured in Excel 2013, where it generalizes user-completed
data transformations on a small set of examples into a formula that works for the larger spreadsheet.
The synthesizer behind Flash Fill relies on a domain-specific language with constructs for substring
manipulation, regular expression matching, loops, and conditionals.
Flash Fill works by examining each example in turn. It uses a graph to store the set of all
possible transformations that could convert each input into its corresponding output. By taking the
intersection of these graphs among all examples, Flash Fill synthesizes a transformation that works
in the general case. This process relies on a variety of string manipulation algorithms and relations
between transformations rather than a solver.
Flash Fill epitomizes several key synthesis design concepts. Like type-directed synthesis, Flash
Fill derives a program from input-output examples, which are more convenient for programmers
and non-programmers alike, rather than a complete specification. It uses a compact representation, namely a transformation graph, to capture a large number of possible programs in a small amount
of storage. Finally, it synthesizes programs over a specialized, domain-specific language whose
search space is smaller than that of a general-purpose language, reducing the difficulty of the
overall synthesis task.
Escher. The closest relative to type-directed synthesis is Gulwani’s general-purpose synthesis
system, Escher [2]. Like type-directed synthesis, Escher aims to generate general-purpose, recursive programs from input-output examples, although Escher assumes the existence of an oracle
that can be queried when the synthesizer needs additional guidance. Gulwani notes that Escher’s
synthesis process is “generic” in that it works over an arbitrary domain of instructions, although
he only demonstrates the system on a language with algebraic datatypes.
In contrast to type-directed synthesis, Escher uses bottom-up forward search, generating programs of increasing size out of instructions, recursive calls to the function being synthesized, input
variables, and other language atoms. Escher maintains a goal graph, which initially comprises a
single node that stores the set of outputs a synthesized program would need to produce to satisfy
the examples and solve the synthesis problem. When a generated program p satisfies some (but not
all) of the outputs in a goal node, Escher creates a conditional that partitions the program’s control
flow, directing the satisfied cases to p. Doing so creates two new goal nodes: one for the conditional's guard and the other to handle the cases that p failed to satisfy. When no more unsatisfied
goal nodes remain, Escher has synthesized a program.
5.2 Solver-aided Synthesis
Syntax-guided Synthesis. Published in 2013 and co-authored by many leading researchers in
solver-aided synthesis (including Rastislav Bodik, Sanjit Seshia, Rishabh Singh, Armando Solar-Lezama, Emina Torlak, and others), Syntax-Guided Synthesis (SyGuS) [3] represents an attempt
to codify a “unified theory” of synthesis. The authors seek to create a standardized input format
for synthesis queries, similar to those developed for SMT and SAT solvers. Doing so for SMT and
SAT solvers has led to shared suites of benchmarks, objective performance comparisons between
solvers, and annual solver competitions, an effort credited with encouraging researchers to advance
the state of the art. More importantly, a common interface means that higher-level applications
can call off-the-shelf solvers, abstracting away solvers as subroutines. When solver performance
improves or better solvers are developed, applications instantly benefit. The SyGuS project seeks
to create a similar environment for synthesizers [1].
The common synthesis format encodes three pieces of information: a background theory in
which the synthesis problem will be expressed, a correctness specification dictating the requirements of an acceptable solution, and the grammar from which candidate programs may be drawn.
The paper includes a thorough taxonomy of modern synthesis techniques. It distinguishes
between deductive synthesis, in which a synthesizer creates a program by proving its logical
specification, and inductive synthesis, which derives a program from input-output examples. It
also describes a popular inductive synthesis algorithm, counterexample-guided inductive synthesis
(CEGIS). In CEGIS, a solver repeatedly generates programs that it attempts to verify against a
specification. Whenever verification fails, the synthesis procedure can use the verifier’s counterexample as an additional constraint in the program-generation procedure. If no new program can
be constructed, synthesis fails; if no counterexample can be found, synthesis succeeds. In practice, most synthesis problems require only a few CEGIS steps to generate the counterexamples
necessary to discover a solution.
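As a rough illustration of the CEGIS loop just described, here is a minimal OCaml sketch; generate and verify stand in for the solver-backed candidate generator and verifier, and are assumptions rather than any particular system's API.

type 'prog outcome = Synthesized of 'prog | Infeasible

(* generate proposes a candidate consistent with all counterexamples seen so
   far (or None if no such program exists); verify returns a counterexample
   whenever the candidate violates the specification (or None on success). *)
let rec cegis generate verify counterexamples =
  match generate counterexamples with
  | None -> Infeasible
  | Some candidate ->
      (match verify candidate with
       | None -> Synthesized candidate
       | Some cex -> cegis generate verify (cex :: counterexamples))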
Finally, the authors present an initial set of benchmarks in the common synthesis format and a
performance comparison of existing synthesis systems.
In the sections that follow, I detail several lines of research that fit into the framework of solver-aided syntax-guided synthesis.
Rosette. Rosette, first presented in [22] by Emina Torlak, is a framework for automatically enabling solver-aided queries in domain-specific languages (DSLs). Rosette compiles a subset of
Racket into constraints (although the full language can be reduced to this subset), thereby enabling
the same for any DSL embedded in Racket. These constraints are then supplied to a solver, which
can facilitate fault-localization, verification, angelic non-determinism, and synthesis. To allow
DSLs to interact with the solver directly, Rosette includes first-class symbolic constants, which
represent holes whose values are constrained by assertions and determined by calls to the solver.
Rosette’s synthesis procedure fills a hole in a partially-complete program given a bounded-depth
grammar (a “sketch”) from which solutions are drawn. Correctness specifications come in the form
of assertions or another program that implements the desired functionality. Rosette compiles the
grammar into a nondeterministic program capturing every possible AST that the grammar might
produce. Nondeterministic choices are compiled as symbolic values. The solver determines the
nondeterministic choices necessary to fill the hole in a way that conforms to the specification,
thereby selecting an appropriate AST and solving the synthesis problem.
In [23], Torlak details the symbolic compiler underlying Rosette. The paper includes a textbook-quality discussion of two possible techniques for compiling Racket into constraints: symbolic execution and bounded model checking. In symbolic execution, the compiler must individually explore every possible code path of the program, branching on loops and conditionals. This approach
concretely executes as much of the program as possible but produces an exponential-sized output.
In contrast, bounded model checking joins program states back together after branching, creating
new symbolic φ -values representing values merged from different states. The resulting formula is
smaller but contains fewer concrete values. Both approaches require bounded-depth loop unrolling
and function inlining to ensure termination of the compilation procedure.
Torlak adopts a hybrid strategy that draws on the strengths of both techniques: Rosette uses
bounded model checking but, when merging, first joins values like lists structurally rather than
symbolically, preserving opportunities for concrete evaluation. The compiler assumes that the
caller has implemented some form of termination guarantee, such as structural recursion on an
inductive datastructure or an explicit limit on recursion depth.
Kaplan, Leon, and related synthesis work. Viktor Kuncak’s Kaplan [10] explores the design
space of languages with first-class constraints, symbolic values, and the ability to make calls to a
solver at run-time in a manner similar to Rosette, adding these features to a pure subset of Scala.
Kaplan’s constraints are functions from input parameters to a boolean expression. Calling solve on
a constraint generates concrete values for the input parameters that cause the boolean expression to
be true. Kaplan also includes mechanisms for iterating over all solutions to a constraint, choosing
a particular solution given optimality criteria, or lazily extracting symbolic rather than concrete
results. Since constraints are simply functions, they can be composed or combined arbitrarily.
Kaplan introduces a new control-flow construct: an assuming...otherwise block. This structure functions like an if-statement where the guard is a constraint rather than a boolean expression.
If the constraint has a solution, the code in the assuming branch is executed; if it is unsatisfiable,
the otherwise block executes instead.
Kuncak has also explored the opposite approach, performing synthesis entirely at compile-time
rather than run-time. Invoking a solver at run-time, he argues, is unpredictable and unwieldy.
Ideally, synthesis should function as a compiler service that provides immediate, static feedback to
developers and always succeeds when a synthesis problem is feasible. In [11], Kuncak attempts to
turn decision procedures for various theories (e.g., linear arithmetic and sets) into corresponding,
complete synthesis procedures. As in Kaplan, users specify constraints, but the synthesis engine
derives a program that statically produces values for the constraint’s inputs instead of invoking a
solver at run-time.
Combining the lessons of these two systems, Kuncak developed a synthesis engine for creating
general-purpose, recursive programs [8]. The synthesizer uses a hybrid algorithm with deductive
and inductive steps. The synthesis problem is expressed as a logical specification to which a
program must conform rather than individual input-output examples.
In the deductive step, the synthesizer applies various rules for converting a specification into a
program in a fashion similar to type-directed synthesis. Special rules are generated to facilitate
recursively traversing each available inductive datatype.
At the leaves of this deductive process, the synthesizer uses an inductive CEGIS step to choose
candidate expressions. Like Rosette, it encodes bounded-depth ASTs that generate terms of a particular type as nondeterministic programs. By querying a solver for the nondeterministic choices
that satisfy the specification, the synthesizer derives a candidate expression. The synthesizer then
attempts to verify this expression. If verification fails, it receives a new counterexample that it
integrates into the expression-selection process. The synthesizer increases the AST depth until
some maximum value, at which point it determines that the synthesis problem is infeasible. As the
program size grows, verification becomes expensive, so the synthesizer uses concrete execution to
eliminate candidate programs more quickly.
One notable innovation behind this system is that of abductive reasoning, where conditionals
that partition the problem space are synthesized, creating more restrictive path conditions that
make it easier to find candidate solutions. This method is similar to that in Escher [2], which
creates a conditional whenever it synthesizes a program that partially satisfies the input-output
examples.
Sketch. The synthesis technique of sketching (introduced in Section 5.1 with the StreamBit language) eventually culminated in a C-like, general-purpose programming language, called Sketch,
which synthesizes values for integer and boolean holes. Although Sketch’s holes are limited to
these particular types, users can creatively place holes in conditional statements to force the synthesizer to choose between alternative code blocks in a manner similar to Rosette. Like StreamBit,
a Sketch specification is a program that implements the desired functionality.
Sketch was first introduced by Solar-Lezama in [20] and continues to see active development [12]. In [20], Solar-Lezama describes both the core Sketch language and the underlying,
solver-aided synthesis procedure, CEGIS. Given a specification program s and a partial program
p with input values ~x and holes ~h, Sketch’s synthesis problem may be framed as the question of
whether ∃ ~h, ∀ ~x, p(~h, ~x) = s(~x). Solving this quantified boolean formula is Σ2-complete (i.e., at
least as hard as NP-complete), making it too difficult to handle directly.
Instead, Solar-Lezama presents a two-step process for synthesizing a solution. First, a solver
guesses values ~v for ~h. A verifier then checks whether ∀ ~x, p(~v,~x) = s(~x). If the verification step
succeeds, then ~v is a solution to the synthesis problem; otherwise, the verifier returns a counterexample input ~c1. The solver then guesses new values ~v for ~h by integrating ~c1 into the formula: ∃ ~h, p(~h, ~c1) = s(~c1). If no values for ~h can be found, synthesis fails; otherwise, we return to the
verification step and the CEGIS loop continues. In general, it takes no more than a few examples
to pinpoint values for ~h.
The Sketch language presented in the original paper includes several macro-like operators for
managing holes at a higher level. There are loop statements that repeat a hole for a particular number of iterations and generators that abstract holes into recursive hole-generating functions. Both
structures are evaluated and inlined at compile-time. Later versions of Sketch introduce a regular expression-like syntax for describing AST grammars and assertions in place of a specification
program [13]. Additional papers add features like concurrency [21]. Solar-Lezama’s dissertation
collects these ideas in a single document [13].
Autograder. In an exemplary instance of using synthesis as a service, Rishabh Singh built on
Sketch to create an autograder for introductory computer science assignments [18]. The autograder
relies on two premises: (1) a solution program is available and (2) common errors are known. A
user specifies rewrite rules consisting of possible corrections to a student program.
The autograder compiles student programs into Sketch. At each location where a rewrite rule
might apply, the autograder inserts a conditional statement guarded with a hole that chooses
whether to leave the previous statement in place or take the result of a rewrite rule. When Sketch is
run on the student program with the solution as its specification, the values for the holes determine
the corrections necessary to fix the student program. Singh had to modify Sketch’s CEGIS algorithm to also consider an optimization constraint: the holes should be filled such that the smallest
number of changes are made to the original program.
5.3 Theorem Proving
Frank Pfenning’s lecture notes for his course Automated Theorem Proving [15] are a textbook-quality resource on techniques for efficient proof search in constructive logic. Building off of an introduction to natural deduction and its more wieldy analog, the sequent calculus, Pfenning describes methods for performing forward and backward search over first-order logic. Among the strategies presented are focusing, in which one should eagerly apply proof steps whose effects are
invertible, and the subformula property, which states that, in forward search, the only necessary
proof steps are those that produce a subformula of the goal statement. The concepts portrayed in
Pfenning’s text, together with Gentzen’s classic treatment of the sequent calculus [6], represent the
theoretical underpinnings of many of the ideas contained in this thesis.
6 Future Work
6.1 Additional Language Features
Records and Polymorphism. Although the original type-directed synthesis system included
only the simply-typed lambda calculus with small additions, this representation already captured a
useful portion of a pure subset of ML. This thesis adds a number of missing components, including
tuples and the unit type. The most prominent absent features are records and polymorphism.
Records should generally resemble tuples; however, their full ML specification would require
adding subtyping to the synthesis system. The primary dividends of integrating records would be
to explore synthesis in the presence of subtyping and to help transform the type-directed synthesis
implementation from a research prototype into a fully-featured programming tool.
Polymorphism presents a wider array of challenges and benefits. System F-like parametric
polymorphism would require significant theoretical, syntactic, and implementation changes to the
synthesis system. Adding the feature would, however, bring the synthesis language far closer to
ML and dramatically improve performance in some cases.
Consider, for example, the process of synthesizing map. If the synthesizer knows that map will
always be applied to a list of natural numbers, it is free to use the S constructor or to match on an
element of the list. If, instead, map were to work on a polymorphic list about whose type we know
nothing, then the synthesizer will have far fewer choices and will find the desired program faster.
This behavior is a virtuous manifestation of Reynolds’ abstraction theorem [17].
Solver-Aided Types. The existing synthesis system works only on algebraic datatypes. A logical
next step would be to integrate true, 32-bit integers and 64-bit floating point numbers. Doing so
within the existing framework, however, would be impossibly inefficient. We could represent
integers as an algebraic datatype with 2^32 base cases, but the synthesizer would time out while
guessing every possible number.
Instead, we might integrate with a solver equipped with the theory of bitvectors. These tools
have proven to be quite efficient in other, first-order synthesis systems [13]. By combining the
higher-order power of type-directed synthesis with the performance of modern SMT-solvers, we
could leverage the best of both approaches while bringing the system’s language closer to full ML.
Monads and Linear Types. Type-directed synthesis is far more extensible than most other synthesis frameworks: in order to add a new language feature, we need only consult the corresponding
system of logic. A number of desirable features map to well-understood logical frameworks. For
example, monads are captured by lax logic and resources by linear logic. By integrating these
systems of logic into our synthesis judgments, we could generate effectful programs that raise exceptions or read from and write to files. These operations are at the core of many “real” programs
but remain beyond the capabilities of any existing synthesis system.
6.2 Refinements and Intersections
In a general sense, types represent (possibly-infinite) sets of expressions. Although we are used to
the everyday notions of integers, floating-point numbers, and boolean values, we could also craft
types that contain a single item or even no elements at all. This is the insight behind refinement
types [5], which create new, more restrictive subtypes out of existing type definitions. For example, we might create the type Nil as a subtype of list that contains only the empty list and
Cons(nat, list) that captures all non-empty lists. Refinements may be understood as simple
predicates on types: stating that x has refinement 2 is equivalent to the type {x ∈ nat | x = 2}.
The entire example grammar ε from the type-directed synthesis system comprises a language
of refinements on the type system τ. In order to fully express partial functions as refinements, we
need one additional operator: intersection. The refinement ∧(ε1 , ε2 ) requires that an expression
satisfy both ε1 and ε2. For example, the inc function should obey both 0 → 1 and 1 → 2.
Partial functions are simply intersections of arrow refinements.
There are two benefits to characterizing examples as types. The first is that, by lifting examples
into the type system, we are able to fully exploit the Curry-Howard isomorphism and decades of
research into automated theorem proving. Rather than forging ahead into unknown territory with
examples, we are treading familiar ground with an especially nuanced type system. The second is
that we can freely draw on more complex operators from refinement type research to enrich our
example language. For instance, we might add base types as refinements so users can express the
set of all non-empty lists as discussed before, or integrate union types to complement intersections.
Still richer refinements are possible. One intriguing avenue is that of negation, a refinement that
functions as the set complement operator over types. Negation allows us to capture the concept of
negative examples, for example not(0 → 1), or to express to the synthesizer that a particular property of a program does not hold. Adding negation to the type-directed synthesis system enables
new algorithms for generating programs, including a type-directed variant of CEGIS. Negation
represents one of many possible specification language enhancements that refinements make possible. We might consider adding quantifiers over specifications or even dependent refinements.
7 Conclusions
Type-directed synthesis offers a new approach to an old, popular, and challenging problem. Where
most existing systems rely on solvers to generate programs from specifications, type-directed synthesis applies techniques from automated theorem proving. At the time of writing, however, type-directed synthesis extends only to a minimal language: the simply-typed lambda calculus with
recursive functions and algebraic datatypes.
In this thesis, I have described extending type-directed synthesis to include product types, which
required rethinking both the underlying theory and the prototype implementation. Products simplify many of the existing rules but invite a host of other theoretical challenges. I leveraged the fact
that they are invertible to speed up the proof search process with focusing, which in turn required
new judgment forms and additional proofs about the behavior of the updated rules.
The prototype implementation now efficiently synthesizes tuples and projections, validating the
new theory. More importantly, it concretely demonstrates a key advantage of type-directed synthesis: extending the algorithm with new syntax is as simple as integrating the corresponding systems
of logic. Although products represent a seemingly small addition, they pave the way for the synthesis of other, more powerful language features.
8 Acknowledgements
I would like to thank:
My adviser, Professor David Walker, who has spent the past two years teaching me nearly
everything I know about programming languages and research. I am grateful beyond measure for
his patient guidance, generosity with his time, and willingness to share his expertise.
Professor Steve Zdancewic and Peter-Michael Osera at the University of Pennsylvania, who
kindly shared their research project and allowed me to run rampant in their codebase. Their
advice, insight, and patience made this thesis possible.
My parents and sister, who endured the long days and late nights that it took to bring this research
to fruition and kindly read countless drafts of this seemingly incomprehensible paper.
A Proofs
A.1 Notes
In many of the judgments below, the constructor context (Σ) has been elided for the sake of readability. Since
none of the synthesis or focusing judgments modify the constructor context, it is assumed to be a fixed and
globally available entity.
A.2 Behavior of Focusing
Lemma 2.1: Properties of Focusing
IF Σ1 | Γ1; A1; F1 | W1 =⇒ Σ2 | Γ2; A2; F2 | W2 THEN Σ1 = Σ2, Γ1 ⊆ Γ2, A1 ⊆ A2
Proof: Immediately follows from the focusing judgments.
Theorem 2.2: Progress of Focusing
F1 = · OR Γ1; A1; F1 | W1 =⇒ Γ2; A2; F2 | W2
Proof: By case analysis on F1 .
Case: F1 = ·.
Case: F1 = F, E : τ. Proof by case analysis on τ.
Subcase: τ = unit.              Apply FOCUS-UNIT.
Subcase: τ = B.                 Apply FOCUS-BASE.
Subcase: τ = τ1 → τ2.           Apply FOCUS-FUN.
Subcase: τ = τ1 ∗ ... ∗ τn.     Apply FOCUS-TUPLE.
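The case split above can be read operationally. The following Python sketch is hypothetical (the data layout and names are assumptions, not the thesis's implementation): it dispatches on the type at the head of the focus list F, moving atomic and non-invertible bindings out of F and decomposing a tuple into its projections, which this sketch re-queues for further focusing (cf. Lemma 2.4).

    # Hypothetical sketch of one focusing step (FOCUS-UNIT/BASE/FUN/TUPLE).
    # gamma, a, f are lists of (expr, type) pairs; a type is a tagged tuple.
    def focus_step(gamma, a, f):
        (e, tau), rest = f[0], f[1:]
        kind = tau[0]                      # 'unit' | 'base' | 'fun' | 'tuple'
        if kind in ('unit', 'base', 'fun'):
            # Non-invertible or atomic forms: move the binding out of F.
            return gamma + [(e, tau)], a, rest
        if kind == 'tuple':
            # Invertible form: replace E : t1 * ... * tn by its projections;
            # whether they land in A or back in F is a presentation choice,
            # this sketch re-queues them so each is focused in turn.
            components = tau[1]
            projections = [(('proj', i, e), ti) for i, ti in enumerate(components, 1)]
            return gamma, a, projections + rest
        raise ValueError('unknown type constructor: ' + str(kind))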
Theorem 2.3: Preservation of Focusing
IF ⊢ Γ1 wf, ⊢ A1 wf, ⊢ F1 wf AND Γ1; A1; F1 | W1 =⇒ Γ2; A2; F2 | W2 THEN ⊢ Γ2 wf, ⊢ A2 wf, ⊢ F2 wf
Proof: By case analysis on the judgment Γ1 ; A1 ; F1 | W1 =⇒ Γ2 ; A2 ; F2 | W2 .
Case: FOCUS-UNIT.
(1) F1 = F2, E : unit                  By FOCUS-UNIT
(2) Γ2 = Γ1, E : unit                  By FOCUS-UNIT
(3) A1 = A2                            By FOCUS-UNIT
(4) ⊢ F1 wf                            Assumption
(5) ⊢ A1 wf                            Assumption
(6) ⊢ Γ1 wf                            Assumption
(7) ⊢ F2 wf                            (4) and inversion on TYPE-CTX-ONE-WF
(8) ⊢ E : unit                         (4) and inversion on TYPE-CTX-ONE-WF
(9) ⊢ A2 wf                            (3) and (5)
(10) ⊢ Γ2 wf                           (6), (9), TYPE-CTX-ONE-WF
Case: FOCUS-BASE, FOCUS-FUN. Similar.
Case: FOCUS-TUPLE.
(1) F1 = F2, E : τ1 ∗ ... ∗ τm                       By FOCUS-TUPLE
(2) Γ2 = Γ1                                          By FOCUS-TUPLE
(3) A1 = A2, π1 E : τ1, ..., πn E : τn               By FOCUS-TUPLE
(4) ⊢ F1 wf                                          Assumption
(5) ⊢ A1 wf                                          Assumption
(6) ⊢ Γ1 wf                                          Assumption
(7) ⊢ F2 wf                                          (4) and inversion on TYPE-CTX-ONE-WF
(8) ⊢ E : τ1 ∗ ... ∗ τm                              (4) and inversion on TYPE-CTX-ONE-WF
(9) ∀i ∈ n, ⊢ πi E : τi                              (8) and T-PROJ
(10) ⊢ A2 wf                                         (5), (9), and TYPE-CTX-ONE-WF
(11) ⊢ Γ2 wf                                         (2) and (6)
Lemma 2.4: Potential of Focusing
Let φ(τ) be the size of the tree comprising the type τ.
Let Φ(F) = Σ_{Ei : τi ∈ F} φ(τi).
IF Γ1; A1; F1 | W1 =⇒ Γ2; A2; F2 | W2 THEN Φ(F2) < Φ(F1)
Proof: By case analysis on the judgment Γ1 ; A1 ; F1 | W1 =⇒ Γ2 ; A2 ; F2 | W2 .
Case: FOCUS-UNIT. F2 ⊂ F1, so Φ(F2) < Φ(F1).
Case: FOCUS-BASE, FOCUS-FUN. Similar.
Case: FOCUS-TUPLE. Removes E : τ1 ∗ ... ∗ τm from F1 and adds π1 E : τ1, ..., πn E : τn to F1. Since Σ_{i∈n} φ(τi) < φ(τ1 ∗ ... ∗ τm), it follows that Φ(F2) < Φ(F1).
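A minimal sketch of this termination measure (the encoding of types as tagged tuples is assumed, not taken from the thesis's code):

    # Sketch of the potential function from Lemma 2.4.
    def phi(tau):
        """Size of the tree comprising the type tau."""
        kind = tau[0]
        if kind in ('unit', 'base'):
            return 1
        if kind == 'fun':
            return 1 + phi(tau[1]) + phi(tau[2])
        if kind == 'tuple':
            return 1 + sum(phi(t) for t in tau[1])
        raise ValueError(kind)

    def potential(f):
        """Phi(F): total size of the types of all bindings in the focus list F."""
        return sum(phi(tau) for _, tau in f)

    # A tuple's components are strictly smaller than the tuple itself,
    # so each FOCUS-TUPLE step strictly decreases the potential:
    pair = ('tuple', [('base',), ('fun', ('base',), ('unit',))])
    assert sum(phi(t) for t in pair[1]) < phi(pair)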
Theorem 2.5: Termination of Focusing
Γ1 ; A1 ; F1 | W1 =⇒∗ Γ2 ; A2 ; · | W2
Proof: By the potential method and case analysis on F1.
Case: F1 = ·. Apply FOCUS-CLOSURE-BASE.
Case: F1 = F, E : τ. By Theorem 2.2, we can apply a focusing judgment to any non-empty F. Therefore,
apply FOCUS-CLOSURE-STEP. By Lemma 2.4, Φ(F) decreases strictly at each such step. Φ(·) = 0,
so this process eventually terminates with F1 = ·.
A.3 Order of Focusing in the Proof Search Process
Theorem 3.1: Focusing is Deterministic
IF Γ1; A1; F1 | W1 =⇒∗ Γ2a; A2a; · | W2 AND Γ1; A1; F1 | W1 =⇒∗ Γ2b; A2b; · | W2 THEN Γ2a = Γ2b AND A2a = A2b
Proof: Immediately follows from the focusing judgments.
Theorem 3.2: Focusing Need Not Occur Before Synthesis
DEFINE THE JUDGMENTS Γ; A; F ⊢ τ ⇝ E′ AND Γ; A; F | W ⊢ τ ⇝ I′ TO BE IDENTICAL TO
THOSE IN FIGURE 7 EXCEPT THAT F NEED NOT BE EMPTY FOR ANY SYNTHESIS RULE TO BE APPLIED.
THEN Γ; A; F ⊢ τ ⇝ E IFF Γ; A; F ⊢ τ ⇝ E′, AND Γ; A; F | W ⊢ τ ⇝ I IFF Γ; A; F | W ⊢ τ ⇝ I′.
Proof: Rather than focusing eagerly, we merely focus lazily as focused terms become necessary. Since focusing is deterministic (Theorem 3.1), the time at which we perform focusing does not affect the expressions
we can synthesize.
A.4 Admissibility of Focusing
DEFINE THE JUDGMENTS Γ ⊢ τ ⇝ E AND Γ | W ⊢ τ ⇝ I TO BE IDENTICAL TO THOSE IN
FIGURE 7 WITHOUT FOCUSING. INSTEAD OF FOCUSING TO PROJECT ON TUPLES, WE INTRODUCE THE
EGUESS-PROJ RULE AS BELOW:

    Γ, E1 : τ1 ∗ ... ∗ τm, π1 E1 : τ1, ..., πm E1 : τm ⊢ τ ⇝ E
    ------------------------------------------------------------ EGUESS-PROJ
    Γ, E1 : τ1 ∗ ... ∗ τm ⊢ τ ⇝ E
Theorem 4.1: Soundness of Focusing
IF Γ; ·; · | · ⊢ τ ⇝ I THEN Γ | · ⊢ τ ⇝ I
IF Γ; ·; · ⊢ τ ⇝ E THEN Γ ⊢ τ ⇝ E
Proof: We can transform any derivation of a judgment in the system with focusing into the system without
focusing.
Where elimination forms of unit, base, or function type are placed in F and subsequently moved into Γ using
IREFINE-FOCUS and EGUESS-FOCUS, simply remove the focusing judgments and move the elimination
forms directly into Γ.
Where elimination forms of tuple type are placed in F and projected into their non-tuple constituents
using IREFINE-FOCUS and EGUESS-FOCUS, insert uses of the EGUESS-PROJ judgment that perform the
same projections.
Theorem 4.2: Completeness of Focusing
IF Γ | · ⊢ τ ⇝ I THEN Γ; ·; · | · ⊢ τ ⇝ I
IF Γ ⊢ τ ⇝ E THEN Γ; ·; · ⊢ τ ⇝ E
Proof: We can transform any derivation of a judgment in the system without focusing into the system with
focusing.
Whenever an elimination form is inserted into Γ, instead insert it into F and perform focusing.
Elimination forms of unit, base, or function type may be used exactly as before aside from additional bookkeeping to move them from F to Γ.
Even in the system without focusing, tuples must still be fully projected before they may be used as a result
of the restriction that we can only synthesize programs that are eta-long. EGUESS-CTX only moves terms
from the context to goal position at base type. No elimination form judgment other than EGUESS-PROJ
may make use of tuples. Therefore, any tuple placed into the context in the system without focusing is either
unused or fully projected. Focusing merely performs this projection eagerly.
A.5 Soundness of Synthesis Judgments
Theorem 5.1: Soundness of Synthesis Judgments
IF Γ; A; F | ⟨E; ε⟩^n ⊢ τ ⇝ I THEN [Γ, A, F]vars ⊢ I : τ AND ∀i ∈ n, Ei(I) →∗ εi
IF Γ; A; F ⊢ τ ⇝ E THEN [Γ, A, F]vars ⊢ E : τ
Proof: By simultaneous induction on the IREFINE and EGUESS judgments.
Case EGUESS-CTX.
(1) E : τ ∈ Γ                                              Inversion
Case EGUESS-APP.
(1) Γ, E1 : τ1 → τ2; A; E1 I1 : τ2 ⊢ τ ⇝ E                 Inversion
(2) [Γ, E1 : τ1 → τ2, A, E1 I1 : τ2]vars ⊢ e : τ            Inductive Hypothesis
(3) [Γ, E1 : τ1 → τ2, A]vars ⊢ e : τ                        Vars relation
Case EGUESS-FOCUS.
(1) [Γ1, A1, F1]vars = [Γ2, A2, F2]vars                     Immediate from focusing judgments
(2) Γ2; A2; F2 ⊢ τ ⇝ E                                      Inversion
(3) [Γ2, A2, F2]vars ⊢ E : τ                                Inductive Hypothesis
(4) [Γ1, A1, F1]vars ⊢ E : τ                                (1) and (3)
Case IREFINE-FOCUS.
(1) [Γ1, A1, F1]vars = [Γ2, A2, F2]vars                     Immediate from focusing judgments
(2) Γ2; A2; F2 | ⟨E; ε⟩^n ⊢ τ ⇝ I                           Inversion
(3) [Γ2, A2, F2]vars ⊢ E : τ                                Inductive Hypothesis
(4) [Γ1, A1, F1]vars ⊢ E : τ                                (1) and (3)
(5) ∀i ∈ n, Ei(I) →∗ εi                                     Inversion
Case IREFINE-GUESS.
(1) Γ; A; · ⊢ B ⇝ E                                         Inversion
(2) [Γ, A, F]vars ⊢ E : B                                   Inductive Hypothesis
(3) ∀i ∈ n, Ei(E) →∗ εi                                     Inversion
Case IREFINE-UNIT.
(1) [Γ, A, F]vars ⊢ () : unit                               T-UNIT
(2) ∀i ∈ n, Ei(()) →∗ ()                                    S-CLOSURE-BASE
Case IREFINE-CTOR.
(1) C : τ → B ∈ Σ                                           Inversion
(2) Γ; A; · | ⟨Ei; Ii⟩^{i∈n} ⊢ τ ⇝ I                        Inversion
(3) [Γ, A, F]vars ⊢ I : τ                                   Inductive Hypothesis
(4) [Γ, A, F]vars ⊢ C I : B                                 T-CTOR, (1), (3)
(5) ∀i ∈ n, Ei(I) →∗ Ii                                     Inductive Hypothesis
(6) ∀i ∈ n, Ei(C I) →∗ C Ii                                 S-CTOR, (5)
Case IREFINE-TUPLE.
(1) ∀i ∈ m, Γ; A; · | ⟨Ej; I(i,j)⟩^{j∈n} ⊢ τi ⇝ Ii          Inversion
(2) ∀i ∈ m, [Γ, A, F]vars ⊢ Ii : τi                          Inductive Hypothesis
(3) [Γ, A, F]vars ⊢ (I1, ..., Im) : τ1 ∗ ... ∗ τm            T-TUPLE, (2)
(4) ∀(i, j) ∈ (m, n), Ej(Ii) →∗ I(i,j)                       Inductive Hypothesis
(5) ∀j ∈ n, Ej((I1, ..., Im)) →∗ (I(1,j), ..., I(m,j))       S-TUPLE, (4)
Case IREFINE-MATCH.
(1) Γ; A; · ⊢ B ⇝ E                                                          Inversion
(2) ∀j ∈ m, Cj : τj → B ∈ Σ                                                  Inversion
(3) ∀j ∈ m, Σ | Γ; A; x : τj | {⟨Ei, x . v; εi⟩ | Ei(E) →∗ Cj v} ⊢ τ ⇝ Ij    Inversion
(4) [Γ, A, F]vars ⊢ E : B                                                     Inductive Hypothesis, (1)
(5) ∀j ∈ m, [Γ, A, F, x : τj]vars ⊢ Ij : τ                                    Inductive Hypothesis, (2)
(6) [Γ, A, F]vars ⊢ match E with Cj x → Ij ^{j∈m} : τ                         T-MATCH, (2), (4), (5)
(7) ∀i ∈ n, Ei(E) →∗ Cj v                                                     Inversion
(8) ∀i ∈ n, IF Ei(E) →∗ Cj v THEN Ei, x . v(Ij) →∗ εi                         Inductive Hypothesis, (3)
(9) ∀i ∈ n, Ei(match E with Cj x → Ij ^{j∈m}) →∗ εi                           S-MATCH1, S-MATCH2, (7), (8)
Case IREFINE-FIX.
(1) Γ; A; f : τ1 → τ2, x : τ1 | ⟨Ei, f . v(i,j) ⇒ ε(i,j) ^{j∈mi}, x . v(i,k); ε(i,k)⟩^{(i,k)∈(n,mi)} ⊢ τ2 ⇝ I    Inversion
(2) [Γ, A, F, f : τ1 → τ2, x : τ1]vars ⊢ I : τ2                               Inductive Hypothesis
(3) [Γ, A, F]vars ⊢ fix f (x : τ1) : τ2 = I : τ1 → τ2                         T-ABS, (2)
(4) ∀(i, k) ∈ (n, mi), Ei, f . v(i,j) ⇒ ε(i,j) ^{j∈mi}, x . v(i,k) (I) →∗ ε(i,k)    Inductive Hypothesis
(5) ∀i ∈ n, Ei(fix f (x : τ1) : τ2 = I) →∗ v(i,j) ⇒ ε(i,j) ^{j∈mi}            S-CLOSURE-BASE, (4)
Note that we assume that the fixpoint and the partial function are equal to one another since they have the
same behavior on all inputs for which the partial function is defined.
Complex Block Floating-Point Format with Box Encoding For
Wordlength Reduction in Communication Systems
Yeong Foong Choo∗ , Brian L. Evans∗ and Alan Gatherer†
arXiv:1705.05217v2 [] 25 Oct 2017
∗ Wireless
Networking and Communications Group, The University of Texas at Austin, Austin, TX USA
† Wireless Access Laboratory, Huawei Technologies, Plano, TX USA
∗ [email protected], [email protected] † [email protected]
Abstract—We propose a new complex block floating-point
format to reduce implementation complexity. The new format
achieves wordlength reduction by sharing an exponent across
the block of samples, and uses box encoding for the shared
exponent to reduce quantization error. Arithmetic operations are
performed on blocks of samples at a time, which can also reduce
implementation complexity. For a case study of a baseband
quadrature amplitude modulation (QAM) transmitter and receiver, we quantify the tradeoffs in signal quality vs. implementation complexity using the new approach to represent IQ samples.
Signal quality is measured using error vector magnitude (EVM)
in the receiver, and implementation complexity is measured in
terms of arithmetic complexity as well as memory allocation and
memory input/output rates. The primary contributions of this
paper are (1) a complex block floating-point format with box
encoding of the shared exponent to reduce quantization error,
(2) arithmetic operations using the new complex block floatingpoint format, and (3) a QAM transceiver case study to quantify
signal quality vs. implementation complexity tradeoffs using the
new format and arithmetic operations.
Index Terms—Complex block floating-point, discrete-time
baseband QAM.
I. INTRODUCTION
Energy-efficient data representation in application-specific
baseband transceiver hardware is in demand because of the
energy costs involved in baseband signal processing [1]. In
macrocell base stations, about ten percent of the energy cost
goes to digital signal processing (DSP) modules,
while power amplification and cooling processes consume
more than 70% of total energy [2].
by DSP modules relative to power amplification and cooling
will increase in future designs of small cell systems because
low-powered cellular radio access nodes handle a shorter radio
range [2]. The design of energy-efficient number representation will reduce overall energy consumption in base stations.
In related work, baseband signal compression techniques
have been studied for both uplink and downlink. The methods in [3], [4], and [5] suggest resampling baseband signals
to Nyquist rate, block scaling, and non-linear quantization.
All three papers report transport data rate gain of 3x to 5x
with less than 2% EVM loss. In [5], cyclic prefix replacement
technique is used to counter the effect of resampling, which
would add processing overhead to the system. In [4] and
[6], noise shaping technique shows improvement of in-band
signal-to-noise ratio (SNR). In [7], transform coding technique
is suggested for block compression of baseband signals in
Fig. 1.
32-bit equivalent SIMD ALU in Exponent Box Encoding format
the settings of multiple users and multi-antenna base station.
Transform coding technique reports potential of 8x transport
data rate gain with less than 3% EVM loss. The above methods
achieve end-to-end compression in a transport link and incur
delay and energy cost for the compression and decompression
at the entry and exit points, respectively. The overall energy
cost reduction is not well quantified. This motivates the design
of energy-efficient data representation and hardware arithmetic
units with low implementation complexity.
In [8], Common Exponent Encoding is proposed to represent 32-bit complex floating-point data by only 29-bit
wordlength in hardware to achieve 3-bit savings. The method
in [8] shows 10% reduction of registers and memory footprints
with a tradeoff of 10% increase in arithmetic units. In [9],
exponential coefficient scaling is proposed to allocate 6 bits
to represent real-valued floating-point data. The method in [9]
achieves 37x reduction in quantization errors, 1.2x reduction in
logic gates, and 1.4x reduction in energy per cycle compared
to 6-bit fixed-point representation. Both papers report less than
2 dB of signal-to-quantization-noise ratio (SQNR).
Contributions: Our method applies the Common Exponent
Encoding proposed by [8] and adds a proposed Exponent
Box Encoding to retain high magnitude-phase resolution. This
paper identifies the computational complexity of complex
block addition, multiplication, and convolution and computes
reference EVM on the arithmetic output. We apply the new
complex block floating-point format to a case study of a baseband
QAM transmitter chain and receiver chain. We also reduce
implementation complexity in terms of memory reads/writes
rates, and multiply-accumulate operations. We base the sig-
TABLE I
DEFINITION & BIT WIDTHS UNDER IEEE-754 NUMBER FORMAT [10]
Components       Definition    Bit Widths, B
Wordlength, W    Nw            {16, 32, 64}
Sign, S          Ns            {1}
Exponent, E      Ne            {5, 8, 11}
Mantissa, M      Nm            {10, 23, 52}
TABLE III
DEFINITION & BIT WIDTHS UNDER EXPONENT BOX ENCODING
Components                      Definition    Bit Widths, B
Common Exponent, E              Ne            {5, 8, 11}
Real / Imaginary Sign, S        Ns^{R,I}      {1}
Real / Imaginary Lead, L        Nl^{R,I}      {1}
Real / Imaginary Box Shift, X   Nx^{R,I}      {1}
Real / Imaginary Mantissa, M    Nm^{R,I}      {10, 23, 52}
TABLE II
DEFINITION & BIT WIDTHS UNDER COMMON EXPONENT ENCODING [8]
Components                      Definition    Bit Widths, B
Common Exponent, E              Ne            {5, 8, 11}
Real / Imaginary Sign, S        Ns^{R,I}      {1}
Real / Imaginary Lead, L        Nl^{R,I}      {1}
Real / Imaginary Mantissa, M    Nm^{R,I}      {10, 23, 52}
nal quality of our method on the measurement of EVM at
the receiver. Our method achieves end-to-end complex block
floating-point representation.
II. M ETHODS
This section describes the data structure used in the new complex block floating-point representation [8] and suggests a
new mantissa scaling method for reducing quantization error. In
IEEE 754 format, the exponents of complex-valued floating-point data are separately encoded. The Common Exponent Encoding technique [8] allows common exponent sharing, but it
encodes phase resolution weakly.
A. Common Exponent Encoding Technique
Table I summarizes the wordlength precision of real-valued
floating-point data in IEEE-754 encoding [10]. We define Bw bits as the wordlength of scalar floating-point data. A complex-valued
floating-point sample requires 2Bw bits, and a complex block floating-point of Nv samples requires 2NvBw bits.
Fig. 2. Scatter plot of N = 25 complex-valued exponent pairs X ∼ N(130, 12²) and a potential candidate for the common exponent
The method in [11] assumes only magnitude correlation in
the oversampled complex block floating-point data. This assumption allows common exponent be jointly encoded across
complex block floating-point of Nv samples defined in Table
II. The implied leading bit of 1 of each floating-point data
is first uncovered. The common exponent is selected from
the largest unsigned exponent across the complex block. All
mantissa values are successively scaled down by the difference between common exponent and its original exponent.
Therefore, each floating-point data with smaller exponents
value loses leading bit of 1. The leading bit of complex block
floating-point is explicitly coded as Nl , using Bl -bit. The sign
bits are left unchanged. A complex block floating-point of Nv
samples requires {2Nv (Bs + Bl + Bm ) + Be }-bit.
We derive the maximum allowed exponent difference under
Common Exponent Encoding in Appendix A. Mantissa values
could be reduced to zero as a result of large phase difference.
Figure 2 shows the Effective Encoding Region (EER) under
Common Exponent Encoding technique ( ). Exponent pairs
outside the EER will have corresponding mantissa values
reduced to zero.
B. Exponent Box Encoding Technique
The Common Exponent Encoding technique suffers high
quantization and phase error in the complex block floatingpoint of high dynamic range. Exponent Box Encoding is
suggested to reduce quantization error of complex-valued
floating-point pairs by allocating 2Nv -bit per complex block.
Figure 2 shows the Effective Encoding Region under the Exponent
Box Encoding technique, which has four times the area of
the EER of the Common Exponent Encoding technique.
The use of 2 bits per complex sample replaces the mantissa
rescaling operation with exponent addition/subtraction. We
are able to preserve more leading bits of the mantissa values,
which improves the accuracy of complex block multiplication
and complex block convolution results. A complex block
floating-point of Nv samples requires {2Nv (Bs + Bl + Bx +
Bm ) + Be }-bit.
Arithmetic Logic Unit (ALU) hardware is designed to
perform Single-Instruction Multiple-Data (SIMD) operation
on complex block floating-point data. The Exponent Box
Encoding is performed when converting to Exponent Box
Encoding format. The Exponent Box Decoding is performed
at the pre-processing of mantissas in Complex Block Addition
and pre-processing of exponents in Complex Block Multiply.
TABLE IV
WORDLENGTH REQUIREMENT BY Nv COMPLEX-VALUED SAMPLES
Encoding           Bit Widths
Complex IEEE754    2Nv(Bs + Be + Bm)
Common Exponent    2Nv(Bs + Bl + Bm) + Be
Exponent Box       2Nv(Bs + Bl + Bx + Bm) + Be
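As a quick arithmetic check of Table IV (a sketch; the parameter names mirror the table and are not from any released code), the bits per block for Nv single-precision samples (Bs = 1, Bl = 1, Bx = 1, Bm = 23, Be = 8) can be compared directly:

    # Sketch: wordlength per complex block under the three encodings of Table IV.
    def bits_ieee754(nv, bs=1, be=8, bm=23):
        return 2 * nv * (bs + be + bm)

    def bits_common_exponent(nv, bs=1, bl=1, bm=23, be=8):
        return 2 * nv * (bs + bl + bm) + be

    def bits_exponent_box(nv, bs=1, bl=1, bx=1, bm=23, be=8):
        return 2 * nv * (bs + bl + bx + bm) + be

    nv = 64
    print(bits_ieee754(nv), bits_common_exponent(nv), bits_exponent_box(nv))
    # For Nv = 64: 4096, 3208, and 3336 bits per block, respectively.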
Fig. 4. Pre-Scale exponents in Complex Block Multiply
A. Complex Block Addition
Figure 3 shows simplified block diagram for Complex Block
Addition. Let X1 , X2 , Y ∈ C1×N be complex-valued row
vectors, such that,
Fig. 3. Pre-Scale mantissas in Complex Block Add
Re{Y} = Re{X1} + Re{X2}
Im{Y} = Im{X1} + Im{X2}    (1)
Table IV summarizes the wordlength required by a
complex block floating-point of Nv samples. The Exponent
Box Encoding and Exponent Box Decoding algorithms are
described as follows:
Algorithm 1 Exponent Box Encoding
Let U ← max{E{s}} − Bm
for ith ∈ Nv {R/I} samples do
if NeR {i} < U then
NeR {i} ← NeR {i} + Bm
NxR {i} ← 1
if NeI {i} < U then
NeI {i} ← NeI {i} + Bm
NxI {i} ← 1
Algorithm 2 Exponent Box Decoding
for ith ∈ Nv {R/I} samples do
if NxR {i} ≡ 1 then
NeR {i} ← NeR {i} − Bm
if NxI {i} ≡ 1 then
NeI {i} ← NeI {i} − Bm
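A minimal Python sketch of Algorithms 1 and 2 (list-based; variable names are assumptions, and the real and imaginary components are handled identically):

    # Sketch of Exponent Box Encoding / Decoding (Algorithms 1 and 2).
    # exps: biased exponents of the Nv real (or imaginary) components;
    # bm:   number of mantissa bits (e.g. 23 for single precision).
    def box_encode(exps, bm):
        u = max(exps) - bm                   # U <- max{E{s}} - Bm
        shifts = [0] * len(exps)
        out = list(exps)
        for i, e in enumerate(exps):
            if e < u:                        # far below the common exponent:
                out[i] = e + bm              # pre-shift the exponent up by Bm
                shifts[i] = 1                # and record it in the box-shift bit
        return out, shifts

    def box_decode(exps, shifts, bm):
        return [e - bm if x else e for e, x in zip(exps, shifts)]

    exps = [130, 120, 100, 128]
    enc, shifts = box_encode(exps, bm=23)
    assert box_decode(enc, shifts, bm=23) == exps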
In IEEE-754 encoding format, complex block addition is
implemented as two real-valued addition. There are four
exponents to the two complex inputs and two exponents to the
complex output. Each real-valued addition block requires one
mantissa pre-scaling, one mantissa post-scaling, and one exponent arithmetic. Therefore, complex block addition requires
two mantissas pre-scaling, two mantissas post-scaling, and two
exponents arithmetic per sample.
In Common Exponent and Exponent Box Encoding, there
are two shared exponents to the two complex block inputs and
one shared exponent to the complex block output. Complexity
on shared exponent arithmetic is O(1). We pre-scale the mantissas corresponding to the smaller exponent and post-scale
the mantissas of the complex block output. With Exponent
Box Encoding in the worst case, we require two mantissas
pre-scaling and one mantissas post-scaling.
B. Complex Block Multiplication
Figure 4 shows simplified block diagram for Complex Block
Multiplication. Let X1 , X2 , Y ∈ C1×N be complex-valued
row vectors, where • denotes element-wise multiply, such that,
Re{Y} = Re{X1} • Re{X2} − Im{X1} • Im{X2}
Im{Y} = Re{X1} • Im{X2} + Im{X1} • Re{X2}    (2)
III. A RITHMETIC U NIT
We identify the arithmetic units predominantly used on
complex block floating-point data. Complex-valued multiplication and addition are two primary ALU required in convolution operation. This section identifies the complexity of
pre-processing and post-processing mantissas and exponents
in the complex block addition, multiplication, and convolution arithmetic. Table V describes the worst-case complexity
analysis of complex block ALU on encoding format described
in Section II.
In IEEE-754 encoding format, complex block multiplication
is implemented as four real-valued multiplication and two
real-valued addition. Each real-valued multiplication requires
one mantissa post-scaling and one exponent arithmetic. Each
real-valued addition requires one mantissa pre-scaling, one
mantissa post-scaling, and one exponent arithmetic. Complex
block multiply requires two mantissas pre-scaling, six mantissas post-scaling, and six exponent arithmetic per sample.
In Common Exponent and Exponent Box Encoding, we
need two exponent arithmetic for multiply and normalization
TABLE V
MANTISSAS AND EXPONENT PRE/POST PROCESSING COMPLEXITY OF COMPLEX BLOCK ALU

Block Addition          Mantissas Scaling                          Exponents Arithmetic
Complex IEEE754         4 ∗ N                                      2 ∗ N
Common Exponent         4 ∗ N                                      2
Exponent Box            8 ∗ N                                      4

Block Multiplication    Mantissas Scaling                          Exponents Arithmetic
Complex IEEE754         8 ∗ N                                      6 ∗ N
Common Exponent         8 ∗ N                                      2
Exponent Box            16 ∗ N                                     5

Convolution             Mantissas Scaling                          Exponents Arithmetic
Complex IEEE754         6 ∗ N1 N2 + 4 ∗ (N1 − 1)(N2 − 1)           6 ∗ N1 N2 + 2 ∗ (N1 − 1)(N2 − 1)
Common Exponent         6 ∗ N1 N2 + 4 ∗ (N1 − 1)(N2 − 1)           3 ∗ (N1 + N2 − 1) + 1
Exponent Box            10 ∗ N1 N2 + 8 ∗ (N1 − 1)(N2 − 1)          3 ∗ (N1 + N2 − 1) + 1
of the complex block output. With Exponent Box Encoding
in the worst case, we need eight more mantissas post-scaling.
Also, the Shift Vectors allow for four possible intermediate
exponent values instead of one intermediate exponent value in
Common Exponent Encoding.
C. Complex Convolution
Let X1 ∈ C1×N1, X2 ∈ C1×N2, and Y ∈ C1×(N1+N2−1) be complex-valued row vectors, where ∗ denotes convolution, such that,
Re{Y} = Re{X1 ∗ X2}
Im{Y} = Im{X1 ∗ X2}    (3)

TABLE VI
QAM TRANSMITTER, RECEIVER SPECIFICATIONS
QAM Parameters             Definition       Values / Types
Constellation Order        M                1024
Transceiver Parameters     Definition       Values / Types
Up-sample Factor           LTX, LRX         4
Symbol Rate (Hz)           fsym             2400
Filter Order               NTX, NRX         32nd
Pulse Shape                gTX, gRX         Root-Raised Cosine
Excess Bandwidth Factor    αTX, αRX         0.2
We assume N1 < N2 for the practical reason that the model
of the channel impulse response is a shorter sequence than the
discrete-time samples. Each term in the complex block output
is complex inner product of two complex block input of
varying length between 1 and min{N1 , N2 }. Complex convolution is implemented as complex block multiplication and
accumulation of intermediate results. We derive the processing
complexity of mantissas and exponents in Appendix B.
IV. S YSTEM M ODEL
We apply Exponent Box Encoding to represent IQ components in baseband QAM transmitter in Figure 5 and baseband
QAM receiver in Figure 6. The simulated channel model is
Additive White Gaussian Noise (AWGN). Table VI contains
the parameter definitions and values used in MATLAB simulation and Table VII summarizes the memory input/output rates
(bits/sec) and multiply-accumulate rates required by discrete-time complex QAM transmitter and receiver chains.
A. Discrete-time Complex Baseband QAM Transmitter
We encode complex block IQ samples in Exponent Box
Encoding and retain the floating-point resolution in 32-bit
IEEE-754 precision in our model. For simplicity, we select
block size to be, Nv = LT X fsym . The symbol mapper
generates a LT X fsym -size of complex block IQ samples that
shares common exponent. Pulse shape filter is implemented
as Finite Impulse Response (FIR) filter of N T X -order and
requires complex convolution on the upsampled complex block
IQ samples.
Fig. 5. Block diagram of discrete-time complex baseband QAM transmitter
B. Discrete-time Complex Baseband QAM Receiver
Due to the channel effect such as fading in practice, the
received signals will have larger span in magnitude-phase response. The Common Exponent Encoding applied on sampled
complex block IQ samples is limited to selecting window
size of minimum phase difference. The Common Exponent
Encoding must update its block size at the update rate of gain
by the Automatic Gain Control (AGC). Instead, our Exponent
Box Encoding could lift the constraint and selects fixed block
size, Nv = LRX fsym in this simulation. We simulate matched
filter of N RX -order.
Fig. 6. Block diagram of discrete-time complex baseband QAM receiver
TABLE VII
MEMORY INPUT/OUTPUT AND COMPUTATIONAL RATES ON EXPONENT BOX SHIFTING TECHNIQUE

Transmitter Chain    Memory Reads Rate (bits/sec)                                   Memory Writes Rate (bits/sec)              MACs / sec
Symbol Mapper        Jfsym                                                          2fsym(Nw + Nl + Nb − Ne) + Ne              0
Upsampler            2fsym(Nw + Nl + Nb − Ne) + Ne                                  2LTx fsym(Nw + Nl + Nb − Ne) + Ne          0
Pulse Shape Filter   (3LTx NgTx + 1)(LTx fsym)(Nw + Nl + Nb − Ne) + 2Ne             2LTx fsym(Nw + Nl + Nb − Ne) + Ne          (LTx)² NgTx fsym

Receiver Chain       Memory Reads Rate (bits/sec)                                   Memory Writes Rate (bits/sec)              MACs / sec
Matched Filter       (3LRx NgRx + 1)(LRx fsym)(Nw + Nl + Nb − Ne) + 2Ne             2LRx fsym(Nw + Nl + Nb − Ne) + Ne          (LRx)² NgRx fsym
Downsampler          2LRx fsym(Nw + Nl − Ne) + Ne + (Nw + Nl + Nb)                  2fsym(Nw + Nl + Nb − Ne) + Ne              0
Symbol Demapper      2fsym(Nw + Nl − Ne) + Ne + J2(Nw + Nl)                         Jfsym                                      0
Fig. 7. Error vector magnitude of 32-bit complex block arithmetic
Fig. 8. Dynamic range of 32-bit RRC filter impulse response as function of
roll-off factor
V. SIMULATION RESULTS
A. Error Vector Magnitude on Complex Block (32-bit) ALU
Let X, X̄ ∈ C1×N be complex-valued row vectors, such that X is the reference result in IEEE-754 Encoding and X̄ is the simulated result in Complex Block Encoding.
The signal quality is measured on the complex block arithmetic results. We truncate the arithmetic results to 32-bit precision to make a fair comparison. We use the Root-Mean-Squared (RMS) EVM measurement described in the following, with ‖ • ‖2 as the Euclidean norm,

    EVM = ( ‖X − X̄‖2 / ‖X‖2 ) ∗ 100    (4)
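A direct numerical rendering of (4) (a sketch; this is not the authors' MATLAB code):

    # Sketch: RMS error vector magnitude of (4), in percent.
    import numpy as np

    def evm_percent(x_ref, x_test):
        """EVM = ||X - Xbar||_2 / ||X||_2 * 100, with X the IEEE-754 reference."""
        x_ref = np.asarray(x_ref, dtype=complex)
        x_test = np.asarray(x_test, dtype=complex)
        return 100.0 * np.linalg.norm(x_ref - x_test) / np.linalg.norm(x_ref)

    print(evm_percent([1 + 1j, -1 + 0.5j], [1.01 + 1j, -1 + 0.49j]))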
Figure 7 shows the EVM of complex block arithmetic
in Section III on Inputs Ratio ∈ (0, 200) dB. In complex
block addition, the Exponent Box Encoding does not show
significant advantage over Common Exponent Encoding because the mantissas addition emphasizes on magnitude over
phase. In complex block multiplication and convolution, the
Exponent Box Encoding achieves significant reduction in
encoding error over Common Exponent Encoding particularly
on Inputs Ratio ∈ (70, 140) dB where the improvement is
between (0, 99.999)%.
B. Error Vector Magnitude on Single-Carrier Transceiver
Figure 8 shows the dynamic range of the Root-Raised Cosine (RRC) filter at the transmitter and receiver and the overall pulse shape response as a function of α. Figure 9 shows the EVM introduced by Complex Block Encoding under the system model defined in Section IV. The EVM plot is indistinguishable between IEEE-754 Encoding and Complex Block Encoding. The reasons are the selection of the RRC roll-off factor and the energy-normalized constellation map.
VI. CONCLUSION
Our work has identified the processing overhead of the
mantissas and shared exponent in complex block floating-point
arithmetic. The common exponent encoding would slightly
lower the overhead in complex-valued arithmetic. The box
encoding of the shared exponent gives the same quantization
errors as common exponent encoding in our case study,
which is a 32-bit complex baseband transmitter and receiver.
Our work has also quantified memory read/write rates and
multiply-accumulate rates in our case study. Future work could
extend a similar approach to representing and processing IQ
samples in multi-carrier and multi-antenna communication
systems.
Fig. 9. Error vector magnitude between encoding techniques on complex-valued IQ samples

APPENDIX A
DERIVATION OF MAXIMUM EXPONENT DIFFERENCE UNDER COMMON EXPONENT ENCODING TECHNIQUE

Let i, j be two bounded positive real numbers, representable in floating-point precision. Assume that i has larger magnitude than j, |j| < |i|. Define E{k} as the exponent and M{k} as the mantissa of k, and F(k) = 2^(E{k}−1) − 1 as the exponent offset, where k = {i, j}. Let E{∆} be the difference between the two exponents, (E{i} − E{j}) > 0.

    j < i
    (1.M{j} ∗ 2^(E{j}−F(j))) < (1.M{i} ∗ 2^(E{i}−F(i)))
    (1.M{j} ∗ 2^E{j}) < (1.M{i} ∗ 2^E{i})
    (1.M{j} ∗ 2^(E{j}−E{i}+E{i})) < (1.M{i} ∗ 2^E{i})
    (1.M{j} ∗ 2^(E{j}−E{i})) < (1.M{i})
    (1.M{j} ∗ 2^(−E{∆})) < (1.M{i})
    (0.M{j′}) < (1.M{i}),  where M{j′} = 1.M{j} / 2^E{∆}    (5)

The mantissa bits in M{j′} are truncated in practice; therefore, E{∆} must be less than M{j}. The quantization error is the largest when M{j′} becomes zero while M{j} is nonzero.

APPENDIX B
DERIVATION OF PRE/POST PROCESSING COMPLEXITY OF COMPLEX-VALUED CONVOLUTION

Let N^exp_mult, N^exp_add, N^mant_mult, N^mant_add be the processing complexity of mantissas and exponents determined in Section III.
Among the first and last N1 terms of Y, they are computed by complex inner product of i ∈ {1, ..., N1} input terms from X1, X2 and require (N1)(N1 + 1)/2 (Nmult) and (N1 − 1)(N1)/2 (Nadd). Among the centering N2 − N1 terms of Y, they are computed by complex inner product of N1 input terms from X1, X2 and require (N2 − N1)((N1)(Nmult) + (N1 − 1)(Nadd)).

Overall Multiplication Requirement (Nmult):
    (1/2)(N1)(N1 + 1) + (N2 − N1)(N1) + (1/2)(N1 − 1)(N1)
    = (1/2)(N1² + N1) + (N2 N1 − N1²) + (1/2)(N1² − N1)
    = (1/2)(2N1²) + (N2 N1 − N1²)
    = N1² + (N2 N1 − N1²)
    = N2 N1    (6)

Overall Addition Requirement (Nadd):
    (1/2)(N1 − 1)(N1) + (N2 − N1)(N1 − 1) + (1/2)(N1 − 2)(N1 − 1)
    = (1/2)(N1² − N1 + N1² − 3N1 + 2) + (N2 − N1)(N1 − 1)
    = (N1² − 2N1 + 1) + (N2 − N1)(N1 − 1)
    = (N1 − 1)(N1 − 1) + (N2 − N1)(N1 − 1)
    = (N1 − 1)(N1 − 1 + N2 − N1)
    = (N1 − 1)(N2 − 1)    (7)

The mantissa processing requirement is (N^mant_mult)(N2 N1) + (N^mant_add)(N1 − 1)(N2 − 1), and the exponent processing requirement is (N^exp_mult)(N2 N1) + (N^exp_add)(N1 − 1)(N2 − 1).
R EFERENCES
[1] G. Fettweis and E. Zimmermann, “ICT energy consumption-trends and
challenges,” in Proc. Int. Symposium on Wireless Personal Multimedia
Communications, vol. 2, no. 4, 2008, p. 6.
[2] O. Blume, D. Zeller, and U. Barth, “Approaches to energy efficient
wireless access networks,” in Int. Symposium on Communications,
Control and Signal Processing, March 2010, pp. 1–5.
[3] D. Samardzija, J. Pastalan, M. MacDonald, S. Walker, and R. Valenzuela, “Compressed Transport of Baseband Signals in Radio Access
Networks,” IEEE Transactions on Wireless Communications, vol. 11,
no. 9, pp. 3216–3225, September 2012.
[4] K. F. Nieman and B. L. Evans, “Time-domain compression of complexbaseband LTE signals for cloud radio access networks,” in Proc. IEEE
Global Conference on Signal and Information Processing, Dec 2013,
pp. 1198–1201.
[5] D. Peng-ren and Z. Can, “Compressed transport of baseband signals in
cloud radio access networks,” in Proc. Int. Conf. Communications and
Networking in China (CHINACOM), Aug 2014, pp. 484–489.
[6] L. S. Wong, G. E. Allen, and B. L. Evans, “Sonar data compression using
non-uniform quantization and noise shaping,” in Asilomar Conference
on Signals, Systems and Computers, Nov 2014, pp. 1895–1899.
[7] J. Choi, B. L. Evans, and A. Gatherer, “Space-time fronthaul compression of complex baseband uplink LTE signals,” in Proc. IEEE Int.
Conference on Communications, May 2016, pp. 1–6.
[8] N. Cohen and S. Weiss, “Complex Floating Point A Novel Data Word
Representation for DSP Processors,” IEEE Transactions on Circuits and
Systems I: Regular Papers, vol. 59, no. 10, pp. 2252–2262, Oct 2012.
[9] Z. Wang, J. Zhang, and N. Verma, “Reducing quantization error in lowenergy FIR filter accelerators,” in Proc. IEEE Int. Conf. on Acoustics,
Speech and Signal Processing, April 2015, pp. 1032–1036.
[10] “IEEE Standard for Floating-Point Arithmetic,” IEEE Std 754-2008, pp.
1–70, Aug 2008.
[11] N. McGowan, B. Morris, and E. Mah, “Compact floating point delta
encoding for complex data,” Mar. 3 2015, US Patent 8,972,359.
[Online]. Available: https://www.google.com/patents/US8972359
arXiv:1307.1676v2 [] 5 Jan 2014
ON THE RATIONALITY OF POINCARÉ SERIES
OF GORENSTEIN ALGEBRAS
VIA MACAULAY’S CORRESPONDENCE
GIANFRANCO CASNATI, JOACHIM JELISIEJEW, ROBERTO NOTARI
Abstract. Let A be a local Artinian Gorenstein ring with algebraically closed
residue field A/M = k of characteristic 0, and let PA(z) := Σ_{p=0}^{∞} dimk(Tor_p^A(k, k)) z^p
be its Poincaré series. We prove that PA(z) is rational if either dimk(M²/M³) ≤ 4
and dimk(A) ≤ 16, or there exist m ≤ 4 and c such that the Hilbert function
HA(n) of A is equal to m for n ∈ [2, c] and equal to 1 for n > c. The results are
obtained thanks to a decomposition of the apolar ideal Ann(F) when F = G + H
and G and H belong to polynomial rings in different variables.
1. Introduction and notation
Throughout this paper, by ring we mean a Noetherian, associative, commutative
and unitary ring A with maximal ideal M and algebraically closed residue field
k := A/M of characteristic 0.
In [17] the author asked if the Poincaré series of the local ring A, i.e.
PA(z) := Σ_{p=0}^{∞} dimk(Tor_p^A(k, k)) z^p ,
is rational. Moreover he also proved its rationality when A is a regular local ring.
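For instance (a standard fact recalled here for orientation, not a claim of this paper): if A is regular local of dimension n, the Koszul complex on a minimal system of generators of M resolves k minimally, so Tor_p^A(k, k) ≅ Λ^p(M/M²) and
    PA(z) = Σ_{p=0}^{n} \binom{n}{p} z^p = (1 + z)^n,
which is visibly rational.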
Despite many interesting results showing the rationality of the Poincaré series of
some rings, in [1] the author gave an example of an Artinian local algebra A with
transcendental PA . Later on the existence of an Artinian, Gorenstein, local ring
with M4 = 0 and transcendental PA was proved in [4].
Nevertheless, several results show that large classes of local rings A have rational Poincaré series, e.g. complete intersection rings (see [19]), Gorenstein local rings with dimk (M/M2) ≤ 4 (see [2] and [15]), Gorenstein local rings with
2000 Mathematics Subject Classification. Primary 13D40, Secondary 13H10.
Key words and phrases. Artinian Gorenstein local algebra, rational Poincaré series.
The first and third authors are members of GNSAGA group of INdAM. They are supported by
the framework of PRIN 2010/11 “Geometria delle varietà algebriche”, cofinanced by MIUR.
The second author is supported by the project “Secant varieties, computational complexity, and
toric degenerations” realised within the Homing Plus programme of Foundation for Polish Science,
co-financed from European Union, Regional Development Fund.
This paper is a part of “Computational complexity, generalised Waring type problems and tensor
decompositions” project within “Canaletto”, the executive program for scientific and technological
cooperation between Italy and Poland, 2013-2015.
dimk (M2 /M3 ) ≤ 2 (see [16], [10]), Gorenstein local rings of multiplicity at most 10
(see [6]), Gorenstein local algebras with dimk (M2 /M3 ) = 4 and M4 = 0 (see [8]).
All the above results are based on the same smart combination of results on the
Poincaré series from [3] and [13] first used in [16] combined with suitable structure
results on Gorenstein rings and algebras. In this last case a fundamental role has
been played by Macaulay’s correspondence.
In Section 2 we give a quick resumé of the main results that we need later on in
the paper about Macaulay’s correspondence. In Section 3 we extend to arbitrary
algebras a very helpful decomposition result already used in a simplified form in
[9] and [8] for algebras with M4 = 0. In Section 4 we explain how to relate the
rationality of the Poincaré series of Gorenstein algebras with their representation in
the setup of Macaulay’s correspondence making use of the aforementioned decomposition result. Finally, in Section 5 we use such relationship in order to prove the
two following results.
Theorem A. Let A be an Artinian, Gorenstein local k–algebra with maximal ideal
M. If there are integers m ≤ 4 and c ≥ 1 such that
dimk(M^t / M^{t+1}) =  m   if t = 2, . . . , c,
                       1   if t = c + 1,
then PA is rational.
Theorem B. Let A be an Artinian, Gorenstein local k–algebra with maximal ideal
M. If dimk (M2 /M3 ) ≤ 4 and dimk (A) ≤ 16, then PA is rational.
The above theorems generalize the quoted results on stretched, almost–stretched
and short algebras (see [16], [10], [6], [8]).
1.1. Notation. In what follows k is an algebraically closed field of characteristic
0. A k–algebra is an associative, commutative and unitary algebra over k. For
each N ∈ N we set S[N] := k[[x1 , . . . , xN ]] and P [N] := k[y1 , . . . , yN ]. We denote
by S[N]_q (resp. P[N]_q) the homogeneous component of degree q of such a graded
k–algebra, and we set S[N]_{≤q} := ⊕_{i=1}^{q} S[N]_i (resp. P[n]_{≤q} := ⊕_{i=1}^{q} P[n]_i). Finally,
we set S[n]_+ := (x1, . . . , xn) ⊆ S[n]. The ideal S[n]_+ is the unique maximal ideal of
S[N].
A local ring R is Gorenstein if its injective dimension as R–module is finite.
If γ := (γ1, . . . , γN) ∈ N^N is a multi–index, then we set t^γ := t_1^{γ_1} · · · t_N^{γ_N} ∈ k[t_1, . . . , t_N].
For all the other notations and results we refer to [12].
2. Preliminary results
In this section we list the main results on algebras we need in next sections. Let A
be a local, Artinian k–algebra with maximal ideal M. We denote by HA the Hilbert
function of the associated graded algebra
gr(A) := ⊕_{t=0}^{+∞} M^t / M^{t+1}.
We know that
A ≅ S[n]/J
for a suitable ideal J ⊆ S[n]²_+ ⊆ S[n], where n = emdim(A) := HA(1). Recall that
the socle degree sdeg(A) of A is the greatest integer s such that Ms 6= 0.
We have an action of S[n] over P [n] given by partial derivation defined by identifying xi with ∂/∂yi . Hence
x^α ◦ y^β :=  α! \binom{β}{α} y^{β−α}   if β ≥ α,
              0                          if β ≱ α.
Such an action endows P [n] with a structure of module over S[n]. If J ⊆ S[n] is an
ideal and M ⊆ P [n] is a S[n]–submodule we set
J ⊥ := { F ∈ P [n] | g ◦ F = 0, ∀g ∈ J },
Ann(M) := { g ∈ S[n] | g ◦ F = 0, ∀F ∈ M }.
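As a quick illustration of this contraction action (a worked example added here, not part of the original text): for n = 2,
    x_1 ◦ y_1² y_2 = 2 y_1 y_2,   x_1² ◦ y_1² y_2 = 2 y_2,   x_2² ◦ y_1² y_2 = 0,
so x_2² ∈ Ann(y_1² y_2), while x_1² does not annihilate y_1² y_2.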
For the following results see e.g. [11], [14] and the references therein. Macaulay’s
theory of inverse system is based on the fact that constructions J 7→ J ⊥ and M 7→
Ann(M) give rise to a inclusion–reversing bijection between ideals J ⊆ S[n] such
that S[n]/J is a local Artinian k–algebra and finitely generated S[n]–submodules
M ⊆ P [n]. In this bijection Gorenstein algebras A with sdeg(A) = s correspond
to cyclic S[n]–submodules hF iS[n] ⊆ P [n] generated by a polynomial F of degree s.
We simply write Ann(F ) instead of Ann(hF iS[n]).
On the one hand, given a S[n]–module M, we define
tdf(M)_q := ( M ∩ P[n]_{≤q} + P[n]_{≤q−1} ) / P[n]_{≤q−1} ,
where P[n]_{≤q} := ⊕_{i=0}^{q} P[n]_i, and tdf(M) := ⊕_{q=0}^{∞} tdf(M)_q. The module tdf(M)
can be interpreted as the S[n]–submodule of P [n] generated by the top degree forms
of all polynomials in M.
On the other hand, for each f ∈ S[n], the lowest degree of monomials appearing
with non–zero coefficient in the minimal representation of f is called the order of f
and it is denoted by ord(f). If f = Σ_{i=ord(f)}^{∞} f_i, with f_i ∈ S[n]_i, then f_{ord(f)} is called the
lower degree form of f. It will be denoted in what follows by ldf(f).
If f ∈ J, then ord(f ) ≥ 2. The lower degree form ideal ldf(J) associated to J is
ldf(J) := (ldf(f )|f ∈ J) ⊆ S[n].
We have ldf(Ann(M)) = Ann(tdf(M)) (see [11]: see also [9], Formulas (2) and
(3)) whence
gr(S[n]/Ann(M)) ≅ S[n]/ldf(Ann(M)) ≅ S[n]/Ann(tdf(M)).
Thus
(1)
HS[n]/Ann(M ) (q) = dimk (tdf(M)q ).
We say that M is non–degenerate if HS[n]/Ann(M ) (1) = dimk (tdf(M)1 ) = n, i.e. if
and only if the classes of y1 , . . . , yn are in tdf(M). If M = hF iS[n], then we write
tdf(F ) instead of tdf(M).
Let A be Gorenstein with s := sdeg(A), so that Soc(A) = M^s ≅ k. In particular
A ≅ S[n]/Ann(F), where F := Σ_{i=0}^{s} F_i, F_i ∈ P[n]_i. For each h ≥ 0 we set
F_{≥h} := Σ_{i=h}^{s} F_i (hence F_s = F_{≥s}). We have that tdf(F_{≥h})_i ⊆ tdf(F)_i and equality
obviously holds if i ≥ h − 1 (see Lemma 2.1 of [7]).
Trivially, if s ≥ 1, we can always assume that the homogeneous part of F of degree
0 vanishes, i.e. F = F≥1 . Moreover, thanks to Lemma 2.2 of [7] we know that, if
s ≥ 2 and Ann(F ) ⊆ S[n]2+ , then we can also assume F1 = 0, i.e. F = F≥2 : we will
always make such an assumption in what follows.
We have a filtration with proper ideals (see [14]) of gr(A) ≅ S[n]/ldf(Ann(F))
CA(0) := gr(A) ⊃ CA(1) ⊇ CA(2) ⊇ · · · ⊇ CA(s − 2) ⊇ CA(s − 1) := 0.
Via the epimorphism S[n] ↠ gr(A) we obtain an induced filtration
ĈA(0) := S[n] ⊃ ĈA(1) ⊇ ĈA(2) ⊇ · · · ⊇ ĈA(s − 2) ⊇ ĈA(s − 1) := ldf(Ann(F)).
The quotients QA(a) := CA(a)/CA(a + 1) ≅ ĈA(a)/ĈA(a + 1) are reflexive graded
gr(A)–modules whose Hilbert function is symmetric around (s − a)/2. In general
gr(A) is no more Gorenstein, but the first quotient
(2)    G(A) := QA(0) ≅ S[n]/Ann(F_s)
is characterized by the property of being the unique (up to isomorphism) graded
Gorenstein quotient k–algebra of gr(A) with the same socle degree. Moreover, the
Hilbert function of A satisfies
(3)    HA(i) = H_{gr(A)}(i) = Σ_{a=0}^{s−2} H_{QA(a)}(i),    i ≥ 0.
Since HA (0) = HG(A) (0) = 1, it follows that if a ≥ 1, then QA (a)0 = 0, whence
QA (a)i = 0 when i ≥ s − a (see [14]) for the same values of a.
Moreover
a
X
Hgr(A)/CA (a+1) (i) = HS[n]/CbA(a+1) (i) =
HQA(α) (i),
i ≥ 0.
α=0
We set
fh :=
s−h
X
HQA (α) (1) = HS[n]/CbA(s−h+1) (1) = Hgr(A)/CA (s−h+1) (1)
α=0
(so that n = HA (1) = f2 ).
Finally we introduce the following new invariant.
Definition 2.1. Let A be a local, Artinian k–algebra with maximal ideal M and
s := sdeg(A). The capital degree, cdeg(A), of A is defined as the maximum integer
i, if any, such that HA (i) > 1, 0 otherwise. If c = cdeg(A) we also say that A is a
c–stretched algebra (for short, stretched if c ≤ 1).
By definition cdeg(A) ≥ 0 and cdeg(A) ≤ sdeg(A): if A is Gorenstein, then we
also have cdeg(A) < sdeg(A).
The rationality of the Poincaré series PA of every stretched ring A is proved in
[16]. The proof has been generalized to rings with HA (2) = 2 in [10] and to rings
with HA (2) = 3, HA (3) = 1 in [6] . The rationality of PA when A is a 2–stretched
algebra has been studied in [8] with the restriction sdeg(A) = 3.
3. Decomposition of the apolar ideal
In the present section we explain how to decompose the ideal Ann(F ) as the sum
of two simpler ideals. Such a decomposition will be used in the next section in order
to reduce the calculation of the Poincaré series of A to the one of a simpler algebra.
Lemma 3.1. Let m ≤ n, G ∈ P [m], H ∈ k[ym+1 , . . . , yn ] and F = G + H. Let
us denote by Ann(G) and Ann(H) the annihilators of G and H inside S[m] and
k[[xm+1 , . . . , xn ]] respectively. Then
Ann(F) = Ann(G)S[n] + Ann(H)S[n] + (σG − σH, x_i x_j)_{1≤i≤m, m+1≤j≤n}.
where σG ∈ S[m] and σH ∈ k[[xm+1 , . . . , xn ]] are any series of order deg(G) and
deg(H) such that σG ◦ G = σH ◦ H = 1.
Proof. The inclusions Ann(G)S[n], Ann(H)S[n] ⊆ Ann(F ) are completely trivial.
Also the inclusion (σG − σH , xi xj )1≤i≤m, m+1≤j≤n ⊆ Ann(F ) is easy to check. Thus
Ann(G)S[n] + Ann(H)S[n] + (σG − σH , xi xj )1≤i≤m,
m+1≤j≤n
⊆ Ann(F ).
Conversely let p ∈ Ann(F ). Grouping the different monomials in p, we can write
a decomposition p = p≤m + p>m + pmix , where p≤m ∈ S[m], p>m ∈ k[[xm+1 , . . . , xn ]]
and, finally, pmix ∈ (xi xj )1≤i≤m, m+1≤j≤n ⊆ S[n].
It is clear that pmix ∈ Ann(G)S[n] + Ann(H)S[n] + (σG − σH , xi xj )1≤i≤m, m+1≤j≤n ,
hence it suffices to prove that
p≤m + p>m ∈ Ann(G)S[n] + Ann(H)S[n] + (σG − σH , xi xj )1≤i≤m,
m+1≤j≤n .
To this purpose recall that 0 = p ◦ F = p≤m ◦ G + p>m ◦ H, by definition. Hence
p≤m ◦ G = u = −p>m ◦ H. Since p≤m ◦ G ∈ P [m] and p>m ◦ H ∈ k[ym+1 , . . . , yn ], it
follows that u ∈ k. So p≤m − u(σG − σH ) ∈ Ann(G)S[n], whence
p≤m ∈ (σG − σH ) + Ann(G)S[n] ⊆
⊆ Ann(G)S[n] + Ann(H)S[n] + (σG − σH , xi xj )1≤i≤m,
m+1≤j≤n .
A similar argument shows that
p>m ∈ (σG − σH ) + Ann(H)S[n] ⊆
⊆ Ann(G)S[n] + Ann(H)S[n] + (σG − σH , xi xj )1≤i≤m,
m+1≤j≤n ,
and this concludes the proof.
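A small worked instance of this decomposition (added for illustration; it is not part of the original text): take m = 1, n = 2, G = y_1³ and H = y_2², so F = y_1³ + y_2². Then Ann(G) = (x_1⁴) in S[1], Ann(H) = (x_2³) in k[[x_2]], and one may take σ_G = x_1³/6, σ_H = x_2²/2. Lemma 3.1 gives
    Ann(F) = (x_1⁴) + (x_2³) + (x_1³/6 − x_2²/2, x_1 x_2) = (x_1 x_2, x_1³ − 3 x_2²),
and indeed both generators annihilate F, while dimk S[2]/Ann(F) = 5, matching the Hilbert function (1, 2, 1, 1).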
Let F be as in the statement above. Then Lemma 3.1 with G := Σ_{i=2}^{s} F_i and H := Σ_{j=m+1}^{n} y_j² yields the following corollary.
Corollary 3.2. Let m ≤ n, G ∈ P[m] non–degenerate and F = G + Σ_{j=m+1}^{n} y_j².
Let us denote by Ann(G) the annihilator of G inside S[m]. Then
Ann(F) = Ann(G)S[n] + (x_j² − 2σ, x_i x_j)_{1≤i<j≤n, j≥m+1}
where σ ∈ S[m] has order deg(G) and σ ◦ G = 1.
Proof. It suffices to apply Lemma 3.1 taking into account that Ann(H) = (x2j −
x2m+1 , xi xj )m+1≤i<j≤n, j≥m+1 and that x2m+1 ◦ H = 2.
4. Rationality of Poincaré series
We now focus on the Poincaré series PA (z) of the algebra A defined in the introduction: we will generalize some classical results (see [16], [10], [6]). Out of the
decomposition results proved in the previous section, the main tools we use are the
following ones:
• for each local Artinian, Gorenstein ring C with emdim(C) ≥ 2
(4)    P_C(z) = P_{C/Soc(C)}(z) / ( 1 + z² P_{C/Soc(C)}(z) )
(see [3]);
• for each local Artinian ring C with maximal ideal N, if c1 , . . . , ch ∈ N \ N2
are linearly independent elements of Soc(C), then
(5)    P_C(z) = P_{C/(c1,...,ch)}(z) / ( 1 − h z P_{C/(c1,...,ch)}(z) )
(see [13]).
Let A be a local, Artinian, Gorenstein k–algebra with s = sdeg(A) and n = HA(1). Assume A = S[n]/Ann(F) where F = G + Σ_{j=m+1}^{n} y_j² ∈ P[n] with G ∈ P[m].
Thanks to Corollary 3.2 we have
Ann(F ) + (σ, xm+1 , . . . , xn ) = Ann(G)S[n] + (σ, xm+1 , . . . , xn ),
thus
S[n] / ( Ann(F) + (σ, x_{m+1}, . . . , x_n) ) ≅ S[m] / ( Ann(G) + (σ) ).
Trivially S[m]/Ann(G) is a local, Artinian, Gorenstein, k–algebra.
Since Soc(A) is generated by the class of σ, it follows from formula (4) that
PA(z) = P_{S[n]/Ann(F)+(σ)}(z) / ( 1 + z² P_{S[n]/Ann(F)+(σ)}(z) ).
Notice that xi xj ∈ Ann(F ) + (σ), i = 1, . . . , n, j = m + 1, . . . , n, i ≤ j. In particular
xm+1 , . . . , xn ∈ Soc(S[n]/Ann(F ) + (σ)). It follows from formula (5) that
P_{S[n]/Ann(F)+(σ)}(z) = P_{S[n]/Ann(F)+(σ,x_{m+1},...,x_n)}(z) / ( 1 − (n − m) z P_{S[n]/Ann(F)+(σ,x_{m+1},...,x_n)}(z) ).
The inverse formula of (4) finally yields
P_{S[n]/Ann(F)+(σ,x_{m+1},...,x_n)}(z) = P_{S[m]/Ann(G)+(σ)}(z) = P_{S[m]/Ann(G)}(z) / ( 1 − z² P_{S[m]/Ann(G)}(z) ).
Combining the above equalities we finally obtain the following
P
Proposition 4.1. Let G ∈ P[m], F := G + Σ_{j=m+1}^{n} y_j² and define A := S[n]/Ann(F)
and B := S[m]/Ann(G). Then
PA(z) = PB(z) / ( 1 − (HA(1) − HA(2)) z PB(z) ).
P
Corollary 4.2. Let G ∈ P [m], F := G + nj=m+1 yj2 and define A := S[n]/Ann(F )
and B := S[m]/Ann(G). The series PB (z) is rational if and only if the same is true
for PA (z).
Now assume that m ≤ 4. Since the Poincaré series of each local Artinian, Gorenstein ring with embedding dimension at most four is rational (see [17], [19], [20],
[15]) we also obtain the following corollary.
Corollary 4.3. Let G ∈ P[4], F := G + Σ_{j=5}^{n} y_j² and define A := S[n]/Ann(F).
Then PA (z) is rational.
Let A be a local, Artinian, Gorenstein k–algebra with n := HA (1).
Corollary 4.4. Let A be a local, Artinian, Gorenstein k–algebra such that f3 ≤ 4.
Then PA (z) is rational.
Proof. If s := sdeg(A), then
A ≅ S[n]/Ann(F)
where F := Σ_{i=2}^{s} F_i + Σ_{j=f3+1}^{n} y_j², F_i ∈ P[f_i]_i for i ≥ 3 and F_2 ∈ P[f_3]_2 (see [7],
Remark 4.2). Thus the statement follows from Corollary 4.3.
5. Examples of algebras with rational Poincaré series
In this section we give some examples of local, Artinian, Gorenstein k–algebras A
with rational PA using the results proved in the previous section.
We start with the following Lemma generalizing a result in [18].
Lemma 5.1. Let A be a local, Artinian, Gorenstein, 3–stretched k–algebra. If
HA(3) ≤ 5, then Σ_{a=0}^{s−4} H_{QA(a)}(2) ≥ HA(3).
Proof. We set m := HA(3) and p := Σ_{a=0}^{s−4} H_{QA(a)}(2). We have to show that p ≥ m:
assume p ≤ m − 1.
If s = 4, then H_{QA(0)} = Σ_{a=0}^{s−4} H_{QA(a)} = (1, m, p, m, 1). If s ≥ 5, then we have
H_{QA(a)} =  (1, 1, 1, 1, 1, . . . , 1)               if a = 0,
             (0, 0, 0, 0, 0, . . . , 0)               if a = 1, . . . , s − 5,
             (0, m − 1, p − 1, m − 1, 0, . . . , 0)    if a = s − 4.
In particular Σ_{a=0}^{s−4} H_{QA(a)} = H_{QA(0)} + H_{QA(s−4)}. Notice that f4 = m.
Macaulay’s growth theorem (see [5], Theorem 4.2.10) and the restriction m ≤ 5
imply that 3 ≤ p = m − 1 necessarily. Thus we can restrict our attention to the two
cases p = 3, 4. We examine the second case, the first one being analogous.
Let n := HA(1), take a polynomial F := y_1^s + F_4 + F_3 + F_2, F_i ∈ P[f_i]_i, x_1³ ◦ F_4 = 0,
such that A ≅ S[n]/Ann(F) (see Remark 4.2 of [7]) and set B := S[n]/Ann(F_{≥4}).
We first check that H_B = Σ_{a=0}^{s−4} H_{QA(a)} = (1, 5, 4, 5, 1, . . . , 1). On the one hand,
Lemma 1.10 of [14] implies that ĈA(a) = ĈB(a) for a ≤ s − 3, whence
H_B(1) ≥ Σ_{a=0}^{s−4} H_{QB(a)}(1) = Σ_{a=0}^{s−4} H_{QA(a)}(1) = 5.
On the other hand, F≥4 ∈ P [f4 ] = P [5], whence 5 = HB (1) ≤ 5. It follows that
equality holds, thus HQB (s−2) (1) = HQB (s−3) (1) = 0. By symmetry we finally obtain
HQB (s−2) = HQB (s−3) = 0. This last vanishing completes the proof of the equality
H_B = Σ_{a=0}^{s−4} H_{QA(a)} = (1, 5, 4, 5, 1, . . . , 1).
Let I ⊆ k[x1 , . . . , xn ] ⊆ S[n] be the ideal generated by the forms of degree at
most 2 inside Ann(tdf(F≥4 )) = ldf(Ann(F≥4 )). We obviously have x6 , . . . , xn ∈ I,
because F≥4 ∈ P [5]. Denote by I sat the saturation of I and set R := k[x1 , . . . , xn ]/I,
Rsat := k[x1 , . . . , xn ]/I sat . Due to the definition of I we know that HR (t) ≥ HB (t)
for each t ≥ 0, and equality holds true for t ≤ 2. Moreover, we know that
HB(2)^⟨2⟩ = HB(3) ≤ HR(3) ≤ HR(2)^⟨2⟩ = HB(2)^⟨2⟩,
hence
HR(3) = \binom{4}{3} + \binom{2}{2} = HR(2)^⟨2⟩.
Gotzmann's Persistence Theorem (see [5], Theorem 4.3.3) implies that
HR(t) = \binom{t+1}{t} + \binom{t−1}{t−1} = t + 2,    t ≥ 2.
We infer HRsat (t) = t + 2, t ≫ 0.
When saturating, the ideal can only increase its size in each degree, hence HRsat (t) ≤
HR (t) for each t ≥ 0. Again Macaulay’s bound thus forces HRsat (t) = HR (t) = t + 2
for t ≥ 2. In particular the components It and Itsat of degree t ≥ 2 of I and I sat
coincide.
Since HRsat is non–decreasing, it follows that
HRsat (1) ≤ HRsat (2) = 4 < 5 = HB (1) = HR (1).
In particular there exists a linear form ℓ ∈ I sat \ I. The equality I2 = I2sat forces
ℓxj ∈ I2 ⊆ Ann(tdf(F≥4 )), j = 1, . . . , n. Since x6 , . . . , xn ∈ I, it follows that we can
assume ℓ ∈ S[5] ⊆ S[n]. Moreover we also know that y1s ∈ tdf(F≥4 ), hence ℓ cannot
be a multiple of x1 . In particular we can change linearly coordinates in such a way
that ℓ = x5 .
If j ≥ 2, then xj ◦ F≥4 = xj ◦ F4 , thus the condition xj x5 ∈ I2 ⊆ Ann(tdf(F≥4 )),
j = 2, . . . , 5, and x31 ◦ F4 = 0 imply that x5 ◦ F4 = 0. Such a vanishing contradicts
the linear independence of the derivatives
x2 ◦ F≥4 ,
x3 ◦ F≥4 ,
x4 ◦ F≥4 ,
x5 ◦ F≥4 .
Indeed 5 = HB (1) = dimk (tdf(F≥4 )1 ) and xj ◦ F≥4 = 0, j ≥ 6.
Using the results proved in the previous section and the Lemma above we are able
to handle the first example of this section, proving the following theorem generalizing
Corollary 2.2 of [8].
Theorem 5.2. Let A be a local, Artinian, Gorenstein k–algebra with HA (2) ≤ 4
and cdeg(A) ≤ 3. Then PA is rational.
Proof. Let us examine the case cdeg(A) = 3, the other ones being similar. Lemma
5.1 yields
(6)    HA(2) ≥ Σ_{a=0}^{s−4} H_{Q(a)}(2) ≥ HA(3).
If sdeg(A) ≥ 5, then Decomposition (3) is
(1, 1, . . . , 1) + (0, a1 , a2 , a1 , 0) + (0, b1 , b1 , 0) + (0, c1 , 0)
for some integers a1 , a2 , b1 , c1 . Inequality (6) is equivalent to a1 ≤ a2 . We know
that HA (2) = a2 + b1 + 1 ≤ 4, so f4 = a1 + b1 + 1 ≤ 4 and the argument follows
from Corollary 4.4. In the case sdeg(A) = 4, the decomposition (3) changes, but
the argument stays the same.
Now we skip the condition cdeg(A) = 3 but we impose a restriction on the shape
of HA . The following theorem generalizes a well–known result proved when either
m = 1, 2 (see [16] and [10] respectively) or m ≤ 4 and s = 3 (see again [8]).
Theorem 5.3. Let A be a local, Artinian, Gorenstein k–algebra such that HA (i) =
m, 2 ≤ i ≤ cdeg(A). If m ≤ 4, then PA is rational.
Proof. Let c := cdeg(A), n := HA (1), take a polynomial F := y1s + Fc+1 + . . . ,
Fc+1 ∈ P [fc+1 ]c+1 = P [m]c+1 such that A ∼
= S[n]/Ann(F ) (see Remark 4.2 of [7])
and set B := S[n]/Ann(F≥c+1 ) so that QA (a) = QB (a) for a ≤ s − c − 1 (again
by Lemma 1.10 of [14]). In particular HB (c) = m, thus Decomposition (3) implies
HB (1) ≥ m. Since we know that F≥c+1 ∈ P [m], it follows that HB (1) ≤ m, hence
equality must hold.
As in the proof of the previous lemma one immediately checks that either s = c+1,
and HQA(0) = (1, m, . . . , m, 1), or s ≥ c + 2, and
H_{QA(a)} =  (1, 1, . . . , 1, 1, . . . , 1)            if a = 0,
             (0, 0, . . . , 0, 0, . . . , 0)            if a = 1, . . . , s − c − 2,
             (0, m − 1, . . . , m − 1, 0, . . . , 0)     if a = s − c − 1.
Assume that HB (i) ≤ m − 1 ≤ 3 for some i = 2, . . . , c − 1. Let i0 be the maximal
of such i’s. We know that there are k(i0 ) > k(i0 − 1) > k(i0 − 2) > . . . such that
HB(i0) = \binom{k(i0)}{i0} + \binom{k(i0 − 1)}{i0 − 1} + \binom{k(i0 − 2)}{i0 − 2} + · · · ≤ m − 1 ≤ 3.
If i0 ≥ 3, it would follow k(i0 ) ≤ i0 , thus Macaulay’s bound implies
HB (i0 + 1) ≤ HB (i0 )hi0 i =
k(i0 − 2) + 1
k(i0 − 1) + 1
k(i0 ) + 1
+··· =
+
+
=
i0 − 1
i0
i0 + 1
= HB (i0 ) ≤ m − 1,
a contradiction. We conclude that i0 = 2.
Due to the symmetry of HQB (s−c−1) we deduce that c = 3. If HQB (s−3) (2) = q, the
symmetry of HQB (s−3) implies HQB (s−3) (1) = q, hence Decomposition (3) implies
m = HB (1) =
s−2
X
HQB (a) (1) = m + q + HQB (s−2) (1).
a=0
It follows that q = HQB (s−2) (1) = 0, whence HB = (1, m, p, m, 1, . . . , 1) where
p ≤ m − 1 which cannot occur by Lemma 5.1.
We conclude that HQA(s−c−1) (i) = HQB (s−c−1) (i) = m − 1 for each i = 2, . . . , c,
then the hypothesis on HA (i) and Decomposition (3) yield
(0, 0, 0, . . . , 0, 0, . . . , 0)
if a = s − c, . . . , s − 3,
HQA (a) =
(0, n − m, 0, . . . , 0, 0, . . . , 0)
if a = s − 2,
Ps−3
whence f3 = a=1 HQ(a) (1) = m ≤ 4.
As third example we skip the condition on the shape of HA but we put a limit on
dimk (A), slightly extending the result proved in [7].
Theorem 5.4. Let A be a local, Artinian, Gorenstein k–algebra with dimk (A) ≤ 16
and HA (2) ≤ 4. Then PA is rational.
Proof. Thanks to [15] we can restrict our attention to algebras A with HA (1) ≥ 5.
The rationality of the Poincaré series of stretched algebras is proved in [16]. For
almost stretched algebras see [10]. For the case of algebras A with sdeg(A) = 3 and
HA (2) ≤ 4 see [8]. Finally the case HA (i) = m, 2 ≤ i ≤ cdeg(A) with m ≤ 4 is
covered by Theorem 5.3 above.
There are several cases which are not covered by the aforementioned results. In
each of these cases one can check that the condition f3 ≤ 4 of Corollary 4.4 is fulfilled.
We know that necessarily HA (2) ≥ 3, otherwise A is almost stretched by Macaulay’s
bound. The restriction HA (2) ≤ 4 implies HA (3) ≤ 5 again by Macaulay’s bound.
Theorem 5.2 deals with the case sdeg(A) = 4. Let us analyze the case sdeg(A) = 5
and dimk A ≤ 16. The decomposition is
(1, a1 , a2 , a2 , a1 , 1) + (0, b1 , b2 , b1 , 0) + (0, c1 , c1 , 0) + (0, d1, 0)
for some integers a1 , a2 , b1 , b2 , c1 , d1. If a1 = 1 then the algebra is 3-stretched, so we
may suppose a1 ≥ 2. We know that HA (2) = a2 + b2 + c1 ≤ 4 and we would like to
prove a1 +b1 +c1 ≤ 4. Suppose a1 +b1 +c1 ≥ 5, then the inequality on the dimension
of A shows that 2 · a2 + b2 ≤ 4, in particular a2 ≤ 2 and from Macaulay’s bound
it follows that a1 = a2 = 2. It follows that b2 = 0 and once again from Macaulay’s
bound b1 = 0. This forces a1 + b1 + c1 = 2 + c1 = a2 + b2 + c1 ≤ 4, a contradiction.
RATIONALITY OF THE POINCARÉ SERIES
11
Let us now suppose that sdeg(A) = 6. Look at the first row of the symmetric
decomposition (3): (1, a1 , a2 , a3 , a2 , a1 , 1).
• If a1 ≥ 3, then a2 , a3 ≥ 3 and the sum of the row is at least 17.
• If a1 = 2 then a2 = a3 = 2 and the sum of the row is 12. If we suppose
that f3 ≥ 5, then the sum of the first column of the remaining part of the
decomposition will be at least three, so the sum of whole remaining part will
be at least 2 · 3 = 6 and the dimension will be at least 12 + 6 > 16.
• Suppose a1 = 1 and look at the second row (0, b1 , b2 , b2 , b1 , 0). If b1 = 0
then the algebra is 3-stretched so the result follows from Theorem 5.2. From
HA (2) ≤ 4 it follows that b2 ≤ 3. If b2 = 3, then b1 ≥ 2 so the dimension is
at least 7 + 10 > 16. If b2 ≤ 2 then b1 ≤ b2 from Macaulay’s bound. Hence,
the same argument as before applies.
Let us finally suppose that sdeg(A) ≥ 7. Take the first row, beginning with
(1, a1 , a2 , . . . ). If a1 ≥ 3 then its sum is at least 3 · sdeg(A) − 1 > 16. If a1 =
2, the sum of this row is 2 · sdeg(A) ≥ 14. Then one can argue as in the case
sdeg(A) = 6, a1 = 2. A similar reasoning shows that when a1 = 1 the algebra has
decomposition (1, 1, . . . , 1) + (0, 4, 4, 0) and so HA (2) ≥ 5.
References
[1] D.J. Anick: A counterexample to a conjecture of Serre. Ann of Math. 115 (1982), 1–33.
[2] L. Avramov, A. Kustin, M. Miller: Poincaré series of modules over local rings of small embedding codepth or small linking number. J. Algebra 118 (1988), 162–204.
[3] L. Avramov, G. Levin: Factoring out the socle of a Gorenstein ring. J. of Algebra 55 (1978),
74–83.
[4] R. Bøgvad: Gorenstein rings with transcendental Poincaré–series. Math. Scand. 53 (1983),
5–15.
[5] W. Bruns, J. Herzog: Cohen–Macaulay rings. II Edition. Cambridge University Press. (1998).
[6] G. Casnati, R. Notari: The Poincaré series of a local Gorenstein ring of multiplicity up to 10
is rational. Proc. Indian Acad. Sci. Math. Sci. 119 (2009), 459–468.
[7] G. Casnati, R. Notari:
A structure theorem for 2–stretched Gorenstein algebras.
arXiv:1312.2191 [].
[8] G. Casnati, J. Elias, R. Notari, M.E. Rossi: Poincaré series and deformations of Gorenstein
local algebras with low socle degree. Comm. Algebra 41 (2013), 1049–1059.
[9] J. Elias, M.E. Rossi: Isomorphism classes of short Gorenstein local rings via Macaulay’s
inverse system. Trans. Amer. Math. Soc. 364 (2012), 4589–4604.
[10] J. Elias, G. Valla: A family of local rings with rational Poincaré series. Proc. Amer. Math.
Soc. 137 (2009), 1175–1178.
[11] J. Emsalem: Géométrie des points épais. Bull. Soc. Math. France 106 (1978), 399–416.
[12] R. Hartshorne: Algebraic geometry, G.T.M. 52, Springer (1977).
[13] T.H. Gulliksen, G. Levin: Homology of local rings. Queen’s Papers in Pure and Applied Math.
20 (1969).
[14] A.V. Iarrobino: Associated graded algebra of a Gorenstein Artin algebra, Mem. A.M.S. 107
(1994), n.514, viii+115.
[15] C. Jacobsson, A. Kustin, M. Miller: The Poincaré series of a codimension four Gorenstein
ring is rational, J. Pure Appl. Algebra 38 (1985), 255–275.
[16] J.D. Sally: The Poincaé series of stretched Cohen-Macaulay rings. Canad. J. Math. 32 (1980),
1261-1265.
12
G. CASNATI, J. JELISIEJEW, R. NOTARI
[17] J.P. Serre: Sur la dimension homologique des anneaux et des modules noethériens, in Proceedings of the International Symposium on Algebraic Number Theory, Tokyo, (1955), 175–189.
[18] R. P. Stanley: Combinatorics and commutative algebra, Birkhäuser. (1983), viii+88.
[19] J. Tate: Homology of noetherian rings and local rings, Ill. J. of Math. 1(1957), 14–25.
[20] H. Wiebe: Über homologische Invarianten lokaler Ringe, Math. Ann. 179 (1969), 257–274.
Gianfranco Casnati,
Dipartimento di Scienze Matematiche, Politecnico di Torino,
corso Duca degli Abruzzi 24, 10129 Torino, Italy
e-mail: [email protected]
Joachim Jelisiejew,
Faculty of Mathematics, Informatics, and Mechanics, University of Warsaw,
Banacha 2, 02-097 Warsaw, Poland
[email protected]
Roberto Notari,
Dipartimento di Matematica, Politecnico di Milano,
via Bonardi 9, 20133 Milano, Italy
e-mail: [email protected]
| 0 |
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 219 — #1
✐
✐
Communications in Information and Systems
Volume 17, Number 4, 219–247, 2017
arXiv:1703.05348v2 [] 23 Mar 2018
Layered black-box, behavioral
interconnection perspective and
applications to problems in
communications, Part II: stationary
sources satisfying ψ-mixing criterion
Mukul Agarwal, Sanjoy Mitter, and Anant Sahai
Theorems from Part 1 of this paper are generalized to stationary,
ψ-mixing sources in this paper. As a consequence, these theorems
are proved for Markoff chains and order m Markoff chains. The
main result is the generalization of Theorem 1 in Part 1.
1. Introduction
In this paper, we generalize results Theorems 1-4 from Part 1 of this paper
[2] to the case when the source X is not necessarily i.i.d. but stationary and
satisfies a mixing condition, the ψ-mixing criterion (which implies that the
process is also ergodic). As a corollary, the results hold for Markoff chains
and order m Markoff chains.
In Part 1 of this paper, a direct equivalence was drawn between the
problem of communicating an i.i.d. source X to within a certain distortion
level D over an essentially unknown channel and reliable communication at
rates less than the rate-distortion function RX (D) over the channel. As a
result, assuming random codes are permitted, a source-channel separation
theorem was proved for communication over a general, compound channel,
where the channel model is general in the sense of Verdu-Han and compound
in the sense that the channel may belong to a set. These theorems were then
generalized to the unicast, multi-user setting where the sources were still
assumed to be i.i.d.
In this paper, these theorems from Part 1 are generalized to the case
when the source (also sources in the unicast multi-user setting) are not
necessarily i.i.d. but satisfy a mixing criterion called the ψ-mixing criterion.
219
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 220 — #2
✐
✐
220
M. Agarwal, S. Mitter, and A. Sahai
2. Paper outline
In Section 3, the notation and denitions used in this paper are described.
This is followed by a description of the ψ-mixing condition, its properties
and intuition for it in Section 4; the proofs of these properties can be found
in Appendix A. In Section 5, the high-level idea of the proof of the generalization of Theorem 1 of [2] to this paper is stated. A simulation procedure
is required in order to bring this high-level idea to fruition and this is the
subject of Section 6. This is followed by the statement of the main lemma of
this paper, Lemma 6, which uses the simulation procedure of the previous
section to prove a result which is the heart of this paper and the heart of
what is needed in order to generalize Theorem 1 of [2] to ψ-mixing sources:
this is the subject of Section 7. Lemma 6 and a technical lemma relating
rate-distortion functions under the expected and the probability of excess
distortion criteria is needed in order to generalize Theorem 1 of [2] to ψmixing sources; this technical lemma, Lemma 7, is the subject of Section 8.
By use of Lemmas 6 and 7, the main theorem of this paper, Theorem 1,
the generalization of Theorem 1 of [2] to ψ-mixing sources, can be stated
and proved and this is done in Section 9 . Application to this theorem to
Markoff and order m Markoff sources is stated and proved in Section 10.
Some discussions are carried out in Section 11 where in part, it is discussed,
how to generalize Theorems 2, 3 and 4 of [2] to ψ-mixing sources. Section 12
discusses future research directions.
3. Notation and definitions
Let X1 , X2 , . . . , Xn , . . ., be a sequence of random variables defined on a probability space (Ω, Σ, P ). The range of each Xi is assumed to be a finite set X.
Denote this sequence of random variables by X. Such a sequence is called a
source. Further discussion and assumption on the source will be carried out
in Section 4.
Sets will be denoted by latex mathbb notation, example, X, Y, and random variables by basic mathematical notation, for example X, Y . Sigma
fields will be denoted by mathcal notation for example, S.
The source space at each time, as stated before, is X, and is assumed
to be a finite set. The source reproduction space is denoted by Y which is
assumed to be a finite set. Assume that X = Y.
d : X × Y → [0, ∞) is the single-letter distortion measure. Assume that
d(x, x) = 0 ∀x ∈ X.
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 221 — #3
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
221
For xn ∈ Xn , y n ∈ Yn , the n-letter rate-distortion measure is defined additively:
n
X
d(xn (i), y n (i))
dn (xn , y n ) ,
i=1
where xn (i) denotes the ith component of xn and likewise for y n .
(X1 , X2 , . . . , Xn ) will be denoted by X n .
A rate R source-code with input space X and output space Y is a sequence < en , f n >∞
1 , where
en : Xn → {1, 2, . . . , 2⌊nR⌋ } and f n : {1, 2, . . . , 2⌊nR⌋ } → Yn .
We say that rate R is achievable for source-coding the source X within
distortion-level D under the expected distortion criterion if there exists a
rate R source code < en , f n >∞
1 such that
1 n n n n n
lim sup E
(1)
d (X , f (e (X ))) ≤ D
n
n→∞
The infimum of all achievable rates under the expected distortion is an
E (D).
operational rate-distortion function, denoted by RX
We say that rate R is achievable for source-coding the source X within
distortion-level D under the probability of excess distortion criterion if there
exists a rate R source code < en , f n >∞
1 such that
1 n n n n n
lim Pr
(2)
d (X , f (e (X ))) > D = 0
n→∞
n
The infimum of all achievable rates under the probability of excess distorP (D)
tion criterion is an operational rate-distortion function, denoted by RX
We used lim sup in (1) and lim in (2); in (2), we can equivalently use
lim sup. This is because for a sequence of non-negative real numbers an ,
limn→∞ an = 0 is equivalent to lim supn→∞ an = 0.
The block-independent approximation (henceforth shortened to BIA)
X T source is a sequence of random vectors (S1 , S2 , . . . , Sn , . . .), where Si
are independent, and ∀i, Si ∼ X T . To simplify notation, we will sometimes
denote (S1 , S2 , . . .) by S. S n will denote (S1 , S2 , . . . , Sn ). Note that BIA X T
source is an i.i.d. vector source and will also be called the vector i.i.d. X T
source.
The rate-distortion function for the vector i.i.d. X T source is defined in
the same way as above; just that the source input space would be XT , the
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 222 — #4
✐
✐
222
M. Agarwal, S. Mitter, and A. Sahai
source output space will be YT , the single letter distortion function would
now be on T -length sequences and is defined additively, and when forming
block-codes, we will be looking at blocks of T -length vectors. Details are as
follows:
The source input space is XT . Denote it by S. The source reproduction
space is YT . Denote it by T. Denote a generic element of the source space
by s and that of the source reproduction space by t. Note that s and t are
T -length sequences. Denote the ith component by s(i) and t(i) respectively.
The single letter distortion function, now, has inputs which are length
T vectors. It is denoted by dT and is defined additively using d which has
been defined before:
P
dT (s, t) , Ti=1 d(s(i), t(i)).
Note that dT is the same as dT ; just that we use superscript T for T
length vectors, but now, we want to view a T -length vector as a scalar, and
on this scalar, we denote the distortion measure by dT .
sn will denote a block-length n sequence of vectors of length T . Thus,
n
s (i), which denotes the ith component of sn is an element of K. sn (i)(j)
will denote the j th component of sn (i).
The n-letter distortion function is defined additively using dT :
For sn ∈ Sn , P
tn ∈ Tn ,
n
n
n
dT (s , t ) , ni=1 dT (sn (i), tn (i)).
When coding the vector i.i.d. X T source (for short, denoted by S), a rate
n
n
⌊nR⌋ }
R source code is a sequence < en , f n >∞
1 , where e : S → {1, 2, . . . , 2
n
⌊nR⌋
n
and f : {1, 2, . . . , 2
}→T .
We say that rate R is achievable for source-coding the vector i.i.d. X T
source within distortion-level D under the expected distortion criterion if
there exists a rate R source code < en , f n >∞
1 such that
(3)
lim E
n→∞
1 n n n n n
dT (S , f (e (S ))) ≤ D
n
(Note that S n denotes (S1 , S2 , . . . , Sn )).
The infimum of all achievable rates under the expected distortion criteE (D).
rion is the operational rate distortion function, denoted by RX
T
The information-theoretic rate-distortion function of the vector i.i.d. X T
source is denoted and defined as
(4)
T
I
;Y T)
RX
T (D) , inf I(X
T
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 223 — #5
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
223
where T is the set of W : S → P(T) defined as
W, W
(5)
X
pX T (s)W (t|s)dT (s, t) ≤ D
s∈S,y∈T
where pX T denotes the distribution corresponding to X T .
Note that this is the usual definition of the information-theoretic ratedistortion function for an i.i.d. source; just that the source under consideration is vector i.i.d.
E (D) = RI (D).
By the rate-distortion theorem, RX
T
XT
Further, it is also known that
1 E
RX T (T D)
T →∞ T
E
RX
(D) = lim
(6)
The channel is a sequence c =< cn >∞
1 where
(7)
(8)
cn :Xn → P(Yn )
xn → cn (·|xn )
When the block-length is n, the channel acts as cn (·|·); cn (y n |xn ) is the
probability that the channel output is y n given that the channel input is xn .
When the block-length is n, a rate R deterministic channel encoder
is a map ench : MnR → Xn and a rate R deterministic channel decoder is a
n : Yn → M̂n where M̂n , Mn ∪ {e} is the message reproduction set
map fch
R
R
R
where ‘e’ denotes error. The encoder and decoder are allowed to be random
in the sense that encoder-decoder is a joint probability distribution on the
n >∞ is the rate R
space of deterministic encoders and decoders. < ench , fch
1
channel code.
Denote
(9)
n
n
n
∞
g =< g n >∞
1 ,< ech ◦ c ◦ fch >1
gn has input space MnR and output space M̂nR . Consider the set of channels
(10)
GA , {e ◦ c ◦ f | c ∈ A}
g ∈ GA is a compound channel. Rate R is said to be reliably achievable over
n >∞ and a sequence
g ∈ GA if there exists a rate R channel code < ench , fch
1
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 224 — #6
✐
✐
224
M. Agarwal, S. Mitter, and A. Sahai
< δn >∞
1 , δn → 0 as n → ∞ such that
sup gn ({mn }c |mn ) ≤ δn ∀c ∈ A
(11)
mn ∈Mn
R
Supremum of all achievable rates is the capacity of c ∈ A. Note that this is
the compound capacity, but will be referred to as just the capacity of c ∈ A.
The channel c ∈ A is said to communicate the source X directly within
distortion D if with input X n to cn , the output is Y n (possibly depending
on the particular c ∈ A) such that
(12)
Pr
1 n n n
d (X , Y ) > D
n
≤ ωn ∀c ∈ A
for some ωn → 0 as n → ∞.
4. Mixing condition used in this paper
In this section, ψ-mixing processes are defined, properties of ψ-mixing processes are stated (and proved in the appendix) and intuition on ψ-mixing
provided.
4.1. Definition of ψ-mixing process
Let X1 , X2 , . . . , Xn , . . . be a sequence of random variables defined on a probability space (Ω, Σ, P ). The random variables from Xa to Xb will be denoted
by Xab , 1 ≤ a ≤ b ≤ ∞. The whole sequence X1∞ will be denoted by X ∞ or
just by X. The range of each Xi is assumed to be contained in a finite set X.
Note that time is assumed to be discrete. Note further, that it is assumed
that the process is one-sided in time, that it runs from time 1 to ∞, not
−∞ to ∞. The Borel sigma-field on X∞ is defined in the standard way, and
is denoted by F ∞ ; see Pages 1, 2 of [10] for details.
Xba will denote the set corresponding to the ath to the bth coordinates
of X∞ , 1 ≤ a ≤ b < ∞. A sequence within these coordinates will be denoted
by xba , a random variable, by Xab . The Borel sigma-field on Xba is denoted by
b
Fab . Note that if a and b are finite, Fab = 2Xa , the power set of Xba .
∞
For A ∈ Fat and B ∈ Ft+τ
+1 , we will have occasion to talk about the
following probabilities:
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 225 — #7
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
225
Pr(X1t ∈ A)
(13)
∞
Pr(Xt+τ
+1 ∈ B)
∞
Pr(X1t ∈ A, Xt+τ
+1 ∈ B)
∞
The intuitive meaning is clear: for example, Pr(X1t ∈ A, Xt+τ
+1 ∈ B)
refers to the probability that the random variable X1t takes values in the
∞
set A and the random variables Xt+τ
+1 take values in the set B. Mathematically, this is defined as follows. Define:
(14)
A′ = {(a1 , a2 , . . . , an , . . .)|(a1 , a2 , . . . , at ) ∈ A}
B′ = {(b1 , b2 , . . . , bn , . . .)|(bt+τ +1 , bt+τ +2 , . . .) ∈ B}
Then,
(15)
Pr(X1t ∈ A) , P (X ∞ ∈ A′ )
∞
∞
′
Pr(Xt+τ
+1 ∈ B) , P (X1 ∈ B )
∞
∞
′
′
Pr(X1t ∈ A, Xt+τ
+1 ∈ B) , P (X1 ∈ A ∩ B )
(16)
Further, if Pr(X1t ∈ A) > 0, the following definition will be used:
(17)
∞
t
Pr(Xt+τ
+1 ∈ B|X1 ∈ A) ,
∞
Pr(X1t ∈ A, Xt+τ
+1 ∈ B)
Pr(X1t ∈ A)
The one-sided version of ψ-mixing criterion of [4] will be used in this
document, This is because the stochastic process under consideration in
this document is one-sided in time, whereas the stochastic process under
consideration in [4] is two-sided in time.
Define, for τ ∈ W, the set of whole numbers (non-negative integers),
(18) ψ(τ ) = sup
sup
∞
t
∞
t∈N A∈F1t ,B∈Ft+τ
+1 ,Pr(X1 ∈A)>0,Pr(Xt+τ +1 ∈B)>0
∞
Pr(X1t ∈ A, Xt+τ
+1 ∈ B)
t
∞
Pr(X1 ∈ A) Pr(Xt+τ +1 ∈ B)
−1
The process X is said to be ψ-mixing if ψ(τ ) → 0 as τ → ∞.
The changes in (18) from [4] are:
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 226 — #8
✐
✐
226
M. Agarwal, S. Mitter, and A. Sahai
• The first sup is taken over t ∈ Z in [4], see Page 111 of [4]. Also, t is
denoted by j in [4]. However, the sup in (18) is over j ∈ W. This is because the process in [4] is two-sided in time, whereas we are considering
a one-sided process.
• A change of notation, where probabilities in (18) are written in terms
of random-variables taking values in certain sets, whereas [4] considers
the underlying probability space and writes probabilities of sets on
that space, see Page 110, 111 of [4].
• The set A ∈ F1t in (18), whereas if one used the denition in [4], the set
t . This is, again, because the process in [4] is
A would belong to F−∞
two-sided whereas the process in this paper is one-sided.
The reader is referred to [4] and [8] for an overview of various kinds of
mixing conditions. [4] gives a thorough overview of strong mixing conditions.
[8] mentions both weak mixing and strong mixing conditions though the
coverage of strong mixing conditions is less thorough than in [4].
t+T
Let X be stationary. For B ⊂ XT , denote the probability P (Xt+1
∈ B)
(which is independent of t since X is stationary), by PT (B). Note that PT
is a probability distribution on XT where the underlying sigma-field is the
T
canonical sigma-field 2X .
4.2. Properties of ψ-mixing processes
Lemma 1. Let X be stationary, ψ-mixing. Then, ∀t ∈ N, ∀τ ∈ W, ∀T ∈
W, ∀A ⊂ Xt , ∀B ⊂ XT , P (X1t ∈ A) > 0,
(19)
t+τ +T
t
′
Pr(Xt+τ
+1 ∈ B|X1 ∈ A) = (1 − λτ )PT (B) + λτ Pt,τ,T,A (B)
′
T
for some probability distribution Pt,τ,T,
A on X (under the canonical sigma
field on XT ) which may depend on t, τ, T, A, and λτ → 0 as τ → ∞.
Proof. See Appendix A.
Lemma 2. If X is stationary, ψ-mixing, then X is ergodic.
Proof. See Appendix A.
Lemma 3. Let X = (X1 , X2 , . . . , Xn , . . .) be a stationary, irreducible, aperiodic Markoff chain evolving on a finite set X. Then, X is ψ-mixing.
Proof. See Appendix A.
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 227 — #9
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
227
Lemmas 2 and 3 have been proved in [4] for two-sided ψ-mixing processes. The proof of Lemma 3 uses the result from [4] on two-sided processes.
Lemma 4. Let X = (X1 , X2 , . . .) be a stationary, ψ-mixing process evolving
tL
on a set X. For L ∈ N, define Zt = X(t−1)L+1
. Then, Z = (Z1 , Z2 , . . .) is a
stationary, ψ-mixing process evolving on the set XL .
Proof. See Appendix A.
Lemma 5. Let X be a stationary, order m Markoff chain evolving on a
tL
finite set X. Define Zt = X(t−1)L+1
. Note that Z = (Z1 , Z2 , . . .) is a Markoff
chain evolving on the set Z = XL . Assume that Z is irreducible, aperiodic.
Then X is ψ-mixing.
Proof. See Appendix A.
It should be noted here, that a ψ-mixing process can have a rate of
mixing as slow as is desired whereas a Markoff ψ-mixing chain implies exponential rate of convergence to the stationary distribution [3], [7]. Thus, the
set of ψ-mixing processes is strictly larger than the set of Markoff or order
m Markoff chains.
These lemmas are the same as the lemmas in [4] but for 1-sided ψ-mixing
processes, not 2-sided ψ-mixing processes. Many of the proofs use the result
from [4] for 2-sided ψ-mixing processes and via a suitable construction, prove
the same for 1-sided ψ-mixing processes.
4.3. Intuition on ψ-mixing
∞
Assume that X is stationary. Note (A.2). X1t and Xt+τ
+1 are independent
if
(20)
∞
t
∞
P (Xt+τ
+1 ∈ B|X1 ∈ A) = P (Xt+τ +1 ∈ B) = PT (B)
Thus, (A.2) says that the process ‘becomes more and more independent’
with time, further, this happens at a rate proportional to a factor λτ → 0
as τ → ∞ which is independent of the sets A and B in question, and also a
multiplicative factor which depends on the probability of the set B. This dependence on the probability of B is intuitively pleasing in the sense that, for
example, if PT (B) = 10−10 and λτ = 10−5 , then without the multiplicative
factor PT (B), it says nothing meaningful; however, with the multiplicative
factor PT (B), it says something meaningful. A mixing condition can indeed
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 228 — #10
✐
✐
228
M. Agarwal, S. Mitter, and A. Sahai
be defined where PT (B) does not exist on the right hand side in (20), this
is the φ-mixing criterion in [4]. An even weaker condition is the α-mixing
condition [4] where independence is measured in the sense of
(21)
P (A ∩ B) = P (A)P (B)
instead of
(22)
P (B|A) = P (B)
The φ-mixing criterion has been used in the source coding literature,
see for example [12] and [11]. In [12], it is proved that if a certain version
of the goldwashing algorithm is applied to encode a stationary, φ-mixing
source, the expected distortion performance converges to the distortion-rate
function of the source as the codebook length goes to ∞. In [11], it is proved
that for sources which are φ-mixing and have summable mixing coefficients,
the redundancy of the fixed-database Lempel-Ziv algorithm with database
size n is lower bounded by a certain function of n as described in [11].
5. Idea of the proof
Theorem 1 of [2] will be generalized to ψ-mixing sources in this paper. This
will be done by reducing the problem to the case when the source is i.i.d.,
and then, use Theorem 1 of [2].
The basic idea of the proof is the following: Choose τ, T , where τ is small
3T +2τ
compared to T . Denote K1 = X1T , K2 = XT2T+τ+τ+1 , K3 = X2T
+2τ +1 , . . .. Each
Ki has the same distribution; denote it by K. By Lemma 1, each Ki has
distribution close to PT in the sense of (19). Thus, K1 , K2 , K3 , . . ., is close
to an i.i.d. process. Theorem 1 from [2] can be used and rates approximately
(23)
T 1 E
R (T D)
T +τ T K
are achievable for communication over a channel which is known to communicate the source X to within a distortion D. Take T → ∞ and it follows
E (D) are achievable, where X is the ψ-mixing source. Finally,
that rates < RX
since the description of the channel is in terms of a probability of excess disP (D) ≤ RE (D) and this will prove
tortion criterion, we will prove that RX
X
E
that if a certain rate RX (·) is achievable for the channel-coding problem,
P (D).
then so is the rate RX
A lot of technical steps are needed and this will be the material of the
future sections. Note also, that there are various definitions of mixing in the
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 229 — #11
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
229
literature which will make K1 , K2 , . . ., almost independent, but the proof
will not work for all these definitions. The definition of ψ-mixing is used
primarily because (19) holds and this can be used to simulate the source X
in a way discussed in the next section, and this simulation procedure will be
a crucial element of the proof.
6. A simulation procedure for the stationary source X which
satisfies ψ-mixing
By using Lemma 1, a procedure to simulate the source X = (Xt , t = 1, 2, . . .)
will be described.
Fix T and τ , both strictly positive integers. Denote n = (T + τ )k for
some strictly positive integer k.
′
We will generate a (X1′ , X2′ , . . . , X(T
+τ )k ), as described below.
First divide time into chunks of time T , τ , T , τ , T , τ , and so on . . .
Call these slots A1 , B1 , A2 , B2 , . . ., Ai , Bi , . . ., Ak , Bk .
Thus,
A1 contains X ′ T1 .
B1 contains X ′ TT +τ
+1 .
2T +τ
′
A2 contains X T +τ +1 .
+2τ
B2 contains X ′ 2T
2T +τ +1 .
.. .. .. .. .. .. .. ..
. . . . . . . .
iT +(i−1)τ
Ai contains X ′ (i−1)(T +τ )+1 .
i(T +τ )
Bi contains X ′ iT +(i−1)τ +1 .
.. .. .. .. .. .. .. ..
. . . . . . . .
kT +(k−1)τ
Ak contains X ′ (k−1)(T +τ )+1 .
k(T +τ )
Bk contains X ′ kT +(k−1)τ +1 .
Let C1 = 1.
Generate C2 , C3 , . . . , Ck i.i.d., where Ci is 1 with probability (1 − λτ )
and 0 with probability λτ .
(g)
(b)
If Ci = 1, denote Ai by Ai and if Ci = 0, denote Ai by Ai . Think of
superscript ‘g’ as ‘good’ and ‘b’ as ‘bad’.
′
Generation of (X1′ , X2′ , . . . , X(T
+τ )k ) is carried out as follows:
′
The order in which the Xi s in the slots will be generated is the following:
A1 , A2 , B1 , A3 , B2 , . . . , Ai , Bi−1 , Ai+1 , . . ..
(g)
Generate X ′ T1 (slot A1 ) by the distribution PT .
Assume that all Xi have been generated until slot Ai−1 , in other words,
the generation in the following slots in the following order has happened:
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 230 — #12
✐
✐
230
M. Agarwal, S. Mitter, and A. Sahai
A1 , A2 , B1 , A3 , B2 , . . . , Ai−1 , Bi−2 .
The next two slots to be generated, as per the order stated above, is Ai
and then Bi−1 .
For slot Ai ,
iT +(i−1)τ
If it is a ‘g’ slot, generate X ′ (i−1)(T +τ )+1 using PT .
(k−1)T +(k−2)τ
(k−1)T +(k−2)τ
If it is a ‘b’ slot, if P (X1
= x′ 1
) > 0, generate
iT +(i−1)τ
′
′
X (i−1)(T +τ )+1 using Pt,τ,T,A with t = (k − 1)T + (k − 2)τ and A =
(k−1)T +(k−2)τ
(k−1)T +(k−2)τ
{x′ 1
} where x′ 1
is the simulated process realization
(k−1)T +(k−2)τ
(k−1)T +(k−2)τ
′
so far. If P (X1
=x1
) > 0, no process generation
needs to be carried out anyway.
(i−1)(T +τ )
During the slot Bi−1 , X ′ (i−1)T +(i−2)τ +1 is generated using the probability
measure P of the stationary process given the values of the process already
(k−1)T +(k−2)τ
iT +(i−1)τ
generated, that is, given x′ 1
and x′ (i−1)(T +τ )+1 .
This finishes the description of the generation of the (X1′ , X2′ , . . . ,
′
X(T +τ )k ) sequence.
Note that by Lemma 1 and the way the above simulation has been
′
carried out, (X1′ , X2′ , . . . , X(T
+τ )k ) ∼ (X1 , X2 , . . . , X(T +τ )k ).
(g)
Note also, that during slots Ai , the source has distribution X T and is
independent over these slots. This fact is of importance in the next section.
7. The main lemma: channel-coding theorem
Lemma 6. Let c =< cn >∞
1 directly communicate the source X, assumed
to be ψ-mixing, within distortion D.
Let λ > 0 (think of λ small; λ << 1). Choose β > 0 (think of β small;
β << 1 − λ). Choose τ large enough so that λτ ≤ λ. Then, rates
(24)
1 − λτ − β E
R<
RX T
T +τ
(T + τ )D
1 − λτ − β
are reliably achievable over c ∀T ≥ 1 (think of T large).
Proof. Choose T ≥ 1.
Let n = (T + τ )k for some large k. n is the block-length.
Generate C1 , C2 , . . . as described previously.
Generate 2⌊nR⌋ codewords of block-length (T + τ )k = n by use of the
simulation procedure described previously. Note that C1 , C2 , . . . is the same
for generating all the 2⌊nR⌋ codewords.
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 231 — #13
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
231
(g)
Note that over Ai time slots, the codewords are generated i.i.d., as
(g)
in Shannon’s random-coding argument; this generation during Ai is done
i.i.d. X T .
Recall the behavior of the channel which directly communicates the
source X within distortion D. End-to-end,
1 n n n
lim Pr
(25)
d (X , Y ) > D = 0
n→∞
n
(g)
Let us look at the behavior of the channel restricted to time slots Ai .
Assume that the fraction of ‘g’ slots among the k Ai slots is ≥ 1 − λτ − β.
(g)
That is, number of Ai slots is larger than or equal to ⌊(1 − λτ − β)k⌋ + 1.
Denote N = ⌊(1 − λτ − β)k⌋ + 1. This is a high probability event and the
probability → 1 as k → ∞ for any β. If this even does not happen, we will
declare decoding error; hence, in what follows, assume that this is the case.
(g)
Restrict attention to the first N Ai slots. Rename these slots G1 , G2 ,
. . ., GN .
Denote the part of the source during slot Gi by Si . Note that Si is a
T -length vector.
Denote S = (S1 , S2 , . . . , SN ).
Denote the channel output during slot Gi by Ti . Note that Ti is a T length vector. Denote T = (T1 , T2 , . . . , TN ).
Recall the definition of the distortion function dT for T -length vectors,
and its n-block additive extension.
Over Gi slots, then,
lim Pr
(26)
N →∞
N
1 X
(T + τ )kD
dT (Si , Ti ) >
N
N
i=1
!
=0
By substituting N = ⌊(1 − λτ − β)k⌋ + 1, it follows, after noting that
1
k
≤
⌊(1 − λτ − β)k⌋ + 1
1 − λτ − β
(27)
that
(28)
1
lim Pr
k→∞
⌊(1 − λτ − β)k⌋ + 1
⌊(1−λτ −β)k⌋+1
X
i=1
(T + τ )D
dT (Si , Ti ) >
=0
1 − λτ − β
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 232 — #14
✐
✐
232
M. Agarwal, S. Mitter, and A. Sahai
Recall again that Si are i.i.d. X T and that, codeword generation over
Gi slots is i.i.d.
We have reduced, then, the problem to that where it is known that
an i.i.d. source is directly communicated over a channel within a certain
probability of excess distortion and we want to calculate a lower bound on
the capacity of the channel – this is Theorem 1 of [2].
If each Gi is considered to be a single unit of time, or in other words,
over Gi , the uses of the channel is considered as a single channel use, we are
thus able, by use of Theorem 1 of [2] to communicate at rates
(T + τ )D
E
R < RX T
(29)
(per channel use)
1 − λτ − β
Total time of communication, though, has been (T + τ )k and there are ⌊(1 −
λτ − β)k⌋ + 1 Gi slots over which the communication takes place. Noting
that
(30)
1 − λτ − β
⌊(1 − λτ − β)k⌋ + 1
≥
(T + τ )k
(T + τ )
it follows that rates
(31)
1 − λτ − β E
R<
RX T
(T + τ )
(T + τ )D
1 − λτ − β
are achievable for reliable communication over the original channel c per
channel use of c.
Roughly, the details of codebook generation and decoding are as follows:
Let reliable communication be desired at a rate R which is such that
there exist τ, β, T such that
1 − λτ − β E
(T + τ )D
R<
RX T
(32)
T +τ
1 − λτ − β
Generate C1 , C2 , . . .. Assume that this knowledge is available at both
encoder and decoder
Generate 2⌊k(T +τ )R⌋ codewords using the simulation procedure.
If the number of ‘g’ slots is less than ⌊(1 − λτ − β)k⌋, declare error.
(g)
Else, restrict attention only the first ⌊(1 − λτ − β)k⌋ Ai slots which
have been renamed G1 , G2 , . . ..
Over these slots, the codebook generation is i.i.d., and then, use the
procedure from Theorem 1 of [2].
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 233 — #15
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
233
E
8. RP
X (D) ≤ RX (D) if X is stationary and satisfies ψ-mixing
Lemma 7. Let X = (Xt , t = 1, 2, 3, . . .) be stationary process which satisP (D) ≤ RE (D).
fies ψ-mixing. Then, RX
X
Proof. By Lemma 2, X is ergodic. Thus, X is stationary, ergodic.
The proof now, relies on [5], Pages 490-499, where the rate-distortion
theorem is proved for stationary, ergodic sources.
First, note the notation in [5]. [5] defines RL (D) and R(D), both on
Page 491. Note that by the rate-distortion theorem for an i.i.d. source, it
follows that
(33)
RL (D) (notation in [5]) =
1 E
R T (T D) (our notation)
T X
Thus,
(34)
E
R(D) (notation in [5]) = lim RX
T (T D) (our notation)
T →∞
E
= RX
(D) (our notation)
Look at Theorem 9.8.2 of [5]. This theorem holds if probability of excess distortion criterion is used instead of the expected distortion criterion: see (9.8.10) of [5]. By mapping the steps carefully, it follows that rate
R1 (D − ǫ) (notation in [5]) is achievable for source-coding the source X under a probability of excess distortion D for all ǫ > 0. Note that it follows
that rates R1 (D − ǫ) are achievable, not necessarily rates R1 (D). This is
because in (9.8.10), when making further arguments, dˆ is made D + 2δ and
not D. Hence, we need to keep a distortion level smaller than D in R1 (·) to
make this rate achievable for the probability of excess distortion criterion.
Next, we construct the Lth order super source as described on Page 495 of
tL
[5]: Define X ′ t = X(t−1)L+1
. Then, X ′ = (X ′ t , t = 1, 2, 3, . . .) is the nth order
′
super-source. X is stationary, ψ-mixing because X is (Lemma 4), and thus,
stationary, ergodic, by Lemma 2. One can thus use Theorem 9.8.2 of [5]
again to argue that rate RL (D − ǫ) (notation of [5]) is achievable for sourcecoding the source X under a probability of excess distortion D for all ǫ > 0.
By taking a limit as L → ∞ (the limit exists by Theorem 9.8.1 in [5]), it
follows that rate R(D − ǫ) (notation in [5]) is achievable for source-coding
the source X under a probability of excess distortion D for all ǫ > 0. As
stated at the end of the proof of Theorem 9.8.1 in [5], R(D) is a continuous function of D. Thus, it follows that rates < R(D) are achievable for
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 234 — #16
✐
✐
234
M. Agarwal, S. Mitter, and A. Sahai
source-coding the source X under a probability of excess distortion D. At
this point, the lemma follows from (34).
9. Generalization of Theorem 1 in Part I to stationary
sources satisfying ψ-mixing
Before we prove the theorem, note the following: Let f : [0, ∞) → [0, ∞) be
a convex ∪ non-increasing function. Let f (0) = K. Let 0 < a < a′ . Then,
(35)
|f (a) − f (a′ )| ≤
K ′
(a − a)
a
Theorem 1. Let c be a channel over which the source X, assumed to be
stationary, ψ-mixing, is directly communicated within probability of excess
P (D) are reliably achievable over c.
distortion D, D > 0. Then, rates < RX
P (D) ≤ RE (D) by Lemma 7 and since it is known that
Proof. Since RX
X
(36)
1 E
RX T (T D)
T →∞ T
E
RX
(D) = lim
it is sufficient to prove that rates less than
(37)
1 E
RX T (T D)
T →∞ T
lim
are reliably achievable over c.
To this end, denote
(38)
D′ ,
D
1 − λτ − β
Then,
(39)
(40)
(41)
(42)
(43)
1 E
1 − λτ − β E
′
RX T ((T + τ )D ′ ) − lim RX
T (T D )
T →∞ T
T +τ
1
1 − λτ − β E
RX T ((T + τ )D ′ ) −
RE T ((T + τ )D ′ )
=
T +τ
T +τ X
1
1 E
′
+
RE T ((T + τ )D ′ ) − RX
T ((T + τ )D )
T +τ X
T
1 E
1 E
′
R T (T D ′ )
+ RX
T ((T + τ )D ) −
T
T X
1 E
1 E
′
+ RX
RX T (T D ′ )
T (T D ) − lim
T →∞ T
T
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 235 — #17
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
235
Expression in (40) is
(44)
−λτ − β E
RX T ((T + τ )D ′ )
T +τ
Note that
(45)
′
E
RX
T ((T + τ )D ) ≤ T log |X|
Thus, the absolute value of the expression in (40) is upper bounded by
(λτ + β) log |X|.
Expression in (41) is
(46)
−τ
T
1
′
E
R T ((T + τ )D )
T +τ X
Note that
(47)
′
E
RX
T ((T + τ )D ) ≤ T log |X|
It then follows that expression in (41) → 0 as T → ∞.
Expression in (42) is
(48)
1
τ
1 E
′
E
RX T T (D ′ + D ′ ) − RX
T (T D )
T
T
T
1 E
T RX T (T D)
is a convex ∪ non-negative function of D, upper bounded by
log |X|. It follows that
(49)
1
τ
1 E
E
′
RX T T (D ′ + D ′ ) − RX
T (T D )
T
T
T
log |X| ′ τ ′
′
≤
D
)
−
D
(D
+
D′
T
→ 0 as T → ∞
Expression in (43) → 0 as T → ∞.
By noting the bound on the absolute value of expression (40) proved
above and by noting, as proved above, that expressions in (41), (42), and
(43) → 0 as T → ∞, it follows that ∃ ǫT → 0 as T → ∞, possibly depending
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 236 — #18
✐
✐
236
M. Agarwal, S. Mitter, and A. Sahai
on λτ and β such that
(50)
1 − λτ − β E
1 E
′
RX T ((T + τ )D ′ ) − lim RX
T (T D ) ≤ (λτ + β)|X| + ǫT
T →∞ T
T +τ
By Lemma 6, and by recalling that D ′ = 1−λDτ −β it follows that rates
less than
D
1 E
(51)
− (λτ + β)|X| − ǫT
lim RX T T
T →∞ T
1 − λτ − β
are achievable reliably over c.
By using the fact that λτ and β can be made arbitrarily small and ǫT → 0
as T → ∞, and that, the function
(52)
1 E
RX T (T D)
T →∞ T
lim
is continuous in D, it follows that rates less than
(53)
E
lim RX
T (T D)
T →∞
are reliably achievable over c from which, as stated at the beginning of the
P (D) are reliably
proof of this theorem, it follows that rates less than RX
achievable over c.
Note that statements concerning resource consumption have not been
made either in Theorem 1 or Lemma 6 in this paper whereas they are part
of Theorem 1 in [2]. For the corresponding statements concerning resource
consumption, see Section 11. Further, the way Theorem 1 or Lemma 6 are
stated in this paper, the channel does not belong to a set whereas in Theorem
1 in [2], the channel may belong to a set. For the corresponding statement
where the channel may belong to a set, see Section 11.
10. Application to Markoff chains and order m
Markoff chains
Let X = (Xt , t = 1, 2, . . .) be a stationary, irreducible, aperiodic Markoff
chain evolving on a finite set X. By Lemma 3, X is ψ-mixing. X is thus,
stationary, ψ-mixing and thus, Theorem 1 holds for stationary, irreducible
Markoff chains evolving on a finite set.
Let X = (Xi , i ∈ N) be an order m stationary Markoff chain. Define
im
Zi = X(i−1)m+1
. Then, Z = (Zi , i ∈ N) is a Markoff chain. By Lemma 4, Z
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 237 — #19
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
237
is stationary. Assume that this Z is irreducible, aperiodic. By Lemma 5, X
is ψ-mixing, and thus, Theorem 1 holds.
11. Discussion
It is really (19) that is crucial to the proof, not that ψ-mixing criterion;
this is because it is (19) which is needed for carrying out the simulation
procedure described in Section 6. Other places where ψ-mixing criterion is
used in minor ways is to prove ergodicity and some other properties needed
to finish parts of the proof but it is possible that they can be proved by use
of (19) too (or can just be taken as assumptions). However, the assumption
of ψ-mixing suffices, and since this condition holds for Markoff and order m
Markoff sources (under stationarity, irreducibility, aperiodicity assumptions
as stated above), the theorem has been proved for quite a large class of
sources.
In Theorem 1 of [2], the channel may belong to a set whereas the way
Lemma 6 and Theorem 1 are stated in this paper, the channel does not
belong to a set. However, it is easy to see that the proof of Lemma 7 does
not require knowledge of the channel transition probability; only the endto-end description that the channel communicates the source to within the
distortion level is needed; for this reason, Theorem 1 in this paper generalizes
to the case when the channel belongs to a set for the same reason as [2].
A source-channel separation theorem has also been stated and proved in
Theorem 2 in [2]; this can be done in this paper too. Statements concerning
resource consumption have not been made in this paper in Lemma 6 or
Theorem 1. They follow for the same reason as in [2]: in this context, note
that the codebook in the proof of Lemma 7 consists of codewords which are
independent of each other and further, each codeword has the distribution
as the process X; this point is the only observation needed to prove the
statements concerning resource consumption. Finally, generalization to the
unicast, multi-user setting, namely Theorem 3 and 4 of [2] follow for the
same reason as in [2]. In this context, the only observation that needs to
be made is the same as above that the codewords in the proof of Lemma 7
follow the distribution of the process X.
12. Future research directions
• Generalize Theorem 1 to arbitrary stationary, ergodic processes, not
just those which satisfy ψ-mixing, to the extent possible.
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 238 — #20
✐
✐
238
M. Agarwal, S. Mitter, and A. Sahai
• In particular, explore a generalization to B-processes [6], the closure
of the set of Markoff chains of finite order.
• Consider an alternate proof strategy for proving Theorem 1 which
uses methods from classical ergodic and rate-distortion theory, that is,
methods similar to, for example, [5] and [6], and thus, does not rely on
the decomposition (19). This might help prove Theorem 1 for general
stationary, ergodic sources, not just those which satisfy ψ-mixing.
• Further, consider a strategy based on the theory of large deviations,
in the first instance, for irreducible, aperiodic Markoff chain source.
For i.i.d. sources, a large deviations based method was indeed used in
Part 1 [2].
• Generalize Theorem 1 to stationary, ergodic sources which evolve continuously in space and time (some assumptions might be needed on
the source). Since only the end-to-end description of the channel as
communicating the source X within distortion level D is used and not
the exact dynamics of the channel, the proof given in Part 1 for Theorems 2 and 4, and for similar theorems in this paper, directly holds
for channels which evolve continuously in space and time. The channel
k =< kn >∞
1 would however need to be rigorously defined for continuous time evolution. Further, the encoder-decoder < en , f n >∞
1 would
need to be defined on appropriate spaces so that the interconnection
< en ◦ kn ◦ f n >∞
1 makes sense.
• Research the possibility of an operational rate-distortioon theory for
stationary, ergodic sources (satisfying other conditions). An operational theory for i.i.d. sources has been presented in [1].
• The channel has been assumed to belong to a set in Part I [2] and
the same is the case in this paper. However, the source is assumed
to be known. Research the generalization of results in this paper to
compound sources.
13. Acknowledgements
The authors are extremely grateful to Prof. Robert Gray for his time and
many insightful discussions. The authors also thank Prof. Richard Bradley
for many important e-mail conversations which helped shed more light on
the ψ-mixing criterion.
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 239 — #21
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
239
Appendix A. Proofs of properties of ψ-mixing sequences
Proof of Lemma 1 :
Proof. From (18) and (17), it follows that ψ(τ ) can be alternatively be written as
(A.1) ψ(τ ) = sup
sup
∞
t
∞
t∈N A∈F1t ,B∈Ft+τ
+1 ,Pr(X1 ∈A)>0,Pr(Xt+τ +1 ∈B)>0
t
∞
Pr(Xt+τ
+1 ∈ B|X1 ∈
∞
Pr(Xt+τ
+1 ∈ B)
A)
−1
From (A.1), it follows that ∃λτ → 0 as τ → ∞ such that ∀t ∈ N, ∀τ ∈ W,
∞
t
∞
∀A ∈ F1t , ∀B ∈ Ft+τ
+1 , Pr(X1 ∈ A) > 0, Pr(Xt+τ +1 ∈ B) > 0,
∞
t
∞
∞
(A.2) | Pr(Xt+τ
+1 ∈ B|X1 ∈ A) − Pr(Xt+τ +1 ∈ B)| ≤ λτ Pr(Xt+τ +1 ∈ B)
From (A.2), it follows tha ∃λτ → 0 as τ → ∞ such that ∀t ∈ N, ∀τ ∈ W,
∞
t
∞
∀A ∈ F1t , ∀B ∈ Ft+τ
+1 , Pr(X1 ∈ A) > 0, Pr(Xt+τ +1 ∈ B) > 0,
(A.3)
∞
∞
t
(1 − λτ ) Pr(Xt+τ
+1 ∈ B) ≤ Pr(Xt+τ +1 ∈ B|X1 ∈ A)
Specializing (A.3), it follows that,
(A.4)
t+τ +T
t+τ +T
t
(1 − λτ ) Pr(Xt+τ
+1 ∈ B) ≤ Pr(Xt+τ +1 ∈ B|X1 ∈ A)
t+τ +T
∀t ∈ N, ∀τ ∈ W, ∀T ∈ W, ∀A ⊂ Xt , ∀B ⊂ XT , Pr(X1t ∈ A) > 0, Pr(Xt+τ
+1 ∈
B) > 0.
t+τ +T
Note that Pr(Xt+τ
+1 ) = PT (B) . Substituting this into (A.4), it follows that ∀t ∈ N, ∀τ ∈ W, ∀T ∈ W, ∀A ⊂ Xt , ∀B ⊂ XT , Pr(X1t ∈ A) > 0,
t+τ +T
Pr(Xt+τ
+1 ∈ B) > 0,
(A.5)
t+τ +T
t
(1 − λτ )PT (B) ≤ Pr(Xt+τ
+1 ∈ B|X1 ∈ A)
t+τ +T
t
If λτ = 0, it follows from (A.2), that for Pr(Xt+τ
+1 ∈ B) > 0, P (X1 ∈ A) >
0,
(A.6)
t+τ +T
t
Pr(Xt+τ
+1 ∈ B|X1 ∈ A) = (1 − λτ )PT (B)
and the above equation also holds if PT (B) = 0 but P (X1t ∈ A) > 0; thus,
′
T
(19) holds with any probability distribution Pt,τ,T,
A on X .
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 240 — #22
✐
✐
240
M. Agarwal, S. Mitter, and A. Sahai
If λτ > 0, define
′
Pt,τ,T,
A (B)
(A.7)
t+τ +T
t
P (Xt+τ
+1 ∈ B|X1 ∈ A) − (1 − λτ )PT (B)
=
λτ
From (A.5) , it follows that ∀t ∈ N, ∀τ ∈ W, ∀T ∈ W, ∀A ∈ Xt , ∀B ∈ XT ,
t+τ +T
P (X1t ∈ A) > 0, P (Xt+τ
+1 ∈ B) > 0, ∃λτ → 0 as τ → ∞ such that
(A.8)
t+τ +T
t
′
P (Xt+τ
+1 ∈ B|X1 ∈ A) = (1 − λτ )PT (B) + λτ Pt,τ,T,A (B)
′
T
for some probability distribution Pt,τ,T,
A on X which may depend on t, τ,
T, A.
Finally, note that if PT (B) = 0, (19) still holds with definition (A.7) for
′
Pt,τ,T,A since all the three probabilities in question are individually zero.
This finishes the proof of the lemma.
Proof of Lemma 2:
Proof. In order to prove this lemma, it is sufficient to prove the condition
on Page 19 in [10] (which implies ergodicity as is proved on the same page
of [10]), and which can be re-stated as
(A.9)
N −1
1 X
+T
P (X1t = at1 , Xττ+1
= bT1 ) = P (X1t = at1 )P (X1T = bT1 )
N →∞ N
lim
τ =0
∀t ∈ N, ∀T ∈ N, ∀at1 ∈ Xt , ∀bT1 ∈ XT .
To this end, note, first, that from (A.2), it follows that ∃λτ → 0 as
∞
t
∞
τ → ∞ such that ∀t ∈ N, ∀A ∈ F1t , ∀B ∈ Ft+τ
+1 , P (X1 ∈ A) > 0, P (Xt+τ +1 ∈
B) > 0,
(A.10)
∞
∞
t
(1 − λτ )P (Xt+τ
+1 ∈ B) ≤ P (Xt+τ +1 ∈ B|X1 ∈ A)
∞
≤ (1 + λτ )P (Xt+τ
+1 ∈ B)
∞
t
Thus, ∃λτ → 0 as τ → ∞ such that ∀t ∈ N, ∀A ∈ F1t , ∀B ∈ Ft+τ
+1 , P (X1 ∈
∞
A) > 0, P (Xt+τ
+1 ∈ B) > 0,
∞
(1 − λτ )P (X1t ∈ A)P (Xt+τ
+1 ∈ B)
∞
≤ P (X1t ∈ A, Xt+τ
+1 ∈ B)
(A.11)
∞
≤ (1 + λτ )P (X1t ∈ A)P (Xt+τ
+1 ∈ B)
If P (X1t = at1 ) = 0, then both the left hand side and the right hand side
in (A.9) are zero. If P (X1T = bT1 ) = 0, by use of the assumption that X is
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 241 — #23
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
241
+T
stationary and thus noting that P (Xττ+1
) = P (X1T ) , it follows that both the
left hand side and the right hand side in (A.9) are zero. If neither P (X1t =
at1 ) = 0 nor P (X1T = bT1 ) = 0 is zero, it follows from (A.11) that for τ ≥ t,
+T
(1 − λτ −t )P (X1t = at1 )P (Xττ+1
= bT1 )
+T
≤ P (X1t = at1 , Xττ+1
= bT1 )
+T
≤ (1 + λτ −t )P (X1t = at1 )P (Xττ+1
= bT1 )
(A.12)
Denote
C,
(A.13)
t−1
X
+T
P (X1t = at1 , Xττ+1
= bT1 )
τ =0
It follows from (A.12) by taking a sum over τ that and by noting that since
+T
the process is stationary, P (Xττ+1
= bT1 ) = P (X1T = bT1 ) and substituting
(A.13) in (A.12)
C+
N −t−
N
−1
X
λτ −t
τ =t
≤
N
−1
X
!
P (X1t = at1 )P (X1T = bT1 )
+T
P (X1t = at1 , Xττ+1
= bT1 )
τ =0
≤C+
(A.14)
N −t+
N
−1
X
τ =t
λτ −t
!
P (X1t = at1 )P (X1T = bT1 )
After noting that C and t are constants, that λτ → 0 as τ → ∞, after dividing by N and taking limits as N → ∞ in (A.14) , it follows that
(A.15)
lim
N →∞
N
−1
X
+T
P (X1t = at1 , Xττ+1
= bT1 ) = P (X1t = at1 )P (X1T = bT1 )
τ =0
thus proving (A.9) , and thus, proving that the process X is ergodic if it is
stationary, ψ-mixing.
Proof of Lemma 3 :
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 242 — #24
✐
✐
242
M. Agarwal, S. Mitter, and A. Sahai
Proof. Consider the two-sided extension V = (Vt , t ∈ Z) of X, defined on a
probability space (Ω′′ , Σ′′ , P ′′ ). That is,
(A.16)
P ′′ (Vt+1 = j|Vt = i) = pij ,
−∞ < t < ∞
where pij denotes the probability
(A.17)
P (Xt+1 = j|Xt = i),
1≤t<∞
which is independent of t since X is Markoff. Such an extension is possible, see for example [9]. Denote by XZ , the set of doubly-infinite sequences
taking values in X. The Borel-sigma field on XZ is the standard construction, see Pages 1-5 of [10]. Note that V is finite-state, stationary, irreducible,
aperiodic.
∞ and as was the case when
Denote the Borel-sigma field on XZ by H−∞
b
defining Fa , denote the Borel sigma-field on Xba by Hab , −∞ ≤ a ≤ b ≤ ∞.
For the process V , consider the standard definition of ψ-mixing as stated
in [4], and thus, define
(A.18) ψV (τ ) , sup
sup
t
∞
′′
′′
t∈Z K∈Ht−∞ ,L∈H∞
t+τ +1 ,P (V−∞ ∈K)>0,P (Vt+τ +1 ∈L)>0
∞
t
∈ K, Vt+τ
P ′′ (V−∞
+1 ∈ L)
t
∞
′′
′′
P (V−∞ ∈ K)P (Zt+τ +1 ∈ L)
−1
The process V is said to be ψ-mixing if ψV (τ ) → 0 as τ → ∞. Since V is
stationary, irreducible, aperiodic, finite-state Markoff chain, by Theorem 3.1
of [4], V is ψ-mixing.
Let A ∈ F1t . Consider the set A′′ defined as follows:
(A.19)
A′′ = {(. . . , a−n , . . . a−1 , a0 , a1 , . . . , at )|(a1 , a2 , . . . , at ) ∈ A}
Then, since X is stationary and V is the double-sided extension of X,
(A.20)
t
P ′′ (V−∞
∈ A′′ ) = P (X1t ∈ A)
and by use of the Markoff property, and again, noting that V is the doublesided extension of X, it follows that
(A.21)
t
∞
t
∞
P ′′ (V−∞
∈ A′′ , Vt+τ
+1 ∈ B) = P (X1 ∈ A, Xt+τ +1 ∈ B)
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 243 — #25
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
243
By use of (A.20) and (A.21), it follows that
(A.22)
t
∞
∞
P ′′ (V−∞
∈ A′′ , Vt+τ
P (X1t ∈ A, Xt+τ
+1 ∈ B)
+1 ∈ B)
−
1
−1
=
t
∞
t
∞
′′
′′
′′
P (V−∞ ∈ A )P (Vt+τ +1 ∈ B)
P (X1 ∈ A)P (Xt+τ +1 ∈ B)
∞
t
∞
where A ∈ F1t , B ∈ Ft+τ
+1 , P (X1 ∈ A) > 0, P (Xt+τ +1 ∈ B) > 0.
Thus,
(A.23)
∞
t
P ′′ (V−∞
∈ A′′ , Vt+τ
+1 ∈ B)
−1
t
∞
P ′′ (V−∞ ∈ A′ )P ′′ (Vt+τ
+1 ∈ B)
sup
∞
t
∞
A∈F1t ,B∈Ft+τ
+1 ,P (X1 ∈A)>0,P (Xt+τ +1 ∈B)>0
=
∞
P (X1t ∈ A, Xt+τ
+1 ∈ B)
−1
t
∞
P (X1 ∈ A)P (Xt+τ +1 ∈ B)
sup
∞
t
∞
A∈F1t ,B∈Ft+τ
+1 ,P (X1 ∈A)>0,P (xt+τ +1 ∈B)>0
Thus,
(A.24)
sup
t
∞
t
∞
′′
′′
K∈G−∞
,L∈Gt+τ
+1 ,P (V1 ∈K)>0,P (Vt+τ +1 ∈L)>0
≥
sup
∞
t
∞
A∈F1t ,B∈Ft+τ
+1 ,P (X1 ∈A)>0,P (Xt+τ +1 B)>0
t
∞
P ′′ (V−∞
∈ K, Vt+τ
+1 ∈ L)
−1
t
∞
′′
′′
P (V−∞ ∈ K)P (Vt+τ +1 ∈ L)
∞
P (X1t ∈ A, Xt+τ
+1 ∈ B)
−1
t
∞
P (X1 ∈ A)P (Xt+τ +1 ∈ B)
t
∞
This is because there are sets K ∈ G−∞
and L ∈ Gt+τ
+1 which are not of the
′′
′′
form A and B respectively.
Denote the function ψ, defined in (18) for the process X by ψX . It follows
from (A.24) that ψZ (τ ) ≥ ψX (τ ) . Since Z is ψ-mixing as stated above, by
definition, ψZ (τ ) → 0 as τ → ∞. Thus, ψX (τ ) → 0 as τ → ∞, and thus, X
is ψ-mixing.
Proof of Lemma 4:
Proof. Stationary of Z follows directly from the definition of stationarity.
Denote the ψ function for X and Z by ψX and ψZ respectively. Note
that the ψ function for the process Z can be written as follows:
(A.25) ψZ (τ ) , sup
sup
∞
tL
∞
t∈N A∈F1tL ,B∈FtL+τ
L+1 ,P (X1 ∈A)>0,P (XtL+τ L+1 ∈B)>0
∞
P (X1tL ∈ A, XtL+τ
L+1 ∈ B)
−1
tL
∞
P (X1 ∈ A)P (XtL+τ L+1 ∈ B)
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 244 — #26
✐
✐
244
M. Agarwal, S. Mitter, and A. Sahai
Note that when calculating the ψ function for Z, the supremum is taken
over a lesser number of sets than when calculating the ψ function for X. It
follows that ψZ (τ ) ≤ ψX (τ ). Since X is ψ-mixing, ψX (τ ) → 0 as τ → ∞. It
follows that ψZ (τ ) → 0 as τ → ∞. Thus, Z is ψ-mixing.
Proof of Lemma 5:
Proof. Note that Z is stationary by Lemma 4. Thus, Z is a stationary,
irreducible, aperiodic, finite-state Markoff chain, evolving on a finite set,
and by Lemma 3 , ψ-mixing.
Since the set Z is finite, the Borel sigma field on Z∞ can be constructed
analogously to that on X∞ ; see Page 1-2 of [10]. Denote this Borel sigma
field by G ∞ . Define the Borel sigma fields Gab , analogously as was done for
F1∞ . Denote the underlying probability space by (Ω′ , Σ′ , P ′ )
An element of Z∞ is denoted by (z1 , z2 , . . .) where zi ∈ Z = XL . The j th
component of zi will be denoted by zi (j) .
Define
(A.26) ψX (τ ) , sup
sup
∞
t
∞
t∈N A∈F1t ,B∈Ft+τ
+1 ,P (X1 ∈A)>0,P (Xt+τ +1 ∈B)>0
∞
t
P (X1 ∈ A, Xt+τ
+1 ∈ B)
t
∞
P (X1 ∈ A)P (Xt+τ +1 ∈ B)
−1
and
(A.27) ψZ (τ ) =, sup
sup
∞
t
∞
′
′
′
′
t∈N A′ ∈G1t ,B′ ∈Gt+τ
+1 ,P (Z1 ∈A )>0,P (Zt+τ +1 ∈B )>0
′
∞
P ′ (Z1t ∈ A′ , Zt+τ
+1 ∈ B )
∞
′
P ′ (Z1t ∈ A′ )P ′ (Zt+τ
+1 ∈ B )
−1
By definition, the processes X and Z are ψ-mixing if ψX (τ ) and ψZ (τ )
tend to zero as τ → ∞, respectively.
∞
t
∞
For A ∈ F1t , B ∈ Ft+τ
+1 , P (X1 ∈ A) > 0, P (Xt+τ +1 ∈ B) > 0, define,
(A.28)
κX (t, τ, A, B) ,
∞
P (X1t ∈ A, Xt+τ
+1 ∈ B)
−1
t
∞
P (X1 ∈ A)P (Xt+τ +1 ∈ B)
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 245 — #27
✐
✐
Part II: stationary sources satisfying ψ-mixing criterion
245
Define
t
k1 ,
L
t+τ +1
k2 ,
L
(A.29)
Assume that τ ≥ 4L. It follows that k1 ≤ k2 (a weaker assumption is
possible, but this suffices).
Given A and B, define A′ and B′ by
(A.30)
A′ , {(a1 , a2 , . . . , ak1 L )|(a1 , a2 , . . . , at ) ∈ A}
B′ , {(bk2 L+1 , bk2 L+2 , . . .)|(bt+τ +1 , bt+τ +2 , . . .) ∈ B}
Think, now of (a1 , . . . , ak1 L ) as a′ = (a′1 , . . . , a′k1 ), a k1 length sequence,
where a′i ∈ Z. This can be done by defining a′i = aiL
(i−1)L+1 . Analogously,
′
′
′
′
think of (bk2 L+1 , bk2 L+2 , . . .) as (bk2 +1 , bk2 +2 , . . .) where b′ki is defined analogously to how a′i was defined. Think of A′ and B′ , now, as sequences of
elements in Z in the obvious way.
∞
Define, for J ∈ G1q , U ∈ Gq+q
′ +1 ,
(A.31)
κZ (q, q ′ , J, U) ,
∞
P ′ (Z1t ∈ J, Zt+τ
+1 ∈ U)
−1
t
∞
′
′
P (Z1 ∈ J)P (Zt+τ +1 ∈ U)
Then, it follows that for τ ≥ 4L,
(A.32)
κX (t, τ, A, B) = κZ (k1 , k2 − k1 , A′ , B′ )
Denote
(A.33) µX (t, τ ) =
sup
∞
t
∞
A∈F1t ,B∈Ft+τ
+1 ,P (X1 ∈A)>0,P (Xt+τ +1 ∈B)>0
∞
P (X1t ∈ A, Xt+τ
+1 ∈ B)
−1
t
∞
P (X1 ∈ A)P (Xt+τ +1 ∈ B)
µZ (q, q ′ ) =
sup
q
∞
∞
′
′
J∈G1q ,U∈Gq+q
′ +1 ,P (Z1 ∈J)>0,P (Zt+τ +1 ∈U)>0
∞
P ′ (Z1t ∈ J, Zt+τ
+1 ∈ U)
−1
t
∞
P ′ (Z1 ∈ J)P ′ (Zt+τ +1 ∈ U)
✐
✐
✐
✐
✐
✐
“2-Agarwal” — 2018/3/28 — 3:34 — page 246 — #28
✐
✐
246
M. Agarwal, S. Mitter, and A. Sahai
∞
It follows from (A.32) by taking supremum over sets A ∈ F1t and B ∈ Ft+τ
+1
k1
∞
and then, noting that there are sets J ∈ G1 and U ∈ Gk2 +1 which are not of
the form A′ and B′ , that
t
t+τ +1
t
µX (t, τ ) ≤ µZ (k1 , k2 − k1 ) = µZ
(A.34)
,
−
L
L
L
Thus,
(A.35)
t
t+τ +1
t
ψX (τ ) ≤ sup µZ
,
−
L
L
L
t∈N
The right hand side in the above equation → 0 as τ → ∞ since Z is ψ−
mixing. Thus, ψX (τ ) → 0 as τ → ∞, and thus, X is ψ-mixing.
References
[1] M. Agarwal, A universal, operational theory of multi-user communication with fidelity criteria, Ph.D. thesis, Massachusetts Institute of
Technology (2012).
[2] M. Agarwal, S. K. Mitter, and A. Sahai, Layered black-box, behavioral
interconnection perspective and applications to the problem in communications, Part I: i.i.d. sources, Communications in Information and
Systems 17 (2017), no. 4, 193–217.
[3] R. Bradley, On the ψ-mixing condition for stationary random sequences,
Transactions of the American Mathematical Society 276 (1983), no. 1,
55–66.
[4] R. C. Bradley, Basic Properties of Strong Mixing Conditions. A Survey
and Some Open Questions, Probability surveys 2 (2005), 107–144.
[5] R. G. Gallager, Information theory and reliable communication, Wiley
(1968).
[6] R. M. Gray, Entropy and information theory, Springer-Verlag (2011).
[7] H. Kesten and G. L. O'Brien, Examples of mixing sequences, Duke Mathematical Journal 43 (1976), no. 2, 405–415.
[8] Y. V. Prohorov and Y. A. Rozanov, Probability theory: basic concepts,
limit theorems, random processes, Die Grundlehren der Mathematischen
Wissenschaften in Einzeldarstellungen mit besonderer Berucksichtigung
der Anwendungsgebiete, Band 157, Springer-Verlag, 1st edition (1969).
[9] R. Douc, E. Moulines, and D. Stoffer, Nonlinear time series: theory, methods and applications with R examples, Chapman and Hall/CRC, 1st edition (2014).
[10] P. C. Shields, The ergodic theory of discrete sample paths, American
Mathematical Society (1996).
[11] E. Yang and J. C. Kieffer, On the redundancy of the fixed-database
Lempel-Ziv algorithm for φ-mixing sources, IEEE Transactions on Information Theory 43 (1997), no. 4, 1101–1111.
[12] Z. Zhang and E. Yang, An on-line universal lossy data compression
algorithm via continuous codebook refinement — Part 2: optimality for
phi-mixing source models, IEEE Transactions on Information Theory
42 (1996), no. 3, 822–836.
Navi Mumbai, 410210, Maharashtra, India
E-mail address: [email protected]
Laboratory for information and decision systems
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139-4307, USA
E-mail address: [email protected]
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720-1770, USA
E-mail address: [email protected]
Data Clustering using a Hybrid of Fuzzy C-Means and
Quantum-behaved Particle Swarm Optimization
Saptarshi Sengupta, Department of EECS, Vanderbilt University, Nashville, TN, USA ([email protected])
Sanchita Basak, Department of EECS, Vanderbilt University, Nashville, TN, USA ([email protected])
Richard Alan Peters II, Department of EECS, Vanderbilt University, Nashville, TN, USA ([email protected])
Abstract – Fuzzy clustering has become a widely used data mining
technique and plays an important role in grouping, traversing and
selectively using data for user specified applications. The
deterministic Fuzzy C-Means (FCM) algorithm may result in
suboptimal solutions when applied to multidimensional data in
real-world, time-constrained problems. In this paper the
Quantum-behaved Particle Swarm Optimization (QPSO) with a
fully connected topology is coupled with the Fuzzy C-Means
Clustering algorithm and is tested on a suite of datasets from the
UCI Machine Learning Repository. The global search ability of
the QPSO algorithm helps in avoiding stagnation in local optima
while the soft clustering approach of FCM helps to partition data
based on membership probabilities. Clustering performance
indices such as F-Measure, Accuracy, Quantization Error,
Intercluster and Intracluster distances are reported for
competitive techniques such as PSO K-Means, QPSO K-Means
and QPSO FCM over all datasets considered. Experimental
results indicate that QPSO FCM provides comparable and in most
cases superior results when compared to the others.
Keywords—QPSO; Fuzzy C-Means Clustering; Particle Swarm
Optimization; K Means; Unsupervised Learning
I. INTRODUCTION
Clustering is the process of grouping sets of objects such that
objects in one group are more similar to each other than to those
in another group. Data clustering is widely used for statistical
analyses in machine learning, pattern recognition, image
analysis and the information sciences making it a common
exploratory data mining technique [1-2]. The K-Means
algorithm is one of the widely used partitioned data clustering
techniques, however its solution quality is sensitive to the initial
choice of cluster centres and it is susceptible to getting trapped
in local optima [1]. K-Means is NP-hard, thus approximation
algorithms have been used to obtain close to exact solutions [3].
Fuzzy C-Means (FCM) [4] is an unsupervised soft clustering approach which uses a membership function to assign an object to multiple clusters, but it suffers from the same issue of stagnation under iterative gradient descent as hard K-Means. This has led to several attempts to intelligently traverse
the search space and minimize the underlying cost, often at the
expense of increased time complexity. In the past two decades,
powered by increased computational capabilities and the advent
of nature-inspired algorithmic models of collective intelligence
and emergence, many studies have led to the application of
guided random search algorithms in cost optimization of
partitioned and soft clustering. Several metaheuristics
mimicking information exchange in social colonies of bird and
insect species are well known for their robust performances on
ill-structured global optimization problems, irrespective of the
continuity or gradient of the cost function. This paper makes a
comparative analysis of the performance of one such algorithm:
the Quantum-behaved Particle Swarm Optimization (QPSO)
[16], from both a hard, partitioned (QPSO K-Means) as well as
a soft, fuzzy clustering (FCM QPSO) point of view. The
literature suggests prior work on integrating Particle Swarm
Optimization (PSO) [8] into the deterministic K-Means
framework has led to improved clustering accuracy across
many datasets. This is evidenced by the works of Izakian et al.
[5], Emami et al. [6] and Yang et al. [7], among others. In [5]
the authors integrated FCM with a fuzzy PSO and noted the
efficiency and improvement in solution quality whereas the
authors of [6] hybridized FCM with PSO on one hand and an
Imperialist Competitive Algorithm (ICA) [24] on the other to
come to the conclusion that ICAPSO suited the clustering jobs
under consideration better than the competitor methods tested.
The work of Yang et al. in [7] used as metric the harmonic average of distances between individual data points and cluster centres, summed over all points. The proposed PSO K-Harmonic Means (PSOKHM) in [7] was found to outperform K-Harmonic Means (KHM) and PSO in that it not only reduced the convergence time of PSO but also helped KHM escape local minima. In this work, a detailed report of performance indices for some popular datasets from the UCI Machine Learning Repository [20] using FCM QPSO is made against QPSO K-Means, PSO K-Means and traditional K-Means. Subsequent
sections of the paper are structured as follows: Section II
elaborates on the FCM algorithm, Section III introduces the
variants of PSO used and Section IV describes the FCM QPSO
approach. Section V details the experimental setup while
Section VI reports and analyzes the results obtained. Finally,
Section VII makes concluding remarks.
II. FUZZY C-MEANS ALGORITHM (FCM)
The Fuzzy C-Means (FCM) algorithm aims to partition N
objects into C clusters. Essentially, this reduces to grouping the
object set D = {D1,D2,D3……..DN} into C clusters (1<C<N)
with Ω ={Ω1, Ω2, Ω3,…. ΩC} being the cluster centres. Each data
point belongs to a cluster with randomly initialized centroids,
according to a membership function μ_ij defined as:

    μ_ij = 1 / Σ_{r=1}^{C} ( d_ij / d_rj )^{2/(m−1)}    (1)
dij = || xi – yj || is the distance between i-th centre and j-th data
point, drj = || xr – yj || is that between r-th centre and j-th data
point and m ϵ [1, ∞) is a fuzzifier. FCM employs an iterative
gradient descent to compute centroids, which are updated as:

    x_i = Σ_{j=1}^{N} μ_ij^m y_j / Σ_{j=1}^{N} μ_ij^m    (2)

The objective function minimized by FCM can be formulated as the sum of membership-weighted Euclidean distances:

    φ = Σ_{i=1}^{C} Σ_{j=1}^{N} μ_ij^m ||x_i − y_j||²    (3)

By recursively calculating eqs. (1) and (2), FCM can be terminated once a preset convergence criterion is met. Like many algorithms which employ gradient descent, FCM can fall prey to local optima in a multidimensional fitness landscape. To avoid this, a stochastic optimization approach can be used.

III. VARIANTS OF PARTICLE SWARM OPTIMIZERS USED

A. Particle Swarm Optimization (PSO)

PSO, proposed by Eberhart and Kennedy [8], is a stochastic optimization strategy that makes no assumptions about the gradient of the objective function. It has been able to effectively produce promising results in many engineering problems where deterministic algorithms fail. Although PSO is widely considered a universal optimizer, there exist numerous issues with the standard PSO [8], most notably a poor local search ability (Angeline et al. [9]). This has led to several subsequent studies on improvements of the same [10-13]. The particles in PSO update their position through a personal best position (pbest) and a global best (gbest). After each iteration their velocity and position are updated as:

    v_ij(t+1) = ω v_ij(t) + C1 r1(t) (p_ij(t) − x_ij(t)) + C2 r2(t) (p_gj(t) − x_ij(t))    (4)

    x_ij(t+1) = x_ij(t) + v_ij(t+1)    (5)

C1 and C2 are social and cognitive acceleration constants, r1 and r2 are i.i.d. random numbers between 0 and 1, and x_ij, v_ij represent the position and velocity of the i-th particle in the j-th dimension, whereas p_ij(t) and p_gj(t) are the pbest and gbest positions. In term 1 in the RHS of eq. (4), ω represents the inertia of the i-th particle, and terms 2 and 3 introduce guided perturbations towards basins of attraction in the direction of movement of the particle. The personal best (pbest) update follows a greedy update scheme considering a cost minimization goal, as discussed in the following equation:

    f(x_i(t+1)) < f(p_i(t)) ⇒ p_i(t+1) = x_i(t+1), else p_i(t+1) = p_i(t)    (6)

Here, f is the cost and p_i is the personal best of a particle. The global best (p_g) is the minimum cost bearing element of the historical set of personal bests p_i of a particular particle. A major limitation of the standard PSO is its inability to guarantee convergence to an optimum, as was shown by Van den Bergh [14] based on the criterion established in [15].

B. Quantum-behaved PSO (QPSO)

Sun et al. proposed a delta potential well model for PSO, leading to a variant known as Quantum-behaved Particle Swarm Optimization (QPSO) [16]. A detailed analysis of the derivation of particle trajectories in QPSO may be found in [16-19]. The state update equations of a particle in a fully connected QPSO topology are described by the following equations:

    mbest_j = (1/N) Σ_{i=1}^{N} p_ij    (7)

    Φ_ij = θ p_ij + (1 − θ) p_gj    (8)

    x_ij = Φ_ij + β |mbest_j − x_ij(t)| ln(1/q)   ∀ k ≥ 0.5
    x_ij = Φ_ij − β |mbest_j − x_ij(t)| ln(1/q)   ∀ k < 0.5    (9)

mbest is the mean of the pbest positions of the swarm across all dimensions and Φ_ij is the local attractor of particle i. θ, q and k are i.i.d. uniform random numbers distributed in [0, 1]. β is the contraction-expansion coefficient which is varied over the iterations as:

    β = (1 − 0.1) (iteration_max − iteration_current) / iteration_max + 0.1    (10)

Eq. (6) updates the pbest set and its minimum is set as gbest.
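For concreteness, the following Python sketch (ours, not code from the paper; the function name qpso_step and the use of NumPy are our assumptions) performs one position update of the whole swarm according to eqs. (7)-(9):

import numpy as np

def qpso_step(X, pbest, gbest, beta, rng):
    """One swarm-wide QPSO position update, following eqs. (7)-(9).

    X      : (N, D) current particle positions
    pbest  : (N, D) personal best positions
    gbest  : (D,)   global best position
    beta   : contraction-expansion coefficient from eq. (10)
    """
    mbest = pbest.mean(axis=0)                      # eq. (7): mean of personal bests
    theta = rng.random(X.shape)
    phi = theta * pbest + (1.0 - theta) * gbest     # eq. (8): local attractors
    q = rng.random(X.shape)
    k = rng.random(X.shape)
    sign = np.where(k >= 0.5, 1.0, -1.0)            # eq. (9): '+' branch if k >= 0.5
    return phi + sign * beta * np.abs(mbest - X) * np.log(1.0 / q)

# Example: 30 particles in 4 dimensions.
rng = np.random.default_rng(0)
X = rng.random((30, 4)); pbest = X.copy(); gbest = X[0]
X_next = qpso_step(X, pbest, gbest, beta=0.75, rng=rng)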
IV. FUZZY C-MEANS QPSO (FCM QPSO)
In this approach, each particle is a D dimensional candidate
solution in one of the C clusters that can be formally
represented as the matrix X:

    X = [ x_11 ⋯ x_1D ; ⋮ ⋱ ⋮ ; x_C1 ⋯ x_CD ]    (11)
A population of particles is randomly initialized and personal
as well as global best positions are determined. Subsequently
membership values are computed and a cost is assigned to each
particle. The QPSO algorithm minimizes the cost associated
with the particles through recursively calculating the mean best
position using eq. (7), the membership values and cost function
through eqs. (1) and (3) and updating the candidate cluster
centre solution X. The algorithm is terminated if there is no
improvement in the global best and the algorithm stagnates or
if the preset number of iterations is exhausted. By using the
stochastic and non-differentiable objective function handling
capabilities of QPSO within the FCM algorithmic framework,
the problem of stagnation in local minima within a
multidimensional search space is mitigated to an extent better
than that possible with only the traditional FCM. The
pseudocode of FCM QPSO is outlined below:
Algorithm 1 FCM QPSO
1:  for each particle x_i
2:      initialize position
3:  end for
4:  Evaluate membership values using eq. (1)
5:  Evaluate cost using eq. (3) and set pbest, gbest
6:  do
7:      Compute mean best (mbest) position using eq. (7)
8:      for each particle x_i
9:          for each dimension j
10:             Calculate local attractor Φ_ij using eq. (8)
11:             if k ≥ 0.5
12:                 Update x_ij using eq. (9) with '+'
13:             else Update x_ij using eq. (9) with '-'
14:             end if
15:         end for
16:         Evaluate cost using eq. (3) and set pbest, gbest
17:     end for
18: while max iter or convergence criterion not met
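To complement the pseudocode, the following minimal Python sketch (ours, not the authors' MATLAB implementation; all names are illustrative) evaluates the FCM fitness used inside the loop, i.e. the memberships of eq. (1) and the cost of eq. (3), for one candidate centre matrix:

import numpy as np

def fcm_membership_and_cost(centres, data, m=2.0):
    """Membership matrix (eq. 1) and FCM objective (eq. 3) for a candidate
    centre matrix `centres` of shape (C, D) and `data` of shape (N, D)."""
    # d[i, j] = distance between centre i and data point j
    d = np.linalg.norm(data[None, :, :] - centres[:, None, :], axis=2) + 1e-12
    # mu[i, j] = 1 / sum_r (d_ij / d_rj)^(2/(m-1))
    mu = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
    cost = np.sum((mu ** m) * d ** 2)          # eq. (3) with Euclidean distance
    return mu, cost

# In Algorithm 1, each QPSO particle stores one candidate `centres` matrix
# (eq. 11); its cost from this function drives the pbest/gbest updates of eq. (6).
rng = np.random.default_rng(0)
data = rng.random((150, 4))                     # e.g. Iris-sized data
centres = rng.random((3, 4))
mu, cost = fcm_membership_and_cost(centres, data)
print(mu.shape, round(float(cost), 3))          # (3, 150) and a scalar cost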
V. EXPERIMENTAL SETUP
A. Parameter Settings
Learning parameters C1 and C2 are chosen as 2.05, and the inertia weight ω in PSO is decreased linearly from 0.9 to 0.1 over the course of iterations to facilitate global exploration in the early stages and exploitation in the latter stages. The contraction-expansion parameter β is varied according to eq. (10) for QPSO.
B. Datasets
Five well-known real datasets from the UCI Machine Learning
Repository were used in analysis. These are:
1) Fisher’s Iris Dataset consisting of three species of the Iris
flower (Setosa, Versicolour and Virginica) with a total of
150 instances with 4 attributes each.
2) Breast Cancer Wisconsin (Original) Dataset consisting of
a total of 699 instances with 10 attributes and can be
classified into 2 clusters: benign and malignant.
3) Seeds Dataset consisting of 210 instances with 3 different
varieties of wheat (Kama, Rosa and Canadian), each with
70 instances and 7 attributes.
4) Mammographic Mass Dataset consisting of 961 instances
with 6 attributes and classified into two clusters: benign
and malignant based on BI-RADS attributes and patient’s
age.
5) Sonar Dataset with 208 instances with 60 attributes and can
be classified into either of 2 objects: mines or rocks.
C. Performance Indices
The performance indices which provide insight into the
clustering effectiveness are outlined below:
(a) Intercluster Distance: The sum of distances between the
cluster centroids, larger values of which are desirable and
imply a greater degree of non-overlapping cluster
formation.
(b) Intracluster Distance: The sum of distances between data
points and their respective parent cluster centroids, smaller
values of which are desirable and indicate greater
compactness of clustering.
(c) Quantization Error: The sum of distances between data
points in a particular cluster and that parent cluster
centroid, divided by the total data points belonging to that
cluster, subsequently summed over all clusters and
averaged by the number of data clusters.
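As a minimal illustration (ours; the function name clustering_indices is an assumption, and a hard nearest-centre assignment is used for simplicity), the three distance-based indices above can be computed as follows:

import numpy as np

def clustering_indices(data, centres, labels):
    """Intercluster distance, intracluster distance and quantization error
    for a hard assignment `labels` of each point to its nearest centre."""
    C = len(centres)
    # (a) sum of pairwise distances between cluster centroids
    inter = sum(np.linalg.norm(centres[i] - centres[j])
                for i in range(C) for j in range(i + 1, C))
    # (b) sum of distances of points to their parent centroids
    dists = np.linalg.norm(data - centres[labels], axis=1)
    intra = dists.sum()
    # (c) per-cluster mean distance to the centroid, averaged over clusters
    quant = np.mean([dists[labels == c].mean() for c in range(C) if np.any(labels == c)])
    return inter, intra, quant

# Example with random data and 3 random centres:
rng = np.random.default_rng(0)
X, M = rng.random((100, 4)), rng.random((3, 4))
lab = np.argmin(np.linalg.norm(X[:, None, :] - M[None, :, :], axis=2), axis=1)
print(clustering_indices(X, M, lab))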
Indices such as F-Measure and Accuracy for the datasets under
test are calculated. The clustering algorithms are implemented
in MATLAB R2016a with an Intel(R) Core(TM) i7-5500U
CPU @2.40GHz. Experimental results for 10 trials are
tabulated and are thereafter analyzed. Table 1 lists the datasets
used in this paper.
VI. RESULTS AND ANALYSIS
Tables 2 through 6 contain results on clustering performance
reporting mean and standard deviations for the performance
indices for QPSO FCM, QPSO K-Means and PSO K-Means
and Figures 1 through 5 compare the accuracy of each
algorithm over all datasets.
Table 1. Data Set Information

Dataset             | No. of Data Points | No. of Attributes | No. of Clusters
Iris                | 150                | 4                 | 3
Breast Cancer       | 699                | 10                | 2
Seeds               | 210                | 7                 | 3
Mammographic Mass   | 961                | 6                 | 2
Sonar               | 208                | 60                | 2
Table 2. Comparison of Various Performance Indices for Iris Data Set
                        | QPSO FCM         | QPSO K-Means     | PSO K-Means
Inter Cluster Distance  | 5.7312±0.0067    | 6.1476±0.0330    | 6.1411±0.0934
Intra Cluster Distance  | 9.4608±0.0109    | 8.9548±0.0075    | 9.0478±0.0927
Quantization Error      | 0.6414±0.0035    | 0.6123±0.0360    | 0.6418±0.0176
F Measure               | 0.9133±0.0000    | 0.9030±0.0021    | 0.8937±0.0063

Table 3. Comparison of Various Performance Indices for Breast Cancer Data Set
                        | QPSO FCM         | QPSO K-Means     | PSO K-Means
Inter Cluster Distance  | 13.3462±1.2050   | 14.2993±0.4993   | 14.1413±0.7667
Intra Cluster Distance  | 146.0102±5.2887  | 142.9908±1.3635  | 141.7168±1.2193
Quantization Error      | 3.8394±0.0410    | 5.3048±0.0514    | 5.2737±0.0415
F Measure               | 0.9641±0.0024    | 0.9627±0.0028    | 0.9616±0.0038

Table 4. Comparison of Various Performance Indices for Seeds Data Set
                        | QPSO FCM         | QPSO K-Means     | PSO K-Means
Inter Cluster Distance  | 10.0939±0.3889   | 9.8110±0.0051    | 9.8460±0.2415
Intra Cluster Distance  | 25.5266±0.5904   | 24.3534±0.0059   | 24.5255±0.2069
Quantization Error      | 0.6677±0.0063    | 1.4835±0.0162    | 1.4949±0.0020
F Measure               | 0.8953±0.0124    | 0.8995±0.0000    | 0.8979±0.0044

Table 5. Comparison of Various Performance Indices for Mammographic Mass Data Set
                        | QPSO FCM         | QPSO K-Means     | PSO K-Means
Inter Cluster Distance  | 21.6798±5.8964   | 21.5716±0.0174   | 21.3634±0.4812
Intra Cluster Distance  | 275.2412±17.1193 | 261.0328±0.0361  | 261.8542±1.1712
Quantization Error      | 6.2366±0.0386    | 7.3978±0.0000    | 7.4319±0.0153
F Measure               | 0.6910±0.0070    | 0.6855±0.0000    | 0.6851±0.0011

Table 6. Comparison of Various Performance Indices for Sonar Data Set
                        | QPSO FCM         | QPSO K-Means     | PSO K-Means
Inter Cluster Distance  | 0.7381±0.0916    | 1.3264±0.0446    | 1.2750±0.0081
Intra Cluster Distance  | 19.2274±0.6306   | 17.0267±0.1275   | 16.8552±0.0181
Quantization Error      | 0.6851±0.0077    | 0.9329±0.2072    | 1.0002±0.1073
F Measure               | 0.5989±0.0345    | 0.5702±0.0025    | 0.5421±0.0145
Figure 1. Accuracy of Algorithms on Iris
Figure 2. Accuracy of Algorithms on Breast Cancer
Figure 3. Accuracy of Algorithms on Seed
Figure 5. Accuracy of Algorithms on Sonar
Figure 6. Clustering using FCM QPSO on Iris Dataset
Performance indicators such as Intercluster Distance,
Intracluster Distance and Quantization Error computed from
the results obtained in Tables 2-6 and that in Figures 1-5 imply
that FCM QPSO has promising performance. Accuracy
improvements of 1.556%, 0.151%, 0.825% and 5.061%
respectively over QPSO K-Means are obtained on the Iris,
Breast Cancer (Original), Mammographic Mass and Sonar data
using FCM QPSO. On the Seed data, the accuracy drops by
0.371% and 0.212% for FCM QPSO when compared to QPSO
K-Means and PSO K-Means, while recording an improvement
of 18.447% over traditional K-Means.
Figure 4. Accuracy of Algorithms on Mammographic Mass
The improvements in clustering accuracy and F-Measure
obtained in case of FCM QPSO are at the expense of increased
time complexity with respect to traditional K-Means based
implementations. For instance, a typical FCM QPSO
implementation with cluster numbers fixed a priori with the
fuzzifier m set as 2 results in approximately six times the
computational cost as compared to QPSO K-Means when run
on the Sonar dataset. Figure 6 shows a three dimensional
partially representative classification of Iris Dataset into three
distinct clusters along with the optimized cluster centres
computed using FCM QPSO.
VII. CONCLUSIONS AND FUTURE SCOPE
This paper makes an effort to compare and contrast the
accuracy of hard and soft clustering techniques such as K-Means and Fuzzy C-Means upon hybridization with the
standard, fully-connected quantum-behaved versions of the
swarm intelligence paradigm of PSO on a number of datasets.
FCM QPSO utilizes fuzzy membership rules of FCM and the
guaranteed convergence ability of QPSO, thus avoiding
stagnation in local optima in the multidimensional fitness
landscape. Future work will analyze supervised approaches to
mitigate the initial solution quality sensitivity in high
dimensional datasets and aim at developing automatic
techniques for detection of optimal cluster numbers and cluster
centres in search spaces with reduced dimensionality.
REFERENCES
[1] Yang, F., Sun, T., Zhang, C., "An efficient hybrid data clustering method based on K-harmonic means and Particle Swarm Optimization", Expert Syst. Appl. 36(6), 9847–9852 (2009).
[2] Manning, C.D., Raghavan, P., Schütze, H., "Introduction to Information Retrieval", Vol. 1. Cambridge University Press, Cambridge (2008).
[3] Kanungo, T., Mount, D.M., Netanyahu, N. S., Piatko, C. D., Silverman, R., Wu, A. Y., "A local search approximation algorithm for k-means clustering", Proc. 18th Annu. ACM Sympos. Comput. Geom., pages 10–18, 2002.
[4] Bezdek, J. C., "Pattern Recognition with Fuzzy Objective Function Algorithms", New York: Plenum, 1981.
[5] Izakian, H., Abraham, A., Snasel, V., "Fuzzy clustering using hybrid fuzzy c-means and fuzzy particle swarm optimization", 2009 IEEE World Congress on Nature and Biologically Inspired Computing, pp. 1690–1694 (2009).
[6] Emami, H., Derakhshan, F., "Integrating fuzzy K-means, particle swarm optimization, and imperialist competitive algorithm for data clustering", Arab. J. Sci. Eng. 40 (12) (2015) 3545–3554.
[7] Yang, F., Sun, T., Zhang, C., "An efficient hybrid data clustering method based on K-harmonic means and particle swarm optimization", Expert Syst. Appl. 36 (6) (2009) 9847–9852.
[8] Kennedy, J., Eberhart, R., "Particle swarm optimization", Proc. IEEE Int. Conf. Neural Network, 1995.
[9] Angeline, P.J., "Evolutionary optimization versus particle swarm optimization: philosophy and performance differences," Lect. Notes Comput. Sci. 1447 (1998) 601–610.
[10] Kennedy, J., Mendes, R., “Population structure and particle swarm
performance”, Proceedings of IEEE Congress on Evolutionary
Computation (2002) 1671–1676.
[11] Shi, Y., Eberhart, R., “A modified particle swarm optimizer”,
Proceedings of the IEEE Conference on Evolutionary Computation
(1998) 69–73.
[12] Suganthan, P.N., “Particle swarm optimizer with neighborhood operator”,
Proceedings of IEEE Congress on Evolutionary Computation (1999)
1958–1962.
[13] Clerc, M., Kennedy, J., “The particle swarm-explosion, stability, and
convergence in a multidimensional complex space”, IEEE Trans. Evol.
Comput. 6 (1) (2002) 58–73.
[14] Van den Bergh, F., “An analysis of particle swarm optimizers.”, Ph.D.
Thesis, University of Pretoria, November 2001.
[15] Solis, F.J., Wets, R. J-B., “Minimization by random search techniques”,
Mathematics of Operations Research, 6:19–30, 1981.
[16] Sun, J., Xu,W.B., Feng, B., “A global search strategy of quantum-behaved
particle swarm optimization.”, Cybernetics and Intelligent Systems
Proceedings of the 2004 IEEE Conference, pp. 111–116, 2004.
[17] Sun, J., Feng, B., Xu, W.B., “Particle swarm optimization with particles
having quantum behavior”, IEEE Proceedings of Congress on
Evolutionary Computation, pp. 325–331, 2004.
[18] Wang. J., Zhou. Y., “Quantum-Behaved Particle Swarm Optimization
with Generalized Local Search Operator for Global Optimization”,
Advanced Intelligent Computing Theories and Applications. With
Aspects of Artificial Intelligence, LNCS Vol 4682 pp 851-860, SpringerVerlag Berlin Heidelberg 2007.
[19] Sun, J., Fang, W., Wu, X., Palade, V. and Xu, W., “Quantum-Behaved
Particle Swarm Optimization: Analysis of Individual Particle Behavior
and Parameter Selection”, Evolutionary Computation, vol. 20, no. 3, pp.
349-393, Sept. 2012.
[20] UCI Machine Learning Repository, http://archive.ics.uci.edu/ml/
[21] Serapião, A. B. S., Corrêa, G. S., Gonçalves, F. B., Carvalho, V. O., "Combining K-means and K-harmonic with fish school search algorithm for data clustering task on graphics processing units", Applied Soft Computing, vol. 41, pp. 290–304, 2016.
[22] Liu, C., Wang, C., Hu, J., and Ye, Z., “Improved K-means algorithm
based on hybrid rice optimization algorithm”, 2017 9th IEEE
International Conference on Intelligent Data Acquisition and Advanced
Computing Systems: Technology and Applications (IDAACS),
Bucharest, Romania, 2017, pp. 788-791.
[23] Prabha, K.A., and Visalakshi, N.K., “Improved Particle Swarm
Optimization Based K-Means Clustering”, 2014 International
Conference on Intelligent Computing Applications, Coimbatore, 2014,
pp. 59-63.
[24] Talatahari, S. et al., "Imperialist competitive algorithm combined with
chaos for global optimization", Commun. Nonlinear Sci. Numer. Simul.
17(3), 2012, pp. 1312–1319.
Robust Maximization of Non-Submodular Objectives
arXiv:1802.07073v2 [stat.ML] 14 Mar 2018
Ilija Bogunovic† (LIONS, EPFL, [email protected])    Junyao Zhao† (LIONS, EPFL, [email protected])    Volkan Cevher (LIONS, EPFL, [email protected])

Abstract

We study the problem of maximizing a monotone set function subject to a cardinality constraint k in the setting where some number of elements τ is deleted from the returned set. The focus of this work is on the worst-case adversarial setting. While there exist constant-factor guarantees when the function is submodular [1, 2], there are no guarantees for non-submodular objectives. In this work, we present a new algorithm Oblivious-Greedy and prove the first constant-factor approximation guarantees for a wider class of non-submodular objectives. The obtained theoretical bounds are the first constant-factor bounds that also hold in the linear regime, i.e. when the number of deletions τ is linear in k. Our bounds depend on established parameters such as the submodularity ratio and some novel ones such as the inverse curvature. We bound these parameters for two important objectives including support selection and variance reduction. Finally, we numerically demonstrate the robust performance of Oblivious-Greedy for these two objectives on various datasets.

† Equal contribution.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS) 2018, Lanzarote, Spain. PMLR: Volume 84. Copyright 2018 by the author(s).

1   Introduction

A wide variety of important problems in machine learning can be formulated as the maximization of a monotone¹ set function f : 2^V → R+ under the cardinality constraint k, i.e.

    max_{S ⊆ V, |S| ≤ k} f(S),    (1)

[Footnote 1: Non-negative and normalized (i.e. f(∅) = 0), f(·) is monotone if for any sets X ⊆ Y ⊆ V it holds f(X) ≤ f(Y).]
where V = {v1 , · · · vn } is the ground set of items. However, in many applications, we might require robustness of the solution set, meaning that the objective
value should deteriorate as little as possible after a
subset of elements is deleted.
For example, an important problem in machine learning is feature selection, where the goal is to extract a
subset of features that are informative w.r.t. a given
task (e.g. classification). For some tasks, it is of great
importance to select features that exhibit robustness
against deletions. This is particularly important in
domains with non-stationary feature distributions or
with input sensor failures [3]. Another important example is the optimization of an unknown function from
point evaluations that require performing costly experiments. When the experiments can fail, protecting
against worst-case failures becomes important.
In this work, we consider the following robust variant of Problem (1):

    max_{S ⊆ V, |S| ≤ k}  min_{E ⊆ S, |E| ≤ τ}  f(S \ E),    (2)

where² τ is the size of subset E that is removed from the solution set S. When the objective function exhibits submodularity, a natural notion of diminishing returns³, a constant factor approximation guarantee can be obtained for the robust Problem (2) [1, 2]. However, in many applications such as the above mentioned feature selection problem, the objective function f(·) is not submodular and the obtained guarantees are not applicable.
[Footnote 2: When τ = 0, Problem (2) reduces to Problem (1).]
[Footnote 3: f(·) is submodular if for any sets X ⊆ Y ⊆ V and any element e ∈ V \ Y, it holds that f(X ∪ {e}) − f(X) ≥ f(Y ∪ {e}) − f(Y).]

Background and related work. When the objective function is submodular, the simple Greedy algorithm [4] achieves a (1 − 1/e)-multiplicative approximation guarantee for Problem (1). The constant factor
can be further improved by exploiting the properties of
the objective function, such as the closeness to being
modular captured by the notion of curvature [5, 6, 7].
In many cases, the Greedy algorithm performs well
empirically even when the objective function deviates
from being submodular. An important class of such
objectives are γ-weakly submodular functions. Simply
put, submodularity ratio γ is a quantity that characterizes how close the function is to being submodular. It was first introduced in [8], where it was shown
that for such functions the approximation ratio of
Greedy for Problem (1) degrades slowly as the submodularity ratio decreases i.e. as (1 − e−γ ). In [9],
the authors obtain the approximation guarantee of the
form α−1 (1 − e−γα ), that further depends on the curvature α.
When the objective is submodular, the Greedy algorithm can perform arbitrarily badly when applied to
Problem (2) [1, 2]. A submodular version of Problem (2) was first introduced in Krause et al. [10],
while the first efficient algorithm and constant factor guarantees
were obtained in Orlin et al. [1] for τ = o(√k). In Bogunovic et al. [2], the authors introduce the PRo-GREEDY algorithm that attains
the same 0.387-guarantee but it allows for greater robustness, i.e. the allowed number of removed elements
is τ = o(k). It is not clear how the obtained guarantees generalize for non-submodular functions. In the
submodular setting, the curvature-dependent constant
factor approximation is obtained in [11] that holds for
any number of removals.
Deletion robust submodular maximization in the
streaming setting has been considered in [18, 19, 20].
Other versions of robust submodular optimization
problems have also been studied. In [10], the goal
is to select a set of elements that is robust against
the worst possible objective from a given finite set of
monotone submodular functions. The same problem
with different types of constraints is considered in [21].
It was further studied in the domain of influence maximization [22, 23]. The robust version of the budget
allocation problem was considered in [24]. In [25], the
authors study the problem of maximizing a monotone
submodular function under adversarial noise. We conclude this section by noting that very recently a couple
of different works have further studied robust submodular problems [26, 27, 28, 29].
One important class of non-submodular functions that we consider in this work are those used for support selection:

    f(S) := max_{x ∈ X, supp(x) ⊆ S} l(x),    (3)

where l(·) is a continuous function, X is a convex set and supp(x) = {i : x_i ≠ 0}. A popular way to solve the problem of finding a k-sparse vector that maximizes l, i.e. x ∈ arg max_{x ∈ X, ||x||_0 ≤ k} l(x), is to maximize the auxiliary set function in (3) subject to the cardinality constraint k. This setting and its variants have been used in various applications, for example, sparse approximation [8, 12], feature selection [13], sparse recovery [14], sparse M-estimation [15] and column subset selection problems [16]. An important result from [17] states that if l(·) is (m, L)-(strongly concave, smooth) then f(S) is weakly submodular with submodularity ratio γ ≥ m/L. Consequently, this result enlarges the number of problems where Greedy comes with guarantees. In this work, we consider the robust version of this problem, where the goal is to protect against the worst-case adversarial deletions of features.

Main contributions:

• We initiate the study of the robust optimization Problem (2) for a wider class of monotone non-submodular functions. We present a new algorithm Oblivious-Greedy and prove the first constant-factor approximation guarantees for Problem (2). When the function is submodular and under mild conditions, we recover the approximation guarantees obtained in the previous works [1, 2].

• In the non-submodular setting, we obtain the first constant factor approximation guarantees for the linear regime, i.e. when τ = ck for some c ∈ (0, 1).

• Our theoretical bounds are expressed in terms of parameters that further characterize a set function. Some of them have been used in previous works, e.g. submodularity ratio, and some of them are novel, such as the inverse curvature. We prove some interesting relations between these parameters and obtain theoretical bounds for them in two important applications: (i) support selection and (ii) the variance reduction objective used in batch Bayesian optimization. This allows us to obtain the first robust guarantees for these two important objectives.

• Finally, we experimentally validate the robustness of Oblivious-Greedy in several scenarios, and demonstrate that it outperforms other robust and non-robust algorithms.
2
Preliminaries
Set function ratios. In this work, we consider a normalized monotone set function f : 2V → R+ ; we proceed by defining several quantities that characterize
it. Some of the quantities were introduced and used
in various different works, while the novel ones that
we consider are inverse curvature, bipartite supermodularity ratio and (super/sub)additivity ratio.
Definition 1 (Submodularity [8] and Supermodularity ratio). The submodularity ratio of f(·) is the largest scalar γ ∈ [0, 1] s.t.

    Σ_{i ∈ Ω} f({i} | S) / f(Ω | S) ≥ γ,    ∀ disjoint S, Ω ⊆ V,    (4)

while the supermodularity ratio is the largest scalar γ̌ ∈ [0, 1] s.t.

    f(Ω | S) / Σ_{i ∈ Ω} f({i} | S) ≥ γ̌,    ∀ disjoint S, Ω ⊆ V.    (5)

The function f(·) is submodular (supermodular) iff γ = 1 (γ̌ = 1). Hence, the submodularity/supermodularity ratio measures to what extent the function has submodular/supermodular properties. While f(·) is modular iff γ = γ̌ = 1, in general, γ can be different from γ̌.

Definition 2 (Generalized curvature [6, 9] and inverse generalized curvature). The generalized curvature of f(·) is the smallest scalar α ∈ [0, 1] s.t.

    f({i} | S \ {i} ∪ Ω) / f({i} | S \ {i}) ≥ 1 − α,    ∀ S, Ω ⊆ V, i ∈ S \ Ω,    (6)

while the inverse generalized curvature is the smallest scalar α̌ ∈ [0, 1] s.t.

    f({i} | S \ {i}) / f({i} | S \ {i} ∪ Ω) ≥ 1 − α̌,    ∀ S, Ω ⊆ V, i ∈ S \ Ω.    (7)

The function f(·) is submodular (supermodular) iff α̌ = 0 (α = 0). The function is modular iff α = α̌ = 0. In general, α can be different from α̌.

Definition 3 (sub/superadditivity ratio). The subadditivity ratio of f(·) is the largest scalar ν ∈ [0, 1] such that

    Σ_{i ∈ S} f({i}) / f(S) ≥ ν,    ∀ S ⊆ V.    (8)

The superadditivity ratio is the largest scalar ν̌ ∈ [0, 1] such that

    f(S) / Σ_{i ∈ S} f({i}) ≥ ν̌,    ∀ S ⊆ V.    (9)

If the function is submodular (supermodular) then ν = 1 (ν̌ = 1).

The following proposition captures the relation between the above quantities.

Proposition 1. For any f(·), the following relations hold:

    ν ≥ γ ≥ 1 − α̌    and    ν̌ ≥ γ̌ ≥ 1 − α.

We also provide a more general definition of the bipartite subadditivity ratio used in [13].

Definition 4 (Bipartite subadditivity ratio). The bipartite subadditivity ratio of f(·) is the largest scalar θ ∈ [0, 1] s.t.

    ( f(A) + f(B) ) / f(S) ≥ θ,    ∀ S ⊆ V, A ∪ B = S, A ∩ B = ∅.    (10)

Remark 1. For any f(·), it holds that θ ≥ ν̌ ν.
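As an illustration of these definitions (our sketch, not part of the paper; the brute-force check is exponential and only meant for very small ground sets), the submodularity ratio γ of Definition 1 can be estimated exhaustively:

from itertools import combinations

def marginal(f, i, S):
    """f({i} | S) = f(S ∪ {i}) − f(S)."""
    return f(S | {i}) - f(S)

def submodularity_ratio(f, V):
    """Brute-force gamma from Definition 1 for a small ground set V."""
    subsets = [set(c) for r in range(len(V) + 1) for c in combinations(sorted(V), r)]
    gamma = 1.0
    for S in subsets:
        for Omega in subsets:
            if Omega and not (S & Omega):
                denom = f(S | Omega) - f(S)          # f(Omega | S)
                if denom > 0:
                    gamma = min(gamma, sum(marginal(f, i, S) for i in Omega) / denom)
    return gamma

# A coverage function (submodular) should give gamma = 1:
sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c', 'd'}}
cover = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(submodularity_ratio(cover, {1, 2, 3}))   # 1.0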
Greedy guarantee. Different works [8, 9] have studied the performance of the Greedy algorithm [4] for
Problem 1 when the objective is γ-weakly submodular. In our analysis, we are going to make use of the
following important result from [8].
Lemma 1. For a monotone normalized set function
f : 2V → R+ , with submodularity ratio γ ∈ [0, 1] the
Greedy algorithm when run for l steps returns a set
S_l of size l such that

    f(S_l) ≥ (1 − e^{−γ l/k}) f(OPT_(k,V)),
where OPT(k,V ) is used to denote the optimal set of
size k, i.e., OPT(k,V ) ∈ arg maxS⊆V,|S|≤k f (S).
3
Algorithm and its Guarantees
We present our Oblivious-Greedy algorithm in Algorithm 1. The algorithm requires a non-negative
monotone set function f : 2V → R+ , and the ground
set of items V . It constructs two sets S0 and S1 .
The first set S0 is constructed via oblivious selection,
i.e. ⌈βτ ⌉ items with the individually highest objective
values are selected. Here, β ∈ R+ is an input parameter, that together with τ , determines the size of S0
(|S0 | = ⌈βτ ⌉ ≤ k). We provide more information on
this parameter in the next section. The second set S1 ,
of size k − |S0 |, is obtained by running the Greedy
algorithm on the remaining items V \ S0 . Finally, the
algorithm outputs the set S = S0 ∪ S1 of size k that is
robust against the worst-case removal of τ elements.
Intuitively, the role of S0 is to ensure robustness, as
its elements are selected independently of each other
and have high marginal values, while S1 is obtained
greedily and it is near-optimal on the set V \ S0 .
Oblivious-Greedy is simpler than the submodular
algorithms PRo-GREEDY [2] and OSU [1]. Both
of these algorithms construct multiple sets (buckets)
Algorithm 1 Oblivious-Greedy algorithm
Require: Set V, k, τ, β ∈ R+ and ⌈βτ⌉ ≤ k
Ensure: Set S ⊆ V such that |S| ≤ k
1: S0, S1 ← ∅
2: for i ← 0 to ⌈βτ⌉ do
3:     v ← arg max_{v ∈ V \ S0} f({v})
4:     S0 ← S0 ∪ {v}
5: S1 ← Greedy(k − |S0|, (V \ S0))
6: S ← S0 ∪ S1
7: return S
whose number and size depend on the input parameters k and τ . In contrast, Oblivious-Greedy always
constructs two sets, where the first set is obtained by
the fast Oblivious selection.
For Problem (1) and the weakly submodular objective, the Greedy algorithm achieves a constant factor approximation (Lemma 1), while Oblivious selection achieves (γ/k)-approximation [13]. For the harder
Problem (2), Greedy can fail arbitrarily badly [2].
Interestingly enough, the combination of these two algorithms reflected in Oblivious-Greedy leads to a
constant factor approximation for Problem (2).
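To make Algorithm 1 concrete, here is a minimal Python sketch (ours, not the authors' implementation; the oracle argument stands for any monotone set function f):

import math

def oblivious_greedy(V, oracle, k, tau, beta=1.0):
    """Sketch of Algorithm 1: pick ceil(beta*tau) elements with the largest
    singleton values, then run standard Greedy on the rest for the leftover budget."""
    V = list(V)
    n0 = min(k, math.ceil(beta * tau))
    S0 = sorted(V, key=lambda v: oracle({v}), reverse=True)[:n0]   # oblivious selection
    S1, rest = [], [v for v in V if v not in S0]
    for _ in range(k - n0):
        best = max(rest, key=lambda v: oracle(set(S1) | {v}) - oracle(set(S1)))
        S1.append(best)
        rest.remove(best)
    return set(S0) | set(S1)

# Toy example with a coverage objective:
cov = {0: {'a'}, 1: {'a', 'b'}, 2: {'c'}, 3: {'c', 'd'}, 4: {'e'}}
f = lambda S: len(set().union(*[cov[v] for v in S])) if S else 0
print(oblivious_greedy(range(5), f, k=3, tau=1))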
3.1   Approximation guarantee

The quantity of interest in this section is the remaining utility after the adversarial removal of elements, f(S \ E*_S), where S is the set of size k returned by Oblivious-Greedy, and E*_S is the set of size τ chosen by the adversary, i.e., E*_S ∈ arg min_{E ⊂ S, |E| ≤ τ} f(S \ E). Let OPT_(k−τ, V \ E*_S) denote the optimal solution, of size k − τ, when the ground set is V \ E*_S. The goal in this section is to compare f(S \ E*_S) to f(OPT_(k−τ, V \ E*_S)).⁴ All the omitted proofs from this section can be found in the supplementary material.
[Footnote 4: As shown in [1], f(OPT_(k−τ, V \ E*_S)) ≥ f(OPT \ E*_OPT), where OPT is the optimal solution to Problem (2).]

Intermediate results. Before stating our main result, we provide three lower bounds on f(S \ E*_S). For the returned set S = S0 ∪ S1, we let E0 denote the elements removed from S0, i.e., E0 := E*_S ∩ S0, and similarly E1 := E*_S ∩ S1. The first lemma is borrowed from [2], and states that f(S \ E*_S) is at least some constant fraction of the utility of the elements obtained greedily in the second stage.

Lemma 2. For any f(·) (not necessarily submodular), let µ ∈ [0, 1] be a constant such that f(E1 | (S \ E*_S)) = µ f(S1) holds. Then, f(S \ E*_S) ≥ (1 − µ) f(S1).

The next lemma generalizes the result obtained in [1, 2], and applies to any non-negative monotone set function with bipartite subadditivity ratio θ.

Lemma 3. Let θ ∈ [0, 1] be a bipartite subadditivity ratio defined in Eq. (10). Then f(S \ E*_S) is at least

    θ f(OPT_(k−τ, V \ E*_S)) − (1 − e^{−γ (k−|S0|)/(k−τ)})^{−1} f(S1).

In other words, if f(S1) is small compared to the utility of the optimal solution, then f(S \ E*_S) is at least a constant factor away from the optimal solution.

Next, we present our key lemma that further relates f(S \ E*_S) to the utility of the set S1 with no deletions.

Lemma 4. Let β be a constant such that |S0| = ⌈βτ⌉ and |S0| ≤ k, and let ν̌, α̌ ∈ [0, 1] be a superadditivity ratio and generalized inverse curvature (Eq. (9) and Eq. (7), respectively). Finally, let µ be a constant defined as in Lemma 2. Then,

    f(S \ E*_S) ≥ (β − 1) ν̌ (1 − α̌) µ f(S1).

Proof. We have:

    f(S \ E*_S) ≥ f(S0 \ E0)
               ≥ ν̌ Σ_{e_i ∈ S0 \ E0} f({e_i})                                        (11)
               ≥ ν̌ (|S0 \ E0| / |E1|) Σ_{e_i ∈ E1} f({e_i})                           (12)
               ≥ ν̌ ((β − 1)τ / τ) Σ_{e_i ∈ E1} f({e_i})                               (13)
               ≥ (β − 1) ν̌ (1 − α̌) Σ_{i=1}^{|E1|} f({e_i} | (S \ E*_S) ∪ E1^(i−1))    (14)
               = (β − 1) ν̌ (1 − α̌) f(E1 | (S \ E*_S))                                 (15)
               = (β − 1) ν̌ (1 − α̌) µ f(S1).                                           (16)

Eq. (11) follows by the superadditivity. Eq. (12) follows from the way S0 is constructed, i.e. via Oblivious selection, which ensures f({i}) ≥ f({j}) for every i ∈ S0 \ E0 and j ∈ E1. Eq. (13) follows from |S0 \ E0| = ⌈βτ⌉ − |E0| ≥ βτ − τ = (β − 1)τ, and |E1| ≤ τ.

To prove Eq. (14), let E1 = {e_1, · · ·, e_{|E1|}}, and let E1^(i−1) ⊆ E1 denote the set {e_1, · · ·, e_{i−1}}. Also, let E1^(0) = ∅. Eq. (14) then follows from

    f({e_i}) ≥ (1 − α̌) f({e_i} | (S \ E*_S) ∪ E1^(i−1)),

which in turn follows from (7) by setting S = {e_i} and Ω = (S \ E*_S) ∪ E1^(i−1).

Finally, Eq. (15) follows from f(E1 | (S \ E*_S)) = Σ_{e_i ∈ E1} f({e_i} | (S \ E*_S) ∪ E1^(i−1)) (telescoping sum), and Eq. (16) follows from the definition of µ.
[Figure 1: surface plot of the asymptotic approximation guarantee over the submodularity ratio γ and the bipartite subadditivity ratio θ, with values ranging from 0.00 to 0.40.]
Figure 1: Approximation guarantee obtained in Remark 2. The green cross represents the approximation
guarantee when f is submodular (γ = θ = 1).
Main result. We obtain the main result by examining
the maximum of the obtained lower bounds in Lemma
2, 3 and 4. Note, that all three obtained lower bounds
depend on f (S1 ). In Lemma 3, we benefit from f (S1 )
being small while the opposite is true for Lemma 2
and 4 (both bounds are increasing in f (S1 )). By examining the latter two, we observe that in Lemma 2
we benefit from µ being small (i.e. the utility that we
lose due to E1 is small compared to the utility of the
whole set S1 ) while the opposite is true for Lemma 4.
By carefully balancing between these cases (see Appendix C for details) we arrive at our main result.
Theorem 1. Let f : 2^V → R+ be a normalized, monotone set function with submodularity ratio γ, bipartite subadditivity ratio θ, inverse curvature α̌ and superadditivity ratio ν̌, every parameter in [0, 1]. For a given budget k and τ = ⌈ck⌉, for some c ∈ (0, 1), the Oblivious-Greedy algorithm with β s.t. ⌈βτ⌉ ≤ k and β > 1, returns a set S of size k such that when k → ∞ we have

    f(S \ E*_S) ≥ [ θ P (1 − e^{−γ (1−βc)/(1−c)}) / ( 1 + P (1 − e^{−γ (1−βc)/(1−c)}) ) ] f(OPT_(k−τ, V \ E*_S)),

where P is used to denote (β − 1) ν̌ (1 − α̌) / ( 1 + (β − 1) ν̌ (1 − α̌) ).
Remark 2. Consider f(·) from Theorem 1 with ν̌ ∈ (0, 1] and α̌ ∈ [0, 1). When τ = o(k/β) and β ≥ log k, we have:

    f(S \ E*_S) ≥ ( θ (1 − e^{−γ}) / (2 − e^{−γ}) + o(1) ) f(OPT_(k−τ, V \ E*_S)).
Interpretation. An open question from [2] is whether
a constant factor approximation guarantee is possible
in the linear regime, i.e. when the number of removals
is τ = ⌈ck⌉ for some constant c ∈ (0, 1) [2]. In Theorem 1 we obtain the first asymptotic constant factor
approximation in this regime.
Additionally, when f is submodular, all the parameters in the obtained bound are fixed (α̌ = 0 and
γ = θ = 1 due to submodularity) except the superadditivity ratio ν̌ which can take any value in [0, 1]. The
approximation factor improves for greater ν̌, i.e. the
closer the function is to being superadditive. On the
other hand, if f is supermodular then ν̌ = 1 while
α̌, θ, γ are in [0, 1], and the approximation factor improves for larger θ and γ, and smaller α̌.
From Remark 2, when f is submodular, Oblivious-Greedy achieves an asymptotic approximation factor of at least 0.387. This matches the approximation guarantee obtained in [2, 1], while it allows for a greater number of deletions, τ = o(k / log k), in comparison to τ = o(k / log³ k) and τ = o(√k) obtained in [2] and [1], respectively. Most importantly, our result holds for a wider range of non-submodular functions. In Figure 1 we show how the asymptotic approximation factor changes as a function of γ and θ.
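For reference, the asymptotic factor of Remark 2 is easy to tabulate; the following sketch (ours, not the authors' code) evaluates θ(1 − e^{−γ})/(2 − e^{−γ}) on a grid, which is the quantity plotted in Figure 1:

import numpy as np

def asymptotic_guarantee(gamma, theta):
    """Asymptotic approximation factor from Remark 2:
    theta * (1 - e^{-gamma}) / (2 - e^{-gamma})."""
    return theta * (1.0 - np.exp(-gamma)) / (2.0 - np.exp(-gamma))

# A coarse grid over (gamma, theta) reproduces the shape of Figure 1;
# the submodular corner gamma = theta = 1 gives roughly 0.387.
gam, the = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
print(np.round(asymptotic_guarantee(gam, the), 3))
print(round(float(asymptotic_guarantee(1.0, 1.0)), 3))   # 0.387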
We also obtain an alternative formulation of our main
result, which we present in the following corollary.
Corollary 1. Consider the setting from Theorem 1 and let P := (β − 1) ν̌ ν / ( 1 + (β − 1) ν̌ (1 − ν) ). Then we have

    f(S \ E*_S) ≥ [ θ² P (1 − e^{−γ (1−βc)/(1−c)}) / ( 1 + θ P (1 − e^{−γ (1−βc)/(1−c)}) ) ] f(OPT_(k−τ, V \ E*_S)).

Additionally, consider f(·) with ν̌, ν ∈ (0, 1]. When τ = o(k/β) and β ≥ log k, as k → ∞, we have that f(S \ E*_S) is at least

    ( θ² (1 − e^{−γ}) / ( 1 + θ (1 − e^{−γ}) ) + o(1) ) f(OPT_(k−τ, V \ E*_S)).
The key observation is that the approximation factor depends on ν instead of the inverse curvature α̌. The asymptotic approximation ratio is slightly worse here compared to Theorem 1. However, depending on the considered application, it might be significantly harder to provide bounds for the inverse curvature than for the bipartite subadditivity ratio, and hence in such cases this formulation might be more suitable.
4
Applications
In this section, we consider two important real-world
applications where deletion robust optimization is of
interest. We show that the parameters used in the
statement of our main theoretical result can be explicitly characterized, which implies that the obtained
guarantees are applicable.
4.1   Robust Support Selection

We first consider the recent results that connect submodularity with concavity [17, 13]. In order to obtain bounds for robust support selection for general concave functions, we make use of the theoretical bounds obtained for Oblivious-Greedy in Corollary 1.

Given a differentiable concave function l : X → R, where X ⊆ R^d is a convex set, and k ≤ d, the support selection problem is: max_{||x||_0 ≤ k} l(x). As in [17], we let supp(x) = {i : x_i ≠ 0}, and consider the associated normalized monotone set function

    f(S) := max_{supp(x) ⊆ S, x ∈ X} l(x) − l(0).

Let T_l(x, y) := l(y) − l(x) − ⟨∇l(x), y − x⟩. An important result from [17] can be rephrased as follows: if l(·) is L-smooth and m-strongly concave then for all x, y ∈ dom(l), it holds

    −(m/2) ||y − x||²₂ ≥ T_l(x, y) ≥ −(L/2) ||y − x||²₂,

and f's submodularity ratio γ is lower bounded by m/L. Subsequently, in [13] it is shown that θ can also be lower bounded by the same ratio m/L.

In this paper, we consider the robust support selection problem, that is, finding a set of features S ⊆ [d] of size k that is robust against the deletion of a limited number of features. More formally, the goal is to maximize the following objective over all S ⊆ [d]:

    min_{|E_S| ≤ τ, E_S ⊆ S}  max_{supp(x) ⊆ S \ E_S, x ∈ X}  l(x) − l(0).

By inspecting the bound obtained in Corollary 1, it remains to bound the (super/sub)additivity ratios ν and ν̌. The first bound follows by combining the result γ ≥ m/L with Proposition 1: ν ≥ γ ≥ m/L. To prove the second bound, we make use of the following result.

Proposition 2. The supermodularity ratio γ̌ of the considered objective f(·) can be lower bounded by m/L.

The second bound follows by combining the result in Proposition 2 and Proposition 1: ν̌ ≥ γ̌ ≥ m/L.

4.2   Variance Reduction in Robust Batch Bayesian Optimization

In batch Bayesian optimization, the goal is to optimize an unknown non-convex function from costly concurrent function evaluations [30, 31, 32]. Most often, the concurrent evaluations correspond to running an expensive batch of experiments. In the case where experiments can fail, it is beneficial to select a set of experiments in a robust way.

Different acquisition (i.e. auxiliary) functions have been proposed to evaluate the utility of candidate points for the next evaluations of the unknown function [33]. Recently in [34], the variance reduction objective was used as the acquisition function – the unknown function is evaluated at the points that maximally reduce the variance of the posterior distribution over the given set of points that represent potential maximizers. We formalize this as follows.

Setup. Let f(x) be an unknown function defined over a finite domain X = {x_1, · · ·, x_n}, where x_i ∈ R^d. Once we evaluate the function at some point x_i ∈ X, we receive a noisy observation y_i = f(x_i) + z, where z ∼ N(0, σ²). In Bayesian optimization, f is modeled as a sample from a Gaussian process. We use a Gaussian process with zero mean and kernel function k(x, x′), i.e. f ∼ GP(0, k(x, x′)). Let S = {e_1, · · ·, e_|S|} ⊆ [n] denote the set of points, and X_S := [x_{e_1}, · · ·, x_{e_|S|}] ∈ R^{|S|×d} and y_S := [y_1, · · ·, y_|S|] denote the corresponding data matrix and observations, respectively. The posterior distribution of f given the points X_S and observations y_S is again a GP, with the posterior variance given by:

    σ²_{x|S} = k(x, x) − k(x, X_S) ( k(X_S, X_S) + σ² I_{|S|} )^{−1} k(X_S, x).

For a given set of potential maximizers M ⊆ [n], the variance reduction objective is defined as follows:

    F_M(S) := Σ_{x ∈ X_M} ( σ²_x − σ²_{x|S} ),    (17)

where σ²_x = k(x, x). We show in Appendix D.2.1 that this objective is not submodular in general.

Finally, our goal is to find a set of points S of size k that maximizes

    min_{|E_S| ≤ τ, E_S ⊆ S}  Σ_{x ∈ X_M} ( σ²_x − σ²_{x|S \ E_S} ).

Proposition 3. Assume the kernel function is such that k(x_i, x_i) ≤ k_max, for every i ∈ [n]. The objective function in (17) is normalized and monotone, and both its curvature α and inverse curvature α̌ can be upper bounded by k_max / (σ² + k_max).

We can combine this result with Proposition 1, to obtain ν ≥ γ ≥ σ² / (σ² + k_max) and ν̌ ≥ γ̌ ≥ σ² / (σ² + k_max). Also, we have θ ≥ σ⁴ / (σ² + k_max)² (Remark 1). Consequently, all the parameters from Theorem 1 are explicitly characterized.
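As an illustration of the variance reduction objective in eq. (17) (our sketch; the RBF kernel and all names are assumptions, whereas the paper's experiments use a Matérn kernel), F_M(S) can be computed directly from the GP posterior variance formula above:

import numpy as np

def variance_reduction(X, S, M, kernel, noise_var):
    """F_M(S) from eq. (17): total reduction in GP posterior variance at the
    candidate maximizers M after observing the points indexed by S.

    X : (n, d) array of inputs; S, M : lists of row indices; kernel(A, B) -> Gram matrix.
    """
    if not S:
        return 0.0
    K_SS = kernel(X[S], X[S]) + noise_var * np.eye(len(S))
    K_MS = kernel(X[M], X[S])
    prior = np.diag(kernel(X[M], X[M]))
    posterior = prior - np.sum(K_MS @ np.linalg.inv(K_SS) * K_MS, axis=1)
    return float(np.sum(prior - posterior))

# Example with an RBF kernel on random points:
rbf = lambda A, B: np.exp(-0.5 * np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2))
X = np.random.default_rng(0).random((30, 2))
print(variance_reduction(X, S=[0, 5, 9], M=list(range(10, 30)), kernel=rbf, noise_var=1.0))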
5
Experimental Results
Optimization performance. For a returned set S, we measure the performance in terms of min_{E ⊆ S, |E| ≤ τ} f(S \ E). The minimum objective value f(S \ E) among all obtained sets E is reported. Most of the time, for all the considered algorithms, Greedy Min finds E that reduces utility the most.
[Figure 2, panels (a)-(d): objective value and R² test score vs. cardinality k for the linear regression task with τ = 10 and τ = 30; compared methods: Obl.-Greedy, Obl., Greedy, OSU, Pro, Sg, Rg, Omp.]
Figure 2: Comparison of the algorithms on the linear
regression task.
5.1
Robust Support Selection
Linear Regression. Our setup is similar to the one
in [13]. Each row of the design matrix X ∈ Rn×d is
generated by an autoregressive process,

    X_{i,t+1} = √(1 − α²) X_{i,t} + α ε_{i,t},    (18)
5
The random adversaries are inspired by [36] and [37].
[Figure 3, panels (a)-(c): objective value and test accuracy vs. cardinality k on the synthetic logistic regression data (τ = 10 and τ = 30); methods: Obl.-Greedy, Obl., Greedy.]
Test accuracy
– Random Greedy adversaries:5 In order to introduce
randomness in the removal process we consider (iii)
Random Greedy Min – iteratively selects a random element from the top τ elements whose marginal gains
are the highest in terms of reducing the objective
value f (S \ E) and (iv) Stochastic Greedy Min – iteratively selects an element, from a random set R ⊆ V ,
with the highest marginal gain in terms of reducing
f (S \ E). At every step, R is obtained by subsampling
(|S|/τ ) log(1/ǫ) elements from S.
200
Obj. value
– Greedy adversaries: (i) Greedy Min – iteratively removes elements to reduce the objective value f (S \ E)
as much as possible, and (ii) Greedy Max – iteratively
adds elements from S to maximize the objective f (E).
400
Test accuracy
minE⊆S,|E|≤τ f (S \ E). Note that f (S \ E) is a submodular function in E. Finding the minimizer E s.t.
|E| ≤ τ is NP-hard even to approximate [35]. We rely
on the following methods in order to find E of size τ
that degrades the solution as much as possible:
0.8
0.75
0.7
0.65
0.6
0.55
31 40 49 58 67 76 85 94
Cardinality k
(d) Log. synthetic (τ = 30)
Figure 3: Logistic regression task with synth. dataset.
where ǫi,t is i.i.d. standard Gaussian with variance
α2 = 0.5. We use n = 800 training data points
and d = 1000. An additional 2400 points are used
for testing. We generate a 100-sparse regression vector by selecting random
entries of ω and set them to ω_s = (−1)^{Bern(1/2)} × 5 √(log d / n) + δ_s, where δ_s is a
standard i.i.d. Gaussian noise. The target is given
by y = Xω + z, where ∀i ∈ [n], zi ∼ N (0, 5).
We compare the performance of Oblivious-Greedy
against: (i) robust algorithms (in blue) such as
Oblivious, PRo-GREEDY [2], OSU [1], (ii)
greedy-type algorithms (in red) such as Greedy,
Stochastic-Greedy [37], Random-Greedy [36],
Orthogonal-Matching-Pursuit. We require β >
1 for our asymptotic results to hold, but we found
out that in practice (small k regime) β ≤ 1 usually gives the best performance. We use Oblivious-Greedy with β = 1 unless stated otherwise.
The results are shown in Fig. 6. Since PRo-GREEDY
and OSU only make sense in the regime where τ
is relatively small, the plots show their performance
only for feasible values of k. It can be observed that
Oblivious-Greedy achieves the best performance
among all the methods in terms of both training error and test score. Also, the greedy-type algorithms
become less robust for larger values of τ .
Logistic Regression. We compare the performance
of Oblivious-Greedy vs. Greedy and Oblivious
selection on both synthetic and real-world data.
– Synthetic data: We generate a 100-sparse ω by letting ωs = (−1)Bern(1/2) × δs , with δs ∼ Unif([−1, 1]).
The design matrix X is generated as in (18), with
[Figure 5, panels (a)-(b): objective value vs. cardinality k for the variance reduction task with τ = 50, α = 0.05 and α = 0.1; methods: Obl.-Greedy, Obl., Greedy.]
– MNIST: We consider the 10-class logistic regression task on the MNIST [38] dataset. In this experiment, we set β = 0.5 in Oblivious-Greedy, and
we sample 200 images for each digit for the training
phase and 100 images of each for testing. The results are shown in Fig. 4. It can be observed that
Oblivious-Greedy has a distinctive advantage over
Greedy and Oblivious, while when τ increases the
performance of Greedy decays significantly and more
robust Oblivious starts to outperform it.
25
Obj. value
α² = 0.09. We set d = 200, and use n = 600 points
for training and additional 1800 points for testing.
The label of the i-th data point X(i,·) is set to 1
if 1/(1 + exp(X(i,·) β)) > 0.5 and 0 otherwise. The
results are shown in Fig. 3. We can observe that
Oblivious-Greedy outperforms other methods both
in terms of the achieved objective value and generalization error. We also note that the performance of
Greedy decays significantly when τ increases.
[Figure 5, panels (c)-(d): objective value vs. cardinality k (α = 0.2, τ = 50) and vs. number of removals τ (α = 0.1, k = 100).]
Figure 5: Comparison of the algorithms on the variance reduction task.
[Figure 4, panels (a)-(d): objective value and test accuracy vs. cardinality k on MNIST, τ = 15 and τ = 45; methods: Obl.-Greedy, Obl., Greedy.]
Results. In Figure 5 (a), (b), (c), the performance
of all three algorithms is shown when τ is fixed to
50. Different figures correspond to different α values. We observe that when α = 0.1, Greedy outperforms Oblivious for most values of k, while Oblivious clearly outperforms Greedy when α = 0.2. For
all presented values of α, Oblivious-Greedy outperforms both Greedy and Oblivious selection. For
larger values of α, the correlation between the points
becomes small and consequently so do the objective
values. In such cases, all three algorithms perform
similarly. In Figure 5 (d), we show how the performance of all three algorithms decreases as the number
of removals increases. When the number of removals
is small both Greedy and our algorithm perform similarly, while as the number of removals increases the
performance of Greedy drops more rapidly.
Figure 4: Logistic regression with MNIST dataset.
5.2
Robust Batch Bayesian Optimization via
Variance Reduction
Setup. We conducted the following synthetic experiment. A design matrix X of size 600 × 20 is obtained
via the autoregressive process from (18). The function
values at these points are generated from a GP with
3/2-Mátern kernel [39] with both lengthscale and output variance set to 1.0. The samples of this function
are corrupted by Gaussian noise, σ 2 = 1.0. Objective
function used is the variance reduction (Eq. (17)). Finally, half of the points randomly chosen are selected
in the set M , while the other half is used in the selection process. We use β = 0.5 in our algorithm.
6
Conclusion
We have presented a new algorithm Oblivious-Greedy that achieves constant-factor approximation
guarantees for the robust maximization of monotone
non-submodular objectives. The theoretical guarantees hold for general τ = ck for some c ∈ (0, 1), which
resolves the important question posed in [1, 2]. We
have also obtained the first robust guarantees for support selection and variance reduction objectives. In
various experiments, we have demonstrated the robust performance of Oblivious-Greedy by showing that it outperforms both Oblivious selection and
Greedy, and hence achieves the best of both worlds.
Acknowledgement
The authors would like to thank Jonathan Scarlett and
Slobodan Mitrović for useful discussions. This work
was done during JZ’s summer internship at LIONS,
EPFL. IB and VC’s work was supported in part by
the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation
program (grant agreement number 725594), in part by
the Swiss National Science Foundation (SNF), project
407540 167319/1, in part by the NCCR MARVEL,
funded by the Swiss National Science Foundation.
References
[1] J. B. Orlin, A. S. Schulz, and R. Udwani, “Robust monotone submodular function maximization,” in Int. Conf. on Integer Programming and
Combinatorial Opt. (IPCO), Springer, 2016.
[2] I. Bogunovic, S. Mitrović, J. Scarlett, and
V. Cevher, “Robust submodular maximization: A
non-uniform partitioning approach,” in Int. Conf.
on Machine Learning (ICML), August 2017.
[3] A. Globerson and S. Roweis, “Nightmare at test
time: robust learning by feature deletion,” in Int.
Conf. Machine Learning (ICML), 2006.
[4] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher,
“An analysis of approximations for maximizing
submodular set functions—i,” Mathematical Programming, vol. 14, no. 1, pp. 265–294, 1978.
[5] M. Conforti and G. Cornuéjols, “Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the Rado–Edmonds theorem,” Discrete Applied Mathematics, vol. 7, no. 3, pp. 251–274, 1984.
[6] J. Vondrák, “Submodularity and curvature: The
optimal algorithm (combinatorial optimization
and discrete algorithms),” Kokyuroku Bessatsu,
p. 23:253–266, 2010.
[7] R. K. Iyer, S. Jegelka, and J. A. Bilmes, “Curvature and optimal algorithms for learning and
minimizing submodular functions,” in Adv. Neur.
Inf. Proc. Sys. (NIPS), pp. 2742–2750, 2013.
[8] A. Das and D. Kempe, “Submodular meets spectral: Greedy algorithms for subset selection,
sparse approximation and dictionary selection,”
in Proc. of Int. Conf. on Machine Learning,
ICML, pp. 1057–1064, 2011.
[9] A. A. Bian, J. Buhmann, A. Krause, and S. Tschiatschek, “Guarantees for greedy maximization of
non-submodular functions with applications,” in
Proc. Int. Conf. on Machine Learning (ICML),
August 2017.
[10] A. Krause, H. B. McMahan, C. Guestrin, and
A. Gupta, “Robust submodular observation selection,” Journal of Machine Learning Research,
vol. 9, no. Dec, pp. 2761–2801, 2008.
[11] V. Tzoumas, K. Gatsis, A. Jadbabaie, and
G. J. Pappas, “Resilient monotone submodular function maximization,” arXiv preprint
arXiv:1703.07280, 2017.
[12] V. Cevher and A. Krause, “Greedy dictionary selection for sparse representation,” IEEE Journal
of Selected Topics in Signal Processing, vol. 5,
no. 5, pp. 979–988, 2011.
[13] R. Khanna, E. Elenberg, A. G. Dimakis, S. Negahban, and J. Ghosh, “Scalable greedy feature
selection via weak submodularity,” in Proc. of
Int. Conf. on Artificial Intelligence and Statistics,
AISTATS, pp. 1560–1568, 2017.
[14] E. J. Candes, J. K. Romberg, and T. Tao, “Stable
signal recovery from incomplete and inaccurate
measurements,” Communications on pure and applied mathematics, vol. 59, no. 8, pp. 1207–1223,
2006.
[15] P. Jain, A. Tewari, and P. Kar, “On iterative hard thresholding methods for high-dimensional M-estimation,” in Adv. Neur. Inf. Proc. Sys. (NIPS), pp. 685–693, 2014.
[16] J. Altschuler, A. Bhaskara, G. Fu, V. Mirrokni, A. Rostamizadeh, and M. Zadimoghaddam, “Greedy column subset selection: New
bounds and distributed algorithms,” in Int. Conf.
on Machine Learning (ICML), pp. 2539–2548,
2016.
[17] E. R. Elenberg, R. Khanna, A. G. Dimakis, and S. Negahban, “Restricted strong
convexity implies weak submodularity,” CoRR,
vol. abs/1612.00804, 2016.
[18] S. Mitrovic, I. Bogunovic, A. Norouzi-Fard, J. M.
Tarnawski, and V. Cevher, “Streaming robust
submodular maximization: A partitioned thresholding approach,” in Adv. Neur. Inf. Proc. Sys.
(NIPS), pp. 4560–4569, 2017.
[19] B. Mirzasoleiman, A. Karbasi, and A. Krause,
“Deletion-robust submodular maximization:
Data summarization with “the right to be
forgotten”,” in Int. Conf. Mach. Learn. (ICML),
pp. 2449–2458, 2017.
[20] E. Kazemi, M. Zadimoghaddam, and A. Karbasi, “Deletion-robust submodular maximization
at scale,” arXiv preprint arXiv:1711.07112, 2017.
[21] T. Powers, J. Bilmes, S. Wisdom, D. W. Krout,
and L. Atlas, “Constrained robust submodular
optimization.” NIPS OPT2016 workshop, 2016.
[22] X. He and D. Kempe, “Robust influence maximization,” in Int. Conf. Knowledge Discovery and
Data Mining (KDD), pp. 885–894, 2016.
[23] W. Chen, T. Lin, Z. Tan, M. Zhao, and X. Zhou,
“Robust influence maximization,” arXiv preprint
arXiv:1601.06551, 2016.
[24] M. Staib and S. Jegelka, “Robust budget allocation via continuous submodular functions,” in
Proc. of Int. Conf. on Machine Learning (ICML),
pp. 3230–3240, 2017.
[25] A. Hassidim and Y. Singer, “Submodular optimization under noise,” in Proc. of Conf. on Learning Theory, COLT, pp. 1069–1122, 2017.
[26] R. Udwani, “Multi-objective maximization of
monotone submodular functions with cardinality constraint,” arXiv preprint arXiv:1711.06428,
2017.
[27] B. Wilder, “Equilibrium computation for zero
sum games with submodular structure,” arXiv
preprint arXiv:1710.00996, 2017.
[28] N. Anari, N. Haghtalab, S. Pokutta, M. Singh,
A. Torrico, et al., “Robust submodular maximization: Offline and online algorithms,” arXiv
preprint arXiv:1710.04740, 2017.
[29] R. S. Chen, B. Lucier, Y. Singer, and V. Syrgkanis, “Robust optimization for non-convex objectives,” in Adv. in Neur. Inf. Proc. Sys., pp. 4708–
4717, 2017.
[30] T. Desautels, A. Krause, and J. W. Burdick,
“Parallelizing exploration-exploitation tradeoffs
in gaussian process bandit optimization,” The
Journal of Machine Learning Research, vol. 15,
no. 1, pp. 3873–3923, 2014.
[31] J. González, Z. Dai, P. Hennig, and N. Lawrence,
“Batch bayesian optimization via local penalization,” in Artificial Intelligence and Statistics,
pp. 648–657, 2016.
[32] J. Azimi, A. Jalali, and X. Z. Fern, “Hybrid batch
bayesian optimization,” in Proc. of Int. Conf. on
Machine Learning, ICML, 2012.
[33] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams,
and N. de Freitas, “Taking the human out of the
loop: A review of bayesian optimization,” Proceedings of the IEEE, vol. 104, no. 1, pp. 148–175,
2016.
[34] I. Bogunovic, J. Scarlett, A. Krause, and
V. Cevher, “Truncated variance reduction: A unified approach to bayesian optimization and levelset estimation,” in Adv. in Neur. Inf. Proc. Sys.,
pp. 1507–1515, 2016.
[35] Z. Svitkina and L. Fleischer, “Submodular approximation: Sampling-based algorithms and
lower bounds,” SIAM Journal on Computing,
vol. 40, no. 6, pp. 1715–1737, 2011.
[36] N. Buchbinder, M. Feldman, J. S. Naor, and
R. Schwartz, “Submodular maximization with
cardinality constraints,” in Proc. of ACM-SIAM
symposium on Discrete algorithms, pp. 1433–
1452, Society for Industrial and Applied Mathematics, 2014.
[37] B. Mirzasoleiman, A. Badanidiyuru, A. Karbasi,
J. Vondrák, and A. Krause, “Lazier than lazy
greedy,” in Proc. Conf. Art. Intell. (AAAI), 2015.
[38] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner,
“Gradient-based learning applied to document
recognition,” Proc. of the IEEE, vol. 86, no. 11,
pp. 2278–2324, 1998.
[39] C. E. Rasmussen and C. K. Williams, Gaussian
processes for machine learning, vol. 1. MIT press
Cambridge, 2006.
[40] K. B. Petersen, M. S. Pedersen, et al., “The matrix cookbook,” Technical University of Denmark,
vol. 7, p. 15, 2008.
Appendix
Robust Maximization of Non-Submodular Objectives
(Ilija Bogunovic† , Junyao Zhao† and Volkan Cevher, AISTATS 2018)
A Organization of the Appendix
– Appendix B: Proofs from Section 2
– Appendix C: Proofs of the Main Result (Section 3)
– Appendix D: Proofs from Section 4
– Appendix E: Additional experiments
B Proofs from Section 2
B.1 Proof of Proposition 1
Proof. We prove the following relations:
• ν ≥ γ, ν̌ ≥ γ̌:
By setting S = ∅ in both Eq. (4) and Eq. (5), we obtain ∀S ⊆ V :
Σ_{i∈S} f({i}) ≥ γ f(S),   (19)
and
f(S) ≥ γ̌ Σ_{i∈S} f({i}).   (20)
The result follows since, by definition of ν and ν̌, they are the largest scalars such that Eq. (19) and Eq. (20)
hold, respectively.
• γ ≥ 1 − α̌, γ̌ ≥ 1 − α:
Let S, Ω ⊆ V be two arbitrary disjoint sets. We arbitrarily order elements of Ω = {e1 , · · · , e|Ω| } and we let
Ωj−1 denote the first j − 1 elements of Ω. We also let Ω0 be an empty set.
By the definition of α̌ (see Eq. (7)) we have:
Σ_{j=1}^{|Ω|} f({e_j}|S) = Σ_{j=1}^{|Ω|} f({e_j}|S ∪ {e_j} \ {e_j}) ≥ Σ_{j=1}^{|Ω|} (1 − α̌) f({e_j}|S ∪ {e_j} \ {e_j} ∪ Ω_{j−1}) = (1 − α̌) f(Ω|S),   (21)
where the last equality is obtained via telescoping sums.
Similarly, by the definition of α (see Eq. (6)) we have:
(1 − α) Σ_{j=1}^{|Ω|} f({e_j}|S) = Σ_{j=1}^{|Ω|} (1 − α) f({e_j}|S ∪ {e_j} \ {e_j}) ≤ Σ_{j=1}^{|Ω|} f({e_j}|S ∪ {e_j} \ {e_j} ∪ Ω_{j−1}) = f(Ω|S).   (22)
Because S and Ω are arbitrary disjoint sets, and both γ and γ̌ are the largest scalars such that for all disjoint sets S, Ω ⊆ V the following holds: Σ_{j=1}^{|Ω|} f({e_j}|S) ≥ γ f(Ω|S) and γ̌ Σ_{j=1}^{|Ω|} f({e_j}|S) ≤ f(Ω|S), it follows from Eq. (21) and Eq. (22), respectively, that γ ≥ 1 − α̌ and γ̌ ≥ 1 − α.
B.2 Proof of Remark 1
Proof. Consider any set S ⊆ V , and A and B such that A ∪ B = S, A ∩ B = ∅. We have
(f(A) + f(B)) / f(S) ≥ (ν̌ Σ_{i∈A} f({i}) + ν̌ Σ_{i∈B} f({i})) / f(S) = (ν̌ Σ_{i∈S} f({i})) / f(S) ≥ ν ν̌,
where the first and second inequality follow by the definition of ν and ν̌ (Eq. (8) and Eq. (9)), respectively.
By the definition (see Eq. (10)), θ is the largest scalar such that f (A) + f (B) ≥ θf (S) holds, hence, it follows
θ ≥ ν ν̌.
C Proofs of the Main Result (Section 3)
C.1 Proof of Lemma 2
We reproduce the proof from [2] for the sake of completeness.
Proof.
f (S \ ES∗ ) = f (S) − f (S) + f (S \ ES∗ )
= f (S0 ∪ S1 ) + f (S \ E0 ) − f (S \ E0 ) − f (S) + f (S \ ES∗ )
= f (S1 ) + f (S0 | S1 ) + f (S \ E0 ) − f (S) − f (S \ E0 ) + f (S \ ES∗ )
= f (S1 ) + f (S0 | (S \ S0 )) + f (S \ E0 ) − f (E0 ∪ (S \ E0 )) − f (S \ E0 ) + f (S \ ES∗ )
= f (S1 ) + f (S0 | (S \ S0 )) − f (E0 | (S \ E0 )) − f (S \ E0 ) + f (S \ ES∗ )
= f (S1 ) + f (S0 | (S \ S0 )) − f (E0 | (S \ E0 )) − f (E1 ∪ (S \ ES∗ )) + f (S \ ES∗ )
= f (S1 ) + f (S0 | (S \ S0 )) − f (E0 | (S \ E0 )) − f (E1 | S \ ES∗ )
= f (S1 ) − f (E1 | S \ ES∗ ) + f (S0 | (S \ S0 )) − f (E0 | (S \ E0 ))
≥ (1 − µ)f (S1 ),   (23)
where we used S = S0 ∪ S1 , ES∗ = E0 ∪ E1 , and (23) follows from monotonicity, i.e., f (S0 | (S \ S0 )) − f (E0 | (S \ E0 )) ≥ 0 (due to E0 ⊆ S0 and S \ S0 ⊆ S \ E0 ), along with the definition of µ.
C.2 Proof of Lemma 3
Proof. We start by defining S0′ := OPT(k−τ,V \E0 ) ∩ (S0 \ E0 ) and X := OPT(k−τ,V \E0 ) \ S0′ .
f (S0 \ E0 ) + f (OPT(k−τ,V\S0)) ≥ f (S0′) + f (X)   (24)
 ≥ θ f (OPT(k−τ,V\E0))   (25)
 ≥ θ f (OPT(k−τ,V\ES∗)),   (26)
where (24) follows from monotonicity as S0′ ⊆ (S0 \ E0 ) and (V \ S0 ) ⊆ (V \ E0 ). Eq. (25) follows from the fact
that OPT(k−τ,V \E0 ) = S0′ ∪ X and the bipartite subadditive property (10). The final equation follows from the
definition of the optimal solution and the fact that ES∗ = E0 ∪ E1 .
By rearranging and noting that f (S \ ES∗ ) ≥ f (S0 \ E0 ) due to (S0 \ E0 ) ⊆ (S \ ES∗ ) and monotonicity, we obtain
f (S \ ES∗ ) ≥ θf (OPT(k−τ,V \ES∗ ) ) − f (OPT(k−τ,V \S0 ) ).
C.3 Proof of Theorem 1
Before proving the theorem we outline the following auxiliary lemma:
Lemma 5 (Lemma D.2 in [2]). For any set function f , sets A, B, and constant α > 0, we have
max{αf (A), βf (B) − f (A)} ≥ (α/(1 + α)) βf (B).   (27)
Next, we prove the main theorem.
Proof. First we note that β should be chosen such that the following condition holds: |S0 | = ⌈βτ ⌉ ≤ k. When τ = ⌈ck⌉ for c ∈ (0, 1) and k → ∞, the condition β < 1/c suffices.
We consider two cases, when µ = 0 and µ ≠ 0. When µ = 0, from Lemma 2 we have
f (S \ ES∗ ) ≥ f (S1 )
(28)
On the other hand, when µ ≠ 0, by Lemma 2 and 4 we have
f (S \ ES∗ ) ≥ max{(1 − µ)f (S1 ), (β − 1)ν̌(1 − α̌)µf (S1 )} ≥ [(β − 1)ν̌(1 − α̌) / (1 + (β − 1)ν̌(1 − α̌))] f (S1 ).   (29)
By denoting P := (β − 1)ν̌(1 − α̌)/(1 + (β − 1)ν̌(1 − α̌)) we observe that P ∈ [0, 1) once β ≥ 1. Hence, by setting β ≥ 1 and taking the minimum between the two bounds in Eq. (29) and Eq. (28) we conclude that Eq. (29) holds for any µ ∈ [0, 1].
By combining Eq. (29) with Lemma 1 we obtain
f (S \ ES∗ ) ≥ P (1 − e^{−γ(k−⌈βτ⌉)/(k−τ)}) f (OPT(k−τ,V\S0)).   (30)
By further combining this with Lemma 3 we have
f (S \ ES∗ ) ≥ max{θ f (OPT(k−τ,V\ES∗)) − f (OPT(k−τ,V\S0)), P (1 − e^{−γ(k−⌈βτ⌉)/(k−τ)}) f (OPT(k−τ,V\S0))}
 ≥ θ [P (1 − e^{−γ(k−⌈βτ⌉)/(k−τ)}) / (1 + P (1 − e^{−γ(k−⌈βτ⌉)/(k−τ)}))] f (OPT(k−τ,V\ES∗)),   (31)
where the second inequality follows from Lemma 5. By plugging in τ = ⌈ck⌉ we further obtain
f (S \ ES∗ ) ≥ θ [P (1 − e^{−γ(k−β⌈ck⌉−1)/((1−c)k)}) / (1 + P (1 − e^{−γ(k−β⌈ck⌉−1)/((1−c)k)}))] f (OPT(k−τ,V\ES∗))
 ≥ θ [P (1 − e^{−γ(1−βc−1/k−β/k)/(1−c)}) / (1 + P (1 − e^{−γ(1−βc−1/k−β/k)/(1−c)}))] f (OPT(k−τ,V\ES∗))
 → [θP (1 − e^{−γ(1−βc)/(1−c)}) / (1 + P (1 − e^{−γ(1−βc)/(1−c)}))] f (OPT(k−τ,V\ES∗))   as k → ∞.
Finally, Remark 2 follows from Eq. (30): when τ ∈ o(k/β) and β ≥ log k (note that the condition |S0 | = ⌈βτ ⌉ ≤ k is thus satisfied), as k → ∞, we have both (k − ⌈βτ⌉)/(k − τ) → 1 and P = (β − 1)ν̌(1 − α̌)/(1 + (β − 1)ν̌(1 − α̌)) → 1, when ν̌ ∈ (0, 1] and α̌ ∈ [0, 1).
C.4 Proof of Corollary 1
To prove this result we need the following two lemmas that can be thought of as the alternative to Lemma 2
and 4.
Lemma 6. Let µ′ ∈ [0, 1] be a constant such that f (E1 ) = µ′ f (S1 ) holds. Consider f (·) with bipartite subadditivity ratio θ ∈ [0, 1] defined in Eq. (10). Then
f (S \ ES∗ ) ≥ (θ − µ′ )f (S1 ).
(32)
Proof. By the definition of θ, f (S1 \ E1 ) + f (E1 ) ≥ θf (S1 ). Hence,
f (S \ ES∗ ) ≥ f (S1 \ E1 )
≥ θf (S1 ) − f (E1 )
= (θ − µ′ )f (S1 ).
Lemma 7. Let β be a constant such that |S0 | = ⌈βτ ⌉ and |S0 | ≤ k, and let ν̌, ν ∈ [0, 1] be superadditivity and
subadditivity ratio (Eq. (9) and Eq. (8), respectively). Finally, let µ′ be a constant defined as in Lemma 6. Then,
f (S \ ES∗ ) ≥ (β − 1)ν̌νµ′ f (S1 ).
(33)
Proof. The proof follows that of Lemma 4, with two modifications. In Eq. (34) we used the subadditive property
of f (·), and Eq. (35) follows by the definition of µ′ .
f (S \ ES∗ ) ≥ f (S0 \ E0 )
 ≥ ν̌ Σ_{ei∈S0\E0} f ({ei})
 ≥ ν̌ (|S0 \ E0 | / |E1 |) Σ_{ei∈E1} f ({ei})
 ≥ ν̌ ((β − 1)τ / τ) Σ_{ei∈E1} f ({ei})
 ≥ (β − 1)ν̌ν f (E1 )   (34)
 = (β − 1)ν̌νµ′ f (S1 ).   (35)
Next we prove the main corollary. The proof follows the steps of the proof from Appendix C.3, except that here
we make use of Lemma 6 and 7.
Proof. We consider two cases, when µ′ = 0 and µ′ ≠ 0. When µ′ = 0, from Lemma 6 we have
f (S \ ES∗ ) ≥ θf (S1 ).
On the other hand, when µ′ ≠ 0, by Lemma 6 and 7 we have
f (S \ ES∗ ) ≥ max{(θ − µ′ )f (S1 ), (β − 1)ν̌νµ′ f (S1 )} ≥ θ [(β − 1)ν̌ν / (1 + (β − 1)ν̌ν)] f (S1 ).   (36)
By denoting P := (β − 1)ν̌ν/(1 + (β − 1)ν̌ν) and observing that P ∈ [0, 1) once β ≥ 1, we conclude that Eq. (36) holds for any µ′ ∈ [0, 1] once β ≥ 1.
By combining Eq. (36) with Lemma 1 we obtain
f (S \ ES∗ ) ≥ θP (1 − e^{−γ(k−⌈βτ⌉)/(k−τ)}) f (OPT(k−τ,V\S0)).   (37)
By further combining this with Lemma 3 we have
f (S \ ES∗ ) ≥ max{θ f (OPT(k−τ,V\ES∗)) − f (OPT(k−τ,V\S0)), θP (1 − e^{−γ(k−⌈βτ⌉)/(k−τ)}) f (OPT(k−τ,V\S0))}
 ≥ [θ²P (1 − e^{−γ(k−⌈βτ⌉)/(k−τ)}) / (1 + θP (1 − e^{−γ(k−⌈βτ⌉)/(k−τ)}))] f (OPT(k−τ,V\ES∗)),   (38)
where the second inequality follows from Lemma 5. By plugging in τ = ⌈ck⌉ in the last equation and by letting k → ∞ we arrive at:
f (S \ ES∗ ) ≥ [θ²P (1 − e^{−γ(1−βc)/(1−c)}) / (1 + θP (1 − e^{−γ(1−βc)/(1−c)}))] f (OPT(k−τ,V\ES∗)).
Finally, from Eq. (38), when τ ∈ o(k/β) and β ≥ log k, as k → ∞, we have both (k − ⌈βτ⌉)/(k − τ) → 1 and P = (β − 1)ν̌ν/(1 + (β − 1)ν̌ν) → 1 (when ν, ν̌ ∈ (0, 1]). It follows
f (S \ ES∗ ) → [θ²(1 − e^{−γ}) / (1 + θ(1 − e^{−γ}))] f (OPT(k−τ,V\ES∗))   as k → ∞.
D Proofs from Section 4
D.1 Proof of Proposition 2
Proof. The goal is to prove: γ̌ ≥ m/L.
Let S ⊆ [d] and Ω ⊆ [d] be any two disjoint sets, and for any set A ⊆ [d] let x^(A) = arg max_{supp(x)⊆A, x∈X} l(x). Moreover, for B ⊆ [d] let x_B^(A) denote those coordinates of the vector x^(A) that correspond to the indices in B.
We proceed by upper bounding the denominator and lower bounding the numerator in (5). By definition of x^(S) and strong concavity of l(·),
l(x^(S∪{i})) − l(x^(S)) ≤ ⟨∇l(x^(S)), x^(S∪{i}) − x^(S)⟩ − (m/2) ‖x^(S∪{i}) − x^(S)‖²
 ≤ max_{v : v_(S∪{i})^c = 0} ⟨∇l(x^(S)), v − x^(S)⟩ − (m/2) ‖v − x^(S)‖²
 = (1/(2m)) (∇l(x^(S))_i)²,
where the last equality follows by plugging in the maximizer v = x^(S) + (1/m)∇l(x^(S))_i. Hence,
Σ_{i∈Ω} [l(x^(S∪{i})) − l(x^(S))] ≤ Σ_{i∈Ω} (1/(2m)) (∇l(x^(S))_i)² = (1/(2m)) ‖∇l(x^(S))_Ω‖².
On the other hand, from the definition of x^(S∪Ω) and due to smoothness of l(·) we have
l(x^(S∪Ω)) − l(x^(S)) ≥ l(x^(S) + (1/L)∇l(x^(S))_Ω) − l(x^(S))
 ≥ ⟨∇l(x^(S)), (1/L)∇l(x^(S))_Ω⟩ − (L/2) ‖(1/L)∇l(x^(S))_Ω‖²
 = (1/(2L)) ‖∇l(x^(S))_Ω‖².
It follows that
[l(x^(S∪Ω)) − l(x^(S))] / [Σ_{i∈Ω} (l(x^(S∪{i})) − l(x^(S)))] ≥ m/L,   ∀ disjoint S, Ω ⊆ [d].
We finish the proof by noting that γ̌ is the largest constant for the above statement to hold.
D.2 Variance Reduction in GPs
D.2.1 Non-submodularity of Variance Reduction
The goal of this section is to show that the GP variance reduction objective is not submodular in general.
Consider the following PSD kernel matrix:
K = [ 1, √(1 − z²), 0 ; √(1 − z²), 1, z² ; 0, z², 1 ].
We consider a single x = {3} (i.e. M is a singleton) that corresponds to the third data point. The objective is as follows:
F (i|S) = σ²_{{3}|S} − σ²_{{3}|S∪i}.
The submodular property implies F ({1}) ≥ F ({1}|{2}). We have:
F ({1}) = σ²_{{3}} − σ²_{{3}|{1}} = 1 − (K({3}, {3}) − K({3}, {1})(K({1}, {1}) + σ²)⁻¹ K({1}, {3})) = 1 − 1 + 0 = 0,
and
F ({2}) = σ²_{{3}} − σ²_{{3}|{2}} = 1 − (K({3}, {3}) − K({3}, {2})(K({2}, {2}) + σ²)⁻¹ K({2}, {3})) = 1 − (1 − z²(1 + σ²)⁻¹ z²) = z⁴/(1 + σ²),
and
F ({1, 2}) = σ²_{{3}} − σ²_{{3}|{1,2}}
 = 1 − K({3}, {3}) + [K({3}, {1}), K({3}, {2})] [ 1 + σ², K({1}, {2}) ; K({2}, {1}), 1 + σ² ]⁻¹ [ K({1}, {3}) ; K({2}, {3}) ]
 = 1 − 1 + [0, z²] [ 1 + σ², √(1 − z²) ; √(1 − z²), 1 + σ² ]⁻¹ [ 0 ; z² ]
 = z⁴(1 + σ²) / ((1 + σ²)² − (1 − z²)).
We obtain
F ({1}|{2}) = F ({1, 2}) − F ({2}) = z⁴/((1 + σ²) − (1 − z²)(1 + σ²)⁻¹) − z⁴/(1 + σ²).
When z ∈ (0, 1), F ({1}|{2}) is strictly greater than 0, and hence greater than F ({1}). This is in contradiction with the submodular property, which implies F ({1}) ≥ F ({1}|{2}).
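A small Python sanity check of this counterexample (illustration, not part of the paper): for the 3×3 kernel above with an arbitrary z ∈ (0, 1) and noise variance, F({1}|{2}) computed from the posterior-variance formulas exceeds F({1}), violating submodularity.

import numpy as np

z, sigma2 = 0.5, 0.1
K = np.array([[1.0, np.sqrt(1 - z**2), 0.0],
              [np.sqrt(1 - z**2), 1.0, z**2],
              [0.0, z**2, 1.0]])

def post_var(S):
    # Posterior variance of point 3 (index 2) after observing the set S (0-based indices).
    if not S:
        return K[2, 2]
    S = list(S)
    K_SS = K[np.ix_(S, S)] + sigma2 * np.eye(len(S))
    k_xS = K[2, S]
    return K[2, 2] - k_xS @ np.linalg.solve(K_SS, k_xS)

F = lambda S: post_var([]) - post_var(S)            # variance reduction F(S)
F1, F2, F12 = F([0]), F([1]), F([0, 1])
print(F1, F2, F12 - F2)                              # F({1}) = 0 while F({1}|{2}) > 0
assert F12 - F2 > F1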
D.2.2 Proof of Proposition 3
Proof. We are interested in lower bounding the following ratios: f ({i}|S \ {i} ∪ Ω)/f ({i}|S \ {i}) and f ({i}|S \ {i})/f ({i}|S \ {i} ∪ Ω).
Let kmax ∈ R+ be the largest variance, i.e., k(xi , xi ) ≤ kmax for every i. Consider the case when M is a singleton set:
f (i|S) = σ²_{x|S} − σ²_{x|S∪i}.
By using Ω = {i} in Eq. (39), we can rewrite f (i|S) as
f (i|S) = a2i Bi−1 ,
where ai , Bi ∈ R+ , and are given by:
ai = k(x, xi ) − k(x, XS )(k(XS , XS ) + σ 2 I)−1 k(XS , xi )
and
Bi = σ 2 + k(xi , xi ) − k(xi , XS )(k(XS , XS ) + σ 2 I)−1 k(XS , xi ).
By using the fact that k(xi , xi ) ≤ kmax , for every i and S, we can upper bound Bi by σ 2 + kmax (note that
k(xi , xi ) − k(xi , XS )(k(XS , XS ) + σ 2 I)−1 k(XS , xi ) ≥ 0 as variance cannot be negative), and lower bound by σ 2 .
It follows that for every i and S we have:
a_i²/(σ² + kmax) ≤ f (i|S) ≤ a_i²/σ².
Therefore,
f ({i}|S \ {i} ∪ Ω) / f ({i}|S \ {i}) ≥ [a_i²/(σ² + kmax)] / [a_i²/σ²] = σ²/(σ² + kmax),   ∀S, Ω ⊆ V, i ∈ S \ Ω,
f ({i}|S \ {i}) / f ({i}|S \ {i} ∪ Ω) ≥ [a_i²/(σ² + kmax)] / [a_i²/σ²] = σ²/(σ² + kmax),   ∀S, Ω ⊆ V, i ∈ S \ Ω.
It follows:
(1 − α) ≥ σ²/(σ² + kmax), and (1 − α̌) ≥ σ²/(σ² + kmax).
The obtained result also holds for any set M ⊆ [n].
D.2.3 Alternative GP variance reduction form
Here, the goal is to show that the variance reduction can be written as
F (Ω|S) = σ²_{x|S} − σ²_{x|S∪Ω} = aB⁻¹aᵀ,   (39)
where a ∈ R_+^{1×|Ω\S|}, B ∈ R_+^{|Ω\S|×|Ω\S|}, and they are given by:
a := k(x, XΩ\S ) − k(x, XS )(k(XS , XS ) + σ²I)⁻¹ k(XS , XΩ\S ),
and
B := σ²I + k(XΩ\S , XΩ\S ) − k(XΩ\S , XS )(k(XS , XS ) + σ²I)⁻¹ k(XS , XΩ\S ).
This form is used in the proof in Appendix D.2.2.
Proof. Recall the definition of the posterior variance:
σ²_{x|S} = k(x, x) − k(x, XS )(k(XS , XS ) + σ²I_{|S|})⁻¹ k(XS , x).
We have
F (Ω|S) = σ²_{x|S} − σ²_{x|S∪Ω}
 = k(x, XS∪Ω )(k(XS∪Ω , XS∪Ω ) + σ²I_{|S∪Ω|})⁻¹ k(XS∪Ω , x) − k(x, XS )(k(XS , XS ) + σ²I_{|S|})⁻¹ k(XS , x)
 = [m1 , m2 ] [ A11 , A12 ; A21 , A22 ]⁻¹ [ m1ᵀ ; m2ᵀ ] − m1 A11⁻¹ m1ᵀ,
where we use the following notation:
m1 := k(x, XS ),
m2 := k(x, XΩ\S ),
A11 := k(XS , XS ) + σ 2 I|S| ,
A12 := k(XS , XΩ\S ),
A21 := k(XΩ\S , XS ),
A22 := k(XΩ\S , XΩ\S ) + σ 2 I|Ω\S| .
By using the inverse formula [40, Section 9.1.3] we obtain:
F (Ω|S) = [m1 , m2 ] [ A11⁻¹ + A11⁻¹A12 B⁻¹A21 A11⁻¹ , −A11⁻¹A12 B⁻¹ ; −B⁻¹A21 A11⁻¹ , B⁻¹ ] [ m1ᵀ ; m2ᵀ ] − m1 A11⁻¹ m1ᵀ,
where
B := A22 − A21 A11⁻¹ A12 .
Finally, we obtain:
F (Ω|S) = m1 A11⁻¹ m1ᵀ + m1 A11⁻¹A12 B⁻¹A21 A11⁻¹ m1ᵀ − m2 B⁻¹A21 A11⁻¹ m1ᵀ − m1 A11⁻¹A12 B⁻¹ m2ᵀ + m2 B⁻¹ m2ᵀ − m1 A11⁻¹ m1ᵀ
 = m1 A11⁻¹A12 B⁻¹ (A21 A11⁻¹ m1ᵀ − m2ᵀ) − m2 B⁻¹ (A21 A11⁻¹ m1ᵀ − m2ᵀ)
 = (m1 A11⁻¹A12 − m2 ) B⁻¹ (A21 A11⁻¹ m1ᵀ − m2ᵀ)
 = (m2 − m1 A11⁻¹A12 ) B⁻¹ (m2ᵀ − A21 A11⁻¹ m1ᵀ).
By setting
a := m2 − m1 A11⁻¹A12 = k(x, XΩ\S ) − k(x, XS )(k(XS , XS ) + σ²I)⁻¹ k(XS , XΩ\S )
and
aᵀ := m2ᵀ − A21 A11⁻¹ m1ᵀ = k(XΩ\S , x) − k(XΩ\S , XS )(k(XS , XS ) + σ²I)⁻¹ k(XS , x),
we have
F (Ω|S) = aB⁻¹aᵀ,
where
B = σ²I_{|Ω\S|} + k(XΩ\S , XΩ\S ) − k(XΩ\S , XS )(k(XS , XS ) + σ²I_{|S|})⁻¹ k(XS , XΩ\S ).
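A small numerical sanity check of this closed form (illustration, not part of the paper): on random data with a simple RBF kernel, a B⁻¹ aᵀ matches the direct difference of posterior variances σ²_{x|S} − σ²_{x|S∪Ω}.

import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(1)
n, sigma2 = 8, 0.3
Xpts = rng.normal(size=(n + 1, 2))
K = np.exp(-0.5 * ((Xpts[:, None, :] - Xpts[None, :, :]) ** 2).sum(-1))  # RBF kernel matrix
x, pts = 0, np.arange(1, n + 1)                      # target point and candidate indices

def post_var(S):
    S = np.asarray(S)
    K_SS = K[np.ix_(S, S)] + sigma2 * np.eye(len(S))
    k_xS = K[x, S]
    return K[x, x] - k_xS @ solve(K_SS, k_xS)

S, Omega = pts[:3], pts[3:6]
direct = post_var(S) - post_var(np.concatenate([S, Omega]))

A11 = K[np.ix_(S, S)] + sigma2 * np.eye(len(S))
a = K[x, Omega] - K[x, S] @ solve(A11, K[np.ix_(S, Omega)])
B = (sigma2 * np.eye(len(Omega)) + K[np.ix_(Omega, Omega)]
     - K[np.ix_(Omega, S)] @ solve(A11, K[np.ix_(S, Omega)]))
closed = a @ solve(B, a)
assert np.isclose(direct, closed)
print(direct, closed)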
E Additional Experiments
[Figure 6, panels (a)–(d): linear regression, objective value and R² test score vs. cardinality k for τ = 20 and τ = 40; panels (e)–(h): synthetic logistic regression, objective value and test accuracy vs. cardinality k for τ = 20 and τ = 40; panels (i)–(l): MNIST, objective value and test accuracy vs. cardinality k for τ = 25 and τ = 35; algorithms compared: Obl.-Greedy, Obl., Greedy, OSU, Pro, Sg, Rg, Omp.]
Figure 6: Additional experiments for comparison of the algorithms on support selection task.
Stability Analysis of Monotone Systems via
Max-separable Lyapunov Functions
arXiv:1607.07966v1 [] 27 Jul 2016
H.R. Feyzmahdavian, B. Besselink, M. Johansson
Abstract
We analyze stability properties of monotone nonlinear systems via max-separable Lyapunov functions, motivated by the following observations: first, recent results have shown that asymptotic stability
of a monotone nonlinear system implies the existence of a max-separable Lyapunov function on a
compact set; second, for monotone linear systems, asymptotic stability implies the stronger properties of
D-stability and insensitivity to time-delays. This paper establishes that for monotone nonlinear systems,
equivalence holds between asymptotic stability, the existence of a max-separable Lyapunov function,
D-stability, and insensitivity to bounded and unbounded time-varying delays. In particular, a new and
general notion of D-stability for monotone nonlinear systems is discussed and a set of necessary and
sufficient conditions for delay-independent stability are derived. Examples show how the results extend
the state-of-the-art.
I. INTRODUCTION
Monotone systems are dynamical systems whose trajectories preserve a partial order relationship on their initial states. Such systems appear naturally in, for example, chemical reaction
networks [2], consensus dynamics [3], systems biology [4], wireless networks [5]–[8], and as
comparison systems in stability analysis of large-scale interconnected systems [9]–[11]. Due to their wide applicability, monotone systems have attracted considerable attention from the control community (see, e.g., [12]–[15]). Early references on the theory of monotone systems include the papers [16]–[18] by Hirsch and the excellent monograph [19] by Smith.
Footnote: The authors are with the ACCESS Linnaeus Centre and the Department of Automatic Control, School of Electrical Engineering, KTH Royal Institute of Technology, Stockholm, Sweden. Email: [email protected], [email protected], [email protected]. A preliminary version of this work is submitted as the conference paper [1]. This manuscript significantly extends the work [1] by providing additional technical results and illustrative examples. Namely, [1] only shows the equivalence of statements 2) and 3) in Theorem 1 (rather than the full Theorem 1), presents a notion of D-stability that is less general than the notion in the current manuscript, and discusses delay-independent stability of monotone systems with constant delays (rather than time-varying and potentially unbounded delays).
For monotone linear systems (also called positive linear systems), it is known that asymptotic
stability of the origin implies further stability properties. First, asymptotically stable monotone
linear systems always admit a Lyapunov function that can be expressed as a weighted maxnorm [20]. Such Lyapunov functions can be written as a maximum of functions with onedimensional arguments, so they are a particular class of max-separable Lyapunov functions
[21]–[23]. Second, asymptotic stability of monotone linear systems is robust with respect to
scaling of the dynamics with a diagonal matrix, leading to the so-called D-stability property.
This notion appeared first in [24] and [25], with additional early results given in [26]. Third,
monotone linear systems possess strong robustness properties with respect to time-delays [27]–
[36]. Namely, for these systems, asymptotic stability of a time-delay system can be concluded
from stability of the corresponding delay-free system, simplifying the analysis.
For monotone nonlinear systems, it is in general unknown whether asymptotic stability of the
origin implies notions of D-stability or delay-independent stability, even though results exist for
certain classes of monotone systems, such as homogeneous and sub-homogeneous systems [37]–
[42]. Recent results in [43] and [44] show that for monotone nonlinear systems, asymptotic
stability of the origin implies the existence of a max-separable Lyapunov function on every
compact set in the domain of attraction. Motivated by this result and the strong robustness
properties of monotone linear systems, we study stability properties of monotone nonlinear
systems using max-separable Lyapunov functions. Here, monotone nonlinear systems are neither
restricted to be homogeneous nor sub-homogeneous.
The main contribution of this paper is to extend the stability properties of monotone linear
systems discussed above to monotone nonlinear systems. Specifically, we demonstrate that
asymptotic stability of the origin for a monotone nonlinear system leads to D-stability, and
asymptotic stability in the presence of bounded and unbounded time-varying delays. Furthermore,
this paper has the following four contributions:
First, we show that for monotone nonlinear systems, the existence of a max-separable Lyapunov function on a compact set is equivalent to the existence of a path in this compact set such
that, on this path, the vector field defining the system is negative in all its components. This
allows for a simple evaluation of asymptotic stability for such systems.
Second, we define a novel and natural notion of D-stability for monotone nonlinear systems
that extends earlier concepts in the literature [37]–[39]. Our notion of D-stability is based on the
composition of the components of the vector field with an arbitrary monotonically increasing
function that plays the role of a scaling. We then show that for monotone nonlinear systems,
this notion of D-stability is equivalent to the existence of a max-separable Lyapunov function
on a compact set.
Third, we demonstrate that for monotone nonlinear systems, asymptotic stability of the origin is
insensitive to a general class of time-delays which includes bounded and unbounded time-varying
delays. Again, this provides an extension of existing results for monotone linear systems to the
nonlinear case. In order to impose minimal restrictions on time-delays, our proof technique uses
the max-separable Lyapunov function that guarantees asymptotic stability of the origin without
delays as a Lyapunov-Razumikhin function.
Fourth, we derive a set of necessary and sufficient conditions for establishing delay-independent
stability of monotone nonlinear systems. These conditions can also provide an estimate of the
region of attraction for the origin. As in the case of D-stability, we extend several existing
results on analysis of monotone systems with time-delays, which often rely on homogeneity and
sub-homogeneity of the vector field (see, e.g., [38]–[42]), to general monotone systems.
The remainder of the paper is organized as follows. Section II reviews some preliminaries
on monotone nonlinear systems and max-separable Lyapunov functions, and discusses stability
properties of monotone linear systems. In Section III, our main results for stability properties of
monotone nonlinear systems are presented, whereas in Section IV, delay-independent stability
conditions for monotone systems with time-varying delays are derived. Section V demonstrates
through a number of examples how these results extend earlier work in the literature. Finally,
conclusions are stated in Section VI.
Notation. The set of real numbers is denoted by R, whereas R+ = [0, ∞) represents the set of
nonnegative real numbers. We let Rn+ denote the positive orthant in Rn . The associated partial
order is given as follows. For vectors x and y in Rn , x < y (x ≤ y) if and only if xi < yi
(xi ≤ yi ) for all i ∈ In , where xi ∈ R represents the ith component of x and In = {1, . . . , n}.
For a real interval [a, b], C [a, b], Rn denotes the space of all real-valued continuous functions
on [a, b] taking values in Rn . A continuous function ω : R+ → R+ is said to be of class K if
ω(0) = 0 and ω is strictly increasing. A function ω : Rn → R+ is called positive definite if
ω(0) = 0 and ω(x) > 0 for all x 6= 0. Finally, 1n ∈ Rn denotes the vector whose components
are all one.
II. PROBLEM STATEMENT AND PRELIMINARIES
Consider dynamical systems on the positive orthant Rn+ described by the ordinary differential
equation
ẋ = f (x).
(1)
Here, x is the system state, and the vector field f : Rn+ → Rn is locally Lipschitz so that local
existence and uniqueness of solutions is guaranteed [45]. Let x(t, x0 ) denote the solution to (1)
starting from the initial condition x0 ∈ Rn+ at the time t ∈ R+ . We further assume that (1) has
an equilibrium point at the origin, i.e., f (0) = 0.
A. Preliminaries on Monotone Systems
In this paper, monotone systems will be studied according to the following definition.
Definition 1: The system (1) is called monotone if the implication
x′0 ≤ x0 ⇒ x(t, x′0 ) ≤ x(t, x0 ),
∀t ∈ R+ ,
(2)
holds, for any initial conditions x0 , x′0 ∈ Rn+ .
The definition states that trajectories of monotone systems starting at ordered initial conditions
preserve the same ordering during the time evolution. By choosing x′0 = 0 in (2), since x(t, 0) = 0
for all t ∈ R+ , it is easy to see that
x0 ∈ Rn+ ⇒ x(t, x0 ) ∈ Rn+ ,
∀t ∈ R+ .
(3)
This shows that the positive orthant Rn+ is an invariant set for the monotone system (1). Thus,
monotone systems with an equilibrium point at the origin define positive systems1.
Monotonicity of dynamical systems is equivalently characterized by the so-called Kamke
condition, stated next.
¹A dynamical system given by (1) is called positive if any trajectory of (1) starting from nonnegative initial conditions remains forever in the positive orthant, i.e., x(t) ∈ Rn+ for all t ∈ R+ when x0 ∈ Rn+ .
Proposition 1 ([19]): The system (1) is monotone if and only if the following implication
holds for all x, x′ ∈ Rn+ and all i ∈ In :
x′ ≤ x and x′i = xi ⇒ fi (x′ ) ≤ fi (x).
(4)
Note that if f is continuously differentiable on Rn+ , then condition (4) is equivalent to the
requirement that f has a Jacobian matrix with nonnegative off-diagonal elements, i.e.,
∂fi/∂xj (x) ≥ 0,   x ∈ Rn+ ,   (5)
holds for all i ≠ j, i, j ∈ In [19, Remark 3.1.1]. A vector field satisfying (5) is called cooperative.
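A small symbolic check in Python (illustration, not part of the paper): verifying the cooperativity condition (5) — nonnegative off-diagonal Jacobian entries on the positive orthant — for the planar vector field used later in Example 1.

import sympy as sp

x1, x2 = sp.symbols("x1 x2", nonnegative=True)
f = sp.Matrix([-5 * x1 + x1 * x2**2, x1 - 2 * x2**2])
J = f.jacobian([x1, x2])

# Off-diagonal entries: df1/dx2 = 2*x1*x2 >= 0 and df2/dx1 = 1 >= 0 on R_+^2.
print(J)
assert sp.simplify(J[0, 1]) == 2 * x1 * x2
assert J[1, 0] == 1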
In this paper, we will consider stability properties of monotone nonlinear systems. To this
end, we use the following definition of (asymptotic) stability, tailored for monotone systems on
the positive orthant.
Definition 2: The equilibrium point x = 0 of the monotone system (1) is said to be stable if,
for each ε > 0, there exists a δ > 0 such that
0 ≤ x0 < δ1n ⇒ 0 ≤ x(t, x0 ) < ε1n ,
∀t ∈ R+ .
(6)
The origin is called asymptotically stable if it is stable and, in addition, δ can be chosen such
that
0 ≤ x0 < δ1n ⇒ lim x(t, x0 ) = 0.
t→∞
Note that, due to the equivalence of norms on Rn and forward invariance of the positive
orthant, Definition 2 is equivalent to the usual notion of Lyapunov stability [45].
B. Preliminaries on Max-separable Lyapunov Functions
We will characterize and study asymptotic stability of monotone nonlinear systems by means
of so-called max-separable Lyapunov functions
V (x) = max Vi (xi ),
i∈In
(7)
with scalar functions Vi : R+ → R+ . Since the Lyapunov function (7) is not necessarily
continuously differentiable, we consider its upper-right Dini derivative along solutions of (1)
(see, e.g., [46]) as
D⁺V (x) = lim sup_{h→0⁺} [ V(x + h f(x)) − V(x) ] / h.   (8)
The following result shows that if the functions Vi in (7) are continuously differentiable, then (8)
admits an explicit expression.
Proposition 2 ([47]): Consider V : Rn+ → R+ in (7) and let Vi : R+ → R+ be continuously
differentiable for all i ∈ In . Then, the upper-right Dini derivative (8) is given by
D⁺V (x) = max_{j∈J(x)} (∂Vj/∂xj)(xj) fj(x),   (9)
where J (x) is the set of indices for which the maximum in (7) is attained, i.e.,
J (x) = { j ∈ In | Vj (xj ) = V (x) }.   (10)
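A minimal numerical sketch (illustration, not from the paper) of evaluating a max-separable function V(x) = max_i V_i(x_i) and the expression (9) for its upper-right Dini derivative along f, assuming each V_i is continuously differentiable.

import numpy as np

def dini_derivative(V_list, dV_list, f, x, tol=1e-9):
    # V_list: callables V_i; dV_list: their derivatives; f: vector field; x: state.
    x = np.asarray(x, dtype=float)
    vals = np.array([Vi(xi) for Vi, xi in zip(V_list, x)])
    Vx = vals.max()
    J = np.where(vals >= Vx - tol)[0]          # indices attaining the maximum, the set J(x)
    fx = np.asarray(f(x), dtype=float)
    return max(dV_list[j](x[j]) * fx[j] for j in J)

# Example use: V_i(x_i) = x_i / v_i (a weighted max-norm) with a linear field f(x) = A x.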
C. Preliminaries on Monotone Linear Systems
Let f (x) = Ax with A ∈ Rn×n . Then, the nonlinear system (1) reduces to the linear system
ẋ = Ax.
(11)
It is well known that (11) is monotone in the sense of Definition 1 (and, hence, positive) if and
only if A is Metzler, i.e., all off-diagonal elements of A are nonnegative [20]. We summarize
some important stability properties of monotone linear systems in the next result.
Proposition 3 ([23], [48]): For the linear system (11), suppose that A is Metzler. Then, the
following statements are equivalent:
1) The monotone linear system (11) is asymptotically stable, i.e., A is Hurwitz.
2) There exists a max-separable Lyapunov function of the form
V (x) = max_{i∈In} xi/vi ,   (12)
on Rn+ , with vi > 0 for each i ∈ In .
3) There exists a vector w > 0 such that Aw < 0.
4) For any diagonal matrix ∆ ∈ Rn×n with positive diagonal entries, the linear system
ẋ = ∆Ax
is asymptotically stable, i.e., ∆A is Hurwitz.
In Proposition 3, the equivalence of statements 1) and 2) demonstrates that the existence of a
max-separable Lyapunov function is a necessary and sufficient condition for asymptotic stability
of monotone linear systems. The positive scalars vi in the max-separable Lyapunov function
(12) in the second item can be related to the positive vector w in the third item as vi = wi for
all i ∈ In . Statement 4) shows that stability of monotone linear systems is robust with respect
to scaling of the rows of matrix A. This property is known as D-stability [24]. Note that the
notions of asymptotic stability in Proposition 3 hold globally due to linearity.
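A small numerical sketch (illustration, not from the paper): checking the equivalent conditions of Proposition 3 on a concrete Metzler matrix, with the vector w found by a linear program.

import numpy as np
from scipy.optimize import linprog

A = np.array([[-3.0, 1.0],
              [2.0, -4.0]])                 # Metzler (nonnegative off-diagonals) and Hurwitz

# 1) asymptotic stability: all eigenvalues in the open left half-plane
assert np.all(np.linalg.eigvals(A).real < 0)

# 3) find w > 0 with A w < 0 via the LP: minimize 1'w subject to A w <= -1, w >= 1
res = linprog(c=np.ones(2), A_ub=A, b_ub=-np.ones(2), bounds=[(1, None)] * 2)
w = res.x
assert res.success and np.all(A @ w < 0)

# 4) D-stability: Delta*A stays Hurwitz for positive diagonal scalings Delta
for diag in ([0.1, 5.0], [2.0, 0.3], [1.0, 1.0]):
    assert np.all(np.linalg.eigvals(np.diag(diag) @ A).real < 0)

# 2) the weights v_i = w_i then define the max-separable Lyapunov function (12)
print("w =", w)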
Another well-known property of monotone linear systems is that their asymptotic stability
is insensitive to bounded and certain classes of unbounded time-delays. This property reads as
follows.
Proposition 4 ([49]): Consider the delay-free monotone (positive) system
ẋ(t) = (A + B)x(t),
(13)
with A Metzler and B having nonnegative elements. If (13) is asymptotically stable, then the
time-delay linear system
ẋ(t) = Ax(t) + Bx(t − τ (t))
(14)
is asymptotically stable for all time-varying and potentially unbounded delays satisfying
lim t − τ (t) = +∞.
t→+∞
This result shows that asymptotic stability of the delay-free monotone linear system (13)
implies that (14) is also asymptotically stable. This is a significant property of monotone
linear systems, since the introduction of time-delays may, in general, render a stable system
unstable [50].
D. Main Goals
The main objectives of this paper are (i) to derive a counterpart of Proposition 3 for monotone
nonlinear systems of the form (1); and (ii) to extend the delay-independent stability property of
monotone linear systems stated in Proposition 4 to monotone nonlinear systems with bounded
and unbounded time-varying delays.
III. STABILITY OF MONOTONE NONLINEAR SYSTEMS
The following theorem is our first key result, which establishes a set of necessary and sufficient
conditions for asymptotic stability of monotone nonlinear systems.
Theorem 1: Assume that the nonlinear system (1) is monotone. Then, the following statements
are equivalent:
1) The origin is asymptotically stable.
2) For some compact set of the form
X = { x ∈ Rn+ | 0 ≤ x ≤ v },   (15)
with v > 0, there exists a max-separable Lyapunov function V : X → R+ as in (7) with
Vi : [0, vi ] → R+ differentiable for each i ∈ In such that
ν1 (xi ) ≤ Vi (xi ) ≤ ν2 (xi ),
(16)
holds for all xi ∈ [0, vi ] and for some functions ν1 , ν2 of class K, and that
D + V (x) ≤ −µ(V (x)),
(17)
holds for all x ∈ X and some positive definite function µ.
3) For some positive constant s̄ > 0, there exists a function ρ : [0, s̄] → Rn+ with ρi of class K, ρi⁻¹ differentiable on [0, ρi (s̄)] and satisfying
dρi⁻¹/ds (s) > 0,   (18)
for all s ∈ (0, ρi (s̄)) and all i ∈ In , such that
f ◦ ρ(s) ≤ −α(s),   (19)
holds for s ∈ [0, s̄] and some function α : [0, s̄] → Rn+ with αi positive definite for all i ∈ In .
4) For any function ψ : Rn+ × Rn → Rn given by ψ(x, y) = (ψ1 (x1 , y1 ), . . . , ψn (xn , yn ))ᵀ, where
• ψi : R+ × R → R for i ∈ In ,
• ψi (xi , 0) = 0 for any xi ∈ R+ and all i ∈ In ,
• ψi (xi , yi ) is monotonically increasing in yi for each nonzero xi , i.e., the implication
yi′ < yi ⇒ ψi (xi , yi′ ) < ψi (xi , yi )   (20)
holds for any xi > 0 and all i ∈ In ,
the nonlinear system
ẋ = ψ(x, f (x)),   (21)
has an asymptotically stable equilibrium point at the origin.
Proof: The proof is given in Appendix A.
Theorem 1 can be regarded as a nonlinear counterpart of Proposition 3. Namely, choosing the
functions Vi in the second statement of Theorem 1 as Vi (xi ) = xi /vi , vi > 0, and the function
ρ in the third statement as ρ(s) = ws, w > 0, recovers statements 2) and 3) in Proposition 3,
respectively. Note that we can let vi = wi for each i ∈ In since, according to the proof of
Theorem 1, the relation between V and ρ is
Vi (xi ) = ρi⁻¹(xi ) and ρi (s) = Vi⁻¹(s), i ∈ In .
Statement 4) of Theorem 1, which can be regarded as a notion of D-stability for nonlinear
monotone systems, is a counterpart of the fourth statement of Proposition 3. More precisely, the
choice ψ(x, y) = ∆y for some diagonal matrix ∆ with positive diagonal entries recovers the
corresponding result for monotone linear systems in Proposition 3.
Remark 1: According to the proof of Theorem 1, if there is a function ρ satisfying the third
statement for s ∈ [0, s̄], then the Lyapunov function (7) with components Vi (xi ) = ρi⁻¹(xi ) guarantees asymptotic stability of the origin for any initial condition
x0 ∈ X = { x ∈ Rn+ | 0 ≤ x ≤ ρ(s̄) }.
This means that the set X is an estimate of the region of attraction for the origin.
⊳
We now present a simple example to illustrate the use of Theorem 1.
Example 1: Consider the nonlinear dynamical system
ẋ = f (x) = ( −5x1 + x1 x2² , x1 − 2x2² )ᵀ.   (22)
This system has an equilibrium point at the origin. As the Jacobian matrix of f is Metzler for
all (x1 , x2 ) ∈ R2+ , f is cooperative. Thus, according to Proposition 1, (22) is monotone on R2+ .
First, we will show that the origin is asymptotically stable. Let ρ(s) = (s, √s), s ∈ [0, 4]. For each i ∈ {1, 2}, ρi is of class K. It is easy to verify that
ρ1⁻¹(s) = s, ρ2⁻¹(s) = s².
Thus, ρi⁻¹, i ∈ {1, 2}, is continuously differentiable and satisfies (18). In addition,
f ◦ ρ(s) = ( −5s + s² , −s )ᵀ ≤ −( s , s )ᵀ,
for all s ∈ [0, 4], which implies that (19) holds. It follows from the equivalence of statements
1) and 3) in Theorem 1 that the origin is asymptotically stable.
Next, we will estimate the region of attraction of the origin by constructing a max-separable
Lyapunov function. According to Remark 1, the monotone system (22) admits the max-separable
Lyapunov function
V (x) = max x1 , x22
that guarantees the origin is asymptotically stable for
x0 ∈ x ∈ R2+ | 0 ≤ x ≤ (4, 2) .
Finally, we discuss D-stability of the monotone system (22). Consider ψ given by
T
x
3
2
1
.
ψ(x, y) = x2 +1 y1 , x2 y2
1
For any x ∈ R2+ , ψ(x, 0) = 0. Moreover, each component ψi (xi , yi ), i ∈ {1, 2}, is monotonically
increasing for any xi > 0. Since the origin is an asymptotically stable equilibrium point of (22),
by the equivalence of statements 1) and 4) in Theorem 1, the monotone nonlinear system
ẋ = ψ(x, f (x)) = ( (x1/(x1² + 1)) (−5x1 + x1 x2²)³ , x2² (x1 − 2x2²) )ᵀ
has an asymptotically stable equilibrium point at the origin.
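A minimal simulation sketch in Python (illustration, not from the paper): integrating the monotone system (22) from initial conditions inside the estimated region of attraction X = {x : 0 ≤ x ≤ (4, 2)} and checking convergence to the origin together with the decrease of V(x) = max{x1, x2²}.

import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    x1, x2 = x
    return [-5.0 * x1 + x1 * x2**2, x1 - 2.0 * x2**2]

def V(x):
    return max(x[0], x[1]**2)

for x0 in ([4.0, 2.0], [3.0, 1.0], [0.5, 1.5]):
    sol = solve_ivp(f, (0.0, 20.0), x0, rtol=1e-8, atol=1e-10)
    vals = [V(x) for x in sol.y.T]
    assert np.all(np.diff(vals) <= 1e-6)        # V is (numerically) non-increasing
    assert np.linalg.norm(sol.y[:, -1]) < 1e-3  # trajectory approaches the origin
    print(x0, "->", sol.y[:, -1])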
Remark 2: A consequence of the proof of Theorem 1 is that all statements 1)–4) are equivalent
to the existence of a vector w > 0 such that f (w) < 0 and
lim x(t, w) = 0.
t→∞
(23)
Contrary to statement 3) of Proposition 3 for monotone linear systems, the condition f (w) < 0
without the additional assumption (23) does not necessarily guarantee asymptotic stability of
the origin for monotone nonlinear systems. To illustrate the point, consider, for example, a
scalar monotone system described by (1) with f (x) = −x(x − 1), x ∈ R+ . This system has
two equilibrium points: x⋆ = 0 and x⋆ = 1. Although f (2) < 0, it is easy to verify that any
trajectory starting from the initial condition x0 > 0 converges to x⋆ = 1. Hence, the origin is
⊳
not stable.
Remark 3: Several implications in Theorem 1 are based on similar results in the literature.
Namely, the implication 1) ⇒ 2) was shown in [43] (see also [44]) by construction of a maxseparable Lyapunov function that is not necessarily differentiable or even continuous (see [44,
Example 2]). The implication 3) ⇒ 2) was proven before in [51, Theorem III.2] for the case of
global asymptotic stability of monotone nonlinear systems considering a max-separable Lyapunov
function with possibly non-smooth components Vi . The implication 1) ⇒ 4) was shown in [37]–
[39] for particular classes of scaling function ψ. For example, if we choose ψi (xi , yi ) = di (xi )yi
with di (xi ) > 0 for xi > 0, then statement 4) recovers the results in [37], [39]. However, contrary
to [37]–[39], neither homogeneity nor sub-homogeneity of f is required in Theorem 1.
⊳
IV. STABILITY OF MONOTONE SYSTEMS WITH DELAYS
In this section, delay-independent stability of nonlinear systems of the form
ẋ(t) = g x(t), x(t − τ (t)) , t ≥ 0,
x(t) = ϕ(t)
, t ∈ [−τmax , 0],
(24)
is considered. Here, g : Rn+ × Rn+ → Rn is locally Lipschitz continuous with g(0, 0) = 0,
ϕ ∈ C [−τmax , 0], Rn+ is the vector-valued function specifying the initial state of the system,
and τ is the time-varying delay which satisfies the following assumption:
Assumption 1: The delay τ : R+ → R+ is continuous with respect to time and satisfies
lim t − τ (t) = +∞.
t→+∞
(25)
Note that τ is not necessarily continuously differentiable and that no restriction on its derivative
(such as τ̇ (t) < 1) is imposed. Roughly speaking, condition (25) implies that as t increases,
the delay τ (t) grows slower than time itself. It is easy to verify that all bounded delays,
irrespectively of whether they are constant or time-varying, satisfy Assumption 1. Moreover,
delays satisfying (25) may be unbounded (take, for example, τ (t) = γt with γ ∈ (0, 1)).
Unlike the non-delayed system (1), the solution of the time-delay system (24) is not uniquely
determined by a point-wise initial condition x0 , but by the continuous function ϕ defined over
the interval [−τmax , 0]. Assumption 1 implies that there is a sufficiently large T > 0 such that
t − τ (t) > 0 for all t > T . Define
τmax = − inf
0≤t≤T
t − τ (t) .
Clearly, τmax ∈ R+ is bounded (τmax < +∞). Therefore, the initial condition ϕ is defined on a
bounded set [−τmax , 0] for any delay satisfying Assumption 1, even if it is unbounded. Since g
is Lipschitz continuous and τ is a continuous function of time, the existence and uniqueness of
solutions to (24) follow from [52, Theorem 2]. We denote the solution to (24) corresponding to
the initial condition ϕ by x(t, ϕ).
From this point on, it is assumed that the time-delay system (24) satisfies the next assumption:
Assumption 2: The following properties hold:
1) g(x, y) satisfies Kamke condition in x for each y, i.e.,
x′ ≤ x and x′i = xi ⇒ gi (x′ , y) ≤ gi (x, y),
(26)
holds for any y ∈ Rn+ and all i ∈ In .
2) g(x, y) is order-preserving in y for each x, i.e.,
y ′ ≤ y ⇒ g(x, y ′) ≤ g(x, y),
(27)
holds for any x ∈ Rn+ .
System (24) is called monotone if given two initial conditions ϕ, ϕ′ ∈ C [−τmax , 0], Rn+ with
ϕ′ (t) ≤ ϕ(t) for all t ∈ [−τmax , 0], then
x(t, ϕ′ ) ≤ x(t, ϕ),
∀t ∈ R+ .
(28)
It follows from [19, Theorem 5.1.1] that Assumption 2 ensures the monotonicity of (24).
Furthermore, as the origin is an equilibrium for (24), the positive orthant Rn+ is forward invariant,
i.e., x(t, ϕ) ∈ Rn+ for all t ∈ R+ when ϕ(t) ∈ Rn+ for t ∈ [−τmax , 0].
We are interested in stability of the time-delay system (24) under the assumption that the
delay-free system
ẋ t = g x(t), x(t) =: f x(t) ,
(29)
has an asymptotically stable equilibrium point at the origin. Since time-delays may, in general,
induce oscillations and even instability [53], the origin is not necessarily stable for the time-delay
system (24). However, the following theorem shows that asymptotic stability of the origin for
monotone nonlinear systems is insensitive to time-delays satisfying Assumption 1.
Theorem 2: Consider the time-delay system (24) under Assumption 2. Then, the following
statements are equivalent:
1) The time-delay monotone system (24) has an asymptotically stable equilibrium point at
the origin for all time varying-delays satisfying Assumption 1.
2) For the non-delayed monotone system (29), any of the equivalent conditions in the statement
of Theorem 1 hold.
Proof: The proof is given in Appendix B.
According to Theorem 2, local asymptotic stability of the origin for a delay-free monotone
system of the form (29) implies local asymptotic stability of the origin also for (24) with bounded
and unbounded time-varying delays. Theorem 2 does not explicitly give any estimate of the region
of attraction for the origin. However, its proof shows that the stability conditions presented
in Theorem 1 for non-delayed monotone systems can provide such estimates, leading to the
following practical tests.
T1. Assume that for the delay-free monotone system (29), we can characterize asymptotic
stability of the origin through a max-separable Lyapunov function V satisfying the second
statement of Theorem 1. Then, for the time-delay system (24), the origin is asymptotically
stable with respect to initial conditions satisfying
ϕ(t) ∈ { x ∈ Rn+ | 0 ≤ xi ≤ Vi⁻¹(c), i ∈ In },
for t ∈ [−τmax , 0] with c = min_{i∈In} Vi (vi ).
T2. If we demonstrate the existence of a function ρ such that the non-delayed system (29) satisfies the third statement of Theorem 1, then (24) with time-delays satisfying Assumption 1
has an asymptotically stable equilibrium point at the origin for which the region of attraction
includes initial conditions ϕ that satisfy
0 ≤ ϕ(t) ≤ ρ(s̄),
t ∈ [−τmax , 0].
T3. If we find a vector w > 0 such that g(w, w) < 0 and that the solution x(t, w) to the
delay-free monotone system (29) converges to the origin, then the solution x(t, ϕ) to the
time-delay system (24) converges to the origin for any initial condition ϕ that satisfies
0 ≤ ϕ(t) ≤ w,
t ∈ [−τmax , 0].
The following example illustrates the results of Theorem 2.
Example 2: Consider the time-delay system
ẋ(t) = g(x(t), x(t − τ (t))) = ( −5x1 (t) + x1 (t) x2²(t − τ (t)) , x1 (t − τ (t)) − 2x2²(t) )ᵀ.   (30)
One can verify that g satisfies Assumption 2. Thus, the system (30) is monotone on R2+ . According to Example 1, this system without time-delays has an asymptotically stable equilibrium
at the origin. Therefore, Theorem 2 guarantees that for the time-delay system (30), the origin
is still asymptotically stable for any bounded and unbounded time-varying delays satisfying
Assumption 1.
We now provide an estimate of the region of attraction for the origin. Example 1 shows that for the system (30) without time-delays, the function ρ(s) = (s, √s), s ∈ [0, 4], satisfies the third statement of Theorem 1. It follows from stability test T2 that the solution x(t, ϕ) to (30) starting from initial conditions
0 ≤ ϕ(t) ≤ (4, 2)ᵀ,   t ∈ [−τmax , 0],
converges to the origin.
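A minimal Python sketch (illustration, not from the paper): simulating the time-delay system (30) with a bounded time-varying delay τ(t) = 1 + 0.5 sin(t) by forward-Euler integration with a history buffer, starting from the constant initial function ϕ(t) = (4, 2) on [−τmax, 0]. The specific delay and step size are stand-ins chosen only for the demonstration.

import numpy as np

dt, T, tau_max = 1e-3, 30.0, 1.5
n_hist = int(tau_max / dt) + 1
steps = int(T / dt)
x = np.tile(np.array([4.0, 2.0]), (n_hist + steps + 1, 1))   # history + trajectory

def tau(t):
    return 1.0 + 0.5 * np.sin(t)          # bounded time-varying delay, tau(t) <= tau_max

for k in range(steps):
    t = k * dt
    i = n_hist + k                        # index of x(t)
    j = i - int(round(tau(t) / dt))       # index of x(t - tau(t))
    x1, x2 = x[i]
    x1d, x2d = x[j]
    x[i + 1] = x[i] + dt * np.array([-5.0 * x1 + x1 * x2d**2, x1d - 2.0 * x2**2])

print("x(T) =", x[n_hist + steps])        # approaches the origin as T grows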
Remark 4: Our results can be extended to monotone nonlinear systems with heterogeneous
delays of the form
ẋi (t) = gi x(t), xτi (t) ,
i ∈ In ,
(31)
where g(x, y) = (g1 (x, y), . . . , gn (x, y))ᵀ satisfies Assumption 2, and
xτi (t) := (x1 (t − τi1 (t)), . . . , xn (t − τin (t)))ᵀ.
If the delays τij , i, j ∈ In , satisfy Assumption 1, then asymptotic stability of the origin for the
delay-free monotone system (29) ensures that (31) with heterogeneous time-varying delays also
has an asymptotically stable equilibrium point at the origin
⊳
V. APPLICATIONS OF THE MAIN RESULTS
In this section, we will present several examples to illustrate how our main results recover
and generalize previous results on delay-independent stability of monotone nonlinear systems.
A. Homogeneous monotone systems
First, we consider a particular class of monotone nonlinear systems whose vector fields are
homogeneous in the sense of the following definition.
Definition 3: Given an n-tuple r = (r1 , . . . , rn ) of positive real numbers and λ > 0, the
dilation map δλr : Rn → Rn is defined as
δλr x := λr1 x1 , . . . , λrn xn .
When r = 1n , the dilation map is called the standard dilation map. A vector field f : Rn → Rn
is said to be homogeneous of degree p ∈ R with respect to the dilation map δλr if
f δλr (x) = λp δλr f (x) ,
∀x ∈ Rn , ∀λ > 0.
Note that the linear mapping f (x) = Ax is homogeneous of degree zero with respect to the
standard dilation map.
The following result, which is a direct consequence of Theorem 2, establishes a necessary
and sufficient condition for global asymptotic stability of homogeneous monotone systems with
time-varying delays. By global asymptotic stability, we mean that the origin is asymptotically
stable for all nonnegative initial conditions.
Corollary 1: For the time-delay system (24), suppose Assumption 2 holds. Suppose also that
f (x) := g(x, x) is homogeneous of degree p ∈ R+ with respect to the dilation map δλr . Then,
the following statements are equivalent:
1) There exists a vector w > 0 such that f (w) < 0.
2) The homogeneous monotone system (24) has a globally asymptotically stable equilibrium
point at the origin for all ϕ ∈ C [−τmax , 0], Rn+ and all time-delays satisfying Assumption 1.
Proof: The implication 2) ⇒ 1) follows directly from Theorem 2 and Remark 2. We will
show that 1) implies 2).
1) ⇒ 2): Let ρi (s) = s^{ri/rmax} wi , where rmax = max_{i∈In} ri .
For any positive constant s̄ > 0, it is clear that ρi is of class K on [0, s̄], and ρ−1
is continuously
i
differentiable and satisfy (18). As f is homogeneous of degree p ∈ R+ with respect to the
dilation map δλr , it follows that
f ◦ ρ(s) = g ρ(s), ρ(s) = g δsr1/rmax (w), δsr1/rmax (w)
= sp/rmax δsr1/rmax g(w, w)
= sp/rmax δsr1/rmax f (w) .
(32)
Since f (w) < 0, the right-hand side of equality (32) is negative definite for all s ∈ [0, s̄].
Therefore, according to Theorem 2 and stability test T2, the time-delay system (24) has an
asymptotically stable equilibrium point at the origin for any 0 ≤ ϕ(t) ≤ ρ(s̄), t ∈ [−τmax , 0].
To prove global asymptotic stability, suppose we are given a nonnegative initial condition
ϕ ∈ C [−τmax , 0], Rn+ . Since wi > 0 for each i ∈ In and ϕ is continuous (hence, bounded) on
[−τmax , 0], there exists a sufficiently large s̄ > 0 such that ϕ(t) ≤ ρ(s̄) for all t ∈ [−τmax , 0].
It now follows immediately from the argument in the previous paragraph that the origin is
asymptotically stable with respect to any nonnegative initial condition ϕ.
Remark 5: Delay-independent stability of homogeneous monotone systems with time-varying
delays satisfying Assumption 1 was previously considered in [40] by using a max-separable
Lyapunov function with components
Vi (xi ) = ρi⁻¹(xi ) = (xi/wi)^{rmax/ri},   i ∈ In .
Note, however, that the proof of Theorem 2 differs significantly from the analysis in [40]. The
main reason for this is that the homogeneity assumption, which plays a key role in the stability
proof in [40], is not satisfied for general monotone systems.
⊳
B. Sub-homogeneous monotone systems
Another important class of monotone nonlinear systems are those with sub-homogeneous
vector fields:
Definition 4: A vector field f : Rn+ → Rn is said to be sub-homogeneous of degree p ∈ R if
f (λx) ≤ λp f (x),
∀x ∈ Rn+ , ∀λ ≥ 1.
Theorem 2 allows us to show that global asymptotic stability of the origin for monotone
systems whose vector fields are sub-homogeneous is insensitive to bounded and unbounded
time-varying delays.
Corollary 2: Consider the time-delay system (24) under Assumption 2. Suppose also that
f (x) := g(x, x) is sub-homogeneous of degree p ∈ R+ . If the origin for the delay-free monotone
system (29) is globally asymptotically stable, then the sub-homogeneous monotone system (24)
has a globally asymptotically stable equilibrium at the origin for all time-varying delays satisfying Assumption 1.
Proof: The origin for the sub-homogeneous monotone system (29) is globally asymptotically
stable. Thus, for any constant α > 0, there exists a vector w > 0 such that α1n ≤ w and the
solution x(t, w) to the delay-free system (29) converges to the origin [37, Theorem 4.1]. It
follows from Theorem 2 and stability test T3 that the time-delay system (24) is asymptotically
stable with respect to initial conditions satisfying 0 ≤ ϕ(t) ≤ α1n , t ∈ [−τmax , 0].
To complete the proof, let ϕ ∈ C [−τmax , 0], Rn+ be an arbitrary initial condition. As α > 0
and ϕ is continuous (hence, bounded) on [−τmax , 0], we can find α > 0 such that ϕ(t) ≤ α1n for
t ∈ [−τmax , 0]. This together with the above observations implies that the origin is asymptotically
stable for all nonnegative initial conditions.
Remark 6: In [41], it was shown that global asymptotic stability of the origin for subhomogeneous monotone systems is independent of bounded time-varying delays. In this work,
we establish insensitivity of sub-homogeneous monotone systems to the general class of possibly
unbounded delays described by Assumption 1, which includes bounded delays as a special case.
⊳
C. Sub-homogeneous (non-monotone) positive systems
Finally, motivated by results in [54], we consider the time-delay system
ẋ(t) = g(x(t), x(t − τ (t))) = h x(t) + d x(t − τ (t)) .
We assume that h and d satisfy Assumption 3.
(33)
Assumption 3: The following properties hold:
1) For each i ∈ In , hi (x) ≥ 0 for x ∈ Rn+ with xi = 0;
2) For all x ∈ Rn+ , d(x) ≥ 0;
3) Both h and d are sub-homogeneous of degree p ∈ R+ ;
4) For any x ∈ Rn+ \ {0}, there is i ∈ In such that
sup{ di (z′) | 0 ≤ z′ ≤ x } < − sup{ hi (z) | 0 ≤ z ≤ x, zi = xi }.
Note that under Assumption 3, the time-delay system (33) is not necessarily monotone.
However, Assumptions 3.1 and 3.2 ensure the positivity of (33) [19, Theorem 5.2.1].
In [54], it was shown that if Assumption 3 holds, then the positive nonlinear system (33) with
constant delays (τ (t) = τmax , t ∈ R+ ) has a globally asymptotically stable equilibrium at the
origin for all τmax ∈ R+ . Theorem 2 helps us to extend the result in [54] to time-varying delays
satisfying Assumption 1.
Corollary 3: For the time-delay system (33), suppose Assumption 3 holds. Then, the origin
is globally asymptotically stable for all time-delays satisfying Assumption 1.
Proof: For any x, y ∈ Rn+ and each i ∈ In , define
ḡi(x, y) = sup{ hi(z) + di(z′) | 0 ≤ z ≤ x, zi = xi, 0 ≤ z′ ≤ y }.
It is straightforward to show that ḡ(x, y) satisfies Assumption 2. Thus, the time-delay system
ẋ(t) = ḡ(x(t), x(t − τ(t))),    (34)
is monotone. Under Assumption 3, the sub-homogeneous monotone system (34) without delays
(τ (t) = 0) has a globally asymptotically stable equilibrium at the origin [54, Theorem III.2].
Therefore, according to Corollary 2, the origin for the time-delay system (34) is also globally
asymptotically stable for any time-delays satisfying Assumption 1.
As g(x, y) ≤ ḡ(x, y) for any x, y ∈ Rn+ , it follows from [19, Theorem 5.1.1] that for any
initial condition ϕ,
x(t, ϕ, g) ≤ x(t, ϕ, ḡ),
t ∈ R+ ,
(35)
where x(t, ϕ, g) and x(t, ϕ, ḡ) are solutions to (33) and (34), respectively, for a common initial condition ϕ. Since x = 0 is a globally asymptotically stable equilibrium point for (34),
x(t, ϕ, ḡ) → 0 as t → ∞. Moreover, as (33) is a positive system, x(t, ϕ, g) ≥ 0 for t ∈ R+ . We
can conclude from (35) and the above observations that for any nonnegative initial condition ϕ,
x(t, ϕ, g) converges to the origin. Hence, for the time-delay system (33), the origin is globally
asymptotically stable.
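The comparison argument above can be illustrated by direct simulation. The following sketch uses hypothetical choices h(x) = −2x, d(x) = x (which satisfy Assumption 3 with degree p = 1) and the unbounded delay τ(t) = t/2; none of these choices are taken from the paper, and the integration scheme is a simple forward Euler method with interpolated history.

```python
# Minimal simulation sketch for xdot(t) = h(x(t)) + d(x(t - tau(t))) under
# hypothetical h, d, tau consistent with Assumption 3 and Assumption 1.
import numpy as np

h = lambda x: -2.0 * x        # h(x) = -2x
d = lambda x: 1.0 * x         # d(x) = x, so Assumption 3.4 holds (x_i < 2 x_i for x_i > 0)
tau = lambda t: 0.5 * t       # unbounded delay with t - tau(t) -> infinity

dt, T = 0.01, 20.0
ts = np.arange(0.0, T, dt)
xs = np.zeros((len(ts), 2))
xs[0] = np.array([1.0, 3.0])  # nonnegative initial condition

for k in range(len(ts) - 1):
    t = ts[k]
    s = max(t - tau(t), 0.0)
    # delayed state x(t - tau(t)), obtained by interpolating the stored trajectory
    x_delayed = np.array([np.interp(s, ts[:k+1], xs[:k+1, j]) for j in range(2)])
    xs[k+1] = xs[k] + dt * (h(xs[k]) + d(x_delayed))

print(xs[-1])   # decays toward the origin, consistent with Corollary 3
```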
VI. C ONCLUSIONS
In this paper, we have presented a number of results that extend fundamental stability properties
of monotone linear systems to monotone nonlinear systems. Specifically, we have shown that
for such nonlinear systems, equivalence holds between asymptotic stability of the origin, the
existence of a max-separable Lyapunov function on a compact set, and D-stability. In addition,
we have demonstrated that if the origin for a delay-free monotone system is asymptotically stable,
then the corresponding system with bounded and unbounded time-varying delays also has an
asymptotically stable equilibrium point at the origin. We have derived a set of necessary and
sufficient conditions for establishing delay-independent stability of monotone nonlinear systems,
which allow us to extend several earlier works in the literature. We have illustrated the main
results with several examples.
A PPENDIX
Before proving the main results of the paper, namely, Theorems 1 and 2, we first state a key
lemma which shows that all components of a max-separable Lyapunov function are necessarily
monotonically increasing.
Lemma 1: Consider the monotone system (1) and the max-separable function V : Rn+ → R+
as in (7) with Vi : R+ → R+ differentiable for all i ∈ In . Suppose that there exist functions
ν1 , ν2 of class K such that
ν1 (xi ) ≤ Vi (xi ) ≤ ν2 (xi ),
(36)
for all xi ∈ R+ and all i ∈ In . Suppose also that there exists a positive definite function µ such
that
D + V (x) ≤ −µ(V (x)),
(37)
for all x ∈ Rn+ . Then, the functions Vi satisfy, for all xi > 0,
(∂Vi/∂xi)(xi) > 0.    (38)
Proof: For some j ∈ In , consider the state x = ej xj , where ej is the j th column of the
identity matrix I ∈ Rn×n , and xj ∈ R+ . From (36), Vj (xj ) > 0 for any xj > 0 and Vi (xi ) = 0
for i 6= j. Thus, the set J in (10) satisfies J (ej xj ) = {j} for all xj > 0. Evaluating (37) through
Proposition 2 leads to
D⁺V(ej xj) = (∂Vj/∂xj)(xj) fj(ej xj) ≤ −µ(V(ej xj)) < 0    (39)
for xj > 0. The strict inequality in (39) implies that ∂Vj/∂xj is nonzero and of constant sign for
all xj > 0. As a negative sign would yield Vj(xj) < 0 (and hence violate (36)), and j was chosen
arbitrarily, (38) holds.
A. Proof of Theorem 1
The theorem will be proven by first showing the equivalence 1) ⇔ 2) ⇔ 3). Next, this
equivalence will be exploited to subsequently show 3) ⇒ 4) and 4) ⇒ 1). Consequently, we
have
1) ⇔ 2) ⇔ 3) ⇒ 4) ⇒ 1),
which proves the desired result.
First, we note that the implication 2) ⇒ 1) follows directly from Lyapunov stability theory
(e.g., [46]) and we proceed to prove that 1) implies 2).
1) ⇒ 2): By asymptotic stability of the origin as in Definition 2, the region of attraction of
x = 0 defined as
A := { x0 ∈ Rn+ | lim_{t→∞} x(t, x0) = 0 }
is nonempty. In fact, we can find some δ > 0 such that all states satisfying 0 ≤ x < δ1 are in
A. Then, as the system (1) is monotone, the reasoning from [11, Theorem 3.12] (see also [51,
Theorem 2.5] for a more explicit statement) can be followed to show the existence of a vector
v such that 0 < v < δ1 and f (v) < 0.
Let ω(t) = x(t, v), t ∈ R+ be the solution to (1) starting from such a v. By the local Lipschitz
continuity of f, ω is continuously differentiable. As v ∈ A, ωi(t) → 0 as t → ∞ for each
i ∈ In . Moreover, note that v is an element of the set
Ω = {x ∈ Rn+ | f (x) < 0}.
According to [19, Proposition 3.2.1], Ω is forward invariant so ω(t) ∈ Ω for all t ∈ R+ . Thus,
the components ωi (t) are strictly decreasing in t, i.e., ω̇i (t) < 0 for t ∈ R+ . This further implies
that, for a given state component xi ∈ (0, vi ], there exists a unique t ∈ R+ such that xi = ωi (t).
Let
Ti(xi) = { t ∈ R+ | xi = ωi(t) }.
From the definition, it is clear that Ti (xi ) = ωi−1(xi ). Since ω̇i (t) < 0 for t ∈ R+ , the inverse of
ωi , i.e., the function Ti , is continuously differentiable and strictly decreasing for all xi ∈ (0, vi ].
We define, as in [43], [44], the component functions
Vi(xi) = e^{−Ti(xi)},   i ∈ In.    (40)
Note that Vi (0) = 0. Moreover, Vi are continuously differentiable and strictly increasing for all
xi ∈ (0, vi ] as a result of the properties of the function Ti . Therefore, the component functions
Vi in (40) satisfy (16) for some functions ν1 , ν2 of class K. Moreover, from [44, Theorem 3.2],
the upper-right Dini derivative of the max-separable Lyapunov function (7) with components
(40) is given by
D + V (x) ≤ −V (x),
for all x ∈ X with X = {x ∈ Rn+ | 0 ≤ x ≤ v}. This shows that (17) holds, and hence the
proof is complete.
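The construction above is explicit enough to be reproduced numerically. The sketch below does so for a hypothetical two-dimensional cooperative linear system (not an example from the paper): it picks a point v with f(v) < 0, integrates ω(t) = x(t, v), and builds Vi(xi) = e^{−Ti(xi)} by inverting the decreasing trajectory components via interpolation.

```python
# Minimal numerical sketch of the construction V_i(x_i) = exp(-T_i(x_i)).
# The vector field f and the point v below are hypothetical choices.
import numpy as np

f = lambda x: np.array([-x[0] + 0.5 * x[1], 0.25 * x[0] - x[1]])
v = np.array([1.0, 1.0])                      # satisfies f(v) < 0 componentwise

# Integrate omega(t) = x(t, v) with forward Euler and store the trajectory.
dt, T = 1e-3, 25.0
ts = np.arange(0.0, T, dt)
omega = np.zeros((len(ts), 2))
omega[0] = v
for k in range(len(ts) - 1):
    omega[k+1] = omega[k] + dt * f(omega[k])

def V_i(i, xi):
    """Component V_i(x_i) = exp(-T_i(x_i)); T_i = omega_i^{-1}, obtained by
    interpolating the strictly decreasing trajectory component omega_i."""
    Ti = np.interp(xi, omega[::-1, i], ts[::-1])   # reverse to get increasing abscissae
    return np.exp(-Ti)

def V(x):
    return max(V_i(i, x[i]) for i in range(2))

print(V([0.3, 0.2]))   # value of the max-separable Lyapunov function at x
```

The construction is valid for xi ∈ (0, vi]; outside that range the interpolation above merely clips, which is sufficient for illustration.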
We now proceed to show equivalence between 2) and 3). We begin with the implication
2) ⇒ 3).
2) ⇒ 3): According to Lemma 1, the functions Vi are monotonically increasing. Thus, their
inverses
ρi(s) = Vi^{-1}(s)    (41)
can be defined for all s ∈ [0, Vi (vi )]. Define
s̄ := min_{i∈In} Vi(vi).
Since v > 0, it follows from (16) that s̄ > 0. Moreover, due to continuous differentiability of Vi
and Lemma 1, ρi is of class K and satisfies (18).
In the remainder of the proof, it will be shown that the function ρ with components ρi defined
in (41) satisfies (19) for all s ∈ [0, s̄]. Thereto, consider a state xs , parameterized by s according
to the definition xs := ρ(s). Since s̄ ≤ Vi (vi ) for all i ∈ In , it holds that xs ∈ X for all s ∈ [0, s̄].
Evaluating (17) for such an xs yields
D⁺V(xs) = max_{j∈J(xs)} (∂Vj/∂xj)(xsj) fj(xs) ≤ −µ(V(xs)).    (42)
The definition xsi = ρi (s), i ∈ In , implies, through (41), that Vi (xsi ) = s for all s ∈ [0, s̄].
Consequently, V (xs ) = s and the set J (xs ) in (42) satisfies J (xs ) = In , such that (42) implies
(∂Vi/∂xi)(ρi(s)) fi(ρ(s)) ≤ −µ(s)    (43)
for s ∈ [0, s̄] and i ∈ In. Since Vi is strictly increasing and µ is positive definite, fi(ρ(s)) ≤ 0.
Define functions ri : [0, s̄] → R as
ri(s) := sup{ (∂Vi/∂xi)(zi) | ρi(s) ≤ zi ≤ ρi(s̄) }.
By continuous differentiability of Vi and the result of Lemma 1, it follows that ri exists and
satisfies ri (s) > 0 for all s ∈ [0, s̄]. Moreover, it is easily seen that
ri(s) ≥ (∂Vi/∂xi)(ρi(s)).
This together with (43) implies that
ri(s) fi(ρ(s)) ≤ (∂Vi/∂xi)(ρi(s)) fi(ρ(s)) ≤ −µ(s)
for all s ∈ [0, s̄]. Here, the first inequality follows from the observation that fi(ρ(s)) ≤ 0. Then,
strict positivity of ri implies that
fi(ρ(s)) ≤ −µ(s)/ri(s),
for all s ∈ [0, s̄] and any i ∈ In . Since ri is strictly positive and µ is positive definite, the
function αi (s) = µ(s)/ri (s) is positive definite and (19) holds.
We continue with the reverse implication 3) ⇒ 2).
3) ⇒ 2) : Define v := ρ(s̄). Since ρi , i ∈ In , are of class K and s̄ > 0, we have v > 0. Let
Vi be such that
Vi(xi) = ρi^{-1}(xi)    (44)
for xi ∈ [0, vi ]. Note that the inverse of ρi exists on this compact set as ρi is of class K and,
hence, is strictly increasing. Because of the same reason, it is clear that Vi as in (44) satisfies
(16) for some functions ν1 , ν2 of class K.
The remainder of the proof will show that the max-separable Lyapunov function (7) with
components (44) satisfies (17). By Proposition 2, the upper-right Dini derivative of V along
solutions of (1) is given by
D⁺V(x) = max_{j∈J(x)} (∂Vj/∂xj)(xj) fj(x).    (45)
Define X := {x ∈ Rn+ | 0 ≤ x ≤ v}, choose any x ∈ X and consider any j ∈ J (x), where
J (x) is defined in (10). Then, Vj (xj ) = V (x), such that the use of equality (44) leads to
xj = ρj (Vj (xj )) = ρj (V (x)).
(46)
Note that for i different from j, a similar argument establishes that Vi(xi) ≤ V(x) and,
thus, xi ≤ ρi (V (x)). Combining this with (46) gives
x ≤ ρ(V (x)).
(47)
Since f satisfies the Kamke condition (4), it follows from (46) and (47) that fj(x) ≤ fj(ρ(V(x))).
Moreover, ∂Vj /∂xj > 0 for all xj ∈ (0, vj ] due to (44) and the fact that ρi satisfies (18). The
above observations together with (45) imply that
D⁺V(x) ≤ max_{j∈J(x)} (∂Vj/∂xj)(xj) fj(ρ(V(x))).    (48)
Next, define functions r̄i : [0, s̄] → R+ as
r̄i(s) := inf{ (∂Vi/∂xi)(zi) | ρi(s) ≤ zi ≤ vi }.    (49)
Note that r̄i is strictly positive for s ∈ (0, s̄]. Moreover, for any j ∈ J(x), recalling the equality (46), it follows that
r̄j(V(x)) = r̄j(Vj(xj)) = inf{ (∂Vj/∂xj)(zj) | xj ≤ zj ≤ vj } ≤ (∂Vj/∂xj)(xj).    (50)
Then, returning to the Dini derivative of V in (48), the use of (19), and subsequently, the
application of (50), leads to
D⁺V(x) ≤ max_{j∈J(x)} − (∂Vj/∂xj)(xj) αj(V(x)) ≤ max_{j∈J(x)} − r̄j(V(x)) αj(V(x)),    (51)
for all x ∈ X . Recall that the functions αi are positive definite, whereas the functions r̄i in (49)
are strictly positive for s ∈ (0, s̄]. As a result, the functions r̄i (s)αi (s) are positive definite and
there exists a positive definite function µ such that µ(s) ≤ r̄i (s)αi (s) for all s ∈ [0, s̄] and all
i ∈ In . Applying this result to (51) yields
D + V (x) ≤ −µ(V (x)),
for all x ∈ X , proving (17) and finalizing the proof.
We now show that 3) implies 4) by exploiting the equivalence 1) ⇔ 2) ⇔ 3).
3) ⇒ 4): First, we show that the system (21) is monotone. Thereto, recall that monotonicity
of (1) implies
x′ ≤ x, x′i = xi ⇒ fi(x′) ≤ fi(x) ⇒ ψi(x′i, fi(x′)) ≤ ψi(xi, fi(x)),    (52)
where the latter implication follows from (20). Then, (52) represents the Kamke condition for
the vector field ψ(x, f ), such that monotonicity of (21) follows from Proposition 1.
Next, note that ψ(0, f(0)) = 0, implying that the origin is an equilibrium point of (21). In
order to prove asymptotic stability of the origin, we recall that f satisfies, by assumption, (19)
for some function ρ. From this, we have
ψ(ρ(s), f(ρ(s))) ≤ ψ(ρ(s), −α(s)),    (53)
where the inequality is maintained due to the fact that ψi is monotonically increasing in the
second argument for all i ∈ In . The functions αi and ρ are positive definite, hence −α(s) < 0
and ρ(s) > 0 for all s ∈ (0, s̄]. Then, from (20), we have
ψ(ρ(s), −α(s)) < ψ(ρ(s), 0) = 0,
for s ∈ (0, s̄], where the equality follows from ψ(x, 0) = 0 for any x ∈ Rn+. Hence, ψ(ρ(s), −α(s))
is negative definite and (53) is again of the form (19). From the implication 3) ⇒ 2) ⇒ 1), we
conclude the origin is asymptotically stable.
Finally, we prove that 4) implies 1).
4) ⇒ 1): Assume that the system (21) is asymptotically stable for any Lipschitz continuous
function ψ satisfying statement 4). Particularly, let ψ(x, y) = y. Then, the monotone system (1)
is asymptotically stable.
B. Proof of Theorem 2
1) ⇒ 2): Assume that x = 0 for the time-delay system (24) is asymptotically stable for
all delays satisfying Assumption 1. Particularly, let τ (t) = 0. Then, the non-delayed monotone
system (29) has an asymptotically stable equilibrium point at the origin.
2) ⇒ 1): For investigating asymptotic stability of the time-delay monotone system (24), we
employ the Lyapunov–Razumikhin approach, which allows us to impose minimal restrictions on time-varying delays [52]. In particular, we make use of the max-separable Lyapunov function that
guarantees asymptotic stability of the origin without time-delays as a Lyapunov-Razumikhin
function.
Let ρ be a function such that the delay-free system (29) satisfies (19), i.e.,
g(ρ(s), ρ(s)) ≤ −α(s),    (54)
holds for s ∈ [0, s̄]. Define v := ρ(s̄) and let X be the compact set (15). First, we show that for
any ϕ(t) ∈ X , t ∈ [−τmax , 0], the solution x(t, ϕ) satisfies x(t, ϕ) ∈ X for all t ∈ R+ .
Clearly, x(0, ϕ) = ϕ(0) ∈ X . In order to establish a contradiction, suppose that the statement
x(t, ϕ) ∈ X , t ∈ R+ , is not true. Then, there is i ∈ In and a time t̂ ∈ R+ such that x(t, ϕ) ∈ X
for all t ∈ [0, t̂], xi (t̂, ϕ) = vi , and
D + xi (t̂, ϕ) ≥ 0.
(55)
As x(t̂, ϕ) ≤ v, it follows from Assumption 2.1 that
gi(x(t̂, ϕ), y) ≤ gi(v, y),    (56)
for y ∈ Rn+ . As t̂−τ (t̂) ∈ [−τmax , t̂] and ϕ(t) ∈ X for t ∈ [−τmax , 0], we have x(t̂−τ (t̂), ϕ) ∈ X
irrespectively of whether t̂ − τ (t̂) is nonnegative or not. Thus, from Assumption 2.2,
gi(y, x(t̂ − τ(t̂), ϕ)) ≤ gi(y, v),    (57)
for any y ∈ Rn+ . Using (56) and (57), the Dini-derivative of xi (t, ϕ) along the trajectories of (24)
at t = t̂ is given by
D⁺xi(t̂, ϕ) = gi(x(t̂, ϕ), x(t̂ − τ(t̂), ϕ)) ≤ gi(v, v) = gi(ρ(s̄), ρ(s̄)) ≤ −αi(s̄) < 0,
which contradicts (55). Therefore, x(t, ϕ) ∈ X for t ∈ R+ .
We now prove the asymptotic stability of the origin. According to the proof of Theorem 1, if
the delay-free system (29) satisfies (54), then it admits a max-separable Lyapunov function (7)
with components Vi(xi) = ρi^{-1}(xi) defined on X, see (44), such that
D⁺V(x) = max_{j∈J(x)} (∂Vj/∂xj)(xj) gj(x, x) ≤ −µ(V(x)),    (58)
holds for all x ∈ X . For any ϕ(t) ∈ X , from (44), we have
xj(t, ϕ) = ρj(V(x(t, ϕ))),   j ∈ J(x(t, ϕ)),    (59)
with J as in (10). Combining (59) with the observation that xi(t, ϕ) ≤ ρi(V(x(t, ϕ))) for any i ∈ In implies that
x(t, ϕ) ≤ ρ(V(x(t, ϕ))) =: x̄(t)    (60)
for ϕ(t) ∈ X . At this point we recall that x(t, ϕ) ∈ X for all t ∈ R+ . Thus, V (x(t, ϕ)) ≤ V (v).
This in turn implies that ρ(V(x(t, ϕ))) ≤ v and, hence, x̄(t) ∈ X for any t ∈ R+. Next, it
follows from Assumption 2.1 that
gj(x(t, ϕ), y) ≤ gj(x̄(t), y),   j ∈ J(x(t, ϕ)),    (61)
for any ϕ(t) ∈ X and any y ∈ Rn+. This condition will be exploited later in the proof.
In the remainder of the proof, the max-separable Lyapunov function V of the delay-free system
(29) will be used as a candidate Lyapunov-Razumikhin function for the time-delay system (24).
To establish a Razumikhin-type argument [52], it is assumed that
V(x(s, ϕ)) < q(V(x(t, ϕ))),    (62)
for all s ∈ [t − τ (t), t], where q : R+ → R+ is a continuous non-decreasing function satisfying
q(r) > r for all r > 0. We will specify q later. By the definition of V , it follows that the
assumption (62) implies that
Vi (xi (t − τ (t), ϕ)) ≤ q(V (x(t, ϕ))),
for all i ∈ In . As a result,
x(t − τ(t), ϕ) ≤ ρ(q(V(x(t, ϕ)))) =: x̃(t),    (63)
such that the application of Assumption 2.2 yields
gi(y, x(t − τ(t), ϕ)) ≤ gi(y, x̃(t)),    (64)
for all i ∈ In and any y ∈ Rn+. Returning to the candidate Lyapunov–Razumikhin function V, its
upper-right Dini derivative along trajectories x(·, ϕ) reads
D⁺V(x(t, ϕ)) = max_{j∈J(x(t,ϕ))} (∂Vj/∂xj)(xj(t, ϕ)) gj(x(t, ϕ), x(t − τ(t), ϕ))
             ≤ max_{j∈J(x(t,ϕ))} (∂Vj/∂xj)(xj(t, ϕ)) gj(x̄(t), x(t − τ(t), ϕ))
             = max_{j∈J(x(t,ϕ))} (∂Vj/∂xj)(x̄j(t)) gj(x̄(t), x(t − τ(t), ϕ)),    (65)
where (61) was used to obtain the inequality (exploiting Proposition 1 as before) and (59) to get
the second equality. Next, recall that the assumption (62) implies (64), such that (65) is bounded
as
D⁺V(x(t, ϕ)) ≤ max_{j∈J(x(t,ϕ))} (∂Vj/∂xj)(x̄j(t)) gj(x̄(t), x̃(t))
             ≤ max_{j∈J(x̄(t))} (∂Vj/∂xj)(x̄j(t)) gj(x̄(t), x̃(t)).    (66)
Here, the second inequality follows from the observation that J (x̄(t)) = In for any t, such that
J (x(t, ϕ)) ⊆ J (x̄(t)).
We recall that the value of x̃(t) in (66) is dependent on the choice of the function q, see (63).
At this point, we assume that q can be chosen such that
0 ≤ gi(x̄(t), x̃(t)) − gi(x̄(t), x̄(t)) ≤ µ(V(x̄(t)))/(2kD),    (67)
holds for all x̄(t) ∈ X and for any i ∈ In . Here, k ≥ 1 will be chosen later and D > 0 is such
that
(∂Vi/∂xi)(xi) ≤ D,   ∀x ∈ X, ∀i ∈ In.    (68)
Note that such D exists by continuous differentiability of V and the fact that X is a compact
set. Also, we stress that the first inequality in (67) follows from Assumption 2.2 and the observation that x̄(t) ≤ x̃(t)
(compare (60) with (63), and recall that q(r) > r for all r > 0 and that the functions ρi are of class
K).
Now, under the assumption (67), it follows from (66) that
D⁺V(x(t, ϕ)) ≤ max_{j∈J(x̄(t))} [ (∂Vj/∂xj)(x̄j(t)) gj(x̄(t), x̄(t)) + (∂Vj/∂xj)(x̄j(t)) µ(V(x̄(t)))/(2kD) ]
             ≤ −(1 − 1/(2k)) µ(V(x̄(t))).    (69)
Here, (58) as well as the bound (68) are used and it is recalled that V (x̄(t)) = V (x(t, ϕ)) by the
choice of x̄(t) as (60). As (69) holds for any trajectory that satisfies (62) and k ≥ 1, asymptotic
stability of the origin follows from the Razumikhin stability theorem, see [52, Theorem 7],
provided that the assumption (67) holds.
In the final part of the proof, we will construct a function q that satisfies the assumption (67).
To this end, define the compact set
X̃ := {x ∈ Rn+ | 0 ≤ x ≤ 2v}.
Clearly, X ⊂ X̃. Since g in (24) is locally Lipschitz, there is a constant LX̃ > 0 such that
‖g(x, y′) − g(x, y)‖∞ ≤ LX̃ ‖y′ − y‖∞    (70)
for all x, y, y′ ∈ X̃ and with ‖x‖∞ = maxi |xi|. Since x̄(t) ∈ X, we have x̄(t) ∈ X̃. If x̃(t) ∈ X̃,
it follows from (70) that
gj(x̄(t), x̃(t)) − gj(x̄(t), x̄(t)) ≤ LX̃ max_{i∈In} {x̃i(t) − x̄i(t)}    (71)
                                 = LX̃ max_{i∈In} [ ρi(q(V(x̄(t)))) − ρi(V(x̄(t))) ],    (72)
where the property x̄(t) ≤ x̃(t) and the first inequality in (67) are used to obtain inequality (71).
Equality (72) follows from the definitions (60) and (63). The desired condition (67) holds if
ρi(q(V(x̄(t)))) − ρi(V(x̄(t))) ≤ µ̃(V(x̄(t)))/(2kDLX̃) ≤ µ(V(x̄(t)))/(2kDLX̃)    (73)
for all i ∈ In. Here, µ̃ : [0, s̄] → R+ is a function of class K that lower bounds the positive
definite function µ. Such a function µ̃ exists by the fact that µ is positive definite on the compact
set [0, s̄] [55]. At this point, we note that, even though x̄(t) ∈ X̃, this does not necessarily hold
for x̃(t). However, the condition (73) implies that
x̃(t) ≤ x̄(t) + (µ(V(x̄(t)))/(2kDLX̃)) 1n ≤ v + (µ(V(x̄(t)))/(2kDLX̃)) 1n,
such that choosing k sufficiently large guarantees x̃(t) ∈ X̃ . For this choice, the Lipschitz
condition (70) indeed holds.
After fixing k ≥ 1 as above and denoting s = V(x̄), consider the function q : [0, s̄] → R+ defined as
q(s) := min_{i∈In} ρi^{-1}( ρi(s) + µ̃(s)/(2kDLX̃) ).
This function satisfies (73) and, hence, (67). In addition, since the functions ρi and µ̃ are of
class K, q is nondecreasing. Also, it can be observed that q(s) > s for all s > 0 as required.
Therefore, all conditions on q are satisfied. This finalizes the proof of the implication 2) ⇒ 1).
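To make the role of q concrete, the following sketch evaluates the construction for hypothetical data. The class-K functions ρi, the lower bound µ̃, and the constants k, D, LX̃ below are placeholders chosen only for illustration; they are not quantities computed in the paper.

```python
# Minimal sketch of q(s) = min_i rho_i^{-1}( rho_i(s) + mu_tilde(s)/(2*k*D*L) )
# for hypothetical rho_i, mu_tilde, k, D, L.
rho     = [lambda s: 2.0 * s, lambda s: 0.5 * s]    # hypothetical class-K functions
rho_inv = [lambda u: u / 2.0, lambda u: 2.0 * u]    # their inverses
mu_tilde = lambda s: 0.1 * s                        # class-K lower bound on mu
k, D, L = 2.0, 1.5, 3.0                             # hypothetical constants

def q(s):
    return min(rho_inv[i](rho[i](s) + mu_tilde(s) / (2.0 * k * D * L))
               for i in range(len(rho)))

for s in [0.1, 0.5, 1.0]:
    print(s, q(s), q(s) > s)   # q is nondecreasing and q(s) > s for s > 0
```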
R EFERENCES
[1] B. Besselink, H.R. Feyzmahdavian, H. Sandberg, and M. Johansson. D-stability and delay-independent stability of monotone
nonlinear systems with max-separable Lyapunov functions. IEEE Conference on Decision and Control (CDC), 2016.
[2] P.D. Leenheer, D. Angeli, and E.D. Sontag. Monotone chemical reaction networks. Journal of mathematical chemistry,
41(3):295–314, 2007.
[3] L. Moreau. Stability of continuous-time distributed consensus algorithms. IEEE Conference on Decision and Control
(CDC), pages 3998–4003, 2004.
[4] E.D. Sontag. Molecular systems biology and control. European Journal of Control, 11(4-5):396–435, 2005.
[5] R.D. Yates. A framework for uplink power control in cellular radio systems. IEEE Journal on Selected Areas in
Communications, 13(7):1341–1347, 1995.
[6] H. R. Feyzmahdavian, M. Johansson, and T. Charalambous. Contractive interference functions and rates of convergence
of distributed power control laws. IEEE Transactions on Wireless Communications, 11(12):4494–4502, 2012.
[7] H. Boche and M. Schubert. The structure of general interference functions and applications. IEEE Transactions on
Information Theory, 54(11):4980–4990, 2008.
[8] H. R. Feyzmahdavian, T. Charalambous, and M. Johansson. Stability and performance of continuous-time power control
in wireless networks. IEEE Transactions on Automatic Control, 59(8):2012–2023, 2014.
[9] D. Angeli and A. Astolfi. A tight small-gain theorem for not necessarily ISS systems. Systems & Control Letters,
56(1):87–91, 2007.
[10] B.S. Rüffer. Small-gain conditions and the comparison principle. IEEE Transactions on Automatic Control, 55(7):1732–
1736, 2010.
[11] B.S. Rüffer, C.M. Kellett, and S.R. Weller. Connection between cooperative positive systems and integral input-to-state
stability of large-scale systems. Automatica, 46(6):1019–1027, 2010.
[12] P.D. Leenheer and D. Aeyels. Stability properties of equilibria of classes of cooperative systems. IEEE Transactions on
Automatic Control, 46(12):1996–2001, 2001.
[13] D. Angeli and E.D. Sontag. Monotone control systems. IEEE Transactions on Automatic Control, 48(10):1684–1698,
2003.
[14] A. Aswani and C. Tomlin. Monotone piecewise affine systems. IEEE Transactions on Automatic Control, 54(8):1913–1918,
2009.
[15] A. Rantzer and B. Bernhardsson. Control of convex monotone systems. IEEE Conference on Decision and Control (CDC),
pages 2378–2383, 2014.
[16] M.W. Hirsch. Systems of differential equations which are competitive or cooperative: I. limit sets. SIAM Journal on
Mathematical Analysis, 13(2):167–179, 1982.
[17] M.W. Hirsch. Systems of differential equations that are competitive or cooperative II: Convergence almost everywhere.
SIAM Journal on Mathematical Analysis, 16(3):423–439, 1985.
[18] M.W. Hirsch. Systems of differential equations which are competitive or cooperative: III. competing species. Nonlinearity,
1(1):51, 1988.
[19] H.L. Smith. Monotone Dynamical Systems: An Introduction to the Theory of Competitive and Cooperative Systems.
American Mathematical Society, 1995.
[20] L. Farina and S. Rinaldi. Positive Linear Systems: Theory and Applications. John Wiley & Sons, New York, 2000.
[21] H. Ito, S. Dashkovskiy, and F. Wirth. Capability and limitation of max- and sum-type construction of Lyapunov functions
for networks of iISS systems. Automatica, 48(6):1197–1204, 2012.
[22] H. Ito, B.S. Rüffer, and A. Rantzer. Max- and sum-separable Lyapunov functions for monotone systems and their level
sets. IEEE Conference on Decision and Control (CDC), pages 2371–2377, 2014.
[23] A. Rantzer. Scalable control of positive systems. European Journal of Control, 24:72–80, 2015.
[24] A.C. Enthoven and K.J. Arrow. A theorem on expectations and the stability of equilibrium. Econometrica, 24(3):288–293,
1956.
[25] K.J. Arrow and M. McManus. A note on dynamic stability. Econometrica, 26(3):448–454, 1958.
[26] C.R. Johnson. Sufficient conditions for D-stability. Journal of Economic Theory, 9(1):53–62, 1974.
[27] W. M. Haddad and V. Chellaboina. Stability theory for non-negative and compartmental dynamical systems with time
delay. Systems & Control Letters, 51(5):355–361, 2004.
[28] P. H. A. Ngoc. A Perron–Frobenius theorem for a class of positive quasi-polynomial matrices. Applied Mathematics
Letters, 19(8):747–751, 2006.
[29] M. Buslowicz. Simple stability conditions for linear positive discrete time systems with delays. Bulletin of the Polish
Academy of Sciences: Technical Sciences, 56:325–328, 2008.
[30] T. Kaczorek. Stability of positive continuous-time linear systems with delays. European Control Conference (ECC), pages
1610–1613, 2009.
[31] X. Liu, W. Yu, and L. Wang. Stability analysis of positive systems with bounded time-varying delays. IEEE Transactions
on Circuits and Systems II, 56(7):600–604, 2009.
[32] X. Liu, W. Yu, and L. Wang. Stability analysis for continuous-time positive systems with time-varying delays. IEEE
Transactions on Automatic Control, 55(4):1024–1028, 2010.
[33] X. Liu and C. Dang. Stability analysis of positive switched linear systems with delays. IEEE Transactions on Automatic
Control, 56(7):1684–1690, 2011.
[34] H. R. Feyzmahdavian, T. Charalambous, and M. Johansson. Asymptotic stability and decay rates of positive linear systems
with unbounded delays. IEEE Conference on Decision and Control Conference (CDC), pages 6337–6342, 2013.
[35] C. Briat. Robust stability and stabilization of uncertain linear positive systems via integral linear constraints: L1 -gain and
L∞ -gain characterization. International Journal of Robust and Nonlinear Control, 23(17):1932–1954, 2013.
[36] H. R. Feyzmahdavian, T. Charalambous, and M. Johansson. On the rate of convergence of continuous-time linear positive
systems with heterogeneous time-varying delays. European Control Conference (ECC), pages 3372–3377, 2013.
[37] V.S. Bokharaie, O. Mason, and F. Wirth. Stability and positivity of equilibria for subhomogeneous cooperative systems.
Nonlinear Analysis: Theory, Methods and Applications, 74(17):6416–6426, 2011.
[38] O. Mason and M. Verwoerd. Observations on the stability properties of cooperative systems. Systems & Control Letters,
58(6):461–467, 2009.
[39] V.S. Bokharaie, O. Mason, and M. Verwoerd. D-stability and delay-independent stability of homogeneous cooperative
systems. IEEE Transactions on Automatic Control, 55(12):2882–2885, 2010.
[40] H.R. Feyzmahdavian, T. Charalambous, and M. Johansson. Asymptotic stability and decay rates of homogeneous positive
systems with bounded and unbounded delays. SIAM Journal on Control and Optimization, 52(4):2623–2650, 2014.
[41] H.R. Feyzmahdavian, T. Charalambous, and M. Johansson. Sub-homogeneous positive monotone systems are insensitive to
heterogeneous time-varying delays. International Symposium on Mathematical Theory of Networks and Systems (MTNS),
pages 317–324, 2014.
[42] H. R. Feyzmahdavian, T. Charalambous, and M. Johansson. Exponential stability of homogeneous positive systems of
degree one with time-varying delays. IEEE Transactions on Automatic Control, 59:1594–1599, 2014.
[43] A. Rantzer, B.S. Rüffer, and G. Dirr. Separable Lyapunov functions for monotone systems. IEEE Conference on Decision
and Control, pages 4590–4594, 2013.
[44] G. Dirr, H. Ito, A. Rantzer, and B.S. Rüffer. Separable Lyapunov functions for monotone systems: Constructions and
limitations. Discrete and Continuous Dynamical Systems - Series B, 20(8):2497–2526, 2015.
[45] H.K. Khalil. Nonlinear Systems. Prentice Hall, third edition, 2002.
[46] N. Rouche, P. Habets, and M. Laloy. Stability Theory by Liapunov’s Direct Method. Applied Mathematical Sciences.
Springer-Verlag, 1977.
[47] J. Danskin. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, 14(4):641–664, 1966.
[48] A. Berman and R.J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. SIAM, 1994.
[49] Y. Sun. Delay-independent stability of switched linear systems with unbounded time-varying delays. Abstract and Applied
Analysis, 2012.
[50] K. Gu, V. Kharitonov, and J. Chen. Stability of Time-Delay Systems. Birkhauser, 2003.
[51] B.S. Rüffer, C.M. Kellett, and P.M. Dower. On copositive Lyapunov functions for a class of monotone systems. International
Symposium on Mathematical Theory of Networks and Systems (MTNS), pages 863–870, 2010.
[52] R.D. Driver. Existence and stability of solutions of a delay-differential system. Archive for Rational Mechanics and Analysis, 10(1):401–
426, 1962.
[53] E. Fridman. Introduction to Time-Delay Systems: Analysis and Control. Springer, 2014.
[54] V.S. Bokharaie and O. Mason. On delay-independent stability of a class of nonlinear positive time-delay systems. IEEE
Transactions on Automatic Control, 59(7):1974–1977, 2014.
[55] C.M. Kellett. A compendium of comparison function results. Mathematics of Control, Signals, and Systems, 26(3):339–374,
2014.
| 3 |
THE JACOBSON–MOROZOV THEOREM AND COMPLETE REDUCIBILITY
OF LIE SUBALGEBRAS
arXiv:1507.06234v4 [math.RT] 2 Oct 2017
DAVID I. STEWART AND ADAM R. THOMAS*
Abstract. In this paper we determine the precise extent to which the classical sl2 -theory of
complex semisimple finite-dimensional Lie algebras due to Jacobson–Morozov and Kostant can be
extended to positive characteristic. This builds on work of Pommerening and improves significantly
upon previous attempts due to Springer–Steinberg and Carter/Spaltenstein. Our main advance
arises by investigating quite fully the extent to which subalgebras of the Lie algebras of semisimple
algebraic groups over algebraically closed fields k are G-completely reducible, a notion essentially
due to Serre. For example, if G is exceptional and char k = p ≥ 5, we classify the triples (h, g, p) such
that there exists a non-G-completely reducible subalgebra of g = Lie(G) isomorphic to h. We do
this also under the restriction that h be a p-subalgebra of g. We find that the notion of subalgebras
being G-completely reducible effectively characterises when it is possible to find bijections between
the conjugacy classes of sl2 -subalgebras and nilpotent orbits and it is this which allows us to prove
our main theorems.
For absolute completeness, we also show that there is essentially only one occasion in which a
nilpotent element cannot be extended to an sl2 -triple when p ≥ 3: this happens for the exceptional
orbit in G2 when p = 3.
1. Introduction
The Jacobson–Morozov theorem is a fundamental result in the theory of complex semisimple Lie
algebras, due originally to Morozov, but with a corrected proof by Jacobson. One way to state it
is to say that for any complex semisimple Lie algebra g = Lie(G), there is a surjective map
(*)
{conjugacy classes of sl2 -triples} −→ {nilpotent orbits in g},
induced by sending the sl2 -triple (e, h, f ) to the nilpotent element e. That is, any nilpotent element
e can be embedded into some sl2 -triple. In [Kos59], Kostant showed that this can be done uniquely
up to conjugacy by the centraliser Ge of e; i.e. that the map (*) is actually a bijection. Much work
has been done on extending this important result to the modular case, that is where g = Lie(G) for
G a reductive algebraic group over an algebraically closed field k of characteristic p > 0. We mention
some critical contributions. In [Pom80], Pommerening showed that under the mild restriction that
p is a good prime, one can always find an sl2 -subalgebra containing a given nilpotent element, but
this may not be unique; in other words, the map (*) is still surjective, but not necessarily injective.
If h(G) denotes the Coxeter number1 of G, then in [SS70] Springer and Steinberg prove that the
uniqueness holds whenever p ≥ 4h(G) − 1 and in his book [Car93], Carter uses an argument due
to Spaltenstein to establish the result under the weaker condition p > 3h(G) − 3; both proofs
go essentially by an exponentiation argument. One major purpose of this article is to finish this
2010 Mathematics Subject Classification. 17B45.
*Supported by an LMS 150th Anniversary Postdoctoral Mobility Grant 2014-2015 Award.
1In case the root system of G is trivial, we define h(G) to be 0.
project and improve these bounds on the characteristic optimally, thus to state precisely when the
bijection (*) holds.
Theorem 1.1. Let G be a connected reductive group over an algebraically closed field k of characteristic p > 2,² and let g be its Lie algebra. Then (*) is a bijection if and only if p > h(G).³
In fact, we will do even more than this, also determining when there is a bijection
(**)
{conjugacy classes of sl2 -subalgebras} −→ {nilpotent orbits in g},
and when a bijection exists, we will be able to realise it in a natural way. The equivalence of
bijections (*) and (**) is easily seen in large enough characteristics by exponentiation, but there
are quite a few characteristics where there exists a bijection (**), but not (*). To state our result,
we define for any reductive group the number b(G) as the largest prime p such that the Dynkin
diagram of G contains a subdiagram of type Ap−1 or p is a bad prime for G. Alternatively, b(G)
is the largest prime which is not very good for some Levi subgroup of G. If G is classical of type
(An, Bn, Cn, Dn) then b(G) is the largest prime which is no larger than (n + 1, n, n, n) respectively, and if G
is exceptional of type (G2 , F4 , E6 , E7 , E8 ) then b(G) = (3, 3, 5, 7, 7). If G is reductive then b(G) is
the maximum over all simple factors and is 0 in case the root system of G is trivial.
Theorem 1.2. Let G be a connected reductive group over an algebraically closed field k of characteristic p > 2 and let g be its Lie algebra. Then the number of conjugacy classes of sl2 -subalgebras
and nilpotent orbits is the same if and only if p > b(G). Moreover, when p > b(G), there is a natural
bijection (**) realised by sending an sl2 -subalgebra h to the nilpotent orbit of largest dimension that
intersects h non-trivially.
(To emphasise our improvement, [Car93, Thm. 5.5.11] gives the existence of such a bijection for E8
when p > 87, whereas we require just p > 7.)
For many applications, the Kempf–Rousseau theory of optimal cocharacters (whose consequences
were worked out in [Pre03]) is a sufficient replacement for much of the sl2 -theory one would typically
employ when working over C—indeed, this paper uses cocharacter theory quite extensively. But
it should not be a surprise that the unique smallest simple Lie algebra over k should continue to
play a role in modular Lie theory. We are aware of at least one example where our results are
likely to be used: on considering a maximal subgroup H of a finite group of Lie type G(q), one
frequently discovers the existence of a unique 3-dimensional submodule on the adjoint module that
must correspond to an sl2 -subalgebra of g. Then Theorem 1.2 promises certain useful properties of
this subalgebra which can be exploited to show that H lives in a positive-dimensional subgroup of
the ambient algebraic group G; typically this implies it is not maximal.
The question of the existence of bijections (*) and (**) turns out to be intimately connected to
J.-P. Serre’s notion of G-complete reducibility [Ser05]. Say a subgroup H of G is G-completely
reducible if whenever H is contained in a parabolic subgroup P of G, then H is contained in a
Levi subgroup L of P . The notion is inspired by a general philosophy of Serre, Tits and others to
generalise concepts of representation theory by replacing homomorphisms of groups H → GL(V )
²Note that if p = 2 then sl2 is non-simple and the question of finding subalgebras containing given nilpotent elements becomes murky since one might consider it proper to consider the non-simple non-isomorphic subalgebras pgl2 in addition.
3Interestingly, this theorem gives an optimal answer as to when the secondary demands of [DeB02, Hypothesis
4.2.5] are met; however it is known to the authors that [DeB02, Hypothesis 4.2.4] on which it is dependent holds for
every nilpotent orbit only under strictly worse conditions.
with homomorphisms of groups H → G, where G is any reductive algebraic group. Indeed, when
G = GL(V ), using the description of the parabolic subgroups and Levi subgroups of G as stabilisers
of flags of subspaces of V, the idea that a subgroup H is G-completely reducible recovers the usual
idea of H acting completely reducibly on a representation V . There is a remarkably widespread web
of connections between G-complete reducibility and other areas of mathematics, such as geometric
invariant theory, the theory of buildings and the subgroup structure of algebraic groups, amongst
other things. In our proofs of Theorem 1.1 and 1.2 we will find yet another connection with Serre’s
notion, this time with the study of modular Lie algebras.
The natural extension of Serre’s idea to Lie algebras is due to McNinch, [McN07] and is developed
further in [BMRT13]. We say a subalgebra h of g is G-completely reducible (or G-cr) if whenever
h is contained in a parabolic subalgebra p of g, then h is in a Levi subalgebra of that parabolic.
(Recall that a parabolic subalgebra is by definition Lie(P ) for P a parabolic subgroup and a Levi
subalgebra is Lie(L) for L a Levi subgroup of P .) We will establish the following result, crucial for
our proof of Theorem 1.1.
Theorem 1.3. Let G be a connected reductive algebraic group over an algebraically closed field k
of characteristic p > 2. Then all semisimple subalgebras of g are G-completely reducible if and only
if p > h(G).
The proof of Theorem 1.3 reduces easily to the case where G is simple. Then work of S. Herpel and
the first author in [HS16b] on complete reducibility of representations of semisimple Lie algebras
can be adapted to prove the theorem when G is classical. The bulk of the work involved is showing
the result when G is exceptional. Let then G be an exceptional algebraic group. At least thanks
to [HS16a] and some work of A. Premet together with the first author, the isomorphism classes of
semisimple Lie subalgebras of the exceptional Lie algebras are known in all good characteristics.4
Our following theorem gives, for p ≥ 5 (in particular, for p a good prime) a full description of when
a simple subalgebra in one of those known isomorphism classes can be non-G-cr.
Theorem 1.4. Suppose h is a simple subalgebra of g = Lie(G) for G a simple algebraic group of
exceptional type over an algebraically closed field k of characteristic p ≥ 5. Then either h is G-cr
or one of the following holds:
(i) h is of type A1 and p < h(G);
(ii) h is of type W (1; 1), p = 7 and G is of type F4 ; or p = 5 or 7 and G is of type E6 , E7 or
E8 ;
(iii) Up to isomorphism we have (G, h, p) = (E7 , G2 , 7), (E8 , B2 , 5) or (E8 , G2 , 7).
Moreover, for each exception (G, h, p) above, there exists a non-G-cr subalgebra of the stated type.
Since we consider Lie algebras g = Lie(G) for G an algebraic group, g inherits a [p]-map arising
from the Frobenius morphism on the group. Then a subalgebra h of g is a p-subalgebra if and
only if it is closed under the [p]-map. Asking when p-subalgebras are G-cr gives a slightly different
answer, with an important connection to the existence of the bijection (**).
⁴Up to isomorphism, one gets only Lie algebras coming from algebraic groups and the first Witt algebra W(1;1) of dimension p, together with some semisimple subalgebras which are not the direct sum of simple Lie algebras, existing only when p = 5 or 7 and g is of type E7 or E8. An example of such a Lie algebra is the semidirect product sl2 ⊗ (k[X]/⟨X^p⟩) + 1 ⊗ W(1;1), where the factor W(1;1) commutes with the sl2 factor but acts by derivations on the truncated polynomial ring k[X]/⟨X^p⟩.
Theorem 1.5. Suppose h is a simple p-subalgebra of g = Lie(G) for G a simple algebraic group of
exceptional type over an algebraically closed field k of characteristic p ≥ 5. Then either h is G-cr
or one of the following holds:
(i) h is of type A1 and p ≤ b(G);
(ii) h is of type W (1; 1), p = 7 and G is of type F4 ; or p = 5 or 7 and G is of type E6 , E7 or
E8 ;
(iii) Up to isomorphism we have (G, h, p) = (E7 , G2 , 7), (E8 , B2 , 5) or (E8 , G2 , 7).
Moreover, for each exception, there exists a non-G-cr p-subalgebra of the stated type.
To appreciate fully the connection of Theorems 1.3 and 1.4 with Theorem 1.1 we will see that the
failure of the uniqueness part of the Jacobson–Morozov theorem to hold in characteristics less than
or equal to the Coxeter number h(G) comes exactly from the failure of some subalgebras isomorphic
to sl2 to be G-cr. (And this is precisely how we construct examples of extra conjugacy classes of
sl2 subalgebras when p ≤ h(G).) Moreover, so long as G contains neither a factor of type G2 when
p = 3 nor a factor of type Ap−1 , then the bijection (**) in Theorem 1.2 exists precisely when there
is an equivalence
H is G-completely reducible ⇐⇒ H is reductive
for all connected reductive subgroups H of G.
Another result concerns a connection between Seitz’s idea of subgroups of type A1 being good with
the study of modular Lie algebras. Recall from [Sei00] that a closed subgroup H of type A1 of an
algebraic group G is good if it has weights no bigger than 2p − 2 on the adjoint module. Again, this
idea forms part of the philosophy of generalising concepts of representation theory from GL(V ) to
other reductive groups. This time, Seitz’s notion gives us the correct generalisation of the notion
of a restricted representation of H := SL2 : If H acts with weights less than p on V , then it gives a
good A1-subgroup of GL(V), since H will have weights no more than 2p − 2 on gl(V)|H ≅ V ⊗ V∗.
In ibid. Seitz proves in particular that all unipotent elements of order p have a good A1 -overgroup
and that any two such overgroups are conjugate; this itself connects to questions raised in Serre’s
fundamental paper [Ser05] by providing a solution to finding overgroups of unipotent elements
which are so-called ‘saturated’. Our result is as follows.
Theorem 1.6. Let G be a connected reductive algebraic group over an algebraically closed field
of characteristic p > 2. Then every sl2-subalgebra of g is Lie(H) for H a good A1 if and only if
p > h(G).
Lastly, for completeness we have checked the following, improving the Jacobson–Morozov theorem
itself optimally, using the classification of nilpotent orbits in characteristic p ≥ 3.
Theorem 1.7. Let G be a simple algebraic group over an algebraically closed field of characteristic
p ≥ 3. Then any nilpotent element e ∈ g = Lie(G) belonging to the orbit O can be extended to an
sl2-triple if and only if (G, p, O) ≠ (G2, 3, (Ã1)^(3)).
Acknowledgements. We thank Sasha Premet for some discussion and the referee for helpful
suggestions for improvement.
2. Preliminaries
2.1. Notation. In the following G will be a reductive algebraic group over an algebraically closed
field k of characteristic p > 2, and g will be its Lie algebra.
Throughout the paper we use the terms classical and exceptional when referring to both simple
algebraic groups and simple Lie algebras. When we say a simple Lie algebra is classical (or of
classical type) we mean that it is of type A–G. However, for a simple algebraic group, we use the
term classical to mean of type A–D, and exceptional otherwise.
All notation unless otherwise mentioned will be consistent with [Jan03]. In particular, all our
reductive groups are assumed to be connected. The root system R contains a simple system S
whose elements will be denoted αi , with corresponding fundamental dominant weight ̟i . We shall
denote roots in R by their coefficients in S labelled consistently with Bourbaki. For a dominant
weight λ = a1 ̟1 +a2 ̟2 +· · ·+an ̟n we write λ = (a1 , a2 , . . . , an ) and write L(λ) = L(a1 , a2 , . . . , an )
to denote the irreducible module of highest weight λ. Given modules M1 , . . . , Mk , the notation
V =
P
M1 | . . . |Mk denotes a module with a socle series as follows: Soc(V ) ∼
= Mk and Soc(V / j≥i Mj ) ∼
=
Mi−1 for k ≥ i > 1. We write M1 + M2 for M1 ⊕ M2 . We also write T (λ) for a tilting module of
high weight λ for an algebraic group G. In small cases, the structure of these is easy to write down.
For example, when V (λ) ∼
= L(λ)|L(µ), we have T (λ) ∼
= L(µ)|L(λ)|L(µ). The module VE6 (̟1 ) will
be denoted V27 and the module VE7 (̟7 ) denoted V56 .
When G is simple and simply-connected, we choose root vectors in g for a torus T ⊆ G and a
basis for t = Lie(T ) coming from a basis of subalgebras isomorphic to sl2 corresponding to each
of the simple roots. We write these elements as {eα : α ∈ R} and {hα : α ∈ S} respectively.
As g = Lie(G), we have that g inherits a [p]-map x 7→ x[p] , making it a restricted Lie algebra;
see [Jan03, I.7.10].
Recall also the first Witt algebra W (1; 1) := Derk (k[X]/X p ), henceforth denoted W1 . The Lie
algebra W1 is p-dimensional with basis {∂, X∂, . . . , X p−1 ∂} and commutator formula [X i ∂, X j ∂] =
(j − i)X i+j−1 ∂. In §4.3 we use a little of the representation theory of W1 . All that we need is
contained in [BNW09] for example. In particular, Derk (k[X]/X p ) is a module with structure S|k
where S is an irreducible module of dimension p − 1.
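The commutator formula for W1 can be verified mechanically. The sketch below (an illustration, not part of the paper's arguments) checks it over GF(p) by letting both sides act on all basis monomials of k[X]/X^p with explicit truncation and reduction of coefficients modulo p.

```python
# Minimal sketch: verify [X^i d, X^j d] = (j - i) X^{i+j-1} d in W(1;1) over GF(p).
import numpy as np

p = 5

def D(i, g):
    """Apply the derivation X^i * d/dX to g (a length-p coefficient vector mod p)."""
    dg = np.array([(m + 1) * g[m + 1] for m in range(p - 1)] + [0]) % p  # d/dX
    out = np.zeros(p, dtype=int)
    out[i:] = dg[:p - i]          # multiply by X^i and truncate at X^p
    return out % p

for i in range(p):
    for j in range(p):
        for m in range(p):        # test the identity on each basis monomial X^m
            g = np.zeros(p, dtype=int); g[m] = 1
            lhs = (D(i, D(j, g)) - D(j, D(i, g))) % p
            if 0 <= i + j - 1 <= p - 1:
                rhs = ((j - i) * D(i + j - 1, g)) % p
            else:
                rhs = np.zeros(p, dtype=int)   # X^{i+j-1} = 0 in k[X]/X^p
            assert np.array_equal(lhs, rhs), (i, j, m)

print("commutator formula verified for p =", p)
```

Reducing coefficients modulo p is essential here: over the integers the truncated operators fail the identity precisely by multiples of p, reflecting that W(1;1) is a characteristic-p object.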
2.2. Parabolic subalgebras. Let P = LQ be a standard parabolic subgroup of an exceptional
algebraic group G with unipotent radical Q and Levi complement L, corresponding to a subset J
of S. In particular, letting RJ = R ∩ ZJ, we have P = hUα , T | α ∈ R+ ∪ RJ i. In this section
we discuss the structure of Q in terms of the action of L. Forgetting the Lie algebra structure on
q := Lie(Q), we obtain a module for l := Lie(L). We will see that if h is a subalgebra of l such that
q has no h-composition factors V with non-trivial first cohomology (which we will recall shortly)
then all complements to q in the semidirect product of Lie algebras h + q are conjugate to h by
elements of Q, hence all are G-cr.
The unipotent radical has (by [ABS90]) a well-known central filtration Q = Q1 ≥ Q2 ≥ . . . with
successive factors Qi/Qi+1 isomorphic to the direct product of all root groups corresponding to the set Φi of roots of level i, where the level of a root α = Σ_{i∈S} ci αi is Σ_{i∈S\J} ci, via the multiplication map π : Ga × · · · × Ga → G; (t1, . . . , tn) ↦ Π_{α∈Φi} xα(ti). The filtration {Qi} is L-stable and the
quotients have the structure of L-modules. That is, they are L-equivariantly isomorphic to the L-module Lie(Qi/Qi+1) = Lie(Qi)/Lie(Qi+1), as is verified in [ABS90]. Moreover it is straightforward
to compute the L-composition factors of each subquotient; see [LS96, Lem. 3.1]. One observes all of
the high weights are restricted when p is good for G (and for p = 5 when G = E8 ). We may therefore
immediately conclude by differentiating the L-modules concerned that the same statement is true
of the l-composition factors of the l-module Lie(Qi )/ Lie(Qi+1 ). The following lemma records this.
Lemma 2.1. Let g be the Lie algebra of a simple exceptional algebraic group G in good characteristic
(or p = 5 when G = E8 ) and let p = l + q be a parabolic subalgebra of g. The l-composition factors
within q have the structure of high weight modules for l. If l0 = Lie(L0 ) for L0 a simple factor of
L, then the possible high weights λ of non-trivial l0 -composition factors are as follows:
(i) l0 = An: λ = 2̟1, 2̟n, 3̟1, ̟j or ̟n+1−j (j = 1, 2, 3) (note that 2̟1, 2̟n only occur if g = F4 and n ≤ 2 and 3̟1 only if g = G2 and n = 1);
(ii) l0 = Bn or Cn (n = 2 or 3, g = F4): λ = ̟1, ̟2 or ̟3;
(iii) l0 = Dn: λ = ̟1, ̟n−1 or ̟n;
(iv) l0 = E6: λ = ̟1 or ̟6;
(v) l0 = E7: λ = ̟7.
We therefore find the following restrictions on the dimensions of l-composition factors of q (hence
also on the h-composition factors of q).
Corollary 2.2. With the hypothesis of the lemma, let V be an l-composition factor of q. Then
dim V ≤ 64.
Proof. This follows from the lemma if l′ is simple. Moreover, if g ≠ E8 then the number of positive
roots is at most 56 and the result follows. So suppose g = E8 . The product of the dimensions
of the possible simple factors is at most 64 in all cases, except for l′ of type A1 A6 for which a
module L(1) ⊗ L(̟3) has dimension 2 × 35 = 70. However, an easy calculation shows the actual l′-composition factors are L(1)⊗L(̟2), L(̟4), L(1)⊗L(̟6) and L(̟1). Hence the largest dimension
of any l′ -composition factor is 42.
We recall a concrete description of the 1-cohomology of Lie algebras; see [Wei94, §7.4]. Let h be a
Lie algebra and V an h-module. A 1-cocycle is a map ϕ : h → V such that
ϕ([x, y]) = x(ϕ(y)) − y(ϕ(x)).    (1)
Let Z 1 (h, V ) denote the set of 1-cocycles. For v ∈ V the map h → V : x 7→ x(v) is a 1-cocycle
called a 1-coboundary; denote these by B 1 (h, V ). Two 1-cocycles are equivalent if they differ by a
1-coboundary; explicitly ϕ ∼ ψ if there is some v ∈ V such that ϕ(x) = ψ(x) + x(v) for all x ∈ h.
In this case we say ϕ and ψ are conjugate by v. One then has H1 (h, V ) = Z 1 (h, V )/B 1 (h, V ).
Note that V can be considered as a Lie algebra with trivial bracket. Then one may form the
semidirect product h + V . A complement to V in the semidirect product h + V is a subalgebra h′
such that h′ ∩ V = 0 and h′ + V = h + V . Just as for groups, one has a vector space isomorphism
Z 1 (h, V ) ←→ {complements to V in h + V } ,
by ϕ 7→ {x + ϕ(x) : x ∈ h}.
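As a quick sanity check of these definitions, the sketch below verifies numerically that every 1-coboundary ϕv(x) = x(v) satisfies the cocycle identity (1), here for randomly chosen matrices x, y in gln acting on V = k^n; the specific matrices are of course hypothetical illustrations.

```python
# Minimal numpy sketch: coboundaries are cocycles, i.e. B^1(h, V) ⊆ Z^1(h, V).
import numpy as np

rng = np.random.default_rng(0)
n = 4
x, y = rng.standard_normal((2, n, n))   # two sample elements of gl_n
v = rng.standard_normal(n)              # a sample vector in V

phi = lambda z: z @ v                   # the 1-coboundary attached to v
bracket = x @ y - y @ x                 # [x, y] in gl_n

lhs = phi(bracket)                      # phi([x, y])
rhs = x @ phi(y) - y @ phi(x)           # x(phi(y)) - y(phi(x))
print(np.allclose(lhs, rhs))            # True
```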
We wish to realise the conjugacy action of V on h + V in terms of a group action. Suppose
dim V = n. For our purposes it will do no harm to identify h with its image in gln . Furthermore,
it will be convenient to embed V into gln+1 as strictly upper-triangular matrices with non-zero
6
entries only in the last column, viz.
V :=
0
∗
0
∗
.. .
..
.
.
0 ∗
0
Then the action of GLn on V is realised as conjugation of the block diagonal embedding of GLn
in GLn+1 via x 7→ diag(x, 1). Clearly adding the identity endomorphism of V commutes with the
action of GLn . Hence the group Q := 1 + V is GLn -equivariant to its Lie algebra V . Now suppose
h′ is a complement to h in h + V given as h′ = {x + ϕ(x) : x ∈ h} ⊂ gln+1 . If q = 1 + v ∈ Q with
v ∈ V we have
(x + ϕ(x))q = xq + ϕ(x) = (1 − v)x(1 + v) + ϕ(x).
Then since x normalises V and any two endomorphisms v1 , v2 ∈ V ⊆ End(V ) satisfy v1 v2 = 0,
we see easily that (x + ϕ(x))q = x + [x, v] + ϕ(x) = x + x(v) + ϕ(x), showing that two cocycles
in Z 1 (h, V ) are equivalent if and only if they are conjugate under the action ϕq (x) = x(v) + ϕ(x),
where q = 1 + v.
We have proven the following proposition:
Proposition 2.3. Let h ⊆ gl(V ) be a subalgebra with dim V = n. Then realised as a subalgebra of
gln+1 as above, all complements to V in h + V are conjugate to h via elements of 1 + V if and only
if H1 (h, V ) = 0.
We also require the following crucial result, generalising the above proposition to the case where q
is non-abelian. This is one of our main tools in proving Theorem 1.4:
Proposition 2.4. Suppose P = LQ is a parabolic subgroup of a reductive algebraic group G
with unipotent radical Q and Levi factor L. Let h be a subalgebra of Lie(L) and suppose that
H1 (h, Lie(Qi /Qi+1 )) = 0 for each i ≥ 1. Then all complements to q in h + q are Q-conjugate to h.
Proof. Let Q = Q1 ≥ Q2 ≥ . . . denote the filtration of Q by L-modules described at the beginning
of §2.2 and let qi := Lie(Qi ). We prove inductively that all complements to q/qi in h + q/qi
are Q/Qi -conjugate to h. If i = 1 this is trivial, so assume all complements to q in h + q/qi−1 are
Q/Qi−1 -conjugate to h. Take a complement h′ to q/qi in h+q/qi . Then by the inductive hypothesis,
we may replace h′ by a Q/Qi -conjugate so that h + qi−1 /qi = (h + qi−1 )/qi = (h′ + qi−1 )/qi . Thus h′
is a complement to qi−1 /qi in the semidirect product h + qi−1 /qi . We have h′ = x + γ(x) for some
cocycle γ : h → qi−1 /qi . But since H1 (h, qi−1 /qi ) = 0, we may write γ(x) = [x, v] for some v ∈ q.
Now γ|ch (q) is identically zero. Thus γ factorises through h → gl(qi−1 /qi ), so it suffices to consider
the image of h in gl(qi−1 /qi ). Now by Proposition 2.3 the image of h′ is Qi−1 /Qi -conjugate to h.
It follows that h′ is Qi−1 /Qi -conjugate to h.
2.3. Cohomology for G, G1 and g. In this section, G will be simple and simply-connected. In
the subsequent analysis, it will be necessary to know the ordinary Lie algebra cohomology groups
H1 (g, V ) for V a simple restricted g-module of small dimension. We will first reduce our considerations to the restricted Lie algebra cohomology. In this section, G1 denotes the first Frobenius
kernel of G—we warn the reader that we sometimes use subscripts to index groups, but it will
G    p          V                     dim V     H1(G1, V)^[−1]
A1   p < 64     L(p − 2)              p − 1     L(1)
A2   p = 5, 7   L(p − 2, 1)           18, 33    L(1, 0)
     p = 5      L(3, 3)               63        k
B2   p = 5, 7   L(p − 3, 0)           13, 54    k
     p = 5      L(1, 3)               52        L(0, 1)
G2   p = 7      L(2, 0)               26        k
A3   p = 5      L(3, 1, 0)            52        L(1, 0, 0)
A4   p = 5      L(1, 0, 0, 1)         23        k
A6   p = 7      L(1, 0, 0, 0, 0, 1)   47        k
Table 1. Non-trivial G1-cohomology of irreducible modules of dimension at most 64.
be clear from the context our intent. We may identify restricted cohomology for g with the usual
Hochschild cohomology for G1 . Recall the exact sequence [Jan03, I.9.19(1)]:
0 → H1(G1, V) → H1(g, V) → Homs(g, V^g) → H2(G1, V) → H2(g, V) → Homs(g, H1(g, V)).    (2)
From this it follows that if V g = 0 (i.e. g has no fixed points on V ) then H1 (G1 , V ) → H1 (g, V )
is an isomorphism. This happens particularly in the case that V is simple and non-trivial. The
main result of this section is Proposition 2.6 below, which gives every instance of a simple restricted
G-module V of dimension at most 64 such that the groups H1(G1, V) ≅ H1(g, V) are non-trivial.
In order to prove this proposition, we will need some auxiliary results. The following useful result
relates to weights in the closure of the lowest alcove, C̄Z = {λ ∈ X(T) | 0 ≤ ⟨λ + ρ, β∨⟩ ≤ p for all β ∈ R+}. It is immediate from [BNP02, Cor. 5.4 B(i)].
Lemma 2.5. Let G be simple and simply connected and suppose L = L(µ) with µ ∈ C̄Z and p ≥ 3.
Then we have H1 (G1 , L) = 0 unless G is of type A1 and L = L(p − 2).
In the next proposition and elsewhere, we use repeatedly the linkage principle for G1 , [Jan03,
II.9.19]. This states that if Ext^i_{G1}(L(λ), L(µ)) ≠ 0 for some i ≥ 0 then λ ∈ Wp · µ + pX(T), where ·
denotes the usual action of the affine Weyl group Wp on X(T ), shifted by ρ.
If V is a G-module, then as G1 is a normal subgroup scheme of G, the cohomology group H1 (G1 , V )
inherits a G-module structure [Jan03, I.6.6]; since G1 acts trivially on this, such an action must
factor through the Frobenius morphism, hence can be untwisted to yield a G-module H1 (G1 , V )[−1] .
Of course, any simple module for G1 can always be given a G-structure in a unique way and is
associated to a unique highest weight λ ∈ X1 (T ), where X1 (T ) denotes the p-restricted dominant
weights; see [Jan03, II.3.15].
Proposition 2.6. Let G be a simple algebraic group of rank no more than 7, let p ≥ 5 and suppose
V is a restricted irreducible G-module of dimension at most 64. Then either H1 (G1 , V ) = 0 or
(G, V, p) is listed in Table 1 (up to taking duals).
Proof. The values of H1 (G1 , V ) where G is of type A1 are well-known and can be found, for example
in [Ste10, Prop. 2.2]. The statement in the cases where G is of type A2 or B2 , or type G2 when p ≥ 7
is immediate from, respectively, [Yeh82] (see [Ste12, Prop. 2.3]), [Ye90, Table 2] and [LY93, Table 2].
Consider the remaining cases. A list of all non-trivial restricted modules of dimension at most 64 is
available from [Lüb01]. We then use the G1 -linkage principle to remove any modules L(λ) such that
λ is not G1 -linked to µ = 0. Explicitly, one may calculate w · 0 for any w ∈ W (a finite list) and add
an appropriate (uniquely defined) element of pX(T) to produce a collection of p-restricted weights
in X1(T); since we assume H1(G1, V) ≠ 0, we know that λ is in this collection. Furthermore,
we remove any modules L(λ) such that λ is in the lowest alcove, since H1 (G1 , L(λ)) = 0 by Lemma
2.5. This reduces the list of possibilities considerably. For any cases still remaining, we appeal
to [BNP04, Thm. 3A] (case r = 1), recalling H^0(λ) := Ind_B^G(λ). This implies that
H1(G1, H^0(λ)) = H^0(ωα)^[1] if λ = pωα − α for α ∈ S, and H1(G1, H^0(λ)) = 0 otherwise.    (3)
Let us take G = A3 and explicitly give the details. Using [Lüb01, 6.7], we see that the following
is a complete list of the high weights of all non-trivial restricted modules V = L(λ) such that
dim V ≤ 64 (up to taking duals): (1, 0, 0), (0, 1, 0), (2, 0, 0), (1, 0, 1), (1, 1, 0), (0, 2, 0), (3, 0, 0),
(2, 0, 1), (1, 1, 1), (0, 3, 0), (4, 0, 0), (5, 0, 0), (1, 2, 0) and when p = 5 the weight (3, 1, 0). A weight
λ = (a, b, c) is in C̄Z if and only if a + b + c ≤ p − 3. So if p ≥ 11, all weights are in C̄Z , if p = 7
only (5, 0, 0) is not in C̄Z and if p = 5 the first six weights in the list are in C̄Z .
We may reduce this list using the linkage principle for G1 . In our case, this implies that the only
restricted weights G1 -linked to (0, 0, 0) up to duals are (p−2, p−2, p−2), (p−2, 1, 0), (p−2, p−3, 0),
(0, p − 3, 2), (p − 4, 1, p − 2), (p − 4, 0, 0), (p − 2, 2, p − 2), (0, p − 4, 0), (1, p − 4, 1), (1, p − 2, 1),
(p − 3, 0, p − 3), (p − 3, 0, 1), (p − 3, p − 2, 1), (p − 2, p − 2, 2), (p − 3, 2, p − 3). By comparison with
the list above, we may discount the possibility that p = 7 and reduce the list of possible modules V with
H1(G1, V) ≠ 0 to just L(1, 1, 1), L(2, 0, 1) and L(3, 1, 0) when p = 5.
We now use (3) to find that H1 (G1 , H 0 (λ)) = 0 for λ = (1, 1, 1), (2, 0, 1) and H1 (G1 , H 0 (λ)) =
L(1, 0, 0)[1] for λ = (3, 1, 0). Now, the structure of the induced modules H 0 (λ) can be deduced:
each is the indecomposable reduction modulo p of a certain lattice in the simple module LC (λ); by
comparing the weight multiplicities in [Lüb01], one finds there are just two composition factors, in
each case. Since L(λ) is the socle of H 0 (λ) one gets H 0 (1, 1, 1) = L(0, 1, 0)|L(1, 1, 1), H 0 (2, 0, 1) =
L(1, 0, 0)|L(2, 0, 1) and H 0 (3, 1, 0) = L(2, 0, 1)|L(3, 1, 0). Consider the following exact sequence for
a 2-step indecomposable G1 -module M = M1 |M2 .
0 → H0 (G1 , M2 ) → H0 (G1 , M ) → H0 (G1 , M1 )
→ H1 (G1 , M2 ) → H1 (G1 , M ) → H1 (G1 , M1 ) → . . .
Applying this to H 0 (λ) for λ = (1, 1, 1), (2, 0, 1) yields that H1 (G1 , L(1, 1, 1)) = H1 (G1 , L(2, 0, 1)) =
0. Moreover, applying the sequence to H 0 (3, 1, 0) and using the fact H1 (G1 , L(2, 0, 1)) = 0, it follows
that H1 (G1 , L(3, 1, 0)) ≅ H1 (G1 , H 0 (3, 1, 0)) ≅ L(1, 0, 0)[1] .
Finally, we record the following result, which is presumably well-known. We were unable to locate
a proof in the literature, so we give one.
Lemma 2.7. Let G be a simple connected algebraic group of type A1 , let p > 2 and 0 ≤ a, b ≤ p − 1.
Then Ext1G1 (L(a), L(b)) ≅ Ext1g (L(a), L(b)) unless a = b = p − 1. Moreover, Ext1G1 (L(a), L(b)) 6= 0
if and only if a = p − 2 − b and a, b 6= p − 1, and Ext1g (L(p − 1), L(p − 1)) ≅ (g∗ )[1] .
Proof. We prove the second statement about Ext1G1 (L(a), L(b)) first. Since w·a is either a or −a−2,
the linkage principle for G1 implies that Ext1G1 (L(a), L(b)) = 0 unless b = a or b = p − 2 − a. If
b = a then Ext1G1 (L(a), L(b)) = 0 by [Jan03, II.12.9] so we may now assume b = p − 2 − a. But now
Ext1G1 (L(a), L(p−2−a)) ≅ Ext1G1 (k, L(a)⊗L(p−2−a)) ≅ Ext1G1 (k, L(p−2) ⊕ L(p−4) ⊕ · · · ⊕ L(0)).
Then the only term which survives in this expression is Ext1G1 (k, L(p − 2)) = H1 (G1 , L(p − 2)) =
H1 (G1 , H 0 (p − 2)) whose structure as a G-module can be read off from [Jan03, II.12.15] or [BNP04,
Thms. 3(A-C)].
Now, in sequence (2), put V = L(b) ⊗ L(a)∗ . Then we have an isomorphism Ext1G1 (L(a), L(b)) ≅ Ext1g (L(a), L(b)) if we can show that Homs (g, V g ) = 0. But if V g were non-zero then we must have L(a) ≅ L(b) and V g ≅ k. Thus we now assume a = b.
The assignment of the sequence (2) to V is functorial; thus, associated to the G-map k → V , there
is a commutative diagram

0 −−−−→ H1 (g, k) = 0 −−−−→ Homs (g, k g ) ≅ (g∗ )[1] −−−−→ H2 (G1 , k)
                 ↓                              ≅ ↓                        ↓ θ
0 −−−−→    H1 (g, V )      −−−−→ Homs (g, V g ) ≅ (g∗ )[1] −−ζ−→ H2 (G1 , V ),
where the natural isomorphism kg → V g induces the middle isomorphism. We want to show that ζ
is injective, since then it would follow that H1 (g, V ) = 0. To do this it suffices to show that θ is an
injection (g∗ )[1] → H2 (G1 , V ) and for this, it suffices to show that the simple G-module (g∗ )[1] does
not appear as a submodule of H1 (G1 , V /k). If (g∗ )[1] did appear as a submodule, there must be a
composition factor L(ν) of V /k such that H1 (G1 , L(ν)) ≅ (g∗ )[1] . Writing L(ν) = L(ν0 ) ⊗ L(ν1 )[1]
using Steinberg's tensor product theorem, we have H1 (G1 , L(ν)) ≅ H1 (G1 , L(ν0 )) ⊗ L(ν1 )[1] and the
latter is non-zero only when ν0 = p − 2. Since the weights of V are all even and bounded above by
2p − 2, with equality if and only if a = b = p − 1, we must have ν0 = p − 2, ν1 = 1 and indeed,
a = b = p − 1. But then H2 (G1 , V ) ≅ Ext2G1 (L(p − 1), L(p − 1)), which is zero since L(p − 1) is the
projective Steinberg module. Thus from the bottom line of the diagram we have an isomorphism
H1 (g, V ) ≅ (g∗ )[1] as required.
2.4. Nilpotent orbits. At various points we use the theory of nilpotent orbits, particularly the
results of [Pre95a]. Everything we need can be found in [Jan04, §1-5]. We particularly use the fact
that a nilpotent element e has an associated cocharacter; that is a homomorphism τ : Gm → G
such that under the adjoint action, we have τ (t) · e = t2 e and τ evaluates in the derived subgroup
of the Levi subgroup in which e is distinguished. Recall that an sl2 -triple is a triple of elements
(e, h, f ) ∈ g × g × g such that [h, e] = 2e, [h, f ] = −2f and [e, f ] = h. In the case that a nilpotent
element e is included in an sl2 -triple in g, the theory of associated cocharacters can be used to
prove the following useful result.
Proposition 2.8 (cf. [HS16a, Prop. 3.3(iii)]). Suppose the nilpotent element e is not in an orbit
containing a factor of type Ap−1 , and that h is a toral element in the image of ad e with [h, e] = 2e.
Then there is a cocharacter τ associated to e with Lie(τ (Gm )) = hhi.
We also need to use the Jordan block structure of nilpotent elements on the adjoint and minimal
modules in good characteristic. For the adjoint modules we may use [Law95], and see [PS16] which
provides the validity of these tables for the nilpotent analogues of the unipotent elements considered
there. For the minimal modules we may use [Ste16, Thm 1.1].
When referring to nilpotent orbits of Lie(G) for a simple algebraic group G, we use the labels defined
in [LS12]. In particular, when G is of exceptional type these labels are described in Chapter 9 of
[loc. cit].
At certain points we make use of the notion of a reachable nilpotent element. A (nilpotent) element
e ∈ g is said to be reachable if it is contained in the derived subalgebra of its centraliser. That is
e ∈ [ge , ge ]. The reachable elements of g have all been classified in [PS16]. Then the main point
is that long root elements in simple subalgebras are almost always reachable in those subalgebras,
hence are reachable elements of g. We will need the following result.
Lemma 2.9. Let h = Lie(H) for H a simple algebraic group, not of type A1 . Then any long root
element e is reachable, except possibly if H is of type Cn and p = 2.
Proof. Since all long root elements are in a single H-orbit, it suffices to prove the lemma in the case
e = eα̃ for α̃ the highest root. Then it suffices to find two roots α and β with α + β = α̃
such that [eα , eβ ] 6= 0.
This is a simple case-by-case check of the root systems. In all cases except Cn , one can take α and
β to be long roots. For a Chevalley basis, we have [eα , eβ ] = ±eα̃ and so we are done. In case Cn ,
one may take two short roots α and β, and one has [eα , eβ ] = ±2 · eα̃ , which is non-zero provided
p 6= 2.
3. Irreducible and completely reducible subalgebras of classical Lie algebras.
The proof of Theorem 1.3.
For the time being, assume p > 2. In this section we show that Theorem 1.3 holds in the case G is
simple and classical; this is Proposition 3.4 below. Let G be a simple, simply-connected algebraic
group of classical type and let h be a subalgebra of g = Lie(G). We first give a condition for h to
be G-irreducible. That is, that h is in no proper parabolic subalgebra of g. This is given in terms
of the action of h on the natural module V for G, as it is in the group case—see [LT04, Lem. 2.2].
Proposition 3.1. The algebra h is G-irreducible if and only if one of the following holds:
(i) g = sl(V ) and h acts irreducibly on V ;
(ii) g = sp(V ) or so(V ) and h stabilises a decomposition V ≅ V1 ⊥ V2 ⊥ · · · ⊥ Vn , where
the Vi are a set of mutually orthogonal, non-degenerate and non-isomorphic h-irreducible
submodules of V .
Proof. By [MT11, Prop 12.13], the (proper) parabolic subgroups of G are precisely the stabilisers
of (non-trivial) flags F • of totally isotropic subspaces of V (where G = SL(V ) preserves the zero form on V ). Let F • be a flag of subspaces such that the k-points of its stabiliser form a parabolic
subgroup P . We claim that Stab(F • ) is smooth. We certainly have that Lie(Stab(F • )) contains Lie(P ),
so that Lie(Stab(F • )) has maximal rank and thus corresponds to a subsystem of the root system. Any
root space ⟨eα ⟩ contained in Lie(Stab(F • )) gives rise to a root subgroup of Stab(F • )(k) = P , via
t 7→ exp(t.eα ). Thus Lie(Stab(F • )(k)) = Lie(Stab(F • )), as required.
The case G = SL(V ) is now clear since h fixes a non-trivial subspace of V if and only if it is
contained in a parabolic subalgebra.
Now suppose G = Sp(V ) or SO(V ). Firstly, let h be a G-irreducible subalgebra of g and suppose
V1 is a minimal non-zero h-invariant subspace of V , so V1 is an h-irreducible submodule of V . Then
V1 must be non-degenerate or else h would stabilise a non-zero totally isotropic subspace and hence
be contained in a proper parabolic subalgebra. We then use an inductive argument applied to V1⊥
to see that h stabilises a decomposition V ≅ V1 ⊥ V2 ⊥ · · · ⊥ Vn of non-degenerate, mutually
orthogonal, h-irreducible submodules. If Vi |h ≅ Vj |h by an isometry φ : Vi → Vj for i 6= j then h
preserves the totally isotropic subspace {v + iφ(v) : v ∈ Vi } ⊂ Vi ⊕ Vj (where i2 = −1). Thus the Vi are
pairwise non-isomorphic. Finally, it remains to note that any subalgebra h preserving such a
decomposition as in (ii) is G-irreducible: any h-invariant subspace of V is a sum of some of the Vi
(as these are pairwise non-isomorphic irreducibles), hence non-degenerate, so h stabilises no non-zero totally isotropic subspace of V .
Since the Levi subalgebras of classical groups are themselves classical one gets the following, using
precisely the same argument as in [Ser05, Ex. 3.2.2(ii)]. (We remind the reader of our assumption
that p > 2.)
Lemma 3.2. The subalgebra h of g is G-cr if and only if it acts completely reducibly on the natural
module V for g.
To prove the next proposition, we use Lemma 3.2 together with the following non-trivial result.
Theorem 3.3 ( [HS16b, Cor. 8.12]). Let G be a semisimple algebraic group and let V be a g-module
with p > dim V . Assume that g = [g, g]. Then V is semisimple.
Proposition 3.4. Let G be a simple algebraic group of classical type with h(G) its Coxeter number
and g its Lie algebra. If p > h(G) then any semisimple subalgebra h of g is G-cr.
Proof. Let G be a simple algebraic group of classical type with p > h(G). Now assume, looking for a
contradiction, that h is a non-G-cr subalgebra of g. Thus h is in a non-trivial parabolic subalgebra,
projecting isomorphically to some proper Levi subalgebra l with Coxeter number h1 < h(G). One
checks that the condition p > h(G) implies that p > dim V for V a minimal-dimensional module
for any simple factor of l. Now [Str73, Main Thm.] implies that the projections of h to the simple
factors of l are all direct products of classical-type Lie algebras, hence h itself is a direct product of
classical-type Lie algebras, isomorphic to Lie(H) for H some semisimple algebraic group. Thus by
Theorem 3.3 we have that h acts completely reducibly on V . Hence by Lemma 3.2 we have that h
is G-cr.
4. G-complete reducibility in exceptional Lie algebras. Proof of Theorem 1.4.
Let G be reductive and P be a parabolic subgroup of G with Levi decomposition P = LQ. We
begin with some general results on G-complete reducibility of subalgebras. For our purposes, they
will be used in order to generate examples of non-G-completely reducible subalgebras.
Lemma 4.1 ( [McN07, Lem. 4]). Let G be a reductive algebraic group and let L be a Levi subgroup
of G. Suppose h ⊆ Lie(L) is a Lie subalgebra. Then h is G-cr if and only if h is L-cr.
Lemma 4.2 ( [BMRT13, Thm. 5.26(i)]). Let G be a reductive algebraic group and let P = LQ be
a parabolic subgroup of G. Suppose h is a subalgebra of g contained in p = Lie(P ). If h is not
Q-conjugate to a subalgebra of l = Lie(L), then h is non-G-cr.
The following lemma provides a strong connection between the structure of modular Lie algebras
and the notion of G-complete reducibility and will be used very often.
Lemma 4.3. Let G be a reductive algebraic group and suppose p is a good prime for G. Suppose
further that h is a simple G-cr subalgebra of g which is restrictable as a Lie algebra. Then either h
is a p-subalgebra of g or h is L-irreducible in a Levi subalgebra l = Lie(L) of g with a factor of type
Arp−1 for some r ∈ N.
Proof. If h is not a p-subalgebra of g then its p-closure, hp , is a p-envelope of h strictly containing
h. Since h is restrictable, by [SF88, Thm. 2.5.8] hp has structure hp = h ⊕ J for J an ideal of hp
centralised by h. Now suppose L is chosen minimally such that h ⊆ l; as h is G-cr, it follows that
h is L-irreducible. If p is a very good prime for l, then l = l′ ⊕ z(l), where l′ is a direct sum of
simple Lie algebras and z(l) is a central torus; both are p-subalgebras. Since h is simple, it
has trivial projection to z(l) and so J ⊆ l′ . But since p is good, the centraliser of an element of
the semisimple Lie algebra l′ is in a proper parabolic of l′ and h is G-cr; thus h is in a proper Levi
subalgebra of l′ and so L was not minimal, a contradiction. It follows that p is not a very good
prime for l′ , but this precisely means that L contains a factor of type Arp−1 .
Corollary 4.4. Let g = Lie(G) be an exceptional Lie algebra in good characteristic. Suppose
h = Lie(H) for H a simple algebraic group not of type A1 and that h is a non-G-cr subalgebra of g.
Choose p = l + q minimal subject to containing h. Then the projection h̄ of h to l is a p-subalgebra.
Proof. By the proof of Lemma 4.3, if h̄ is not a p-subalgebra then l is of type Ap−1 and h̄ projects
to a subalgebra of slp acting irreducibly on the natural module. Since h is not of type A1 , the Kac–Weisfeiler conjecture (which is true for this situation by [Pre95b]) implies that the only non-restricted
representations of h have dimension at least p2 . Since the natural module for slp is only p-dimensional, this is a contradiction.
Unless otherwise mentioned, for the rest of this section G will denote an exceptional algebraic group
over an algebraically closed field k of characteristic p ≥ 5 and we will let g = Lie(G). Our strategy
to prove Theorem 1.4 is as follows—we use the opportunity to fix notation for the remainder of the
paper as we explain this:
Suppose there exists a non-G-cr simple subalgebra h of g of a given type. Then h must be contained
in a parabolic subalgebra p = l + q = Lie(P ) = Lie(LQ), which from now on, we assume is minimal
subject to containing h. It follows that the projection h̄ of h to l is L-irreducible. Since h is not
conjugate to a subalgebra of l, Proposition 2.4 implies that q contains an h-composition factor V
for which H1 (h, V ) 6= 0. If h is not isomorphic to pslp or W1 then h ≅ Lie(H) for some simple
simply-connected algebraic group H of the same type and the remarks at the beginning of §2.3
imply that H1 (H1 , V ) 6= 0 for H1 the first Frobenius kernel of H. In fact the same isomorphism
holds with H = SLp and h = pslp as any pslp -module can be lifted to a module for slp by allowing
the centre to act trivially; then one may apply the exact sequence of [Wei94, 7.5.3]. By Lemma 2.2
we must have that the dimension of V is less than 64. By Proposition 2.6 it now follows that:
(4)    Either h ≅ W1 or V appears in Table 1.
By analysing the structure of q closely, we will find that in most cases no such V appears as a
composition factor of q so that these cases are ruled out: see Lemmas 4.5 and 4.6 below.
One set of cases requires more work: If l is a Levi subalgebra of g of type E6 or E7 , then we investigate all possible actions of h on the smallest dimensional modules for E6 and E7 (of dimensions
27 and 56, respectively). We will see that in any such action, a regular nilpotent element of h does
not act consistently with the Jordan block sizes of nilpotent elements on the relevant modules, as
described in [Ste16]. Having reduced the possible cases as above, we show that the remaining cases
of (G, h, p) do indeed give rise to non-G-cr subalgebras, recorded in Lemmas 4.7, 4.11, 4.12 and
4.14.
Lemma 4.5. Suppose h is a simple non-G-cr subalgebra not of type A1 or W1 and that p = l + q
is a minimal parabolic containing it. Then (G, h, p) is (E8 , B2 , 5), (E7 , G2 , 7) or (E8 , G2 , 7); or l′
has a simple factor of type E6 or E7 .
Proof. Without loss of generality we may assume G is simply-connected. Suppose P = LQ with
Lie(P ) = p and let L′ = L1 . . . Lr be an expression for the derived subgroup of L as a central
product of simple factors not containing any exceptional factors. As each Li is simply connected,
we may write l′ = l1 ⊕ · · · ⊕ lr with li = Lie(Li ). Since P was minimal, we must have that the
projection of h to each li is Li -irreducible; call this hi .
Since h is not of type A1 , all of the Li factors have rank at least 2. As Li is classical by assumption,
it has a natural module Vi and it follows from Proposition 3.1 and Corollary 4.4 that there exists an
Li -irreducible restricted subgroup Hi whose action on Vi differentiates to that of hi . The action of
hi on Vi determines it up to Li -conjugacy, except if Li is of type Dn and there are two conjugacy
classes interchanged by a graph automorphism (or up to three if Li is of type D4 ). Thus we may
write h = Lie(H) for H a diagonal subgroup of H1 . . . Hr . One may now compute a list of possible
h-factors of q by differentiating the H-factors on Q; the latter are available from [LS96, Lem. 3.4].
By (4) this implies (H, p, λ) = (B2 , 5, (2, 0)), (B2 , 5, (1, 3)) or (G2 , 7, (2, 0)).
Suppose H is a subgroup of type B2 . The only Levi factors for which L(2, 0) or L(1, 3) occur as an
H-composition factor are A3 A4 and D7 . Hence G is of type E8 . Similarly, if H is a subgroup G2
then the only Levi factors for which L(2, 0) occurs as an H-composition factor are A6 and D7 , and
so G is of type E7 or E8 .
The next lemma reduces the proof of Theorem 1.4 to considering all simple subalgebras of rank at
most 2.
Lemma 4.6. Suppose h is a simple subalgebra of rank at least 3. Then h is G-cr.
Proof. Statement (4) above implies that we are done unless h is of type A3 , A4 or A6 . Firstly,
suppose h is of type A3 and is a non-G-cr subalgebra of g. Then (4) tells us p = 5 and either
V := L(3, 1, 0) or its dual is an h̄-composition factor of q. Now dim V = 52 and so we are forced
to conclude that G is of type E8 . By Lemma 4.5 and dimensions again, we must have l′ of type
E7 . But the only non-trivial factor of q has dimension 56. If V were a composition factor of the
self-dual module V56 , then so would be its dual; a contradiction by dimensions.
Now suppose h is of type A4 . Statement (4) tells us p = 5 and q contains an h̄-composition
factor V := L(1, 0, 0, 1). By Lemma 4.5 we may also assume that G is of type E7 or E8 with l′
chosen minimally of type E6 or E7 . If l′ is of type E6 , the non-trivial l′ -composition factors of q
are either V27 or its dual. Since V appears amongst q ↓ h̄, the h̄-composition factors of V27 are
L(1, 0, 0, 1)/k4 . Since the restriction of a natural module L(1, 0, 0, 0) for h to a Levi sl2 -subalgebra
has composition factors L(1)/k3 , and L(1, 0, 0, 1) is a composition factor of L(1, 0, 0, 0)⊗L(0, 0, 0, 1),
we may calculate that the restriction to a Levi sl2 -subalgebra of V27 is a completely reducible module
with composition factors L(2)/L(1)6 /k12 . A non-zero nilpotent element of this subalgebra acts with
Jordan blocks 3 + 26 + 112 , though this is impossible by [Ste16, Table 4]. In case l′ is of type E7 ,
we see that V56 contains a composition factor L(1, 0, 0, 1) and the remaining composition factors
must have dimension 33 or less, and if not self-dual, must have dimension 16 or less. Up to duals,
the possible composition factors together with their restrictions to a Levi subalgebra s of type sl2
are in the following table.
λ              dim —       L(λ) ↓ s
(1, 0, 0, 1)               L(2) + L(1)6 + k8
(1, 0, 0, 0)               L(1) + k3
(0, 1, 0, 0)               L(1)3 + k4
The restriction of any resulting module to s is completely reducible, and so the Jordan blocks
of a non-zero nilpotent element e ∈ s are determined by the h̄-composition factors on V56 . It is
easily checked that there is no way of combining these composition factors compatibly with the
possibilities in [Ste16, Tables 2, 3].
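Since these restrictions are completely reducible with restricted factors, the Jordan blocks of e on any candidate combination are determined mechanically: each factor L(d) contributes a single block of size d + 1. A minimal Python sketch of this bookkeeping (ours, purely illustrative; the comparison with [Ste16, Tables 2, 3] is still carried out by hand):

# Illustrative sketch: Jordan partition of a nilpotent e acting on a completely
# reducible module with restricted sl2-composition factors L(d)^mult, d <= p - 2.
def jordan_blocks(factors):
    blocks = []
    for d, mult in factors:            # (d, mult) stands for L(d)^mult
        blocks += [d + 1] * mult       # L(d) gives a single Jordan block of size d + 1
    return sorted(blocks, reverse=True)

# Example from the A4 case above: V27 restricted to a Levi sl2-subalgebra has
# factors L(2)/L(1)^6/k^12, so e acts with Jordan blocks 3 + 2^6 + 1^12.
print(jordan_blocks([(2, 1), (1, 6), (0, 12)]))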
Finally, suppose that h is of type A6 . Arguing in a similar fashion, we find that G is of type E8 with
p = 7; the type of l′ is E7 ; and the h̄-composition factors on V56 are L(̟1 + ̟6 )/k9 . Restricting to
a Levi sl2 -subalgebra and comparing once again with [Ste16, Tables 2, 3] yields a contradiction.
4.1. Subalgebras of type A1 . In this section we show that Theorem 1.4 holds in case h is of type
A1 . The following also deals with one direction of Theorem 1.3.
Lemma 4.7. Let G be a reductive algebraic group and let p > 2. Then whenever p ≤ h(G), there
exists a non-G-cr sl2 -subalgebra containing a regular nilpotent element e, a toral element h and an
element f regular in a proper Levi subalgebra of g.
Proof. It suffices to tackle the case where G is simple. Let gZ be a lattice defined via a Chevalley
basis in the simple complex Lie algebra gC of the same type as g and let e be the regular element
given by taking a sum of all simple root vectors in gZ . Then there is an element f ∈ gZ which is a
sum of negative root vectors such that (e, h, f ) is an sl2 -triple. These are easily constructed in the
case G is classical and are given explicitly by [Tes92, Lem. 4] in the case G is exceptional:
(i) g = G2 , f = 6e−α1 + 10e−α2 ;
(ii) g = F4 , f = 22e−α1 + 42e−α2 + 30e−α3 + 16e−α4 ;
(iii) g = E7 , f = 34e−α1 + 49e−α2 + 66e−α3 + 96e−α4 + 75e−α5 + 52e−α6 + 27e−α7 ;
(iv) g = E8 , f = 92e−α1 + 136e−α2 + 182e−α3 + 270e−α4 + 220e−α5 + 168e−α6 + 114e−α7 + 58e−α8 .
Reducing everything modulo p > 2 gives an sl2 -subalgebra h of g. Moreover, since p < h(G) we have
that e[p] 6= 0, which follows from the description of the Jordan blocks of regular nilpotent elements
in [Jan04, §2] for G classical, and is immediate from [Law95, Table D] in case G is exceptional.
Therefore, in each case h is a non-p-subalgebra of type A1 (noting that there is only one p-structure
on h by [SF88, Cor. 2.2.2(1)]). Since E6 contains F4 as a p-subalgebra, the non-p-subalgebra h
contained in F4 is also a non-p-subalgebra of E6 .
Suppose G is not of type Ap−1 . Then since h contains a regular nilpotent element of g, it is certainly
not contained in a Levi subalgebra of type Arp−1 , hence being non-p-subalgebras, these subalgebras
are non-G-cr, by Lemma 4.3. In particular each is in a proper parabolic subalgebra of g. Thus f is
no longer regular in g. This can be seen explicitly in the case g is exceptional as p < h(G) implies
that p divides at least one of the coefficients of the root vectors of f . In the classical case, e acts as
a single Jordan block on the natural module in types A, B and C acting on standard (orthonormal)
basis vectors as e(ei ) = ei−1 whereas f (ei ) = λi ei+1 for λi the coefficient of the simple root vector
e−αi . Since h is in a proper parabolic subalgebra, it stabilises a subspace, meaning that some
non-zero collection of the λi must be congruent to 0 modulo p. The remainder determine a regular
element in some proper Levi subalgebra obtained by removing the appropriate nodes of the Dynkin
diagram. In type Dn , the regular nilpotent element e acts with Jordan blocks of size 2n − 1 and 1
on the natural module, and only regular elements act in such a way. Since V |so2n−1 ≅ V ′ ⊕ k for V ′
the natural module for so2n−1 , and a regular element of the latter acts as a full Jordan block on V ′ ,
we must have that e is contained in a subalgebra of type so2n−1 . Hence we may embed e in a regular
sl2 -subalgebra h in the so2n−1 subalgebra such that h is toral and f is regular in a Levi subalgebra
of so2n−1 . This implies that f is regular in a Levi subalgebra of g = so2n . Finally to check the
theorem in case G of type Ap−1 , simply observe from Proposition 2.6 that sl2 has indecomposable
representations of dimension p of the form k|L(p − 2), which can be used to embed h in g such that
h does not act completely reducibly on the natural module for G, with e acting as a regular nilpotent element but f not.
The subalgebra h is then non-G-cr by Lemma 3.2.
(Specifically, we may take e to be the p × p matrix with 1 in each superdiagonal entry and 0 elsewhere, and f to be the p × p matrix with subdiagonal entries λ1 , λ2 , . . . , λp−1 and 0 elsewhere, where λi = −i(i + 1) mod p; note that λp−1 = 0. Thus f is a regular nilpotent element of type Ap−2 .)
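These relations can also be verified directly by machine; here is a minimal Python sketch (ours — the paper's own computations use GAP, and the indexing of the λi follows the convention just stated) checking the sl2 -relations and the Jordan types of e and f modulo p.

# Illustrative sketch: verify that the matrices e, f above span an sl2 modulo p,
# with e a single Jordan block of size p and f of Jordan type (p - 1, 1).
p = 7   # any odd prime

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(p)) % p for j in range(p)]
            for i in range(p)]

def bracket(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[(AB[i][j] - BA[i][j]) % p for j in range(p)] for i in range(p)]

e = [[1 if j == i + 1 else 0 for j in range(p)] for i in range(p)]        # superdiagonal 1s
lam = [(-i * (i + 1)) % p for i in range(1, p)]                           # lambda_1,...,lambda_{p-1}; lambda_{p-1} = 0
f = [[lam[j] if i == j + 1 else 0 for j in range(p)] for i in range(p)]   # subdiagonal lambda_i

h = bracket(e, f)
assert bracket(h, e) == [[(2 * x) % p for x in row] for row in e]         # [h, e] = 2e
assert bracket(h, f) == [[(-2 * x) % p for x in row] for row in f]        # [h, f] = -2f

def rank(M):                                                              # Gaussian elimination over GF(p)
    M, r = [row[:] for row in M], 0
    for c in range(p):
        piv = next((i for i in range(r, p) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(p):
            if i != r and M[i][c]:
                M[i] = [(M[i][j] - M[i][c] * M[r][j]) % p for j in range(p)]
        r += 1
    return r

assert rank(e) == p - 1   # e: one Jordan block of size p, i.e. regular in sl_p
assert rank(f) == p - 2   # f: Jordan type (p-1, 1), i.e. regular in a Levi of type A_{p-2}
print("checked for p =", p)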
Lemma 4.8. Theorem 1.4 holds in the case h is of type A1 .
Proof. The case p ≤ h(G) is supplied by Lemma 4.7, so suppose p > h(G) and h is a non-G-cr
subalgebra of type A1 . By statement (4), there is an h̄-composition factor of q isomorphic to
L(p − 2), of dimension p − 1. In each case we will show this is impossible.
When g is of type G2 , we have h(G) = 6 and the largest dimension l′ -composition factor occurring
is 4-dimensional. Hence g is not of type G2 .
When g is of type F4 , we have h(G) = 12. Only a Levi subalgebra of type C3 has an l′ -composition
factor of dimension 12 or more; the composition factor is L(0, 0, 1), which is 14-dimensional. Using
Proposition 3.1 we see that a Lie algebra of type C3 contains two subalgebras of type A1 not
contained in parabolics when p > 12, acting either as L(5) or L(3) + L(1) on the natural module.
Since Λ3 (L(1, 0, 0)) ≅ L(0, 0, 1) + L(1, 0, 0), one calculates that the composition factors of such sl2
subalgebras on L(0, 0, 1) are L(9)/L(3) and L(5)/L(3)2 , respectively. In particular, neither has a
composition factor L(p − 2) for p > 12 and so g is not of type F4 .
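The weight bookkeeping behind these two restrictions can be reproduced mechanically; a minimal Python sketch (ours, purely illustrative) computes the sl2 -weights on Λ3 of the 6-dimensional natural module and peels off highest weights. Subtracting the factors of the natural module itself (the summand L(1, 0, 0)) then recovers L(9)/L(3) and L(5)/L(3)2 on L(0, 0, 1).

# Illustrative sketch: sl2-composition factors of the third exterior power of the
# natural C3-module, for sl2 acting as L(5) and as L(3) + L(1) (p > 12, so the
# relevant weights behave as in characteristic zero).
from itertools import combinations
from collections import Counter

def peel(weights):
    # Greedily peel off copies of L(d) = {d, d-2, ..., -d} from a weight multiset.
    weights, factors = Counter(weights), []
    while sum(weights.values()) > 0:
        d = max(w for w, m in weights.items() if m > 0)
        factors.append(d)
        for w in range(d, -d - 1, -2):
            weights[w] -= 1
    return sorted(factors, reverse=True)

for name, wts in [("L(5)", [5, 3, 1, -1, -3, -5]),
                  ("L(3)+L(1)", [3, 1, -1, -3, 1, -1])]:
    cube = [sum(c) for c in combinations(wts, 3)]
    print(name, "-> wedge^3 has factors", ["L(%d)" % d for d in peel(cube)])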
When g is of type E6 , we also have h(G) = 12. Since all of the Levi subalgebras of g are of classical
type, we use Proposition 3.1 to find the L-irreducible subalgebras of type A1 . As in the F4 case, it
is straightforward to check that none of them has a composition factor L(p − 2) when p > 12. (To
find restrictions of the spin modules for Levi subalgebras of type D, one uses [LS96, Prop. 2.13].)
When g is of type E7 , we have h(G) = 18. The same approach as used above rules out h̄ ⊆ l for l
consisting of classical Levi factors. Suppose l′ is of type E6 and that h̄ is a subalgebra of type A1
with a composition factor L(p−2) on V27 . Then the action of a regular nilpotent element of h̄ on V27
has a Jordan block of size at least p − 1 ≥ h(G) = 18. This is a contradiction, since [Ste16, Table 4]
shows that the Jordan blocks of the action of any nilpotent element of E6 have size at most 17.
For g of type E8 , we have h(G) = 30. As above, one similarly rules out the cases where L is classical
or type E6 . So suppose L is of type A1 E6 . Restricting to the first factor, the only composition
factors are trivial or isomorphic to L(1), while the composition factors for the second factor on
q are trivial, or isomorphic to V27 or its dual. Since h̄ has a composition factor of high weight
p − 2 on q it follows that h̄ acts on V27 with a composition factor of high weight p − 3 or p − 2.
Therefore, the action of a regular nilpotent element e of h̄ on V27 has a Jordan block of size at least
p − 2 ≥ h(G) − 1 = 29. This contradicts [Ste16, Table 4], as before. Finally, if l′ is of type E7 , then
V56 has an h̄-composition factor L(p − 2). The action of a regular nilpotent element e of h̄ has a
Jordan block of size at least p − 1 ≥ h(G) = 30 on V56 . Using [Ste16, Tables 2, 3], we see this is a
contradiction since the largest Jordan block of a nilpotent element acting on V56 is 28.
4.2. Subalgebras of rank 2. In this section we prove Theorem 1.4 holds for all simple subalgebras
of g of rank 2 in a series of lemmas, taking each isomorphism class of h in turn. First of all we turn
to a very special case.
Lemma 4.9. Let p = 5, g = E7 and h ≅ sl3 be a p-subalgebra of g. Suppose the highest root
element e = eα̃ ∈ h is nilpotent of type A4 + A1 . Then h = Lie(H) for H a maximal closed
connected subgroup of type A2 in G.
Proof. One has in h the relation [eα , eβ ] = e, with eα , eβ ∈ he ⊂ ge ⊆ g(≥ 0). Now one
sees from [LT11] that ge (0) is toral, hence the nilpotence of eα and eβ imply that they are in
fact contained in ge (> 0). The projections eα , eβ to ge (1) are hence non-trivial and we must have
[eα , eβ ] = e. Recall e is contained in an sl2 -triple (e, hα̃ , fα̃ ) ∈ h × h × h. Now [hα̃ , e] = 2e and hα̃
is in the image of ad e. Since e has a factor of type Ap−1 in g there is more than one element hα̃
satisfying these conditions. Indeed if h1 ∈ Lie(τ (Gm )) has this property, then so does h′ := h1 +λh0
for h0 ∈ z(sl5 ), where the projection of e to its A4 factor is regular in sl5 (for more details, see
the argument in [HS16a, §A.2]). The cases where λ ∈ F5 give all such instances where h′ is also a
toral element, therefore we may restrict to these five cases. Furthermore, the cases where λ is 1 or
2 respectively, are conjugate to the cases where λ is 4 or 3 respectively, via a representative of an
element of the Weyl group of E7 inducing a graph automorphism on the A4 factor, and centralising
the A1 factor ([Car72, Table 10]). Now it is an easy direct calculation using the elements given
in [HS16a, Table 4] that if λ 6= 0 then a basis for the elements of ge (1) on which h′ has weight 1
is {e−α7 , eα6 +α7 }. One checks that the commutator of these two is eα6 . The latter is not of
type A4 + A1 . Hence we conclude λ = 0.
As hα̃ ∈ Lie(τ (Gm )), there is a standard sl2 -triple (e, hα̃ , f ), that is, a regular sl2 -subalgebra of a Levi
subalgebra of type A4 A1 . We have fα̃ − f ∈ ge ⊆ ge (≥ 0) so that fα̃ projects to f in g(−2). Now
τ −1 is associated to f and hence ad f is injective on ge (1 + rp) for each r ≥ 1. It follows that eα and
eβ can have no non-zero component in ge (1 + rp), in other words they are homogeneous in ge (1).
Looking at ge (1) in [LT11], we conclude that eα and eβ are both of the form
λ1 · e000011 +4 · λ2 · e111100 + 3 · λ2 · e111110 + 4 · λ2 · e012100 + λ2 · e011110 + λ3 · e−000001
0
0
1
1
0
1
+ 2 · λ4 · e−111100 + 4 · λ4 · e−011100 + 4 · λ4 · e−001110 + λ4 · e−011110 ,
0
1
1
0
with λi ∈ k. If eα arises from the coefficients (λ1 , λ2 , λ3 , λ4 ) and eβ from (µ1 , µ2 , µ3 , µ4 ) then
calculating the commutator and insisting that the answer be e one sees that the equations
(∗)    λ1 µ2 = λ2 µ1 ,   λ4 µ3 = λ3 µ4 ,   λ4 µ2 − λ2 µ4 = 1,   λ1 µ3 − λ3 µ1 = 1
must be satisfied. If µ1 6= 0 then by replacing eα by eα − νeβ for suitable ν, we may assume that
λ1 = 0. Otherwise we may swap eα and eβ to assume λ1 = 0. Then using the equations of (∗)
we have λ3 µ1 = −1, thus µ1 6= 0 and so λ2 = 0. Subsequently λ4 µ2 = 1. Now replacing eα by
a multiple we can arrange λ3 = 1, thus µ1 = −1. Additionally, (using [LT11]) one checks that
the element h1 (t6 )h2 (t9 )h3 (t12 )h4 (t18 )h5 (t15 )h6 (t10 )h7 (t5 ) centralises e and the element of ge (1)
corresponding to coordinates (0, 0, 1, 0), while acting as a non-trivial scalar on (0, 0, 0, 1). It follows
that we may replace eα with a conjugate such that its coordinates are (0, 0, 1, 1), that is, λ1 = λ2 = 0
and λ3 = λ4 = 1. Thus µ2 = 1 and µ3 = µ4 . Replacing now eβ by eβ − µ3 eα , we may assume
µ3 = µ4 = 0 so that the coordinates of eβ are (−1, 1, 0, 0). Hence eα and eβ are completely
determined.
Now we show that fα̃ is unique up to conjugacy by Ge ∩ Ghα̃ ∩ Geα ∩ Geβ . Again if (e, hα̃ , f )
is a standard sl2 -triple, then fα̃ − f ∈ ge = ge (≥ 0) and, as hα̃ has weight −2 on it, in fact
fα̃ − f ∈ ⊕r>0 ge (−2 + rp) = ge (3) ⊕ ge (8). Checking [LT11], we have that fα̃ is of the form
λ1 ·e111000 + λ2 · e111110 + λ2 · e012110 − λ3 · e123211 + λ3 · e123221 + e−100000 + 2 · e−000000
1
0
1
2
1
1
1
+ 2 · e−001000 + e−000010 − λ4 · e−001100 + λ4 · e−011100 + λ5 · e−112211 + λ5 · e−012221
1
1
0
1
0
0
In h we have the relation [[fα̃ , eα ], eα ] = 0. This implies λ2 = λ3 = λ4 = 0. Additionally, the
relation [[fα̃ , eβ ], eβ ] = 0 implies λ5 = 0. Lastly, [[eα , fα̃ ], fα̃ ] = 0 implies λ1 = 0. Thus fα̃ is fully
determined.
We obtain in addition e−α = [fα̃ , eβ ] and e−β = [eα , fα̃ ] giving in total,
eα := e−000001 + 2 · e−111100 + 4 · e−011100 + 4 · e−001110 + e−011110
0
1
1
0
0
eβ := −1 · e000011 + 4 · e111100 + 3 · e111110 + 4 · e012100 + e011110
1
1
0
1
0
e−α := e000001 + 2 · e111100 + 4 · e011100 + 4 · e001110 + 4 · e011110
0
1
1
0
0
e−β := e−000011 + 4 · e−111100 + 2 · e−111110 + e−012100 + 4 · e−011110 .
0
0
1
1
1
It is automatic from the Serre relations that these elements generate a subalgebra isomorphic to
A2 . However, chasing through the proof of [LS04, Lem. 4.1.3], we discover that the maximal A2
subgroups of E7 (which form a single conjugacy class) have Lie algebras whose root elements are of type A4 + A1 .
It follows that h = Lie(H) for one of these subgroups.
Lemma 4.10. Theorem 1.4 holds when h is of type A2 .
Proof. Suppose, for a contradiction, that h is a non-G-cr subalgebra of type A2 . By (4) we have
that there is an h̄-composition factor V of q with V = L(λ) or L(λ)∗ where λ is (3, 1) or (3, 3) when
p = 5, or (5, 1) when p = 7. Furthermore, by Lemma 4.5 we may assume that l′ is of type E6 or
E7 and that h̄ is a p-subalgebra by Corollary 4.4.
λ         dim L(λ)    L(λ) ↓ sα̃                        weights
(2, 2)    19          L(4) + L(3)2 + L(2)2              43 /35 /25 /13 /03
(3, 1)    18          L(4) + L(3)2 + L(2) + L(1)        44 /34 /24 /14 /02
(3, 0)    10          L(3) + L(2) + L(1) + k            42 /32 /22 /12 /02
(1, 1)    8           L(2) + L(1)2 + k                  42 /3/2/12 /02
(2, 0)    6           L(2) + L(1) + k                   4/3/2/1/02
(1, 0)    3           L(1) + k                          4/1/0
Table 2. The restrictions of various sl3 -modules
Suppose p = 5. Since dim L(3, 3) = 63, it cannot occur as an h̄-composition factor of q. We will
require more work to show that L(3, 1) or L(1, 3) cannot occur as an h̄-composition factor of q.
Since the argument is the same for each, we assume V = L(3, 1).
By Lemma 2.9, any root vector of h̄, say eα̃ corresponding to the highest root α̃, has the property
that it is reachable. Then the possibilities are given by [PS16]. Let also hα̃ be an element in the
Cartan subalgebra of h̄ for which there is a nilpotent element fα̃ with sα̃ = ⟨eα̃ , hα̃ , fα̃ ⟩ ≅ sl2 .
We need certain data about the restrictions to sα̃ of all irreducible restricted non-self-dual h̄-modules
of dimension at most 10 and all self-dual irreducible h̄-modules of dimension at most 20. We also
note that eα̃ has a Jordan block of size 5 on V .
Suppose first that l′ is of type E6 . Then V occurs as a h̄-composition factor of V27 or its dual.
One finds that in E6 there is just one reachable element with a Jordan block of size at least 5 on
V27 , namely that with label 2A2 + A1 . Using Table 2, we will now compare the weights of hα̃ on
V27 with those on various modules of dimension 27 containing V as a composition factor. It will
be convenient to do this by considering the action of elements of E6 on elements of q/[q, q] where
q is the nilradical of an E6 parabolic subalgebra of E7 . The tables in [LT11] give a cocharacter τ
associated to 2A2 + A1 in E7 . As the orbit of eα̃ does not contain a factor of type Ap−1 , by Proposition 2.8
we may assume hα̃ ∈ Lie(τ (Gm )) hence it suffices to compute the weights of τ on q/[q, q], which is
an easy computation in the root system of E7 (and which we perform in GAP). We find the weights
of τ on V27 are −4/ − 32 / − 24 / − 14 /05 /14 /24 /32 /4. Reduction modulo 5 then gives the weights of
hα̃ on V27 , namely 45 /36 /26 /15 /05 . Since one composition factor is V = L(3, 1) itself, we must find
4/32 /22 /13 /03 from the remaining composition factors of V27 . It is then a straightforward matter
to compare this with the weights of the tables above to see that this is impossible.
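The reduction step is elementary and can be reproduced in a few lines of Python (an illustrative sketch of ours; the τ -weights themselves come from the GAP computation just described):

# Illustrative sketch: reduce the tau-weights on V27 modulo p = 5 to obtain the
# weights of h_{alpha-tilde} quoted above.
from collections import Counter
tau_weights = [-4] + [-3]*2 + [-2]*4 + [-1]*4 + [0]*5 + [1]*4 + [2]*4 + [3]*2 + [4]
print(sorted(Counter(w % 5 for w in tau_weights).items(), reverse=True))
# expected: [(4, 5), (3, 6), (2, 6), (1, 5), (0, 5)], i.e. 4^5/3^6/2^6/1^5/0^5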
Thus we must have l′ of type E7 . Then V and its dual occur on the self-dual module V56 ↓ h̄. It
follows that a subset of the composition factors of V56 ↓ sα̃ is L(4)2 /L(3)4 /L(2)2 /L(1)2 . Hence a
root element eα̃ of sα̃ is represented on V56 by a matrix whose 4th power has rank at least two
and 3rd power has rank at least 6. Since it is also reachable, by comparison with [PS16] there
are just three options for the nilpotent orbit containing eα̃ , namely A3 + A2 + A1 , 2A2 + A1 and
A4 + A1 . The case A3 + A2 + A1 can be ruled out: in both sl3 and g there is a unique toral element
h ∈ im ad eα̃ which has weight 2 on a highest root element, and has weight 1 on the simple root
elements. As geα̃ ⊆ geα̃ (≥ 0), it follows that geα̃ (1) must be non-zero. But [LT11] reveals that for
eα̃ in E7 of type A3 + A2 + A1 the space geα̃ (1) is zero. (The element eα̃ can in fact be found in
[geα̃ (0), geα̃ (2)].) This is a contradiction. Hence eα̃ is not of type A3 + A2 + A1 .
Suppose eα̃ is of type 2A2 + A1 . Since eα̃ is not regular in a Levi with a factor of type Ap−1 ,
Proposition 2.8 implies that there is a unique hα̃ ∈ h such that hα̃ has weight 2 on eα̃ and hα̃ ∈
im ad eα̃ ; we take hα̃ ∈ Lie(τ (Gm )) for τ an associated cocharacter to eα̃ . The precise description
of such a τ comes from [LT11]. In an E7 -parabolic subalgebra of E8 , the space q/[q, q] affords a
representation V56 for the Levi. Applying τ to the roots of q/[q, q] gives the following multiplicities
for τ :
wt -4 -3 -2 -1 0 1 2 3 4
dim 2 4 8 8 12 8 8 4 2
Since ad eα̃ kills the 2-dimensional space of τ -weight 4, on which hα̃ acts with weight 4, we must have two composition
factors isomorphic to L(4). These afford weights (4, 2, 0, −2, −4) on q/[q, q]. Removing the weights
of these composition factors and considering the remaining weights, we see inductively that the
composition factors of sα̃ are L(4)2 /L(3)4 /L(2)6 /L(1)4 /k4 .
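This inductive step amounts to the observation that, with each L(d) contributing the integer weight string d, d − 2, . . . , −d, the multiplicity of L(d) equals the number of weights equal to d minus the number equal to d + 2. A short Python sketch (ours, purely illustrative) reproduces the factors just listed from the table of τ -multiplicities:

# Illustrative sketch: composition-factor multiplicities from the tau-weight table.
mult = {-4: 2, -3: 4, -2: 8, -1: 8, 0: 12, 1: 8, 2: 8, 3: 4, 4: 2}
for d in (4, 3, 2, 1, 0):
    print("L(%d):" % d, mult[d] - mult.get(d + 2, 0))   # gives 2, 4, 6, 4, 4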
Both V and its dual occur as composition factors of h̄ on V56 , so using Table 2, we find that the restriction to sα̃ of the remaining h̄-composition factors of V56 has composition factors L(2)4 /L(1)2 /k4 .
Again using Table 2, we find that either L(1, 1) occurs or both L(2, 0) and L(0, 2) occur as h̄-composition factors of V56 . In the first case, the remaining sα̃ -composition factors are L(2)3 /k3 .
But no combination of modules in Table 2 has such a restriction, a contradiction. So L(2, 0) and
L(0, 2) both occur, and the remaining sα̃ -composition factors are L(2)2 /k2 . Again we see that this
is impossible. This rules out the case eα̃ of type 2A2 + A1 .
If eα̃ is of type A4 + A1 , then Lemma 4.9 implies that h = Lie(H) for H a maximal A2 subgroup
of G. However from [LS04, Lem. 4.1.3] we have found the Lie algebra of the maximal connected
subgroup of type A2 in E7 . But this does not act on V56 with a composition factor of high weight
(3, 1) or (1, 3). This contradiction completes this case.
Finally, we consider the possibility that V = L(5, 1) when p = 7. Since dim L(5, 1) = 33 > 27, we
cannot have l′ of type E6 ; also the self-duality of V56 implies that we would need L(5, 1) and L(1, 5)
as composition factors of V56 , but 66 > 56. This is a contradiction.
Lemma 4.11. Theorem 1.4 holds when h is of type B2 .
Proof. Assume h is of type B2 and non-G-cr. By (4) we have at least one of L(2, 0) and L(1, 3)
occurs as an h̄-composition factor of q when p = 5 or L(4, 0) occurs as an h̄-composition factor of
q when p = 7. Moreover, by Lemma 4.5 we have g is of type E7 or E8 .
First suppose p = 5. When g is of type E8 we construct an example of a non-G-cr subalgebra
of type B2 . Let m be a maximal subsystem subalgebra of type D8 . We embed a subalgebra h of
type B2 into m via the representation T (2, 0) + k = k|L(2, 0)|k + k. (Note that T (2, 0) is self-dual
and odd-dimensional, hence is an orthogonal representation for h.) Therefore h is contained in a
D7 -parabolic subalgebra of m: by Proposition 3.1, it is in some parabolic, and it stabilises a singular
vector, the stabiliser of which is a D7 -parabolic. Thus h is in a D7 -parabolic of g. We know
that a Levi subalgebra of type D7 acts as L(̟1 )2 + L(̟2 ) + L(̟6 ) + L(̟7 ) + 0 on g. In particular,
the largest summand is 91-dimensional.
Now consider H, a subgroup B2 of G = E8 embedded into D8 via T (2, 0) + k. Then Λ2 (T (2, 0) + k) occurs as a direct summand of g ↓ H. For any p > 2, we have
that Λ2 W is a direct summand of W ⊗2 , hence if W is tilting, Λ2 W is also. In our case, this implies
Λ2 (T (2, 0) + k) ≅ T (2, 0) + L(0, 2) + T (2, 2). Therefore, H has a 95-dimensional indecomposable
summand M ≅ T (2, 2) ≅ L(2, 0)|L(2, 2)|L(2, 0) on g. We may identify T (2, 2) with the H1 -injective
hull of L(2, 0), which restricts to H1 indecomposably. As the category of representations for H1 is
equivalent to the category of p-representations for h, this shows h has an indecomposable summand
M ↓ h of dimension at least 95 on g; thus h cannot live in a Levi subalgebra of type D7 , proving it
is non-G-cr.
Now suppose g is of type E7 . Then by Lemma 4.5, we have l′ is of type E6 and V27 ↓ h̄ has a
composition factor L(2, 0) (we rule out L(1, 3) since it is 52-dimensional). Using [Lüb01, 6.22], the
only irreducible h̄-modules of dimension at most 14 are L(2, 0), L(1, 1), L(0, 2), L(1, 0), L(0, 1) and
k. Let s denote a long root sl2 -subalgebra of h̄. The following table lists the composition factors of
the restrictions of the irreducible h̄-modules above to s.
λ         dim L(λ)    L(λ) ↓ s
(0, 0)    1           k
(0, 1)    4           L(1)/k2
(1, 0)    5           L(1)2 /k
(0, 2)    10          L(2)/L(1)2 /k3
(1, 1)    12          L(2)2 /L(1)3
(2, 0)    13          L(2)3 /L(1)2
A non-zero nilpotent element e of s satisfies e ∈ [h̄e , h̄e ] i.e. it is reachable in g. Thus from [PS16]
it is of type A1 , 2A1 , 3A1 , A2 + A1 , A2 + 2A1 or 2A2 + A1 . As in Lemma 4.10 we establish that
the composition factors of s on V27 must be as follows:
O             V27 ↓ s
A1            L(1)6 /k15
2A1           L(2)/L(1)8 /k8
3A1           L(2)3 /L(1)6 /k6
A2 + A1       L(3)/L(2)4 /L(1)4 /k3
A2 + 2A1      L(3)2 /L(2)3 /L(1)4 /k2
2A2 + A1      L(4)/L(3)2 /L(2)3 /L(1)2 /k
Comparing the above two tables, we find that there is just one possibility: O is of type 3A1 and
the h̄-composition factors of V27 are L(2, 0)/L(1, 0)2 /k4 . Let s = ⟨e, h, f ⟩ ⊂ h̄ be an sl2 -triple. By
Proposition 2.8, up to conjugacy by Ge we have that h ∈ Lie(τ (Gm )) for τ an associated cocharacter
to e. Up to conjugacy then, we may assume e = eα1 + eα3 + eα6 , h = hα1 + hα3 + hα6 and there is a
nilpotent element e′ in an sl2 -subalgebra s′ of h̄ which commutes with e and h. The subalgebra s′
must also be a long root sl2 -subalgebra of h̄, and so e′ is also reachable, of type 3A1 . Now, from [LT11]
we see that ge (0) = Lie(Ce ) for Ce a reductive group of type A2 A1 . We have that Ce has six nilpotent
orbits on ge (0), corresponding to pairs of partitions of (3, 2); viz. {3, 2 + 1, 1 + 1 + 1} × {2, 1 + 1}. These can
be computed in GAP as having orbit types in E6 with labels 2A2 A1 , 2A2 , 3A1 , 2A1 , A1 , 0. Up to
conjugacy by Ce then, there is just one possibility for e′ , which may be taken as eα4 +α5 + eα5 +α6 .
The element es := e + e′ is then a subregular nilpotent element of h̄, which in g is checked to have
type A2 + 2A1 . A corresponding sl2 -subalgebra, ss say, has composition factors on L(2, 0) which
are L(4)/L(2)2 /k2 . But the existence of the L(4) composition factor is already incompatible with
the action of an ss on V27 from the table above. This is a contradiction.
Now suppose p = 7. Then by Proposition 2.6, we have L(4, 0) occurring as a h̄-composition factor of
q. Thus the h̄-composition factors on V56 are L(4, 0)/k 2 . One computes that the restriction of V56 to
a Levi subalgebra sl2 is completely reducible with composition factors L(4)5 /L(3)4 /L(2)3 /L(1)2 /k3 .
This action is inconsistent with any of the Jordan blocks in [Ste16, Tables 2, 3].
Lemma 4.12. Theorem 1.4 holds when h is of type G2 .
Proof. Assume that h is a non-G-cr subalgebra of type G2 . By (4), we have p = 7 and g is of type
E7 or E8 . In both cases, we will construct a non-G-cr subalgebra of type G2 .
Let g be of type E7 and p = 7. Then g contains a parabolic subalgebra p = l + q, with l′ of type
E6 and the l′ -composition factor of q is V27 . Now, l′ contains a maximal subalgebra h̄ of type G2
and V27 ↓ h̄ = L(2, 0). By Proposition 2.6, we have H1 (h̄, q) ≅ k. Thus, by Lemma 4.2, there must
exist a non-G-cr subalgebra h of type G2 .
Now let g be of type E8 and p = 7. Then g has a Levi factor l′ of type E7 and by Lemma 4.1, the
non-L′ -cr subalgebra of E7 constructed above is therefore a non-G-cr subalgebra of g.
4.3. Subalgebras of type W1 . Our strategy in this section is slightly different. When p = 7 and
g is of type F4 or p = 5, 7 and g is of type E6 , E7 or E8 we construct an example of a non-G-cr
subalgebra of type W1 . In all other cases, we use calculations in GAP to show that each subalgebra
of type W1 is G-cr. To do this we rely on the following result:
Lemma 4.13 (cf. [HS16a, Thm. 1.1]). Let g be a simple Lie algebra of exceptional type.
Suppose h ≅ W1 is a p-subalgebra of g and p is a good prime for g. Let ∂ ∈ h be represented by the
nilpotent element e ∈ g. Then the following hold:
(i) e is a regular element in a Levi subalgebra l of g and the root system associated to l is
irreducible.
(ii) For h(L) the Coxeter number of l, we have either p = h(L) + 1 or l is of type An and
p = h(L).
This will allow us to construct in GAP a generic subalgebra h of type W1 for a representative of
each possible nilpotent element representing ∂ and then show that X p−1 ∂ is also contained in l,
hence h = ⟨∂, X p−1 ∂⟩ is contained in l.
Lemma 4.14. Theorem 1.4 holds when h is of type W1 .
Proof. By Corollary 4.4, we may assume that h is non-G-cr and the projection of h to l′ is a
p-subalgebra unless p = 5, 7 and g has a Levi subalgebra of type Ap−1 .
When g is of type G2 all Levi factors are of type A1 , so this case is immediately discounted.
Now suppose g is of type F4 . When p = 5 we show that all subalgebras of type W1 are G-cr but
we postpone doing this here and give a general method below. When p = 7 we claim the following
subalgebra is non-G-cr:
h = ⟨e0100 + e0010 + e0001 , e−0122 + e−1222 + 4 · e−1231 ⟩.
By checking that the commutator relations hold, we see that h is isomorphic to W1 (with the first
generator mapped to ∂ and the second mapped to X p−1 ∂). Moreover, h is evidently contained in
a C3 -parabolic subalgebra. Now, using the MeatAxe in GAP, we calculate that the socle of the
adjoint module g ↓ h is 15-dimensional. On the other hand, any subalgebra of type W1 contained
in a Levi subalgebra of type C3 is conjugate to h̄ = ⟨e0100 + e0010 + e0001 , e−0122 ⟩ but using the
MeatAxe, we calculate that the socle of g ↓ h̄ is 24-dimensional. Therefore h is not contained in a
Levi subalgebra of C3 and is thus non-G-cr. When p ≥ 11, all subalgebras of g of type W1 are G-cr
by Lemma 4.13, since the largest Coxeter number of a proper Levi subalgebra of g is 6.
Now let g be of type E6 . We construct an example of a non-G-cr subalgebra of type W1 when
p = 5, 7. For p = 5 the subalgebra h ≅ W1 embedded in a Levi subalgebra of type A4 via the
representation k[X]/X p ≅ L(4)|k is non-A4 -cr and hence non-G-cr by Lemma 4.1. For p = 7,
consider the following subalgebra.
h = he10000 + e01000 + e00100 + e00010 + e00001 ,
0
0
0
0
0
e−11111 − 2 · e−11211 + e−12210 + e−01221 i
0
1
1
1
Again, one checks that h is isomorphic to W1 and is evidently contained in an A5 -parabolic subalgebra. We then use the MeatAxe to calculate that the socle of g ↓ h is 21-dimensional, whereas any
subalgebra of type W1 contained in a Levi subalgebra of type A5 acts on g with a 43-dimensional
socle. Therefore h is non-G-cr. When p ≥ 11, all subalgebras of type W1 are G-cr by Lemma 4.13,
since the largest Coxeter number of a proper Levi subalgebra of g is 8.
Finally, suppose g is of type E7 or E8 . Both contain an E6 -Levi subalgebra and therefore contain
a non-G-cr subalgebra of type W1 when p = 5, 7 by Lemma 4.1. We now consider the case p ≥ 11.
Therefore we have that h̄ is a p-subalgebra. By Lemma 4.13, it follows that (g, l′ , p) = (E7 , D6 , 11),
(E7 , E6 , 13), (E8 , D6 , 11), (E8 , E6 , 13), (E8 , D7 , 13) or (E8 , E7 , 19) and that ∂ = e is regular in l.
We rule out each possibility, as well as (F4 , B2 , 5), using calculation in GAP. All of the cases are
similar and we give the general method.
Let e be the regular nilpotent element of l. Then following the proof of [HS16a, Lem. 3.11] we
have an associated cocharacter τ with Lie(τ (Gm )) = ⟨X∂⟩. This cocharacter is explicitly given
in [LT11]. Now suppose X p−1 ∂ is represented by the nilpotent element f . Then as [X∂, X p−1 ∂] =
(p − 2)X p−1 ∂, one calculates that f is in the direct sum of the τ -weight spaces congruent to
−2p + 4 modulo p. As in [HS16a, §A.2] we use GAP to construct a generic nilpotent element f1
in such weight spaces. Using the commutator relations in W1 , for example ad(e)p−1 f = −e and
[f, ad(e)i (f )] = 0 for all 1 ≤ i ≤ p − 3, we then find that f1 , and hence f , is contained in l and thus
h = ⟨e, f ⟩ is contained in l. Thus all subalgebras of type W1 are G-cr in each possibility.
4.4. Proofs of Theorems 1.3 and 1.4. Of importance to us will be the following theorem:
Theorem 4.15 ( [HS16a, Thm. 1.3]). Let g be a simple Lie algebra of exceptional type.
Suppose p is a good prime for g and let h be a simple subalgebra of g. Then h is either isomorphic
to W1 or it is of classical type.
Proof of Theorem 1.4. As h must project isomorphically to a subalgebra of a proper Levi subalgebra
in good characteristic, the theorem now follows from Theorem 4.15 and Lemmas 4.6, 4.7, 4.8, 4.10,
4.11, 4.12 and 4.14 above.
Proof of Theorem 1.3. Suppose G is connected reductive with g its Lie algebra and h some semisimple subalgebra. Lemma 4.7 provides the forward implication and so it remains to prove the reverse
one. Since we are assuming that p > h(G), it is in particular a very good prime, and so we have
g ≅ g1 × g2 × · · · × gr × z where each gi is simple and z is a central torus of g. The parabolic subalgebras of g are the direct products of parabolic subalgebras of the simple factors, and similarly for
the corresponding Levi factors. Hence if h is in a parabolic subalgebra p of g, then it is in a Levi
subalgebra l of p if and only if the projection of h to each simple factor gi of g also has this property.
Thus we reduce the proof of the theorem to the case G is simple. Now if G is classical, the result
is supplied by Proposition 3.4. Thus we may assume that G is exceptional. It will be shown in
a forthcoming paper [PS] by A. Premet and the first author, that all semisimple subalgebras are
direct sums of simple Lie algebras when p > h(G). Putting this together with Theorem 4.15 and
Lemma 4.13, we have that h = h1 × · · · × hr with each hi a simple classical Lie algebra.
Assume h is a subalgebra of a parabolic subalgebra p = l + q of g. We will be done by Proposition
2.4 if we can show that any simple h-composition factor V ≅ V1 ⊗ · · · ⊗ Vr with Vi a simple hi -module
satisfies H1 (h, V ) = 0. By the Künneth formula, we are done if we can show that H1 (hi , Vi ) = 0 for
each i. This has already been shown in the proof of Theorem 1.4: in the context of statement (4)
we assumed the existence of such a module Vi with H1 (hi , Vi ) 6= 0 and derived a contradiction,
which proved Theorem 1.4.
5. Unique embeddings of nilpotent elements into sl2 -subalgebras. Proofs of
Theorems 1.1 and 1.6
Proof of Theorem 1.1. First suppose p > h(G). We wish to show that the bijection (*) from the
introduction holds. To start with, [Pom80] provides the surjectivity. It remains to prove that the
map is injective.
Let (e, h, f ) be an sl2 -triple of g. By Theorem 1.3 we have that h = ⟨e, h, f ⟩ is G-cr and by Lemma
4.3, we have that h is a p-subalgebra. In particular, the element e is nilpotent with e[p] = 0. We
will show, under our hypotheses, that h is L-irreducible in a Levi subalgebra l = Lie(L) of G if and
only if the element e of h is distinguished in l. One way round is easy: If h is contained in a Levi
subalgebra l and e is a distinguished element, then h cannot be in a proper parabolic subalgebra
of l, since if it were, then by Theorem 1.3, h, hence also e, would be in a proper Levi subalgebra of
l. This is a contradiction as e is assumed distinguished.
For the other direction, assume h is L-irreducible in some Levi subalgebra l = Lie(L) of g and
assume, looking for a contradiction, that e is not distinguished. Let us see that this implies h is in a
proper Levi subalgebra of l. To do this, note first that h is a subalgebra of the Lie algebra l′ = [l, l];
we have l′ is semisimple, due to our hypothesis that p > h(G). We wish to show that h centralises a
vector w ∈ l′ , since then h ⊆ (l′ )w will be in a proper parabolic subalgebra of l′ , hence by Theorem
1.3, in a proper Levi subalgebra of l′ . To see that h does indeed fix a vector, let us start by noting
that e has at least one Jordan block of size 1 on the adjoint module of l′ . For this, let k ⊆ l with e ∈ k
be a Levi subalgebra of l in which e is distinguished, and note that p > h(G) implies that k contains
no factors of type Ap−1 so that k = k′ ⊕ z(k). Since e is not distinguished in l, the Levi subalgebra k is proper in l, so z(k) 6= 0, which provides the existence of the requisite
Jordan block. Moreover, [HS16a, Prop. 3.3] implies that h ∈ k also, so that z(k) is also centralised by
h. Let 0 6= v ∈ z(k). Then [e, v] = 0 and v 6∈ im ad e. If [f, v] = 0 then h centralises v, so we are done.
Otherwise, consider the h-submodule W := ⟨v, ad(f )v, . . . , ad(f )p−1 v⟩. (This is a submodule since
each ad(f )i (v) is an ad h-eigenvector and so W is ad e-stable; additionally it is ad(f ) stable since
the fact that h is a p-subalgebra implies that ad(f )p = 0.) Since W ′ := ⟨ad(f )v, . . . , ad(f )p−1 v⟩ is
both ad e- and ad f -stable, we have that the h-submodule W is a non-trivial extension of W ′ by k.
But dim W ′ ≤ p − 1 and the only simple h-module which extends the trivial is the module L(p − 2)
of dimension p − 1. It follows that W ≅ k|L(p − 2). Let W̃ be an indecomposable summand
of l containing W as a submodule. We cannot have W̃ projective since then its restriction to
the subalgebra ke ⊂ h would give Jordan blocks of size p, which is not possible by the choice
of v. Hence W̃ is indecomposable and reducible. The structure of such modules was determined
in [Pre91] (see [Far09, §4.1] for a more recent account): they have Loewy length 2 with isotypic socle
and head. Thus the head of W̃ consists of trivial modules. But then the socle of W̃ ∗ ⊆ (l′ )∗ ≅ l′
consists of trivial submodules. This implies that h fixes a 1-space on l′ . This justifies the claim
that h is in a proper Levi subalgebra of l, which is the contradiction sought.
We now wish to show that if h is an L-irreducible subalgebra containing the nilpotent element e
distinguished in l, then it is unique up to conjugacy in L. For this, recall that for any nilpotent
element e there is, by [Pre95a], an associated cocharacter τ : Gm → L which gives a grading
l = ⊕i∈Z l(τ ; i). Moreover, the images of any two such cocharacters are conjugate by Le . By
Proposition 2.8, we may assume that if (e, h, f ) is an sl2 -triple, then the element h is contained in
Lie(τ (Gm )) and thus is unique up to conjugacy by Le , contained in the graded piece l(τ ; 0). Now
suppose e is distinguished and (e, h, f ) and (e, h, f ′ ) are two sl2 -triples. Then f − f ′ ∈ le = le (≥ 0).
But the weight of h on f and f ′ is −2, so that f − f ′ is an element of Σi>0 le (−2 + ip). Since
p > h(G) and the largest j such that l(j) 6= 0 is 2h − 2 (this follows from [McN05, Prop. 30]), we
have that Σi>0 le (−2 + ip) = le (−2 + p). But since e is distinguished in l we have that le (i) = 0
for all odd i, hence that f − f ′ = 0 as required. This proves Theorem 1.1 for p > h(G).
For p ≤ h(G) we appeal to Lemma 4.7. If (e, h, f ) is an sl2 -triple as described in the statement
of the lemma, then f is a regular nilpotent in a proper Levi subalgebra l, say. Thus by [Pom80]
(or just another application of Lemma 4.7) it can be embedded into an sl2 -triple (e′ , h′ , f ) inside
l. Since (e, h, f ) is not contained in l we have that (f, −h, e) and (f, −h′ , e′ ) are non-conjugate
sl2 -triples containing the common nilpotent element f as required.
Proof of Theorem 1.6. We wish to see that h = Lie(H) for H a good A1 -subgroup of G. Fix a non-zero
nilpotent element e ∈ h. The assumption p > h(G) implies that all unipotent elements of G are of order p. Let u be
one corresponding to e under a Springer isomorphism. Now [Sei00, Props. 4.1 & 4.2] furnish us with
a good A1 -overgroup of any unipotent element u. Since all unipotent elements of H are conjugate,
the Lie algebra of a root group of H will contain e. Thus e ∈ h = Lie(H) as required.
6. Complete reducibility of p-subalgebras and bijections of conjugacy classes of
sl2 -subalgebras with nilpotent orbits. Proofs of Theorems 1.2 and 1.5
In this section we prove that there is a bijection
{conjugacy classes of sl2 subalgebras} → {nilpotent orbits}
if and only if p > b(G), where b(G) is defined in the introduction. Here, the bijection is realised by
sending a conjugacy class of sl2 -subalgebras to the nilpotent orbit of largest dimension meeting it.
It is not a priori clear that this would be well defined (as one conjugacy class of sl2 subalgebras
could contain two non-conjugate nilpotent orbits of the same dimension) but we show that this
never happens.
We will need two lemmas.
Lemma 6.1. If p > b(G) then for any sl2 -triple (e, h, f ) the elements e and f are nilpotent and
the element h is toral.
Proof. Suppose the statement is false. Then we may assume that h = ⟨e, h, f ⟩ ≅ sl2 is a non-p-subalgebra. As there are no Levi subalgebras of type Arp−1 , we have that h is non-G-cr by
Lemma 4.3. Thus h lives in a parabolic subalgebra p = l + q which may be chosen minimally
subject to containing h such that the projection h̄ of h to a Levi subalgebra l is G-irreducible. By
Lemma 4.3 again, it follows that h̄ is a p-subalgebra. In particular, the images ē and f¯ of e and
25
f respectively are p-nilpotent. But as e and f are contained in the p-nilpotent spaces hēi + q and
hf¯i + q, respectively, they are also p-nilpotent.
Now h is a complement to q in the semidirect product h̄ + q and q has a filtration by restricted l-modules, thus a filtration by restricted h̄-modules. By (4), it follows that one of the h̄-composition factors of q is isomorphic to the h̄-module L(p − 2). We have H^1(sl2, L(p − 2)) ≅ k^2. Let us describe a set of cocycle classes explicitly. Define γ_{a,b} : sl2 → L(p − 2) on a basis e, h, f ∈ sl2 via γ_{a,b}(h) = 0, γ_{a,b}(e) = a·v_{−p+2}, γ_{a,b}(f) = b·v_{p−2}, where v_{p−2} and v_{−p+2} are a chosen pair of highest and lowest weight vectors in L(p − 2). Then one checks that the γ_{a,b} satisfy the cocycle condition (1), so for instance
0 = γ_{a,b}(h) = γ_{a,b}([e, f]) = e·γ_{a,b}(f) − f·γ_{a,b}(e) = 0 − 0.
Further, if for some v ∈ L(p − 2) we have γ_{a,b}(x) = γ_{a′,b′}(x) + x(v) for all x ∈ sl2, then applying to h, we see that h(v) = 0, so that 0 is a weight of L(p − 2). This happens if and only if p = 2, which is excluded from our analysis. Now, since H^1(sl2, L(p − 2)) ≅ k^2, we see that the classes [γ_{a,b}] are a basis for H^1(sl2, L(p − 2)). In particular, in any equivalence class of cocycles in H^1(sl2, L(p − 2)), there is a cocycle which vanishes on h. In the present situation, this means, following the argument in 2.4 (or simply by observing that the restriction map H^1(h̄, Lie(Qi/Qi+1)) → H^1(π(h), Lie(Qi/Qi+1)) is zero) that h can be replaced by a conjugate in which the element h ∈ h satisfies π(h) = h, so that in particular, h is toral.
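As a concrete sanity check of the cocycle claim, the following sketch (our own illustration, not code from the paper) realises L(p − 2) over F_p in the standard weight basis and verifies the 1-cocycle identity γ([x, y]) = x·γ(y) − y·γ(x) for γ_{a,b} on the basis pairs of sl2; the basis formulas used are the usual ones for the (p − 1)-dimensional simple module.

```python
import numpy as np

def check_cocycle(p, a=1, b=1):
    """Sketch: L(p-2) as the (p-1)-dimensional simple sl2-module over F_p in a
    weight basis; check gamma([x,y]) = x.gamma(y) - y.gamma(x) for gamma_{a,b}
    with gamma(h)=0, gamma(e)=a*(lowest wt vector), gamma(f)=b*(highest wt vector)."""
    m = p - 2
    n = m + 1                                    # dim L(p-2) = p-1
    E = np.zeros((n, n), int); F = np.zeros((n, n), int); H = np.zeros((n, n), int)
    for j in range(n):
        H[j, j] = (m - 2*j) % p                  # h.v_j = (m-2j) v_j
        if j + 1 < n:
            F[j+1, j] = (j + 1) % p              # f.v_j     = (j+1) v_{j+1}
            E[j, j+1] = (m - j) % p              # e.v_{j+1} = (m-j) v_j
    v_high = np.zeros(n, int); v_high[0] = 1     # weight  p-2
    v_low  = np.zeros(n, int); v_low[-1] = 1     # weight -p+2
    g = {"e": (a * v_low) % p, "f": (b * v_high) % p, "h": np.zeros(n, int)}
    X = {"e": E, "f": F, "h": H}
    bracket = {("e", "f"): ("h", 1), ("h", "e"): ("e", 2), ("h", "f"): ("f", -2)}
    for (x, y), (z, c) in bracket.items():
        lhs = (c * g[z]) % p
        rhs = (X[x] @ g[y] - X[y] @ g[x]) % p
        assert np.array_equal(lhs, rhs), (x, y)
    print("cocycle identity verified for p =", p)

check_cocycle(7)
check_cocycle(11, a=3, b=5)
```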
Lemma 6.2. Up to conjugacy by G = SO(V) or Sp(V), there is precisely one self-dual representation V of h := ⟨e, h, f⟩ ≅ sl2 of dimension r with p < r < 2p. We may construct V in such a way that e ∈ h acts with a single Jordan block of size r and such that f acts with Jordan blocks of sizes (p − 1 − i), (i + 1)^2. Moreover, V is uniserial with structure L(i)|L(p − 2 − i)|L(i) where r = p + i + 1. Conversely, if V is uniserial with structure L(i)|L(p − 2 − i)|L(i) then up to swapping e and f, we may assume e acts with a single Jordan block and V is self-dual.
Proof. Concerning the first sentence of the lemma, the existence of a representation V is implied
by [Pom80] applied to the regular nilpotent orbits in type Bn and Cn depending on the parity
of r. Suppose V did not consist of restricted composition factors. Then as p < r < 2p, there
would be precisely one which is not restricted, with at least one restricted factor. Thus V would
be decomposable and e would certainly not act with a single Jordan block. Hence the composition
factors of V are restricted. If the socle were not simple, then there would be two linearly independent
vectors u, v for which e · u = e · v = 0. But then the rank of e cannot be r − 1, contradicting the
hypothesis that it acts with a single Jordan block.
Thus the socle is simple, isomorphic to L(i) say. Then all the composition factors of V are L(i) or
L(p − 2− i) since otherwise the vanishing of Ext1 between either of these and any other composition
factor (by Lemma 2.7) would force a non-trivial direct summand. Also V / Soc V must contain a
submodule L(p − 2 − i), or the socle would split off as a direct summand. Since V contains the
submodule L(p − 2 − i)|L(i), self-duality forces it to contain a quotient L(i)|L(p − 2 − i). As the
simple socle is isomorphic to L(i), the head is also simple and isomorphic to L(i). Now, if there
were two composition factors isomorphic to L(p − 2 − i) then the dimension of V would be at least
2p, a contradiction. Thus the structure of V is precisely L(i)|L(p − 2 − i)|L(i), which has dimension
p + i + 1. Thus r = p + i + 1, as required.
We must now show that there is just one such representation up to conjugacy. As e has rank r − 1, and h is toral by 6.1, there must be a vector of weight −i, say w, generating V under h. We have that ⟨e^{r−i−1}·w, . . . , e^{r−1}·w⟩ spans Soc V, and if r − p ≤ j ≤ r − 1 then f·e^j·w ∈ ⟨e^{j−1}·w⟩, by uniqueness of the weights in the submodule L(p − 2 − i)|L(i). Also by uniqueness of the weights of the quotient L(p − 2 − i)|L(i) we must have f·e^i·w ∈ ⟨e^{i−1}·w, e^{i+p−1}·w⟩. As the endomorphism e^p commutes with e, f and h, the automorphism v ↦ v + e^p·v of V is an sl2-module homomorphism. Moreover, as e preserves the form on V, so does e^p and thus conjugating by an element 1 + e^p of G we may assume that f·e^j·w ∈ ⟨e^{j−1}·w⟩ for some 0 ≤ j ≤ i. It is easy to check that this determines the action of f completely, hence the action of h.
Finally suppose V is of the form L(i)|L(p − 2 − i)|L(i). Consider the submodule U with two
composition factors. Then it is clear that either e or f has a Jordan block of size at least p and by
applying an automorphism if necessary, we may assume the former. Thus there is a vector v−ı̄ of
h-weight −ı̄ where ı̄ = p − 2 − i under which e generates U. Let v−ı̄+2s := e^s · v−ı̄ for 0 ≤ s ≤ p − 1
so that U is spanned by the vj . Inductively this determines the action of f except for the action of
f on v−ı̄ itself. Since f sends this vector into the −ı̄ − 2 = ith weight space of U which is spanned
by vi , we have f · v−ı̄ = λvi , for some λ ∈ k. Now take an −ith weight vector w−i ∈ V \ U . Define
w−i+2s = e^s · w−i for 0 ≤ s ≤ i. Then V is spanned by the wj and vj. Furthermore the action of f
is determined except on w−i itself. Since the ı̄ = p − 2 − i weight space is 1-dimensional we must
have f · w−i = µvı̄ . But calculating h · w−i = e · f · w−i − f · e · w−i we get −iw−i = µv−i − iw−i .
Thus µ = 0. Now the wj do not generate a submodule, so we must have e · wi = νv−ı̄ for some
ν 6= 0. But then h · wi = e · f · wi − f · e · wi gives us iwi = iwi − νλv−ı̄ so that λ = 0. Replacing the
vj by their division by ν we see that the action of e, f and h are now completely determined. It is
easy to check directly that the module is self-dual and the Jordan blocks of f are as claimed.
Lemma 6.3. An indecomposable representation V of h := ⟨e, h, f⟩ ≅ sl2 of dimension r with p < r < 2p is either a quotient or submodule of a projective restricted module, or self-dual of the type described in Lemma 6.2.
Proof. A similar argument as used in the previous proof reduces us to the case that V consists of
restricted composition factors. We may assume V contains at least one composition factor each
of types L(i), L(ı̄) = L(p − 2 − i). We will show that the multiplicity of L(i) is 1 or 2. Assume
V has three composition factors isomorphic to L(i). It suffices to show there is no representation
V with composition series L(i)|L(ı̄)|(L(i) + L(i)). Let U be a module with composition series
L(ı̄)|(L(i) + L(i)). Then it is easy to see that e and f satisfy e^p = f^p = 0 as endomorphisms of U. Thus U is a restricted representation. As it has simple head it is a quotient of the projective cover P(ı̄) of L(ı̄) with composition series L(ı̄)|(L(i) + L(i))|L(ı̄). Let vı̄, . . . , v−ı̄ be a set of weight vectors for the 1-dimensional weight spaces of U coming from the composition factor L(ı̄). Then one can check that an action of h on U is given (up to isomorphism) by e·vı̄ = u−i and f·v−ı̄ = wi, where {ui, . . . , u−i} and {wi, . . . , w−i} are a basis of weight vectors for the socle of U.
Now if V exists of this form, we may take a weight vector y−i of weight −i in V \ U. Then y−i+2s := e^s·y−i for 0 ≤ s ≤ i together with u, v and w spans V. If e·yi ≠ 0 then it is a non-zero multiple of vı̄. Now h·yi = e·f·yi − f·e·yi leads to a contradiction. Similarly, we see that f·y−i = 0. But this implies that ⟨yi, . . . , y−i⟩ is a submodule, which is a contradiction.
Thus V contains at most two composition factors isomorphic to L(i). If it contains two, then it
contains at most one of type L(ı̄). It is now clear that there are only three possible structures for
V , namely L(ı̄)|(L(i) + L(i)) or its dual, or L(i)|L(ı̄)|L(i), which is unique up to isomorphism by
Lemma 6.2.
Proof of Theorem 1.2. The reduction to the case where G is simple is easy. If G is classical, we
may argue using the natural module V for G. If G is of type An , then whenever p > b(An ) = n + 1
we may appeal to Theorem 1.1. On the other hand, whenever p ≤ b(An ) there is an Ap−1 -Levi
subgroup L of G, for which h acts on the natural module for L′ as an indecomposable module
k|L(p − 2). Now there is a bijection between nilpotent orbits of Ap−1 and isomorphism classes of
completely reducible sl2 -representations via partitions of p, so that h is in an extra conjugacy class
of sl2 -subalgebras, showing that there is no bijection between the sets in (**). It remains to consider
the cases where G is of type Bn , Cn or Dn . Note that p > b(G) implies 2p > dim V for V the natural
module for G. First note that completely reducible restricted actions of h are in 1-1 correspondence
with nilpotent orbits of G, which have associated partitions of dim V of size at most p, and these
account for all completely reducible actions by Lemma 4.3. Moreover, this bijection is realised by
sending h to any of the (G-conjugate) nilpotent elements it contains. If V ↓ h is not completely
reducible, we have that V ↓ h contains an indecomposable summand W which is not irreducible.
Suppose L(i) is a submodule of W . We have 0 ≤ i < p − 1 since L(p − 1) is in its own block
and we can have at most one factor of this type in V by dimensions. Then U ≅ L(p − 2 − i)|L(i) must be a submodule of W also. If U were a direct summand then U∗ ≅ L(i)|L(p − 2 − i) is
another submodule of V . But there can be no intersection between U and U ∗ . This implies that
dim V ≥ 2p, a contradiction. Hence U is not a direct summand. By dimensions there is at most one
indecomposable summand W , taking one of the forms discussed in Lemma 6.3. If it is the quotient
or submodule of a projective then a similar argument shows it has no intersection with its dual,
contradicting the dimension of W . Hence U lies in an indecomposable direct summand of the type
discussed in Lemma 6.2. Being the unique such in V , it must be non-degenerate. Thus h lives in
Lie(X × Y ) where X is the stabiliser of W ⊥ in G, of type Sp(W ⊥ ), SO(W ⊥ ) or O(W ⊥ ) (as the case
may be) with Y a similar stabiliser of W in G. Now, the necessarily restricted completely reducible
image of h in Lie(X) is determined up to X-conjugacy by the image of any nilpotent element in h.
By Lemma 6.2 the projection of h to Lie(Y ) is determined uniquely up to conjugacy in Y , with an
element of largest orbit dimension acting with a full Jordan block on W . In particular, a nilpotent
element of h of largest orbit dimension always determines h up to conjugacy.
In case G is exceptional, the analysis here is very case-by-case. Firstly, let e be any nilpotent
element of h. By Lemma 6.1, there is a toral element h ∈ h such that [h, e] = 2e and h ∈ im ad e.
By Proposition 2.8 we have h ∈ Lie(τ (Gm )) for τ a cocharacter associated to e. Now in the
grading of g associated with τ, we have e ∈ g(2), h ∈ g(0) and since [e, f] = h, projecting f to its component f̄ ∈ g(−2), we must have (e, h, f̄) an sl2-triple, with f − f̄ ∈ ge = ge(≥ 0). As also [h, f − f̄] = −2(f − f̄), we have f − f̄ ∈ ⊕_{r>0} ge(−2 + rp). If the subspace ⊕_{r>0} ge(−2 + rp) is trivial, we are automatically done. Looking at the tables in [LT11],^5 this already rules out 111 of the 152 orbits.
The strategy employed in the remaining cases is more subtle. The idea is to work inductively
through the remaining nilpotent orbits from largest dimension downwards, proving that for a given
nilpotent element e of orbit dimension d there is just one conjugacy class of nilpotent elements f
whose orbit dimension is d or lower and such that (e, h, f ) is an sl2 -triple. That is, we will show
that whenever f is not conjugate to f¯ by an element in Ge ∩ Gh , then f has higher orbit dimension
than that of e. To show this, we will effectively find all possible f such that (e, h, f ) is an sl2 -triple
and check each case.
To progress further, recall that Ge is the semidirect product CeRe of its reductive part Ce and unipotent radical Re. Since p is a good prime at this stage, one has Lie(Ce) = ge(0) and Lie(Re) = ge(> 0). We also have Lie(Ge ∩ Gh) ⊆ ⊕_{r≥0} ge(rp). We present two tools, which together deal with the remaining cases.
^5 In [LT11], the values of r such that ge(r) ≠ 0 are listed in the columns marked m.
For the first, henceforth Tool (a), suppose Ce acts with finitely many orbits on the subspace ⊕_{r>0} ge(−2 + rp). In these cases one can write down all possibilities for the
element f up to conjugacy by Ge ∩ Gh . Then it is a simple matter to check the Jordan blocks of
the element f on the adjoint module for g in GAP and observe, by comparing with [Law95], that
the orbit dimension is larger than that of e. For example, if (g, O, p) is (E7 , (A5 )′ , 11) then we may
take
e = e_{100000,0} + e_{010000,0} + e_{001000,0} + e_{000100,0} + e_{000010,0},
h = 5·h_{α1} + 8·h_{α3} + 9·h_{α4} + 8·h_{α5} + 5·h_{α6},
f̄ = 5·e_{−100000,0} + 8·e_{−010000,0} + 9·e_{−001000,0} + 8·e_{−000100,0} + 5·e_{−000010,0}
(here and below the digit after the comma in a root label records the coefficient of α2). We have C := Ce◦ of type A1^2 and ge(−2 + p) is a module for C of high weight ϖ1 for the first factor, say. Thus C has two orbits on ge(−2 + p), namely the zero orbit and the non-zero orbit. The element f̄ itself corresponds to the zero orbit, whereas if f = f̄ + f1 for 0 ≠ f1 ∈ ge(−2 + p) then one checks that the Jordan blocks of the action of f on g are 23 + 17^3 + 15 + 11 + 9^3 + 3 + 1^3, whereas those of f̄ are 11 + 10^2 + 9^3 + 7 + 6^6 + 5^3 + 4^2 + 3 + 1^6. Comparing with [Law95] one sees that f is in the orbit E6 whereas f̄ is in the orbit (A5)′.
The remaining cases all have the property that ge(p) ≠ 0 with ge(p) having a basis of commuting
sums of root vectors. To describe Tool (b), suppose x ∈ ge (p) is a sum of commuting root vectors.
Then as p ≠ 2, one may form the endomorphism δx := 1 + ad x + (1/2)(ad x)^2. Since x ∈ ge(p),
it follows that x commutes with both e and h, hence δx ∈ Ge ∩ Gh. Thus ⟨e, h, δx(f̄)⟩ is an
sl2 -triple. For any y ∈ ge (p), we have [δx (f¯), δy (f¯)] = 0 modulo ge (> −2 + p) and so we get a
linear map δ• (f¯) : ge (p) → ge (−2 + p) by x 7→ δx (f¯). Now if z ∈ ge (0) then as f¯ ∈ ge (−2), we
have [f¯, z] ∈ g(−2) and since [e, [f¯, z]] = [h, z] = 0, we have [f¯, z] ∈ ge (−2) = 0. Thus f¯ is in the
centraliser of ge (0) so that Ce◦ also commutes with δ• (f¯); this means that δ• (f¯) is a Ce◦ -module map
from ge (p) → ge (−2 + p). Thus one may assume, replacing f by a conjugate, that the projection
of f to the image of δ•(f) in ge(−2 + p) is zero. In particular, if this map is an isomorphism, one
concludes that any sl2-triple (e, h, f) is conjugate to another (e, h, f′) such that the projection of
f′ to ge(−2 + p) is zero. If ⊕_{r>0} ge(−2 + rp) = ge(−2 + p) (which it almost always is) this shows
that f is unique up to conjugacy. When using the fact that δ• (f¯) is a Ce◦ -module map, to check
the isomorphism, one finds that it always suffices to check that δ• (f¯) is non-zero on restriction to
high weights of the Ce◦ -modules ge (p), since these modules are always semisimple.
For example, suppose (g, O, p) is (E6 , D5 (a1 ), 7). Checking [LT11], one may choose
e = e_{10000,0} + e_{00000,1} + e_{01000,0} + e_{00010,0} + e_{00100,1} + e_{00110,0}
and
h = 6·h_{α1} + 7·h_{α2} + 10·h_{α3} + 12·h_{α4} + 7·h_{α5}.
Under these circumstances, one may calculate that the component of f in g(−2) is
f̄ := 6·e_{−10000,0} + e_{−00000,1} + 10·e_{−01000,0} + e_{−00010,0} + 6·e_{−00100,1} + 6·e_{−00110,0}.
From [LT11], one has ⊕_{r>0} ge(−2 + rp) = ge(5) of dimension 2, while ge(p) is generated over k by the (commuting) root vectors x1 := e_{−00001,0} and x2 := e_{12321,2}. One checks that the images of x1 and x2 under δ•(f) are linearly independent. Thus δ•(f) induces an isomorphism ge(p) → ge(−2 + p)
as required, showing the uniqueness of f in this case.
Lastly, using Tool (b) to assume that f projects trivially to the image of δ• (f ) and then applying
Tool (a) finishes the analysis in any remaining cases where ge(−2 + p) ≠ 0. For example, if (g, O, p) = (E8, E7(a4), 11), then Ce◦ is of type A1 and we may take
e = e_{1000000,0} + e_{0010000,0} + e_{0000010,0} + e_{0000110,0} + e_{0110000,1} + e_{0011000,0} + e_{0111000,1} + e_{0011100,0}.
Then δ•(f) acts non-trivially on the element x = e_{2465431,3} ∈ ge(p), inducing a Ce◦-isomorphism ge(p) → kCe◦·(e_{2465421,3} + e_{2465321,3}) ⊆ ge(−2 + p) and so one may assume
f ∈ f̄ + kCe◦·(e_{2465431,3}),
with kCe◦·(e_{2465431,3}) ≅ L(1) as a Ce◦-module. Now, we use Tool (a). As Ce◦ has just one non-zero orbit on the representation L(1), we may assume that f = f̄ + e_{2465431,3}. But computing the Jordan blocks of f on the adjoint representation, one finds that f has a higher orbit dimension than that of f̄.
There remain some cases where ge(−2 + 2p) ≠ 0. These are (E8, E8(a5), 11), (E8, E8(b4), 11) and
(F4 , F4 (a2 ), 5). Since these cases are distinguished orbits, ge (−2 + p) = 0. Precisely the same
analysis as used in Tool (b) will work here, replacing (ge (−2 + p), ge (p)) with (ge (−2 + 2p), ge (2p)).
It remains to show that no bijection exists when 2 < p ≤ b(G). It is well known that the number of nilpotent orbits of g is finite and so it suffices to show there are infinitely many classes of sl2-subalgebras. Suppose l is a Lie subalgebra of type Ap−1 and let e = Σ_{i=1}^{p−1} e_{αi} and f = Σ_{i=1}^{p−1} −i^2·e_{−αi}. Then one checks that ⟨e, f⟩ is an sl2-subalgebra with [e, f] = diag(p − 1, p − 3, . . . , −p + 3, −p + 1). Further, let f0 = Σ_{i=1}^{p−1} i·e_{−αi} and λ ∈ k with λ^p ≠ λ. Then again one checks that ⟨e, (f + λf0)⟩ is an sl2-subalgebra, this time with [e, f + λf0] = h + λI. We therefore have infinitely
many sl2 -subalgebras with pairwise non-isomorphic representations on the restriction of the natural
representation of l. The condition 2 < p ≤ b(G) implies that if G is not of type G2 then g has
a Levi subalgebra of type Ap−1 and when G is of type G2 then p = 3 and g has a pseudo-Levi
subalgebra of type A2 . In all cases we therefore have a subalgebra l of type Ap−1 and moreover, the
restriction of the adjoint representation of g to l contains a copy of the natural representation of
l. Thus we have infinitely many GL(g)-conjugacy classes of sl2 -subalgebras of g (all with pairwise
non-isomorphic representations) and so we certainly have infinitely many G-conjugacy classes of
sl2 -subalgebras.
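A quick matrix check of the relations used in this construction inside sl_p (our own illustration in characteristic p, not code from the paper):

```python
import numpy as np

p = 7                                    # illustrative prime; any prime > 2 works
e  = np.zeros((p, p), int)
f  = np.zeros((p, p), int)
f0 = np.zeros((p, p), int)
for i in range(1, p):                    # simple root vectors of sl_p
    e[i-1, i]  = 1
    f[i, i-1]  = (-i*i) % p              # coefficient -i^2 = i(p-i) mod p
    f0[i, i-1] = i
h = (e @ f - f @ e) % p
# [e, f] = diag(p-1, p-3, ..., -p+1)
assert np.array_equal(np.diag(h), np.array([(p - 1 - 2*j) % p for j in range(p)]))
# [e, f0] is the identity modulo p, so [e, f + lam*f0] = h + lam*I for any lam
assert np.array_equal((e @ f0 - f0 @ e) % p, np.eye(p, dtype=int))
print("relations verified for p =", p)
```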
With Theorem 1.2 in hand, we turn our attention to p-subalgebras and prove the last result,
Theorem 1.5.
Proof of Theorem 1.5. In light of Theorem 1.4, it suffices to prove the following three claims: All
p-subalgebras of type A1 are G-cr when b(G) < p ≤ h(G); there exists a non-G-cr p-subalgebra of
type A1 when 5 ≤ p ≤ b(G); and the examples of non-G-cr subalgebras of type B2 , G2 and W1
given in Section 4 are all p-subalgebras.
First, let b(G) < p ≤ h(G) and h = he, h, f i be a p-subalgebra of g, with e belonging to the largest
nilpotent orbit meeting h. Since h is a p-subalgebra we have e[p] = 0 and the restriction on p implies
that p is a good prime for G. As in the proof of Theorem 1.6, we are therefore furnished with a
good A1 -subgroup H of G such that e ∈ Lie(H), by [Sei00, Props. 4.1 & 4.2]. But now Theorem 1.4
implies that Lie(H) is conjugate to h, since both contain e and all nilpotent elements of Lie(H) are
conjugate (since all unipotent elements of H are conjugate) so e belongs to the largest dimensional
nilpotent orbit meeting Lie(H). By [Sei00, Prop. 7.2], good A1 -subgroups are G-cr. Therefore
Lie(H) is G-cr by [McN07, Thm. 1] and hence h is G-cr, as required.
Now suppose 5 ≤ p ≤ b(G). This implies that either G is of type E6 and p = 5 or G is of type
E7 , E8 and p = 5, 7. In each case we present a non-G-cr p-subalgebra of type A1 : By definition of
b(G), we have a Levi subgroup of type Ap−1 . We know l = Lie(L) has a p-subalgebra of type sl2
acting as L(p − 2)|k on the natural module for l (see the proof of Lemma 4.7). This subalgebra is
therefore non-L-cr and hence non-G-cr by Lemma 4.1.
Finally, we consider the subalgebras of type B2 , G2 and W1 from Section 4. The claim is clear for
the subalgebras of type B2 and G2 , since the given examples are Lie(H) for H a subgroup of G,
hence p-subalgebras. So it remains to consider the subalgebras of type W1 constructed in Section
4.3. Firstly, the subalgebra of type W1 contained in A4 when p = 5 is a p-subalgebra since it acts
via its canonical representation k[X]/X^p ≅ L(4)|k. For p = 7 we have two explicit constructions
of subalgebras of type W1 , one contained in a C3 -parabolic of F4 and the other contained in an
A5 -parabolic of E6 (and E7 , E8 ). One checks in GAP that the elements representing ∂ and X p−1 ∂
are both sent to 0 by the [p]-map in both cases and hence the subalgebras are p-subalgebras.
Remark 6.4. Suppose G is simple and of exceptional type with Lie algebra g. Then we can extend
Theorem 1.5 to the case p = 3 when h is of type A1 . We have checked computationally that all
p-subalgebras of type A1 are G-cr when G is of type G2 . If G is not of type G2 then it has a Levi
subgroup L of type A2 . We know l = Lie(L) has a p-subalgebra of type sl2 acting as L(1)|k on
the natural 3-dimensional module. This subalgebra is therefore non-L-cr and hence non-G-cr by
Lemma 4.1.
Remark 6.5. The proofs and statements of the main theorems show that whenever p > b(G), there
is in fact a bijection of the form (*) between conjugacy classes of p-subalgebras of type sl2 and
nilpotent elements e ∈ g = Lie(G) such that e[p] = 0.
Finally, we turn our attention to the proof of Theorem 1.7.
Proof of Theorem 1.7. If the nilpotent element e belongs to an orbit whose label is defined over C
then we will see that e is contained in a Z-defined sl2 -subalgebra of gC such that the image of e in
gZ ⊗Z Fp = gFp is of the same type as e. In particular e is always contained in an sl2 -subalgebra
of g. If G is of classical type, this is straightforward as the natural representation is defined over
Z such that e is a sum of Jordan blocks with 1 on the super-diagonal, h is diagonal and f is
determined by a combinatorial formula given in [Car93, Proof of Prop. 5.3.1]. More carefully, for
sufficiently large p, there is a 1-1 correspondence between partitions of n of an appropriate sort,
depending on the root system of G, and direct sums of irreducible sl2 -representations of dimensions
corresponding to the partition which give an embedding into G. The irreducible representations
satisfy the condition of being defined over Z, with e represented by a sum of Jordan blocks with
1s on the superdiagonal. Thus we may assume that G is exceptional. Now [Tes95, Lem. 2.1, 2.4]
gives a Z-defined sl2 -triple containing e in the cases that e is not contained in a maximal rank
subalgebra. (That the representatives for the distinguished nilpotent elements in [Tes95] have the
same label over all primes follows from the results of [LS12].) Of course, if e is in a maximal rank
subalgebra then we are done by induction.
We are left with the distinguished elements that belong to orbits whose labels are not defined over
C. There are just two of these for p ≥ 3: one is the exceptional nilpotent orbit in G2 with label
A1^(3). The other is the exceptional nilpotent orbit in E8 with label A7^(3). For the latter we simply
exhibit an example cooked up with GAP. From [LS12] one may take
e := e_{0001110,0} + e_{0000111,0} + e_{1110000,1} + e_{1111000,0} + e_{0011100,1} + e_{0111100,0} + e_{0121000,1} + e_{0011111,0}
and this is filled out to an sl2-triple with
h = h_{α4} + h_{α5} + 2·h_{α6} + h_{α8},
f = 2·e_{−0001110,0} + e_{−0011100,1} + e_{−0111100,0} + e_{−1111100,0} + 2·e_{−0121000,1}
  + 2·e_{−0011110,1} + e_{−0111110,0} + e_{−0011111,0} + 2·e_{−1121000,1} + 2·e_{−0121100,1}.
Finally, if e is of type A1^(3) in G2 then e can be taken to be e_{2α1+α2} + e_{3α1+2α2}. Now it is straightforward to check that the image of (ad e)^2 does not contain e. (One can even do this by hand.)
References
[ABS90]
H. Azad, M. Barry, and G. Seitz, On the structure of parabolic subgroups, Comm. Algebra 18 (1990),
no. 2, 551–562. MR MR1047327 (91d:20048)
[BMRT13] M. Bate, B. Martin, G. Röhrle, and R. Tange, Closed orbits and uniform S-instability in geometric
invariant theory, Trans. Amer. Math. Soc. 365 (2013), no. 7, 3643–3673. MR 3042598
[BNP02] C. P. Bendel, D. K. Nakano, and C. Pillen, Extensions for finite Chevalley groups. II, Trans. Amer. Math.
Soc. 354 (2002), no. 11, 4421–4454 (electronic). MR 1926882 (2003k:20063)
[BNP04]
, Extensions for Frobenius kernels, J. Algebra 272 (2004), no. 2, 476–511. MR 2028069
(2004m:20089)
[BNW09] B. D. Boe, D. K. Nakano, and E. Wiesner, Ext1 -quivers for the Witt algebra W (1, 1), J. Algebra 322
(2009), no. 5, 1548–1564. MR 2543622 (2011b:17027)
[Car72]
R. W. Carter, Conjugacy classes in the Weyl group, Compositio Math. 25 (1972), 1–59. MR 0318337 (47
#6884)
[Car93]
Roger W. Carter, Finite groups of Lie type, Wiley Classics Library, John Wiley & Sons Ltd., Chichester, 1993, Conjugacy classes and complex characters, Reprint of the 1985 original, A Wiley-Interscience
Publication. MR MR1266626 (94k:20020)
[DeB02]
Stephen DeBacker, Parametrizing nilpotent orbits via Bruhat-Tits theory, Ann. of Math. (2) 156 (2002),
no. 1, 295–332. MR 1935848
[Far09]
R. Farnsteiner, Group-graded algebras, extensions of infinitesimal groups, and applications, Transform.
Groups 14 (2009), no. 1, 127–162. MR 2480855
[HS16a]
Sebastian Herpel and David I. Stewart, Maximal subalgebras of Cartan type in the exceptional Lie algebras,
Selecta Math. (N.S.) 22 (2016), no. 2, 765–799. MR 3477335
[HS16b]
Sebastian Herpel and David I. Stewart, On the smoothness of normalisers, the subalgebra structure of
modular Lie algebras and the cohomology of small representations, Doc. Math. (2016), to appear.
[Jan03]
J. C. Jantzen, Representations of algebraic groups, second ed., Mathematical Surveys and Monographs,
vol. 107, American Mathematical Society, Providence, RI, 2003. MR MR2015057 (2004h:20061)
[Jan04]
, Nilpotent orbits in representation theory, Lie theory, Progr. Math., vol. 228, Birkhäuser Boston,
Boston, MA, 2004, pp. 1–211. MR 2042689 (2005c:14055)
[Kos59]
Bertram Kostant, The principal three-dimensional subgroup and the Betti numbers of a complex simple
Lie group, Amer. J. Math. 81 (1959), 973–1032. MR 0114875 (22 #5693)
[Law95]
R. Lawther, Jordan block sizes of unipotent elements in exceptional algebraic groups, Comm. Algebra 23
(1995), no. 11, 4125–4156. MR MR1351124 (96h:20084)
[LS96]
M. W. Liebeck and G. M. Seitz, Reductive subgroups of exceptional algebraic groups, Mem. Amer. Math.
Soc. 121 (1996), no. 580, vi+111. MR MR1329942 (96i:20059)
[LS04]
, The maximal subgroups of positive dimension in exceptional algebraic groups, Mem. Amer. Math.
Soc. 169 (2004), no. 802, vi+227. MR MR2044850 (2005b:20082)
[LS12] Martin W. Liebeck and Gary M. Seitz, Unipotent and nilpotent classes in simple algebraic groups and Lie algebras, Mathematical Surveys and Monographs, vol. 180, American Mathematical Society, Providence, RI, 2012. MR 2883501
[LT04] Martin W. Liebeck and Donna M. Testerman, Irreducible subgroups of algebraic groups, Q. J. Math. 55 (2004), no. 1, 47–55. MR 2043006 (2005b:20087)
[LT11] R. Lawther and D. M. Testerman, Centres of centralizers of unipotent elements in simple algebraic groups, Mem. Amer. Math. Soc. 210 (2011), no. 988, vi+188. MR 2780340 (2012c:20127)
[Lüb01] Frank Lübeck, Small degree representations of finite Chevalley groups in defining characteristic, LMS J. Comput. Math. 4 (2001), 135–169 (electronic). MR 1901354 (2003e:20013)
[LY93] Jia Chun Liu and Jia Chen Ye, Extensions of simple modules for the algebraic group of type G2, Comm. Algebra 21 (1993), no. 6, 1909–1946. MR 1215553 (94h:20051)
[McN05] G. J. McNinch, Optimal SL(2)-homomorphisms, Comment. Math. Helv. 80 (2005), no. 2, 391–426. MR 2142248 (2006f:20055)
[McN07] George McNinch, Completely reducible Lie subalgebras, Transformation Groups 12 (2007), no. 1, 127–135.
[MT11] Gunter Malle and Donna Testerman, Linear algebraic groups and finite groups of Lie type, Cambridge Studies in Advanced Mathematics, vol. 133, Cambridge University Press, Cambridge, 2011. MR 2850737 (2012i:20058)
[Pom80] Klaus Pommerening, Über die unipotenten Klassen reduktiver Gruppen. II, J. Algebra 65 (1980), no. 2, 373–398. MR 585729 (83d:20031)
[Pre91] A. A. Premet, The Green ring of a simple three-dimensional Lie p-algebra, Izv. Vyssh. Uchebn. Zaved. Mat. (1991), no. 10, 56–67. MR 1179217
[Pre95a] Alexander Premet, An analogue of the Jacobson-Morozov theorem for Lie algebras of reductive groups of good characteristics, Trans. Amer. Math. Soc. 347 (1995), no. 8, 2961–2988. MR 1290730 (95k:17012)
[Pre95b] ——, Irreducible representations of Lie algebras of reductive groups and the Kac-Weisfeiler conjecture, Invent. Math. 121 (1995), no. 1, 79–117. MR 1345285 (96g:17007)
[Pre03] ——, Nilpotent orbits in good characteristic and the Kempf-Rousseau theory, J. Algebra 260 (2003), no. 1, 338–366, Special issue celebrating the 80th birthday of Robert Steinberg. MR 1976699
[PS] Alexander Premet and David I. Stewart, Classification of the maximal subalgebras of exceptional Lie algebras over fields of good characteristic, in preparation.
[PS16] ——, Rigid orbits and sheets in reductive Lie algebras over fields of prime characteristic, J. Inst. Math. Jussieu (2016), to appear.
[Sei00] Gary M. Seitz, Unipotent elements, tilting modules, and saturation, Invent. Math. 141 (2000), no. 3, 467–502. MR 1779618 (2001j:20074)
[Ser05] J-P. Serre, Complète réductibilité, Astérisque (2005), no. 299, Exp. No. 932, viii, 195–217, Séminaire Bourbaki. Vol. 2003/2004. MR 2167207 (2006d:20084)
[SF88] H. Strade and R. Farnsteiner, Modular Lie algebras and their representations, Monographs and Textbooks in Pure and Applied Mathematics, vol. 116, Marcel Dekker Inc., New York, 1988. MR 929682 (89h:17021)
[SS70] T. A. Springer and R. Steinberg, Conjugacy classes, Seminar on Algebraic Groups and Related Finite Groups (The Institute for Advanced Study, Princeton, N.J., 1968/69), Lecture Notes in Mathematics, Vol. 131, Springer, Berlin, 1970, pp. 167–266. MR 0268192 (42 #3091)
[Ste10] David I. Stewart, The second cohomology of simple SL2-modules, Proc. Amer. Math. Soc. 138 (2010), no. 2, 427–434. MR 2557160 (2011b:20134)
[Ste12] D. I. Stewart, The second cohomology of simple SL3-modules, Comm. Algebra 40 (2012), no. 12, 4702–4716. MR 2989676
[Ste16] David I. Stewart, On the minimal modules for exceptional Lie algebras: Jordan blocks and stabilizers, LMS J. Comput. Math. 19 (2016), no. 1, 235–258.
[Str73] H. Strade, Lie algebra representations of dimension p − 1, Proc. Amer. Math. Soc. 41 (1973), 419–424. MR 0330247 (48 #8585)
[Tes92] D. M. Testerman, The construction of the maximal A1's in the exceptional algebraic groups, Proc. Amer. Math. Soc. 116 (1992), no. 3, 635–644. MR 1100666 (93a:20073)
[Tes95] Donna M. Testerman, A1-type overgroups of elements of order p in semisimple algebraic groups and the associated finite groups, J. Algebra 177 (1995), no. 1, 34–76. MR 1356359 (96j:20067)
[Wei94] Charles A. Weibel, An introduction to homological algebra, Cambridge Studies in Advanced Mathematics, vol. 38, Cambridge University Press, Cambridge, 1994. MR 1269324 (95f:18001)
[Ye90] Jia Chen Ye, Extensions of simple modules for the group Sp(4, K), J. London Math. Soc. (2) 41 (1990), no. 1, 51–62. MR 1063542 (91j:20105a)
[Yeh82]
S. el B. Yehia, Extensions of simple modules for the Chevalley groups and its parabolic subgroups, Ph.D.
thesis, University of Warwick, 1982.
University of Newcastle, UK
E-mail address: [email protected] (Stewart)
School of Mathematics, University of Bristol, Bristol, BS8 1TW, UK, and the Heilbronn Institute
for Mathematical Research, Bristol, UK.
E-mail address: [email protected] (Thomas)
Submitted to TEMA
Numerical Simulation of the Dynamics of Fecal Coliforms in Lake Luruaco, Colombia1
arXiv:1712.00009v1 [q-bio.QM] 30 Nov 2017
T.M. SAITA2, P.L. NATTI3, E.R. CIRILO4, N.M.L. ROMEIRO5, Departamento de Matemática, CCE, Universidade Estadual de Londrina, Rodovia Celso Garcia Cid, 900 Campus Universitário, 86057-970, Londrina, PR, Brasil.
M.A.C. CANDEZANO6, Departamento de Matemática, Universidad del Atlántico, Barranquilla, Atlántico, Colômbia.
R.B. ACUÑA7, L.C.G. MORENO8, Departamento de Biología, Universidad del Atlántico, Barranquilla, Atlántico, Colômbia.
Abstract
Lake Luruaco, located in the Department of Atlántico, Colombia, suffers from the discharge of untreated sewage, which puts at risk the health of everyone who uses its waters. The present study aims to carry out the numerical simulation of the dynamics of the fecal coliform concentration in the lake. The hydrodynamic flow of the lake is simulated by means of a two-dimensional horizontal (2DH) model, given by a system of Navier-Stokes equations, while the transport of fecal coliforms is described by a convective-dispersive-reactive equation. These equations are solved numerically by the Finite Difference Method (FDM) and by the Marker and Cell (MAC) method, in generalized coordinates. The computational mesh of Lake Luruaco was built using the cubic spline and multiblock methods. The results obtained in the simulations allowed a better understanding of the dynamics of fecal coliforms in Lake Luruaco, highlighting the most polluted regions. The results can also guide public agencies in identifying the emitters of pollutants into the lake and in developing an optimized treatment for the recovery of the polluted environment.
Keywords. Lake Luruaco, Fecal Coliforms, Multiblock Mesh, Finite Difference Method, MAC Method, Generalized Coordinates.
1 Work presented at the XXXVI Congresso Nacional de Matemática Aplicada e Computacional.
2 [email protected]
3 [email protected]
4 [email protected]
5 [email protected]
6 [email protected]
7 [email protected]
8 [email protected]
1. Introduction
Water pollution is a worldwide problem that harms the health, safety and well-being of the population, adversely affecting all living beings in a given environment. A particular case of water pollution is contamination by a product or organism, such as the discharge of domestic sewage into water bodies. Negative effects of water pollution include the deterioration of water quality, the proliferation of diseases, the death of aquatic species and eutrophication, among others. According to the World Health Organization (WHO), at least 2 million people, mainly children under 5 years of age, die every year in the world from diseases caused by contaminated water [1].

The water body under study, Lake Luruaco, located in the Department of Atlántico, Colombia, suffers from pollution, mainly due to the sewage generated by about 28000 inhabitants of the municipality of Luruaco, a town situated upstream of the lake [2]. The consequences of the contamination of Lake Luruaco are felt by the residents of the region themselves, who use this water for daily consumption.

The periodic analysis of water quality is an important tool to help the responsible agencies assess the risks that pollution may pose to the population. The WHO recommends that bacteria of the fecal coliform group be used as the microbiological water-quality parameter when one wishes to measure the presence of pathogenic organisms [3]. In recent years a great effort has been made to develop mathematical models that adequately describe the dynamics of fecal coliforms in different water bodies [4, 5, 6]. In this context, studying the dynamics of fecal coliforms in the water body of Lake Luruaco is a way of determining the regions that pose the greatest contamination risk to the local population.

The use of mathematical models for the analysis of water-quality parameters began to develop in 1925, with the one-dimensional Streeter-Phelps model, which evaluated the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) [7]. With the development of technology and the greater need to study environmental pollution problems, other mathematical models were developed and became increasingly complex, also describing biological or chemical properties of the water body [8, 9]. Among the main models developed are HSPF, MIKE, QUAL2E, QUASAR, SIMCAT and WASP [10, 11]. Although these models are well tested and developed, they are commercial, "black box" models. In general, they do not allow the equations and/or the numerical model to be changed according to the researcher's needs. In this work, instead of using "black box" models, we chose to model the dynamics of fecal coliform concentrations in Lake Luruaco from the conservation equations, and to implement a numerical model suited to the problem. Thus, if necessary, it is possible to modify: (a) the mathematical model, by adding new terms; (b) the geometric model, through readjustments via generalized coordinates; (c) the numerical model, by implementing new numerical schemes to treat the convective terms.
The objective of this work is to define a representative model of the dynamics of fecal coliforms in Lake Luruaco, thereby assisting in the monitoring of the water quality of this water body. In this context, bacteria of the fecal coliform type (present mainly in domestic sewage) are used as the water-quality parameter. The dynamics of the fecal coliform concentration is obtained by means of the numerical simulation of a two-dimensional horizontal (2DH) model over a given time interval [6]. The fecal coliform transport model is given by a convective-dispersive-reactive equation, whose convective term contains the velocity field of the hydrodynamic flow, obtained from a system of Navier-Stokes equations.

The discretization techniques most used in numerical analysis are the Finite Difference Method (FDM), the Finite Volume Method (FVM) and the Finite Element Method (FEM) [12]. In our study the discretization of these differential equations is carried out by the Finite Difference Method, owing to its mathematical simplicity and easy computational implementation. Since the model is nonlinear, specific methods are adopted for the discretization of the terms containing nonlinearities [6]. The First Order Upwind (FOU) scheme is applied and the solution of the hydrodynamic system is obtained using the Marker and Cell (MAC) method, generating the velocity field for the waters of Lake Luruaco.

As for the physical domain of the problem (the lake), it is built using the multiblock method in generalized coordinates, in which the computational mesh is formed from the union of smaller sub-meshes. One of the advantages of this method is that it preserves the characteristics of the geometry and maps the region of interest more faithfully, with the possibility of refining only specific regions of the mesh. This method is currently widely used in realistic problem applications [13].

The article is organized as follows: Section 2 presents the physical and geographical properties of Lake Luruaco. Section 3 describes the mathematical and numerical modelling developed for this study. In Section 4 the geometric modelling of the interior of the lake mesh is carried out through the multiblock method, while the boundary is fitted by cubic splines. Finally, in Section 5 the numerical simulations of the dynamics of fecal coliforms in Lake Luruaco are presented and discussed. Final remarks are also presented.
2. Lake Luruaco
Lake Luruaco is located in the Department of Atlántico, Colombia, between coordinates 10°16' and 11°04' north latitude and 74°43' and 75°16' west longitude. The lake lies 31 meters above sea level, occupying an area of about 420 hectares with an average depth of 5 meters. The water storage capacity of the lake is estimated at 12.5 × 10^6 m³ [14].

Situated in northwestern Colombia, Lake Luruaco lies in a region of tropical climate, predominantly hot throughout the year, with temperatures ranging from 24°C to 28°C. The winds in the region blow in the northeast-southeast direction with an average speed of 15 to 20 km/h. The precipitation regime, when not influenced by climatic phenomena (El Niño and La Niña), occurs in the periods between May/June and August/November, alternating with dry periods. On the other hand, in years when these phenomena are more intense, long periods of drought (due to El Niño) or of intense rainfall (due to La Niña) occur [14].
Lake Luruaco belongs to the Canal del Dique basin and depends on streams to be fed. The main streams connected to Lake Luruaco are the Limón, Mateo and Negro creeks. The channel that connects Lake Luruaco to Lake San Juan de Tocagua acts as the outlet of the lake, owing to the difference in altitude between the two lakes. Figure 1 indicates the location of the inflows and the outflow of Lake Luruaco.

Figure 1: Location of the water inflows of Lake Luruaco through the Mateo, Negro and Limón creeks and of the outflow through the channel to Lake San Juan de Tocagua. Source: adapted from Google Earth, 2016.

The lake water is used in activities such as agriculture, livestock farming, fishing, tourism and consumption. In addition, the municipality of Luruaco, with about 28000 inhabitants, is situated almost on the shores of the lake [2]. This population depends directly on the lake. On the other hand, the domestic sewage generated by the residents of the municipality of Luruaco and nearby regions is a source of organic matter for the lake. Without adequate treatment, most of the domestic sewage generated by the roughly 5000 households [14] is introduced into the environment through diffuse sources, that is, without specific discharge points. Added to this are the residues from agriculture and livestock farming. This contamination is channelled into the streams that feed Lake Luruaco, altering the physical and chemical composition of the water and, in some situations, making it unfit for consumption [14].
3. Mathematical and Numerical Model
The model used in this study provides the fecal coliform concentration at the surface of Lake Luruaco through a system of two-dimensional horizontal (2DH) partial differential equations. Lake Luruaco does not have a high concentration of fecal coliforms suspended in the water body, so our assumption is that the coliforms flow with the hydrodynamic velocity field of the lake. In this situation the fecal coliforms are said to be in a passive regime, and the study of their transport can be carried out decoupled from the hydrodynamic model. In this context, our mathematical modelling assumes that the fecal coliforms flow with the same velocity as the fluid [15, 16].

In this work the hydrodynamic model of the waters of Lake Luruaco is given by a system of Navier-Stokes equations, in which the water is considered incompressible with constant viscosity, that is, ρ = const and µ = const. This system, together with the continuity equation, in generalized coordinates (ξ, η, τ), is written as:
∂U/∂ξ + ∂V/∂η = 0,    (3.1)

∂/∂τ(u/J) + ∂/∂ξ(Uu) + ∂/∂η(Vu) =
  (1/ρ)[∂p/∂η · ∂y/∂ξ − ∂p/∂ξ · ∂y/∂η] + ν ∂/∂ξ[J(α ∂u/∂ξ − β ∂u/∂η)] + ν ∂/∂η[J(γ ∂u/∂η − β ∂u/∂ξ)],    (3.2)

∂/∂τ(v/J) + ∂/∂ξ(Uv) + ∂/∂η(Vv) =
  (1/ρ)[∂p/∂ξ · ∂x/∂η − ∂p/∂η · ∂x/∂ξ] + ν ∂/∂ξ[J(α ∂v/∂ξ − β ∂v/∂η)] + ν ∂/∂η[J(γ ∂v/∂η − β ∂v/∂ξ)],    (3.3)

where (3.1) is the continuity equation and, in (3.2) and (3.3), the left-hand side groups the temporal and convective terms and the right-hand side groups the pressure and diffusive terms. Here (u, v) is the velocity field of the flow, U and V are the contravariant components of the velocity field, ρ and ν = µ/ρ are the constant density and kinematic viscosity, respectively, p is the pressure, and J is the Jacobian [17], given by

J = (∂x/∂ξ · ∂y/∂η − ∂x/∂η · ∂y/∂ξ)^(−1),    (3.4)

and the quantities α, β and γ are given by

α = (∂x/∂η)^2 + (∂y/∂η)^2,   β = (∂x/∂ξ)(∂x/∂η) + (∂y/∂ξ)(∂y/∂η),   γ = (∂x/∂ξ)^2 + (∂y/∂ξ)^2.    (3.5)
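To make the role of the metric quantities concrete, the following minimal Python sketch (our own illustration, not code from the paper) approximates J, α, β and γ of eqs. (3.4)-(3.5) on a structured grid by central differences with unit spacing in the computational coordinates; the grid mapping in the usage example is hypothetical.

```python
import numpy as np

def metric_terms(x, y):
    """Approximate the metric terms alpha, beta, gamma and the Jacobian J of
    eqs. (3.4)-(3.5) on a structured grid x(xi, eta), y(xi, eta)."""
    # derivatives with respect to xi (axis 0) and eta (axis 1)
    x_xi, x_eta = np.gradient(x, edge_order=2)
    y_xi, y_eta = np.gradient(y, edge_order=2)

    alpha = x_eta**2 + y_eta**2                 # eq. (3.5)
    beta  = x_xi*x_eta + y_xi*y_eta             # eq. (3.5)
    gamma = x_xi**2 + y_xi**2                   # eq. (3.5)
    J     = 1.0 / (x_xi*y_eta - x_eta*y_xi)     # eq. (3.4)
    return J, alpha, beta, gamma

# usage on a toy curvilinear grid (illustrative mapping only)
xi, eta = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 20), indexing="ij")
x = xi + 0.1*np.sin(np.pi*eta)
y = eta + 0.1*np.sin(np.pi*xi)
J, alpha, beta, gamma = metric_terms(x, y)
```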
The system (3.1)-(3.3) provides the hydrodynamic velocity field of the water in Lake Luruaco. The development of this model can be found in [18].

From the water velocity field and the transport model one can determine the dynamics of the fecal coliform concentration throughout Lake Luruaco. In this work the convective-dispersive-reactive transport model in generalized coordinates is given by

∂/∂τ(C/J) + ∂/∂ξ(UC) + ∂/∂η(VC) =
  D ∂/∂ξ[J(α ∂C/∂ξ − β ∂C/∂η)] + D ∂/∂η[J(γ ∂C/∂η − β ∂C/∂ξ)] − KC/J,    (3.6)

where the left-hand side groups the temporal and convective terms and the right-hand side the diffusive and reactive terms, and C = C(ξ, η, τ) represents the local concentration of fecal coliforms over time. The terms K and D are the decay and diffusion constants of the fecal coliforms. Note that the diffusion of fecal coliforms is taken to be the same in the ξ and η directions [6].
The terms of equations (3.1)-(3.3) and (3.6) are grouped according to the characteristics (mass conservation, temporal, convective, diffusive, pressure and reactive) that describe the properties of the model. Note that the convective terms contain nonlinearities, which in general are treated with specific techniques [19, 20].

The hydrodynamic equations (3.1)-(3.3) and the fecal coliform concentration equation (3.6) are discretized by the Finite Difference Method. The (nonlinear) convective term is discretized by means of the First Order Upwind (FOU) scheme [17, 19]. There are of course more sophisticated and more accurate techniques than the FOU scheme; however, for the initial and boundary conditions addressed in the present work, the FOU scheme produces consistent numerical results.
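As an illustration of how such a discretization can be assembled, the sketch below (our own simplified example, not the code used in the paper) advances a Cartesian simplification of eq. (3.6) — J = 1, α = γ = 1, β = 0 — by one explicit time step, using FOU advection, central-difference diffusion and first-order decay. Boundary cells must afterwards be overwritten with the desired boundary conditions, since np.roll wraps around the domain.

```python
import numpy as np

def fou_step(C, u, v, D, K, dx, dy, dt):
    """One explicit step of a Cartesian simplification of eq. (3.6):
    first-order upwind advection, central diffusion, first-order decay."""
    Cn = C.copy()
    # upwind differences: backward where the velocity is positive,
    # forward where it is negative
    dCdx_m = (C - np.roll(C, 1, axis=0)) / dx
    dCdx_p = (np.roll(C, -1, axis=0) - C) / dx
    dCdy_m = (C - np.roll(C, 1, axis=1)) / dy
    dCdy_p = (np.roll(C, -1, axis=1) - C) / dy
    adv = (np.where(u > 0, u*dCdx_m, u*dCdx_p)
           + np.where(v > 0, v*dCdy_m, v*dCdy_p))
    # central second differences for the diffusive term
    lap = ((np.roll(C, -1, axis=0) - 2*C + np.roll(C, 1, axis=0)) / dx**2
           + (np.roll(C, -1, axis=1) - 2*C + np.roll(C, 1, axis=1)) / dy**2)
    Cn += dt * (-adv + D*lap - K*C)
    return Cn
```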
For the model analysed to describe the dynamics in the lake, the real characteristics of the lake must be preserved in the model. The domain where equations (3.1)-(3.6) are computed, also called the computational mesh, must contain the characteristics of the geometry of Lake Luruaco. The use of the multiblock method for regions with irregular boundaries, as is the case of Lake Luruaco, preserves the original contours, while allowing the degree of refinement to be increased or decreased in specific regions of the mesh. The computational mesh of Lake Luruaco is built in the next section.
4. Computational Mesh of Lake Luruaco
The computational mesh is the domain over which the mathematical-numerical model is to be simulated, so that the closer the mesh is to the real geometry, the more realistic the numerical simulation will be.

In this context, the boundary of the lake domain carries information about the water inflows and outflows of the lake, besides characterizing the real geometry of the lake, which exhibits both smooth curves and angular shapes (spikes). To capture these characteristics the WebPlotDigitizer program [21] was used, which, from an image or graph, provides the coordinates of the points of the analysed region. A total of 309 coordinates of the lake contour were collected; the points were interpolated by the parametrized cubic spline method and the boundary was obtained. Then, via a set of mesh-generation equations, the grid inscribed in the contour was obtained. Details of the mathematical techniques of the mesh-generation process can be found in [22, 23].
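A minimal sketch of the boundary-fitting step (our own illustration; the function name and sampling are assumptions, not from the paper) using a chord-length parametrized periodic cubic spline through digitized contour points:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def close_boundary(points, n_samples=600):
    """Fit a periodic parametric cubic spline through digitized boundary points
    (x_k, y_k) and resample it. 'points' is an (N, 2) array of contour
    coordinates, e.g. exported from WebPlotDigitizer; consecutive points are
    assumed to be distinct."""
    pts = np.vstack([points, points[:1]])          # close the curve
    # chord-length parametrization of the contour
    d = np.sqrt(np.sum(np.diff(pts, axis=0)**2, axis=1))
    t = np.concatenate([[0.0], np.cumsum(d)])
    sx = CubicSpline(t, pts[:, 0], bc_type="periodic")
    sy = CubicSpline(t, pts[:, 1], bc_type="periodic")
    ts = np.linspace(0.0, t[-1], n_samples)
    return np.column_stack([sx(ts), sy(ts)])
```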
Figure 2: Multiblock mesh of Lake Luruaco composed of 13 sub-blocks.

As for the generation of the interior of the computational mesh, the multiblock method was used. The multiblock method builds the computational mesh through the union of smaller sub-meshes, interconnected through their edges. The equations of the hydrodynamic flow and of the fecal coliform concentration are computed in each sub-block and transmitted to the adjacent sub-blocks in the form of boundary conditions. Thus, once the communication between the sub-blocks is guaranteed, the numerical simulation provides the dynamics of fecal coliforms throughout Lake Luruaco.

One of the advantages of the multiblock method lies in the level of detail of the mesh. Each block can have its coordinate lines adjusted independently, provided that adjacent blocks are connected along the same number of lines. This construction allows regions that require greater detail, such as irregular regions or water inflows and the outflow, to be better adapted locally. Figure 2 presents the discretized geometric domain of Lake Luruaco.
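A minimal sketch of how the inter-block communication described above can be realised with ghost cells (our own illustration under simplifying assumptions about the block layout; not the paper's implementation):

```python
import numpy as np

def exchange_interface(block_a, block_b):
    """Transmit interface values between two sub-blocks that share an edge,
    here assumed to be the last column of block_a and the first column of
    block_b, each block carrying one layer of ghost cells. The copied values
    act as Dirichlet-type boundary data for the neighbour on the next sweep."""
    a_edge = block_a[:, -2]        # interior column adjacent to the shared edge
    b_edge = block_b[:, 1]
    block_b[:, 0] = a_edge         # write into the neighbour's ghost column
    block_a[:, -1] = b_edge

# usage with two toy blocks carrying, e.g., a concentration field
A = np.zeros((20, 30)); B = np.ones((20, 25))
exchange_interface(A, B)
```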
5. Numerical Simulations
This section presents the numerical simulations of fecal coliform transport in the water body of Lake Luruaco. We first list the characteristics and hypotheses adopted in the model to be simulated numerically.

Lake Luruaco is characterized as a thin sheet of water when its area is compared with its depth. In this context, a 2DH model is used in the simulations. Regarding the discretization of the lake geometry, Lake Luruaco is assumed to have 3 sources (tributaries) and 1 sink (outlet), Figure 1.

Regarding the flow of the water body, the modelling assumes that the transport of fecal coliforms occurs passively. The passive-flow hypothesis allows the numerical simulation to be decoupled into two parts: first the advective velocity field of the water flow is computed on the modelled geometry of the lake, and afterwards the transport of the fecal coliform concentrations on this geometry is simulated through the advective-diffusive-reactive model.

Regarding the hydrodynamic model, we assume that the water flow is described by the Navier-Stokes and pressure equations (3.1)-(3.3) under the hypotheses of an incompressible Newtonian fluid with Reynolds number Re = 555.

Regarding the fecal coliform transport model (3.6), the flow velocity field is assumed to be the vector sum of the advective field, provided by the hydrodynamic model, and the diffusive field. It is also assumed that the diffusion coefficient D and the decay coefficient K of the fecal coliforms are constant over the whole geometry of the lake.

Concerning the diffusion coefficient D, note that molecular diffusion spreads the fecal coliforms locally by random motion but, on a large scale, it is the eddies and swirls that spread the fecal coliforms through the so-called turbulent diffusion. Since the scale of the eddies is much larger than the scales of molecular diffusion, turbulent diffusion is several orders of magnitude larger than molecular diffusion. According to Chapra [15], the various reactive species have molecular diffusion values in the range between D = 10^-3 m²/h and D = 10^-1 m²/h, while the turbulent diffusion coefficient in lakes, which depends on the scale of the turbulent phenomenon, takes values between D = 10^1 m²/h and D = 10^10 m²/h.

In order to represent in our modelling the molecular and turbulent diffusion phenomena involved in the transport of fecal coliforms in the water sheet of Lake Luruaco, and owing to the similarity between the flows in Lake Luruaco and in Lake Igapó I, located in the municipality of Londrina, Paraná, we assume the same value of D for both. In [16] the best fit for the diffusion coefficient was D = 3.6 m²/h. As for the fecal coliform decay coefficient, in [6] the best fit was K = 0.02 h^-1, which is used in this simulation.
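To give a feel for the time scale implied by K = 0.02 h^-1, the short sketch below (our own illustration; the T90 metric is not used in the paper) evaluates the pure first-order decay of a concentration that is not being replenished by transport:

```python
import numpy as np

K = 0.02                             # 1/h, decay coefficient adopted above
C0 = 500.0                           # MPN/100 ml, e.g. the Limon creek inflow value
t = np.arange(0, 73, 12.0)           # hours, up to the 72 h simulated period
C = C0 * np.exp(-K * t)              # C(t) = C0 * exp(-K*t)
t90 = np.log(10.0) / K               # time for a 90% reduction (hypothetical metric)
print(np.round(C, 1), round(t90, 1))
```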
In this context, using our mathematical model, we describe qualitatively the impact that a continuous discharge of fecal coliforms at the 3 inlets (tributaries) of Lake Luruaco produces over its whole extent. The following initial and boundary conditions are considered.

Initial conditions for the hydrodynamic model
At the initial instant, for the velocity and pressure fields, the lake was considered to be in a quiescent state, that is, its waters are assumed to be at rest over the whole domain, except in the regions of the tributaries and the outlet. At tributaries 1, 2 and 3 instantaneous water inflows were considered, together with an instantaneous water outflow through the outlet, such that the adopted Reynolds number (Re = 555) preserves mass conservation.

Boundary conditions for the hydrodynamic model
For t > 0, the velocity field was taken to be zero on the whole boundary of the lake geometry, except at the inlets (tributaries) and the outlet, where the same values as in the initial condition were considered. For the pressure, a Neumann-type condition was applied on the whole boundary of the geometry.

Initial conditions for the transport and reaction model
At the initial instant the scalar field of fecal coliform concentrations was taken to be zero at every point of the lake mesh (interior and boundary).

Boundary conditions for the transport and reaction model
For t > 0 the following constant concentrations were considered at the inlets and outlet of Lake Luruaco:

C(X_Negro, t) = 100 MPN/100 ml,
C(X_Mateo, t) = 100 MPN/100 ml,
C(X_Limon, t) = 500 MPN/100 ml,
C(X_effluent, t): Neumann-type condition,    (5.1)

where we use the compact notation X_Negro = (x, y)_Negro for the coordinates of the boundary points of the mesh belonging to the inlet of the Negro creek, and likewise for the other tributaries and the outlet of the lake. The unit MPN/100 ml means the Most Probable Number of fecal coliforms per 100 ml in a water sample. At the remaining boundary points a Neumann-type condition is taken.
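A minimal sketch of how the Dirichlet inlet values and the zero-gradient (Neumann) condition of (5.1) can be imposed on a single block (our own illustration; all masks and placements below are hypothetical):

```python
import numpy as np

def apply_transport_bcs(C, inlets, outlet_cols):
    """Impose boundary data in the spirit of eq. (5.1) on a concentration field C.
    'inlets' is a list of (boolean mask, value) pairs with fixed Dirichlet values
    (MPN/100 ml); at the outlet, assumed to lie on the last row, a zero-gradient
    condition is mimicked by copying the neighbouring interior value."""
    for mask, value in inlets:
        C[mask] = value
    C[-1, outlet_cols] = C[-2, outlet_cols]
    return C

# usage on a toy rectangular block
C = np.zeros((40, 60))
negro = np.zeros_like(C, bool);  negro[0, 5:9]   = True
mateo = np.zeros_like(C, bool);  mateo[0, 20:24] = True
limon = np.zeros_like(C, bool);  limon[0, 45:49] = True
C = apply_transport_bcs(C, [(negro, 100.0), (mateo, 100.0), (limon, 500.0)],
                        slice(28, 33))
```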
The numerical simulations for the velocity field, Figure 3, and for the fecal coliform concentration field, Figure 4, are presented next. The simulations were carried out over a period of 72 hours, until the steady state was reached. Note that Figure 3, on the left, shows the magnitude of the resulting velocity vector, while Figure 3, on the right, shows the direction and sense of the resulting velocity vector.

Figure 3: Numerical simulation of the velocity field generated on the multiblock mesh of Lake Luruaco for Re = 555. Grayscale gradient map (left) and vector field map (right).
6. Final Remarks
Through the numerical simulations of the system of equations (3.1)-(3.3) and (3.6) we intend to describe qualitatively the dynamics of fecal coliform transport in the water body of Lake Luruaco. It should be stressed that this numerical simulation does not aim to provide quantitative predictions of the pollution index at a given location of the physical domain of the lake at a given instant; it is known that the input (initial and boundary) conditions of fecal coliforms vary daily. Under these conditions, our objective is to provide qualitative information, for example, the most polluted locations in the lake domain, regardless of the initial and boundary concentrations. Thus, studying the dynamics of fecal coliforms in the water body of Lake Luruaco is a way of determining the regions that pose the greatest contamination risk to the local population.

Figure 4: Numerical simulation of the fecal coliform concentration field generated on the multiblock mesh of Lake Luruaco for D = 3.6 m²/h and K = 0.02 h^-1.

Observing Figure 4, one notes that the scale of fecal coliform concentrations ranges from C = 10^4 MPN/m³ = 1 MPN/100 ml, corresponding to excellent water quality, up to C = 5 × 10^10 MPN/m³ = 5 × 10^6 MPN/100 ml, which corresponds to very polluted waters.

As a consequence of the geometry of Lake Luruaco and of the hydrodynamic velocity field, Figure 3, one notes that the pollution injected into Lake Luruaco by the Negro creek flows in part towards the Mateo creek, adding to the pollution injected by the latter and generating high concentrations of fecal coliforms in that region, Figure 4. On the opposite, northern shore, owing to the injection of coliforms by the Limón creek, there is also a very polluted region that extends to the outlet channel to Lake San Juan de Tocagua. The central and southeastern regions of the lake present the best water quality, with a relatively low concentration of fecal coliforms, Figure 4.

In short, these numerical simulations allowed a better qualitative understanding of the dynamics of fecal coliforms in Lake Luruaco, highlighting the most polluted regions. Numerical simulation is thus a useful and important instrument for assessing water quality in water bodies, providing results that can be adopted by public agencies for the recovery of the polluted environment, for the identification of emitters of pollutants, and to provide a better quality of life for the users of the lake, as well as for the inhabitants of the town of Luruaco who depend on this water.

Acknowledgments
The authors thank CAPES for the financial support from call 24/2012 - Pró-Equipamentos Institucional, agreement 774568/2012.
Abstract
The Luruaco Lake located in the Department of Atlántico, Colombia, is damaged
by the discharge of untreated sewage, bringing risks to the health of all who use its
waters. The present study aims to perform the numerical simulation of the concentration dynamics of fecal coliforms in the lake. The simulation of the hydrodynamic
flow is carried out by means of a two-dimensional horizontal (2DH) model, given by
a Navier-Stokes system. The simulation of fecal coliform transport is described by
a convective-dispersive-reactive equation. These equations are solved numerically
by the Finite Difference Method (FDM) and the Mark and Cell (MAC) method, in
generalized coordinates. Regarding the construction of the computational mesh of
the Luruaco Lake, the cubic spline and multiblock methods were used. The results
obtained in the simulations allow a better understanding of the dynamics of fecal
coliforms in the Luruaco Lake, showing the more polluted regions. They can also
advise public agencies on identifying the emitters of pollutants in the lake and on
developing an optimal treatment for the recovery of the polluted environment.
Keywords. Luruaco Lake, Fecal Coliforms, Multiblock Mesh, Finite Difference
Method, Mark and Cell Method, Curvilinear Coordenates.
Referências
[1] World Health Organization, Pelo menos 2 milhões de pessoas morrem por
ano no mundo por causa de água contaminada. Brasília: Agência Brasil,
2011. URL http://agenciabrasil.ebc.com.br/geral/noticia/2015-04/oms-estima2-milhoes-de-mortes-por-comida-e-agua-contaminadas-todos-os-anos.
[2] Município de Luruaco, Plan de desarrollo municipal de Luruaco 2012-2015. Barranquilha: Editorial Universidad del Atlántico, 2012.
[3] World Health Organization, Guías para la calidad del agua potable. Geneva:
Ediciones de la OMS, 1988.
Simulação numérica da dinâmica de coliformes fecais no lago Luruaco, Colômbia13
[4] A. De Brauwere, N. K. Quattara, and P. Servais, “Modeling fecal indicator
bacteria concentrations in natural surface waters: A review,” Crit Rev Env Sci
Tec, vol. 44, pp. 2380–2453, 2014.
[5] W. -C. Liu, W. -T. Chan, and C. -C. Young, “Modeling fecal coliform contamination in tidal Danshuei River estuarine system,” Sci Total Environ, vol. 50,
pp. 632–640, 2014.
[6] N. M. L. Romeiro, R. G. Castro, E. R. Cirilo, and P. L. Natti, “Local calibration
of coliforms parameters of water quality problem at Igapó I Lake, Londrina,
Paraná, Brazil,” Ecol Model, vol. 222, pp. 1888–1896, 2011.
[7] H. W. Streeter, and E. B. Phelps, “A study of the pollution and natural purification of the Ohio River,” U.S. Public Health Buletin, vol. 146, 1925. Reprinted by U.S. Department of Health, Education, and Welfare (1958). URL
http://udspace.udel.edu/handle/19716/1590.
[8] P. R. Couto, and S. M. Malta, “Interaction between sorption and biodegradation
processes in the contaminant transport,” Ecol Model, vol. 214, pp. 65–73, 2008.
[9] K. M. Salvage, and G. T. Yeh, “Development and application of a numerical
model of kinetic and equilibrium microbiological and geochemical reactions (BIOKEMOD),” J Hydrol, vol. 209, pp. 27–52, 1998.
[10] D. Sharma, and A. Kansal, “Assessment of river quality models: a review,” Rev
Environ Sci Biotechnol, vol. 12, pp. 285–311, 2013.
[11] G. Tsakiris, and D. Alexakis, “Water quality models: An review,” Eur Water,
vol. 37, pp. 33–46, 2012.
[12] J. H. Ferziger, and M. Peric, Computational methods for fluid dynamics. Berlin:
Springer Science & Business Media, 2012.
[13] N. P. Weatherill, “An introduction to grid generation using the multiblock
approach,” in Multiblock grid generation, pp. 6-17, Wiesbaden: Vieweg+Teubner
Verlag, 1993.
[14] Universidad del Atlántico, Diagnóstivo ambiental y estrategias de rehabilitación
de la ciénega de Luruaco. Barranquilha: Editorial Universidad del Atlántico,
2012.
[15] S. C. Chapra, Surface water-quality modeling. New York: McGraw-Hill, 1997.
[16] S. R. Pardo, P. L. Natti, N. M. L. Romeiro, and E. R. Cirilo, “A transport
modeling of the carbon-nitrogen cycle at Igapó I Lake -Londrina, Paraná State,
Brazil,” Acta Scientiarum Technology, vol. 34, pp. 217–226, 2012.
[17] C. R. Maliska, Transferência de calor e mecânica dos fluidos computacional:
fundamentos e coordenadas generalizadas. Rio de Janeiro: Livros Técnicos e
Científicos, 1994.
14
SAITA, NATTI, CIRILO, ROMEIRO, CANDEZANO, ACŨNA e MORENO
[18] A. N. D. Barba, Estudo e implementação de esquema upwind na resolução
de um modelo de dinâmica dos fluidos computacional em coordenadas generalizadas. Dissertação de mestrado, Departamento de Matemática, Universidade
Estadual de Londrina, Londrina, PR, pp. 117, 2015.
[19] A. O. Fortuna, Técnicas computacionais para dinâmica dos fluidos: conceitos
básicos e aplicações. São Paulo: Edusp, 2000.
[20] V. G. Ferreira, G. A. B. Lima, L. Correa, M. A. C. Candezano, E. R. Cirilo,
P. L. Natti, and N. M. L. Romeiro, “Avaliação computacional de esquemas convectivos em problemas de dinâmica dos fluidos,” Semina: Ciências Exatas e
Tecnológicas, vol. 32, pp. 107–116, 2012.
[21] A.
Rohatgi,
WebPlotDigitizer
http://arohatgi.info/WebPlotDigitizer.
Version
3.11,
2011.
URL
[22] E. R. Cirilo, and A. L. De Bortoli, “Geração da malha da traquéia e dos tubos
bronquiais por splines cúbico,” Semina: Ciências Exatas e Tecnológicas, vol. 27,
pp. 147–155, 2006.
[23] J. F. Thompson, Z. U. Warsi, and C. W. Mastin, Numerical grid generation:
foundations and applications. Amsterdam: North-Holland, vol. 146, 1985.
| 5 |
Robust Wald-type tests for non-homogeneous
observations based on minimum density power
divergence estimator
arXiv:1707.02333v1 [stat.ME] 7 Jul 2017
Ayanendranath Basu1∗ , Abhik Ghosh1 , Nirian Martin2 and Leandro Pardo2
1
Indian Statistical Institute, Kolkata, India
2
Complutense University, Madrid, Spain
∗
Corresponding author; Email: [email protected]
Abstract
This paper considers the problem of robust hypothesis testing under non-identically distributed data. We
propose Wald-type tests for both simple and composite hypothesis for independent but non-homogeneous
observations based on the robust minimum density power divergence estimator of the common underlying
parameter. Asymptotic and theoretical robustness properties of the proposed tests have been discussed.
Application to the problem of testing the general linear hypothesis in a generalized linear model with fixeddesign has been considered in detail with specific illustrations for its special cases under normal and Poisson
distributions.
Keywords: Non-homogeneous Data; Robust Hypothesis Testing; Wald-Type Test; Minimum Density Power
Divergence Estimator; Power Influence Function; Linear Regression; Poisson Regression.
1
Introduction
Suppose that the parametric model for the sample Y1 , ..., Yn asserts that the distribution of Yi is Fi,θ , θ ∈ Θ
⊂ Rp . We shall denote by fi,θ the probability density function associated with Fi,θ with respect to a convenient
σ-finite measure, for i = 1, ..., n. This situation is very common for statistical modeling in many applications
and an important example is the generalized linear model (GLM) with fixed design set-up.
The maximum likelihood score equation for these independently and non identically distributed data,
Y1 , ..., Yn , is given by
n
X
ui,θ (yi ) = 0,
(1)
i=1
∂
∂θ
b obtained as the
with ui,θ (yi ) =
log fi,θ (yi ). It is well-known that the maximum likelihood estimator, θ,
solution of the system of equation (1), has serious problems of robustness. For this reason statistical solutions
for several special cases like linear regression with normal errors (Huber, 1983; Muller, 1998) as well as some
general cases (Beran, 1982; Ghosh and Basu, 2013) have been considered in the literature. In this paper, we
shall follow the approach presented by Ghosh and Basu (2013) in order to propose some robust Wald-type
tests. In the cited approach given by Ghosh and Basu (2013) a robust estimator was introduced based on the
density power divergence (DPD) measure; details about this family of divergence measures can be found in
Basu et al. (1998, 2011). This estimator, called the minimum density power divergence estimator (MDPDE)
for non-homogeneous observations, is obtained as the solution of the system of equations
Z
n
X
τ +1
τ
fi,θ (Yi )ui,θ (Yi ) − fi,θ (y)ui,θ (y) dy = 0, τ > 0.
(2)
i=1
Note that in the limit as τ → 0, the system of equations given in (2) tends to the system given in (1).
In Ghosh and Basu (2013) it was established under some standard regularity conditions, that the asymptotic
bτ , at the true model distribution {fi,θ :
distribution of the MDPDE for non-homogeneous observations, say θ
0
1
i = 1, . . . , n}, is given by
Ω−1/2
n,τ (θ 0 )Ψn,τ (θ 0 )
or equivalently,
√
h√
i
L
bτ − θ 0 ) −→
n(θ
N (0, I p ) ,
n→∞
L
bτ − θ 0 ) −→ N (0, Στ (θ 0 )) ,
n(θ
n→∞
(3)
(4)
with
−1
Στ (θ 0 ) = lim Ψ−1
n,τ (θ 0 )Ωn,τ (θ 0 )Ψn,τ (θ 0 ),
n→∞
(5)
where we define
n
1X
J i,τ (θ),
n i=1
Z
τ +1
J i,τ (θ) = ui,θ (y) uTi,θ (y) fi,θ
(y)dy,
Ψn,τ (θ) =
with
(6)
and
n Z
1X
2τ +1
ui,θ (y) uTi,θ (y) fi,θ
(yi )dy − ξ i,τ (θ)ξ Ti,τ (θ) ,
n
Z i=1
τ
ξ i,τ (θ) = ui,θ (y) fi,θ
(y)dy.
Ωn,τ (θ) =
with
(7)
(8)
The required regularity conditions are listed in Appendix A for the sake of completeness and will be referred as
the “Ghosh-Basu Conditions” throughout the rest of the paper.
Motivated by the strong robustness properties of the Wald-type test statistics based on MDPDEs (Basu
et al., 2016, 2017a; Ghosh et al., 2016, 2017) in case of independently and identically distributed observations,
in this paper we shall introduce and study the corresponding MDPDE based Wald-type tests for independently
but non identically distributed data. In particular, we will develop the asymptotic and theoretical robustness
of these Wald-type tests for both simple and composite hypotheses, along with applications to the generalized
linear models (GLMs) and its important subclasses. It is important to note that there is no established robust
hypothesis testing procedure under such general non-homogeneous data set-up except for one recent attempt by
Ghosh and Basu (2017), who developed the divergence based test statistics with DPD measure. However, the
asymptotic null distribution of their proposed test statistics is a linear combination of chi-square distributions
occasionally limiting its application in complicated situations. On the contrary, our proposed Wald-type test
statistics in this paper will be shown to have an ordinary chi-square distribution along with all the other
competitive properties and hence easier to implement in any practical applications.
The rest of the paper is organized as follows: In Section 2, we shall introduce the Wald-type tests for testing
simple null hypothesis as well as composite null hypothesis and we study their asymptotic distributions under
the null hypotheses as well as alternative hypotheses. The robustness of these Wald-type tests will be studied in
Section 3. In Section 4 the results are particularized to the GLM model. Some examples are studied in Section
5 and the paper ends with some insightful discussions in Section 6.
2
Wald-type tests under independent but non-homogeneous data
In the following two sections we shall consider the simple null hypothesis as well as composite null hypothesis
versions of the Wald-type test statistics for independently and non identically distributed data.
2.1
Wald-type tests for simple null hypotheses
Let Y1 , ..., Yn independently and non identically distributed data according to the probability density function
fi,θ , where θ ∈ Θ ⊂ Rp . In this section we define a family of Wald-type test statistics based on MDPDE for
testing the hypothesis
H0 : θ = θ 0 against H1 : θ 6= θ 0 ,
(9)
for a given θ 0 ∈ Θ, which will henceforth be referred to as the proposed Wald-type statistics.
2
bτ be the MDPDE of θ. The family of proposed Wald-type test statistics for testing the null
Definition 1 Let θ
hypothesis (9) is given by
bτ − θ 0 )T Σ−1 (θ 0 )(θ
bτ − θ 0 ),
Wn0 (θ 0 ) = n(θ
(10)
τ
where Στ (θ 0 ) is as defined in (5).
The asymptotic distribution of Wn0 (θ 0 ) is presented in the next theorem. The result follows easily from
the asymptotic distribution of the MDPDE considered in (4) and so we omit the proof. Throughout the rest
of the paper, for all the theoretical results, we will assume that the Ghosh-Basu conditions hold and Στ (θ) is
continuous in θ ∈ Θ.
Theorem 2 The asymptotic distribution, under the null hypothesis considered in (9), of the proposed Wald-type
test statistics given in (10) is χ2p , a chi-square distribution with p degrees of freedom.
In the next Theorem we are going to present a result that will be important in order to get an approximation
to the power function of the proposed Wald-type test statistic given in (10) because in many practical situation
is not possible to get a simple expression for the exact power function.
Theorem 3 Let θ ∗ be the the true value of parameter with θ ∗ 6= θ 0 . Then, we have
√
L
bτ ) − s(θ ∗ )) −→
n(s(θ
N 0, σ 2Wn0 (θ0 ) (θ ∗ )) ,
n→∞
T
T
∗
∗
2
where s (θ) = (θ − θ 0 ) Σ−1
τ (θ 0 ) (θ − θ 0 ) and σ W 0 (θ 0 ) (θ ) = 4 (θ − θ 0 )
n
−1
∗
Στ (θ 0 )Στ (θ ∗ )Σ−1
τ (θ 0 ) (θ − θ 0 ) .
bτ is given by
Proof. A first-order Taylor expansion of s (θ) around θ ∗ at θ
bτ ) − s(θ ∗ ) =
s(θ
∂s (θ)
∂θ T
bτ − θ ∗ ) + op (n−1/2 ).
(θ
θ=θ ∗
√
bτ )−s(θ ∗ )) matches the asymptotic distribution
Then the asymptotic distribution of therandom variable n(s(θ
√
bτ − θ ∗ and the desired result follows.
of the random variable ∂s(θ)
n θ
∂θ T
∗
θ=θ
Based on Theorem 2 we shall reject the null hypothesis given in (9) if
Wn0 (θ 0 ) > χ2p,α ,
(11)
and Theorem 3 makes it possible to have an approximation of the power function for the test given in (11).
This is given by
π τWn0 (θ0 ) (θ ∗ ) = Pr (Rejecting H0 |θ = θ ∗ ) = Pr Wn0 (θ 0 ) > χ2p,α |θ = θ ∗
!
χ2p,α
∗
∗
bτ ) − s(θ ) >
= Pr s(θ
− s(θ )
n
!!
χ2p,α
n1/2
− s(θ ∗ )
,
(12)
= 1 − Φn
σ Wn0 (θ0 ) (θ ∗ )
n
where χ2p,α denote the (1 − α)-th quantile of χ2p distribution, and Φn (·) is a sequence of distribution functions
tending uniformly to the standard normal distribution function Φ(·). We can observe that the Wald-type tests
are consistent in the Fraser sense since
lim π τWn0 (θ0 ) (θ ∗ ) = 1 ∀τ ≥ 0.
n→∞
(13)
This result can be applied in the sense of getting the necessary sample size for the Wald-type tests to have a
predetermined power, π τW 0 (θ0 ) (θ ∗ ) ≈ π ∗ and size α. The necessary sample size is given by
n
"
#
p
A + B + A(A + 2B)
n=
+1
2s2 (θ ∗ )
3
2
where [z] denotes the largest integer less than or equal to z, A = σ 2W 0 (θ0 ) (θ ∗ ) Φ−1 (1 − π ∗ ) , and B =
n
2s(θ ∗ )χ2p,α .
In order to produce a non-trivial asymptotic power, see (13), Cochran (1952) suggested using a set of local
alternatives contiguous to the null hypothesis given in (9) as n increases. In the next theorem we shall present
the asymptotic distribution of the Wald-type test statistics under contiguous alternative hypotheses.
Theorem 4 Under the contiguous alternative hypothesis
H1,n : θ n = θ 0 + n−1/2 d,
(14)
where d is a fixed vector in Rp such that θ n ∈ Θ ⊂ Rp , the asymptotic distribution of the Wald-type test
statistics given in (10) is χ2p (δ), a non-central chi-square distribution with p degrees of freedom and non-centrality
parameter
δ = dT Σ−1
(15)
τ (θ 0 )d.
Proof. We can write
bτ − θ 0 = θ
bτ − θ n + θ n − θ 0 = (θ
bτ − θ n ) + n−1/2 d.
θ
Therefore, under H1,n given in (14), we have from (4),
√
L
bτ − θ n ) −→ N (0, Στ (θ 0 ))
n(θ
n→∞
and hence
√
L
bτ − θ 0 ) −→ N (d, Στ (θ 0 )) .
n(θ
n→∞
Note that Wn0 (θ 0 ) can be written by
T
bτ − θ 0 )
bτ − θ 0 )
(θ 0 )(θ
(θ 0 )(θ
n1/2 Σ−1/2
Wn (θ 0 ) = n1/2 Σ−1/2
τ
τ
and under H1,n given in (14), we have,
L
−1/2
bτ − θ 0 ) −→
n1/2 Σ−1/2
(θ
)(
θ
N
Σ
(θ
)d,
I
.
0
0
p×p
τ
τ
n→∞
We apply the following result concerning quadratic forms. “If Z ∼ N (µ, Σ), Σ is a symmetric projection
of rank k and Σµ = µ, then Z T Z is a chi-square distribution with k degrees of freedom and non-centrality
parameter µT µ”. Therefore
L
Wn0 (θ 0 ) −→ χ2p (δ) ,
n→∞
where δ was defined in (15).
The last theorem permits us to get an approximation to the power function at θ n by
π τWn0 (θ0 ) (θ n ) = 1 − Gχ2p (δ) χ2p,α ,
(16)
where Gχ2p (δ) (z) is the distribution function of χ2p (δ) evaluated at the point z.
Based on this result we can also obtain an approximation of the power function at a generic point θ ∗ , because
we can consider d = n1/2 (θ ∗ − θ 0 ) and then θ n = θ ∗ .
2.2
Wald-type tests for composite null hypotheses
In many practical hypothesis testing problems, the restricted parameter space Θ0 ⊂ Θ is defined by a set of
r < p non-redundant restrictions of the form
h(θ) = 0
(17)
on Θ, where h : Rp → Rr is a vector-valued function such that the full rank p × r matrix
H (θ) =
4
∂h(θ)
∂θ T
(18)
exists and is continuous in θ.
Our interest will be in testing
H0 : θ ∈ Θ0 ⊂ Rp−r against H1 : θ ∈ Θ − Θ0
(19)
using Wald-type test statistics.
bτ be the MDPDE of θ. The family of proposed Wald-type test statistics for testing the
Definition 5 Let θ
composite null hypothesis (19) is given by
−1
bτ ) = nhT (θ
bτ ) H T (θ
bτ )Στ (θ
bτ )H(θ
bτ )
bτ ),
Wn ( θ
h(θ
(20)
bτ ).
In the next theorem we are going to present the asymptotic distribution of Wn (θ
bτ ), under the null hypothesis
Theorem 6 The asymptotic distribution of the Wald-type test statistics Wn (θ
given in (19), is chi-squared with r degrees of freedom.
Proof. Let θ 0 ∈ Θ0 be the true value of the parameter. A Taylor expansion gives
bτ ) = h(θ 0 ) +H T (θ 0 )(θ
bτ − θ 0 ) + op (n−1/2 1)
h(θ
bτ − θ 0 ) + op (n−1/2 1).
= H T (θ 0 )(θ
Under H0
√
L
bτ − θ 0 ) −→ N (0p , Στ (θ 0 ))
n(θ
n→∞
and so
√
L
bτ ) −→
nh(θ
N 0p , H T (θ 0 )Στ (θ 0 )H(θ 0 ) .
n→∞
Taking into account that rank (H (θ)) = r, we get
−1
L
bτ ) H T (θ 0 )Στ (θ 0 )H(θ 0 )
bτ ) −→
nhT (θ
h(θ
χ2r .
n→∞
bτ )Στ (θ
bτ )H(θ
bτ ) is a consistent estimator of H T (θ 0 )Στ (θ 0 )H(θ 0 ) by continuity of the matrices H(θ)
But H T (θ
and Στ (θ) at θ = θ 0 . Then, it holds that
L
bτ ) −→
Wn (θ
χ2r .
n→∞
Based on the previous result, we shall reject the null hypothesis given in (19) if
bτ ) > χ2 .
Wn (θ
r,α
(21)
It is not easy to get an exact expression for the power function of the test given in (21). For that reason we are
going to present a theorem that will be important in order to get an approximation of the power function for
the test statistic presented in (21).
P
bτ −→
Theorem 7 Let θ ∗ ∈
/ Θ0 the true value of the parameter with θ
θ ∗ . Define
n→∞
−1
s∗ (θ 1 , θ 2 ) = hT (θ 1 ) H T (θ 2 )Στ (θ 2 )H(θ 2 )
h(θ 1 ).
Then, we have
√ ∗
L
bτ , θ
bτ ) − s∗ (θ ∗ , θ ∗ ) −→
n s (θ
N 0, σ 2 (θ ∗ ) ,
n→∞
where
−1
σ 2 (θ ∗ ) = 4hT (θ ∗ ) H T (θ ∗ )Στ (θ ∗ )H(θ ∗ )
h(θ ∗ ).
5
P
bτ , θ
bτ ) and s∗ (θ
bτ , θ ∗ ) have the same asymptotic distribution because θ
bτ −→
Proof. We can observe that s∗ (θ
n→∞
bτ around θ ∗ gives
θ ∗ . A first-order Talyor expansion of s∗ (θ, θ ∗ ) at θ
bτ , θ ∗ ) − s∗ (θ ∗ , θ ∗ ) =
s∗ (θ
∂s∗ (θ, θ ∗ )
∂θ T
bτ − θ ∗ ) + op (n−1/2 ).
(θ
θ=θ ∗
Now the result follows, because
σ 2 (θ ∗ ) =
and
∂s∗ (θ, θ ∗ )
∂θ T
Στ (θ ∗ )
θ=θ ∗
∂s∗ (θ, θ ∗ )
∂θ
.
θ=θ ∗
−1
∂s∗ (θ, θ ∗ )
T
∗
∗
∗
T
=
2h
(θ)
H
(θ
)Σ
(θ
)H(θ
)
H T (θ).
τ
∂θ T
Using the above theorem we can derive an approximation to the power of the proposed Wald-type tests of
composite null hypothesis at any θ ∗ ∈
/ Θ0 using an argument similar to that of the derivation of the expression
in (12) for the case of simple null hypothesis. This further indicates the consistency of our proposal at any fixed
alternatives even for the composite hypotheses.
bτ ) at an alternative close to the null hypothesis.
We may also find an approximation of the power of Wn (θ
Let θ n ∈ Θ − Θ0 be a given alternative and let θ 0 be the element in Θ0 closest to θ n in the Euclidean distance
sense. A first possibility to introduce contiguous alternative hypotheses is to consider a fixed d ∈ Rp and to
permit θ n to move towards θ 0 as n increases through the relation
H1,n : θ n = θ 0 + n−1/2 d.
(22)
A second approach is to relax the condition h (θ) = 0 defining Θ0 . Let d∗ ∈ Rr and consider the following
sequence, {θ n }, of parameters moving towards θ 0 according to
∗
H1,n
: h(θ n ) = n−1/2 d∗ .
(23)
Note that a Taylor series expansion of h(θ n ) around θ 0 yields
h(θ n ) = h(θ 0 ) + H T (θ 0 ) (θ n − θ 0 ) + o (kθ n − θ 0 k 1) .
(24)
By substituting θ n = θ 0 + n−1/2 d in (24) and taking into account that h(θ 0 ) = 0, we get
h(θ n ) = n−1/2 H T (θ 0 )d + o (kθ n − θ 0 k 1) ,
so that the equivalence in the limit is obtained for d∗ = H T (θ 0 )d.
Theorem 8 Under the contiguous alternative hypotheses given in (22) and (23), we have
L
bτ ) −→ χ2 (a) under H1,n given in (22), where the non-centrality parameter “a” is given by
i) Wn (θ
r
n→∞
−1
a = dT H(θ 0 ) H T (θ 0 )Στ (θ 0 )H(θ 0 )
H T (θ 0 )d.
L
bτ ) −→ χ2 (b) under H ∗ given in (23), where the non-centrality parameter “b” is given by
ii) Wn (θ
r
1,n
n→∞
−1
b = d∗T H T (θ 0 )Στ (θ 0 )H(θ 0 )
d∗ .
bτ ) around θ n yields
Proof. A Taylor series expansion of h(θ
bτ ) = h(θ n ) + H T (θ n )(θ
bτ − θ n ) + o
h(θ
6
bτ − θ n 1 .
θ
(25)
From (25), we have
bτ ) = n−1/2 H T (θ 0 )d + H T (θ n )(θ
bτ − θ n ) + o
h(θ
As
√
bτ − θ n 1 + o (kθ n − θ 0 k 1) .
θ
√ b
n o θ τ − θ n 1 + o (kθ n − θ 0 k 1) = op (1), we have
L
bτ − θ n ) −→ N (0, Στ (θ 0 )) and
n(θ
n→∞
√
L
bτ ) −→ N (H T (θ 0 )d, H T (θ 0 )Στ (θ 0 )H(θ 0 )).
nh(θ
n→∞
We can observe by the relationship d∗ = H T (θ 0 )d, if h(θ n ) = n−1/2 d∗ that
√
L
bτ ) −→ N (d∗ , H T (θ 0 )Στ (θ 0 )H(θ 0 )).
nh(θ
n→∞
In our case, the quadratic form is Wn = Z T Z with Z =
L
Z −→ N
n→∞
√
−1/2
bτ ) H T (θ 0 )Στ (θ 0 )H(θ 0 )
nh(θ
and
−1/2
H T (θ 0 )Στ (θ 0 )H(θ 0 )
H T (θ 0 )d, I ,
where I is the identity r × r matrix. Hence, the application of the result is immediate and the non-centrality
parameter is
−1
−1
dT H(θ 0 ) H T (θ 0 )Στ (θ 0 )H(θ 0 )
H T (θ 0 )d = d∗T H T (θ 0 )Στ (θ 0 )H(θ 0 )
d∗ .
3
3.1
Robustness of the Wald-type tests for Non-homogeneous Observations
Influence functions of the Wald-type test statistics
In order to study the robustness of a testing procedure, the first measure to consider is Hampel’s influence
function (IF) of the test statistics, introduced by Rousseeuw and Ronchetti (1979) for i.i.d. data; see also
Rousseeuw and Ronchetti (1981) and Hampel et al. (1986) for detail. In case of non-homogeneous data, the
concept of IF has been extended suitably by Huber (1983) and Ghosh and Basu (2013, 2016) for the estimators
and by Ghosh and Basu (2017) and Aerts and Haesbroeck (2016) for test statistics. Here, we will follow these
extended definitions of IF to study the robustness of our proposed Wald-type test statistics for non-homogeneous
observations.
In order to define and study the IF for the Wald-type test statistics, we first need the same for the MDPDE
used in constructing the Wald-type test statistics; we will briefly recall the IF of the MDPDE under nonhomogeneous observations for the sake of completeness. Suppose Gi denote the true distribution of Yi having
corresponding density gi for each i = 1, . . . , n; under the model null distribution with true parameter value θ 0
we have Gi = Fi,θ and gi = fi,θ for each i. Denote G = (G1 , · · · , Gn ) and Fθ0 = (F1,θ0 , · · · , Fn,θ0 ). Then the
minimum DPD functional T τ (G) for independent but non-homogeneous observations at the true distribution
n
X
G is defined as the minimizer, with respect to θ ∈ Θ, of the average DPD measure n1
dτ (gi , fi,θ ) with
i=1
Z
dτ (gi , fi,θ ) =
fi,θ (y)1+τ − 1 +
1
τ
fi,θ (y)τ gi (y) +
1
gi (y)1+τ
τ
dy.
Now, for each i = 1, . . . , n, let us denote by Gi, = (1 − )Gi + ∧ti the -contaminated distribution in the i-th
direction, where ∧ti denotes the degenerate distribution at the contamination point ti . Note that, in the case
of non-homogeneous data the contamination can be either in any fixed direction, say i0 -th direction, or in all
7
the n directions. The corresponding IF of the minimum DPD functional T τ (G) has been established in Ghosh
and Basu (2013); their forms at the model null distributions are given by
IFi0 (ti0 ; T τ , Fθ0 )
IF (t1 , . . . , tn ; T τ , Fθ0 )
1
= Ψ−1
n,τ (θ 0 ) D τ ,i0 (ti0 ; θ 0 ),
n
n
1X
= Ψ−1
D τ ,i (ti ; θ 0 ),
n,τ (θ 0 )
n i=1
(26)
(27)
where D τ ,i (t; θ) = fi,θ (t)τ ui,θ (t) − ξ i,τ with ξ i,τ being as defined in Equation (8). Note that these IFs are
bounded at τ > 0 and unbounded at τ = 0, implying the robustness of the MDPDEs with τ > 0 over the
classical MLE (at τ = 0).
Now, we can defined the IF for the proposed Wald-type test statistics. We define the associated statistical
functional, evaluated at G, as (ignoring the multiplier n)
Wτ0 (G)
=
(T τ (G) − θ 0 )T Σ−1
τ (θ 0 )(T τ (G) − θ 0 )
(28)
corresponding to (10) for the simple null hypothesis, and
Wτ (G)
=
−1
hT (T τ (G)) H T (T τ (G))Στ (T τ (G))H(T τ (G))
h(T τ (G))
(29)
corresponding to (20) for the composite null hypothesis.
First we consider the Wald-type test functional Wτ0 for the simple null hypothesis and contamination only
one direction, say i0 -th direction. The corresponding IF is then defined as
IFi0 (ti0 ; Wτ0 , G) =
∂ 0
W (G1 , · · · , Gi0 −1 , Gi0 , , Gi0 +1 , · · · , Gn )
∂ τ
= 2(T τ (G) − θ 0 )T Σ−1
τ (θ 0 )IFi0 (ti0 ; T τ , G),
=0
which, when evaluated at the null distribution G = Fθ0 , becomes identically zero as T τ (Fθ0 ) = θ 0 . So, one
need to consider the second order IF of the proposed Wald-type test functional Wτ0 defined as
(2)
IFi0 (ti0 ; Wτ0 , G)
=
∂2 0
W (G1 , · · · , Gi0 −1 , Gi0 , , Gi0 +1 , · · · , Gn )
∂2 τ
=0
.
When evaluated at the null model distribution G = Fθ0 , this second order IF has the simplified form
(2)
IFi0 (ti0 ; Wτ0 , Fθ0 )
=
=
2IFi0 (ti0 ; T τ , Fθ0 )T Σ−1
τ (θ 0 )IFi0 (ti0 ; T τ , Fθ 0 )
T
−1
1
1
−1
2
D τ ,i0 (ti0 ; θ 0 )
Ψn,τ (θ 0 )Σ−1
(θ
)Ψ
(θ
)
D
(t
;
θ
)
.
0
0
τ
,i
i
0
τ
n,τ
0
0
n
n
(30)
Similarly, we can derive the first and second order IF of Wτ0 for contamination in all directions at the point
t = (t1 , . . . , tn ) respectively defined as
IF (t; Wτ0 , G) =
∂ 0
W (G1, , · · · , Gn, )
∂ τ
, and IF (2) (t; Wτ0 , G) =
=0
∂2 0
W (G1, , · · · , Gn, )
∂2 τ
=0
.
A direct calculation shows that, at the simple null model distribution G = Fθ0 , these IFs simplifies to
IF (t; Wτ0 , Fθ0 )
IF
(2)
(t; Wτ0 , Fθ0 )
=
0,
=
2IF (t; T τ , Fθ0 )T Σ−1
τ (θ 0 )IF (t; T τ , Fθ 0 )
" n
#T
" n
#
−1
1X
1X
−1
−1
Ψn,τ (θ 0 )Στ (θ 0 )Ψn,τ (θ 0 )
2
D τ ,i (ti ; θ 0 )
D τ ,i (ti ; θ 0 ) .
n i=1
n i=1
=
(31)
Note that, both the second order IF in (30) and (31) of the Wald-type test functional Wτ0 for testing simple
null hypothesis under contamination in one or all directions are bounded, whenever the corresponding MDPDE
functional has bounded IF, i.e., for any τ > 0. This implies robustness of our proposed Wald-type tests for
simple null hypothesis with τ > 0.
8
Next we can similarly derive the first and second order IFs of the proposed Wald-type tests functional Wτ
in (29) for composite null hypotheses. For brevity, we will skip the details and present only the final results
under composite null G = Fθ0 with θ 0 ∈ Θ. In particular, the first order IF for contamination in either one or
all directions are both identically zero, i.e.,
IFi0 (ti0 ; Wτ0 , Fθ0 ) = 0,
IF (t; Wτ , Fθ0 ) = 0,
and the corresponding second order IF has the form
(2)
IFi0 (ti0 ; Wτ , Fθ0 )
=
IF (2) (t; Wτ , Fθ0 )
=
h
i−1
2IFi0 (ti0 ; T τ , Fθ0 )T H(θ 0 ) H T (θ 0 )Στ (θ 0 )H(θ 0 )
H T (θ 0 )IFi0 (ti0 ; T τ , Fθ0 )
h
i−1
2IF (t; T τ , Fθ0 )T H(θ 0 ) H T (θ 0 )Στ (θ 0 )H(θ 0 )
H T (θ 0 )IF (t; T τ , Fθ0 ).
Again these second order IFs are bounded for any τ > 0 and unbounded at τ = 0 implying the robustness of
our proposed Wald-type tests for composite hypothesis testing also.
3.2
Level and Power Influence Functions
We will now study the robustness of the level and power of the proposed Wald-type tests through the corresponding influence functions for their asymptotic level and powers (Hampel et al., 1986; Heritier and Ronchetti, 1994;
Toma and Broniatowski, 2010; Ghosh and Basu, 2017). Noting the consistency of these proposed Wald-type
tests, we consider their asymptotic power under the contiguous alternatives in (14) and (22) respectively for
the simple and composite hypotheses. Additionally, considering suitable contamination over these alternatives
and the null hypothesis, we define the contaminated distributions, for each i = 1, . . . , n,
P
L
√
√
√
∧
,
and
F
=
1
−
F
+
Fi,θn + √ ∧ti ,
Fi,n,,t
=
1
−
t
i,θ 0
i,n,,ti
i
n
n i
n
n
P
P
respectively for the analysis of level and power stability. Denote t = (t1 , · · · , tn )T , FP
n,,t = (F1,n,,ti , · · · , Fn,n,,ti )
L
L
L
and Fn,,t = (F1,n,,ti , · · · , Fn,n,,ti ). Then the level influence function (LIF) and the power influence function
(PIF) for the proposed Wald-type test statistics Wn0 (θ 0 ) for the simple null hypothesis (9) are defined, assuming
the nominal level of significance to be α, as
LIF (t; Wn0 , Fθ0 )
=
P IF (t; Wn0 , Fθ0 )
=
∂
P L (W 0 (θ 0 ) > χ2p,α )
∂ Fn,,t n
∂
lim
PFPn,,t (Wn0 (θ 0 ) > χ2p,α )
n→∞ ∂
lim
n→∞
=0
,
=0
.
bτ ) for the composite null hypothesis
Similarly the LIF and PIF of the proposed Wald-type test statistics Wn (θ
0
bτ ) and χ2 respectively,
(19) can be defined through above expressions by replacing Wn (θ 0 ) and χ2p,α by Wn (θ
r,α
where θ 0 is now the true null parameter in Θ0 .
Let us first consider the case of simple null hypothesis and derive the asymptotic power under the contiguous
contaminated distribution FP
n,,t in the following theorem.
Theorem 9 Consider the problem of testing the simple null hypothesis (9) by the proposed Wald-type test
statistics Wn0 (θ 0 ) at α-level of significance and consider the contiguous alternative hypotheses given by (14).
Then the following results hold.
2
1. The asymptotic distribution of Wn0 (θ 0 ) under FP
n,,t is χp (δ ) with
eT (θ 0 )Σ−1 (θ 0 )d
e,t,τ (θ 0 ),
δ = d
,t,τ
τ
e,t,τ (θ 0 ) = d + IF (t; T τ , F ) with IF (t; T τ , F ) being given by (27).
where d
θ0
θ0
2. The corresponding asymptotic power under contiguous contaminated distribution FP
n,,t is given by
π Wn0 (θ n , , t) = lim PFPn,,t (Wn0 (θ 0 ) > χ2p,α ) = 1 − Gχ2p (δ ) (χ2p,α )
=
n→∞
∞
X
2
e,t,τ (θ 0 ), Σ−1 (θ 0 ) P χ2
Cv d
τ
p+2v > χp,α ,
v=0
9
(32)
v
where Cv (s, A) =
(sT As)
v!2v
1 T
e− 2 s
As
.
Proof. Denote θ ∗n = T τ (FP
n,,t ). Then, we can express our Wald-type test statistics in (10) as
=
bτ − θ 0 )T Σ−1 (θ 0 )(θ
bτ − θ 0 )
Wn0 (θ 0 ) = n(θ
τ
∗
∗
bτ − θ )T Σ−1 (θ 0 )(θ
bτ − θ ) + 2n(θ
bτ − θ ∗ )T Σ−1 (θ 0 )(θ ∗ − θ 0 ) + n(θ ∗ − θ 0 )T Σ−1 (θ 0 )(θ ∗ − θ 0 ). (33)
n(θ
n
τ
n
n
τ
n
n
τ
n
A suitable Taylor series expansion of θ ∗n , as a function of at n = 0 yields (Ghosh et al., 2016)
θ ∗n = θ n +
√ IF (t; T τ , F )
θ0
n
1
+ o p ( √ 1p )
n
and hence
√
√
n(θ ∗n − θ n ) = IF (t; T τ , Fθ0 ) + op (1p ),
n(θ ∗n − θ 0 − n−1/2 d) = IF (t; T τ , Fθ0 ) + op (1p ).
Writing θ ∗n in terms of θ 0 , we get
√
n(θ ∗n − θ 0 ) = d + IF (t; T τ , Fθ0 ) + op (1p )
e,t,τ (θ 0 ) + op (1p ).
=d
(34)
Using these, we can rewrite (33) as
√
T
√
bτ − θ ∗ ) + d
e,x,τ (θ 0 ) Σ−1 (θ 0 )
bτ − θ ∗ ) + d
e,x,τ (θ 0 ) + op (1).
Wn0 (θ 0 ) = n
n(θ
n(θ
n
τ
n
(35)
But, under FP
n,,t , the asymptotic distribution of MDPDE and continuity of Στ (θ) implies that
√
L
bτ − θ ∗ ) −→
N (0p , Στ (θ 0 )).
n(θ
n
(36)
n→∞
Hence combining (35) and (36), we finally get
L
bτ ) −→
Wn0 (θ
χ2p (δ )
n→∞
with δ as defined in the statement of the theorem. This completes the proof of Part 1 of the theorem.
Next, Part 2 of the theorem follows directly from the infinite series expansion of non-central distribution
functions in terms of that of the central chi-square variables as follows:
π Wn0 (θ n , , t) = lim PFPn,,t (Wn0 (θ 0 ) > χ2p,α )
n→∞
= P (χ2p (δ ) > χ2p,α ) = 1 − Gχ2p (δ ) χ2p,α
∞
X
2
e,t,τ (θ 0 ), Σ−1 (θ 0 ) P χ2
=
Cv d
τ
p+2v > χp,α .
v=0
Note that, substituting = 0 in Expression (32) of the above theorem, we have an infinite series expression
for the asymptotic power of the proposed Wald-type tests under the contiguous alternative hypotheses (14) as
given by
∞
X
2
2
π Wn0 (θ n ) = π Wn0 (θ n , 0, t) =
Cv d, Σ−1
τ (θ 0 ) P χp+2v > χp,α ,
v=0
which has been previously obtained in terms of suitable distribution function in (16).
Further, substituting d = 0p in Expression (32), we get the asymptotic level of the proposed Wald-type tests
under the contaminated distribution FL
n,,t as given by
αWn0 (, t) = π Wn0 (θ 0 , , t)
d=0p
=
∞
X
2
2
Cv IF (t; T τ , Fθ0 ), Σ−1
τ (θ 0 ) P χp+2v > χp,α .
v=0
Finally, using the expression of the asymptotic power π Wn0 (θ 0 , , t) from (32) and differentiating it suitably,
we get the required PIF and then get the LIF by substituting d = 0p as presented in the following theorem.
10
Theorem 10 Under the assumptions of Theorem 9, the power and level influence functions of the proposed
Wald-type tests for the simple null hypothesis (9) are given by
T −1
P IF (t; Wn0 , Fθ0 ) = Kp∗ dT Σ−1
τ (θ 0 )d d Στ (θ 0 )IF (t; T τ , Fθ 0 ),
s
with Kp∗ (s) = e− 2
∞
P
v=0
sv−1
v!2v (2v
− s)P χ2p+2v > χ2p,τ and
LIF (t; Wn0 , Fθ0 ) = 0.
Proof. Considering the expression of π Wn0 (θ 0 , , t) from (32) in Theorem 9 and using the definition of the PIF
along with the chain rule of derivatives, we get
P IF (t; Wn0 , Fθ0 )
∂
π Wn0 (θ 0 , , t) =0
∂
∞
X
∂
e,t,τ (θ 0 ), Σ−1 (θ 0 )
=
P χ2p+2v > χ2p,α
Cv d
τ
∂
=0
v=0
T
∞
X
∂
∂ e
−1
=
Cv s, Στ (θ 0 ) s=de0,t,τ (θ0 )
d,t,τ (θ 0 )
∂s
∂
v=0
=
e0,x,τ (θ 0 ) = d,
Now, one can check that d
=0
P χ2p+2v > χ2p,α .
(37)
∂ e
∂ d,t,τ (θ 0 )
= IF (t; T τ , Fθ0 ) and
v−1
tT At
1 T
∂
Cv (t, A) =
2v − tT At Ate− 2 t At .
v
∂t
v!2
Substituting these expressions in (37) and simplifying, we obtain the required expression of the PIF as given in
the theorem.
Finally, the LIF follows from the PIF by substituting d = 0p .
Note that the above PIF is clearly bounded if and only if the IF of the MDPDE functional T τ is bounded,
i.e., whenever τ > 0. This again implies the robustness of the asymptotic power of the proposed Wald-type
tests with τ > 0 under contiguous contamination, over the classical MLE based Wald test (at τ = 0) that has an
unbounded PIF. Further, the asymptotic level of the Wald-type tests will not also be affected by a contiguous
contamination as suggested by its zero LIF.
bτ ) for composite
Next we can similarly derive the PIF and LIF for the proposed Wald-type test statistics Wn (θ
null hypotheses also. For brevity, we only present the main results corresponding to Theorem 9 and 10 for
the composite hypotheses case in the following theorems; proofs are similar and hence omitted. The main
implications are again the same proving the claimed robustness of our proposal with τ > 0 in terms of its
asymptotic level and power under contiguous contamination through zero LIF and bounded PIF.
Theorem 11 Consider the problem of testing the composite null hypothesis (19) by the proposed Wald-type test
bτ ) at α-level of significance and consider the contiguous alternative hypotheses given by (22).
statistics Wn (θ
Then the following results hold.
bτ ) under FP
1. The asymptotic distribution of Wn (θ
is χ2 (δ ∗ ) with
n,,t
δ ∗ =
eT (θ 0 )H(θ 0 )
d
,t,τ
r
h
i−1
e,t,τ (θ 0 ),
H T (θ 0 )Στ (θ 0 )H(θ 0 )
H T (θ 0 )d
2. The corresponding asymptotic power under contiguous contaminated distribution FP
n,,t is given by
bτ ) > χ2 ) = 1 − Gχ2 (δ∗ ) (χ2 )
π Wn (θ n , , t) lim PFPn,,t (Wn (θ
r,α
r,α
r
n→∞
∞
h
i−1
X
e,t,τ (θ 0 ), H T (θ 0 )Στ (θ 0 )H(θ 0 )
=
Cv H T (θ 0 )d
P χ2r+2v > χ2r,α .
(38)
v=0
Theorem 12 Under the assumptions of Theorem 11, the power and level influence functions of the proposed
Wald-type test of the composite null hypothesis are given by
h
i−1
H T (θ 0 )IF (t; T τ , Fθ0 ),
P IF (t; Wn , Fθ0 ) = Kr∗ (δ ∗0 ) dT H(θ 0 ) H T (θ 0 )Στ (θ 0 )H(θ 0 )
LIF (t; Wn , Fθ0 )
=
0.
(39)
11
4
Application: Testing for Linear Hypotheses in Generalized Linear
Models (GLMs) with fixed design
In this Section we apply the theoretical results obtained in this paper for non-homogeneous observations to
the generalized linear model (GLM). Therefore now the density function associated to the independent random
variables Yi , 1 ≤ i ≤ n, is given by
yi η i − b(η i )
fi,θ (yi ) = f (yi , η i , φ) = exp
+ c (yi , φ) , 1 ≤ i ≤ n,
(40)
a(φ)
where the canonical parameter, η i , is an unknown measure of localization depending on the given fixed design
points xi ∈ Rk , 1 ≤ i ≤ n and φ is a known or unknown nuisance scale or dispersion parameter typically required
to produce standard errors following Gaussian, Gamma or inverse Gaussian distributions. The functions a(φ),
b(η i ) and c (yi , φ) are known. In particular, a(φ) is set to 1 for binomial, Poisson, and negative binomial
distribution (known φ) and it does not enter into the calculations for standard errors. The mean µi of Yi is
given by µi = µ (η i ) = E [Yi ] = b0 (η i ) and the variance by σ 2i = σ 2 (η i , φ) = Var [Yi ] = a(φ)b00 (η i ). The mean
response is assumed, according to GLMs, to be modeled linearly with respect to xi through a known link
function, g, i.e., g(µi ) = xTi β, where g is a monotone
and differentiable function and β ∈ Rk is an unknown
T
parameter. In this setting, since η i = η i xi β , we shall also denote (40) by f yi , xTi β, φ and the common
T
T
bτ = β
b ,φ
b
parameters of the GLM by θ = (β T , φ)T , p = k + 1. At the same time we denote by θ
the
τ
τ
minimum density power divergence estimator of θ with tuning parameter τ . The estimating equations, based
bτ in this present case are given by
on (2), to get θ
n
X
γ 1,τ (xi ) − K1 (yi , xTi β, φ)f τ (yi , xTi β, φ) xi = 0,
(41)
i=1
and
n
X
γ 2,τ (xi ) − K2 (yi , xTi β, φ)f τ (yi , xTi β, φ) = 0.
(42)
i=1
where
K1 (yi , xTi β, φ) =
yi
2
σ (η
yi η − b (η i ) 0
∂c (yi , φ)
− µ(η i )
, K2 (yi , xTi β, φ) = − i 2
a (φ) +
.
0
a (φ)
∂φ
i )g (µ(θ i ))
and
Z
γ j,τ (xi ) =
Kj (y, xTi β, φ)f 1+τ (y, xTi β, φ)dy,
for j = 1, 2.
(43)
If we want to ignore the parameter φ and to estimate β taking φ fixed (or, substituted suitably), it is enough
to consider only the set of estimating equations in (41). Further, τ = 0, we have γ 1,0 (xi ) = 0 and the estimating
equations for β are
n
X
yi − µ(η i )
xi = 0.
2
σ (η i )g 0 (µ(η i ))
i=1
bτ is then given by (4), where we now have
The asymptotic distribution of θ
n
n
P
P
2
T
γ
(x
)
−
γ
(x
)γ
(x
)
x
γ
(x
)
−
γ
(x
)
x
x
i
i
i
i
i
i
i
12,2τ
1,τ
1,τ
1,τ
i
i=1 11,2τ
i=1
,
Ωn,τ (θ) =
n
n
T
P
P
γ 12,2τ (xi ) − γ 1,τ (xi )γ 1,τ (x) xi
γ 22,2τ (xi ) − γ 22,τ (xi )
i=1
and
i=1
n
P
γ 11,τ (xi )xi xTi
i=1
Ψn,τ (θ) =
n
P
γ 12,τ (xi )xi
i=1
n
P
γ 12,τ (xi )xi
,
γ 22,τ (xi )
i=1
n
P
i=1
12
with γ j,τ (x), j = 1, 2, being given by (43) and
Z
γ jh,τ (x) = Kj y, xT β, φ Kh y, xT β, φ f 1+τ y, xT β, φ dy, for j, h = 1, 2.
Notice that for the case where φ is known we get
Ωn,τ (θ) =
n
X
n
X
γ 11,τ (xi ) − γ 21,τ (xi ) xi xTi , and Ψn,τ (θ) =
γ 11,τ (xi )xi xTi .
i=1
(44)
i=1
bτ under this fixed-design GLM.
See Ghosh and Basu (2016) for more details on the properties of the MDPDE θ
Here, we consider the most important hypothesis testing problem in the context of GLM, namely testing
the linear hypothesis on regression coefficient β as given by
H0 : Lβ = l0 versus H1 : Lβ 6= l0 ,
(45)
with L being an r × k known matrix of rank r and l0 being an known r-vector with r ≤ k. Note that,
this particular hypothesis (45) belongs to the general class of hypothesis in (19) with h (η) = Lβ − l0 and
H (η) = LT . Now for testing (45), we can consider the family of Wald-type test statistics presented in Section
2.2, given by
i−1
T h
b − l0 .
b τ ) = n Lβ
b − l0
bτ )LT
Lβ
(46)
Wn (θ
LΣτ (θ
τ
τ
Based on our Theorem 6, the null hypothesis given in (45) will be rejected if we have that
bτ ) > χ2 .
Wn (θ
r,α
(47)
Further, following discussions in Section 2.2, this proposed Wald-type test is consistent at any fixed alternatives
and one can obtain an approximation to its power function at any fixed alternatives.
T
Next, suppose the true null parameter value is θ 0 = β T0 , φ0
and consider the sequence of contiguous
alternatives H1,n : β n = β 0 + n−1/2 d with d ∈ Rk − {0}. This is also equivalent to the alternative contiguous
hypothesis H1,n : Lβ n = l0 + n−1/2 d∗ with d∗ = Ld ∈ Rr − {0}. Under these contiguous alternatives, the
proposed Wald-type test statistics have the asymptotic distribution as non-central chi-square with degrees of
freedom r and non-centrality parameter
h
i−1
h
i−1
δ = dT LT LΣτ (θ 0 )LT
Ld = d∗T LΣτ (θ 0 )LT
d∗ .
(48)
Then, the asymptotic power at these contiguous alternatives can easily be obtained through the upper cumulative
distribution functions of the above non-central chi-square distributions. We will examine their behavior for some
special cases of GLM in the next section.
Next, considering the robustness of the proposed Wald-type tests under GLM, the first order influence
function of the test statistics and the level influence functions are always identically zero under contamination
in any fixed direction or in all directions following the general theory developed in Section 3. For both types
of contaminations, the non-zero second order influence function of the proposed Wald-type test statistics (46)
under fixed-design GLM is given by
h
i−1
(2)
LIFi0 (ti0 ; T τ , Fθ0 )
IFi0 (ti0 ; Wτ , Fθ0 ) = 2IFi0 (ti0 ; T τ , Fθ0 )T LT LΣτ (θ 0 )LT
h
i−1
IF (2) (t; Wτ , Fθ0 ) = 2IF (t; T τ , Fθ0 )T LT LΣτ (θ 0 )LT
LIF (t; T τ , Fθ0 ),
bτ under
where IFi0 (ti0 ; T τ , Fθ0 ) and IF (t; T τ , Fθ0 ) are corresponding influence functions of the MDPDE θ
the fixed-design GLM for contamination in the i0 -th direction and all directions respectively. These influence
functions of the MDPDE under fixed-design GLM have been studied by Ghosh and Basu (2016); using the
explicit form of these IFs, the second order IFs of our test statistics become
T
1
1
(2)
S τ ,i0 (ti0 ; β 0 , φ0 ) L∗0,τ
S τ ,i0 (ti0 ; β 0 , φ) ,
IFi0 (ti0 ; Wτ , F(β0 ,φ0 ) ) = 2
n
n
" n
#T
" n
#
X
X
1
1
IF (2) (t; Wτ , F(β0 ,φ0 ) ) = 2
S τ ,i (ti ; β 0 , φ0 ) L∗0
S τ ,i (ti ; β 0 , φ0 ) ,
n i=1
n i=1
13
h
i−1
T
where L∗0,τ = Ψ−1
LΣτ (β 0 , φ0 )LT
LΨ−1
n,τ (β 0 , φ0 )L
n,τ (β 0 , φ0 ) and
K1 (ti , xTi β, φ)f τ (ti , xTi β, φ) − γ 1,τ (xi ) xi
.
S τ ,i (ti ; β, φ) =
K2 (ti , xTi β, φ)f τ (ti , xTi β, φ) − γ 2,τ (xi )
(49)
Clearly these influence functions will be bounded whenever the function S τ ,i (ti ; β 0 , φ0 ) is bounded in ti . However, due to the particular exponential form of the density in GLM, we have Kj (ti , xTi β, φ) is a polynomial
function of ti and the integral γ j,τ (xi ) is bounded for any given finite xi for each j = 1, 2. Hence, for any
τ > 0, the function S τ ,i (ti ; β 0 , φ0 ) will bounded in ti and it will be unbounded at τ = 0. This implies that
the proposed Wald-type tests with τ > 0 under fixed-design GLM will be robust compared to the non-robust
classical Wald-test at τ = 0.
We can similarly also check the power robustness of our proposal at τ > 0 under fixed-design GLM by
deriving the form of PIF from Theorem 12, whose boundedness again depends directly on the boundedness of
the function S τ ,i (ti ; β 0 , φ0 ) with respect to the contamination points ti s. In particular, the form of the PIF
under contiguous alternatives H1,n for the present case of fixed-design GLM simplifies to
" n
#
h
i−1
X
1
S τ ,i (ti ; β 0 , φ0 ) ,
L
P IF (t; Wn , F(β0 ,φ0 ) ) = Kr∗ (δ) dT LT LΣτ (β 0 , φ0 )LT
n i=1
where δ is as given by Equation (48). We will further study the behavior of these influence functions for some
particular examples of GLM in the next section.
It is worthwhile to note that the GLM considered in this paper as an special case of general non-homogeneous
set-up is different from the usual GLM with stochastic covariates (random design); here we are assuming that
the values of covariates (design-points) xi are fixed and known previously. The problem of robust hypothesis
testing under GLM with random design has been considered in Basu et al. (2017a,b).
5
5.1
Examples and Illustrations
Testing Significance of a Normal Linear Regression Model
As our first illustrative example, we will consider the most common and simplest case of GLM, namely the normal
regression model where the model density f (yi , xTi β, φ) is normal with mean xTi β and common variance φ > 0.
This model has a simpler representation given by
yi = xTi β + εi , i = 1, . . . , n,
where εi s are independent normally distributed errors with mean 0 and variance φ. When the design points xi s
are pre-fixed, we can apply the results derived above to construct and study robust Wald-type tests for general
linear hypothesis under this simpler model. In particular, for illustration, let us consider the problem of testing
for the significance of this linear model characterized by the hypothesis
H0 : β = β 0 versus H1 : β 6= β 0 ,
(50)
where β 0 is a known k-vector of hypothesized regression coefficients (usually a zero vector) and we assume φ
to be unknown under both hypotheses. The classical F-test for this problem is a version of the classical Wald
test based on the MLE of the parameters θ = (β T , φ)T and hence known to be highly non-robust. We will now
study the performances of the proposed Wald-type tests for this hypothesis.
Note that the hypothesisin (50) under
the normal linear regression model belongs to the class of general
Ik 0
linear hypothesis with L =
with I k being the identity matrix of order k and l0 = (β T0 0)T . So,
0 0
using the results of the previous section, the robust Wald-type test statistics for this testing problem simplifies
to
T h
i−1
b ,φ
b )=n β
b −β
b ,φ
b )LT
b −β ,
Wn (β
LΣ
(
β
β
(51)
τ
τ
τ
τ
0
τ
τ
τ
0
b and φ
b are the MDPDE of β and φ respectively with tuning parameter τ and have asymptotic joint
where β
τ
τ
covariance matrix Στ (β 0 , φ0 ) at the true null parameter values (β 0 , φ0 ). Ghosh and Basu (2013) studied the
14
properties of these MDPDEs under fixed design normal linear model in detail. In particular it follows that,
under assumptions (R1)–(R2) of their paper (also listed in Appendix A), asymptotically
√ T
b ,φ
b )T − (β T , φ )T
n (β
τ
0
τ
0
follows a k + 1-variate normal distribution with mean vector 0 and covariance matrix given by
β −1
υτ C x
0
Στ (β 0 , φ0 ) =
,
0T
υ φτ
1
(X T X) with X T = [x1 · · · xn ] being the design matrix and
n
#
"
3/2
5/2
τ2
4φ2
τ2
β
φ
2
2
2
υτ = φ 1 +
, υτ =
2(1 + 2τ ) 1 +
− τ (1 + τ ) .
1 + 2τ
(2 + τ 2 )2
1 + 2τ
where C x = lim
n→∞
Using these expressions, our proposed Wald-type test statistics (51) for testing (50) further simplifies to
b ,φ
b )= n
Wn ( β
τ
τ
b
φ
τ
τ2
1+
1 + 2τ
−3/2
b −β
β
τ
0
T
b −β ,
Cx β
τ
0
(52)
which coincides with the classical Wald test at τ = 0. Following the theory of Section 4, these Wald-type test
statistics have asymptotic null distribution as χ2k and consistent against any fixed alternatives. To study its
power against contiguous alternatives H1,n : β n = β 0 + n−1/2 d with d ∈ Rk − {0}, note that the asymptotic
distribution of the proposed Wald-type test statistics under H1,n is non-central χ2 with degrees of freedom r
and non-centrality parameter
−3/2 h
i
1
τ2
δ=
dT C x d .
(53)
1+
φ0
1 + 2τ
Clearly the asymptotic contiguous power hof our proposed
test statistics depends on the given fixed values of
i
T
design points through the quantity dx = d C x d along with the tuning parameter τ . Table 1 presents the
empirical values of these contiguous powers over τ for different values of dx , with φ0 = 1 and 5% level of
significance. Note that, as the number (k) of regressors to be tested increases, we need larger values of dx
to achieve any fixed values of the contiguous power; for a given fixed design this corresponds to larger values
of ||d||. Further, for any fixed τ the values of contiguous power increases as dx increases but for any fixed
dx > 0 it decreases as τ increases as expected; the choice dx = 0 leads to the level of the tests for all τ ≥ 0.
However, interestingly, the loss in power compared to the classical Wald test at τ = 0 is not quite significant
at small values of τ > 0. And, against this relatively small price, we will gain substantially robustness against
contamination in data as illustrated below with the specific forms of the influence functions.
Table 1: Contiguous power of the proposed Wald-type test for testing (50) under the normal regression model
k=1
k = 20
τ
τ
dx
0
0.1
0.3
0.5
0.7
1
0
0.1
0.3
0.5
0.7
1
0
0.050 0.050 0.050 0.050 0.050 0.050 0.050 0.050 0.050 0.050 0.050 0.050
2
0.293 0.290 0.274 0.254 0.234 0.207 0.096 0.096 0.092 0.088 0.083 0.078
5
0.609 0.603 0.574 0.535 0.494 0.437 0.193 0.191 0.179 0.164 0.150 0.133
10 0.885 0.882 0.859 0.825 0.786 0.722 0.402 0.397 0.367 0.331 0.296 0.252
15 0.972 0.971 0.961 0.944 0.921 0.877 0.611 0.604 0.565 0.513 0.461 0.391
20 0.994 0.994 0.990 0.984 0.973 0.950 0.775 0.768 0.730 0.675 0.616 0.531
25 0.999 0.999 0.998 0.996 0.992 0.981 0.883 0.878 0.847 0.800 0.745 0.657
50 1.000 1.000 1.000 1.000 1.000 1.000 0.944 0.941 0.920 0.885 0.840 0.761
Again, based on the general theory developed in Section 4, one can readily check that the second order
influence function of the proposed Wald-type tests at the true null distribution having parameters θ 0 = (β T0 , φ0 )T
15
under the present case simplifies to
(2)
IFi0 (ti0 ; Wτ , Fθ0 )
=
IF (2) (t; Wτ , Fθ0 )
=
2
i
2 τ (ti0 −xTi0 β0 ) h T
2
φ0
(1 + 2τ )3/2 ti0 − xTi0 β 0 e−
xi0 (X T X)−1 C x (X T X)−1 xi0 ,
φ0
2
n
i
X
2 − τ (ti −xTi β0 ) h T
2
3/2
T
φ0
(1 + 2τ )
xi (X T X)−1 C x (X T X)−1 xi .
ti − xi β 0 e
φ0
i=1
Clearly, these influence functions depend on the values of the given fixed design-points in the direction of
contamination. However, for any given finite design-points, they are bounded in contamination points ti s for
each τ > 0 and unbounded at τ = 0. We will explicitly examine their nature for some particular cases of design
matrix; in particular, we consider the following four fixed designs:
Design
Design
Design
Design
1:
2:
3:
4:
xi
xi
xi
xi
= (1, xi1 )T ; xi1 = a, i = 1, . . . , n/2; xi1 = b, i = n/2 + 1, . . . , n.
(two-point design)
= (1, xi1 )T ; xi1 , i = 1, . . . , n, are pre-fixed iid observations from N (µx , σ 2x ). (Fixed-Normal design)
= (1, xi1 )T ; xi1 = i for i = 1, . . . , n.
(Divergent design)
= (1, xi1 , xi2 )T ; xi1 = 1i , xi2 = i12 for i = 1, . . . , n.
(Convergent design)
Note that, the C x matrix is finitely defined and is positive definite for the first two designs with values
1
1
(a + b)
1
µx
2
Cx =
and C x =
1
1 2
2
µx σ 2x + µ2x
2 (a + b)
2 (a + b )
respectively. In our illustrations, we have taken a = 1, b = 2 in design 1 and µx = 0, σ x = 1 in Design 2
so that the first one is asymmetric and non-orthogonal but the second one is symmetric and orthogonal. The
design matrix for Designs 3 and 4 are positive definite for any finite sample sizes, but the corresponding C x
matrices have all elements except their (1, 1)-th element as ∞ and 0 respectively; however, for the computation
of the above fixed sample IFs in these cases we can use the finite sample approximation of C x by n1 (X T X).
Figure 1 presents the second order IF of our test statistics for different contamination direction under these four
designs at the finite sample size n = 50 with β 0 = 1, the vector of ones, φ0 = 1 and different values of τ . The
boundedness of these IFs at τ > 0 clearly indicates the robustness of our proposed Wald-type tests. Further,
the (absolute) supremum of these IFs also decreases as τ increases which implies the increasing robustness
of the proposal with increasing τ . The extent of this increase in robustness for τ > 0 over τ = 0 becomes
more prominent when contamination is in all directions and/or the size of the fixed design matrix increases (an
extreme increment is in the case of Figure 3i).
Noting that the LIF is always zero, we can next study the power influence function also. In the present case,
the PIF can be seen to have the form
P IF (t; Wn , Fθ0 )
=
Kk∗ (δ)
2
n
i
X
τ (ti −xTi β0 ) h T
2
2φ0
(1 + 2τ )3/2 (1 + τ )−3/2
ti − xTi β 0 e−
d C x (X T X)−1 xi ,
φ0
i=1
where δ is as given by Equation (53). Figure 2 presents these PIFs for the above four designs with different τ
at the finite sample size n = 50 with β 0 = 1, φ0 = 1, d = 10−2 β and 5% level of significance. Again, the power
of the proposed Wald-type tests under the normal model seems to be robust for all τ > 0 and for all the fixed
designs over the classical non-robust choice of τ = 0. Further, the extent of robustness increases as τ increases
or the size of the design matrix decreases.
5.2
Testing for individual regression coefficients in Poisson model for Count Data
Let us consider another popular special class of GLM applicable to the analysis of count responses, namely
T
the Poisson regression model. Here the response yi is assumed to have a Poisson distribution with mean exi β
depending on the given predictor values xi . In terms of the GLM notations of Section 4, the density in (40)
is then a Poisson density with η i = xTi β, known φ = 1 and the logarithmic link function. So, we can obtain
bτ = β
b of the regression parameter θ = β in this case following the general theory of
the robust MDPDE θ
τ
b under the fixed-design Poisson regression model
Section 4. Ghosh and Basu (2016) studied these MDPDEs β
τ
and their properties in detail with examples. In particular, in the notations of Section 4, we have estimating
16
(a) Design 1, i0 = 10
(b) Design 1, i0 = 40
(c) Design 1, all directions
(d) Design 2, i0 = 10
(e) Design 2, i0 = 40
(f) Design 2, all directions
(g) Design 3, i0 = 10
(h) Design 3, i0 = 40
(j) Design 4, i0 = 10
(k) Design 4, i0 = 40
(i)
∗
Design 3, all directions
(l) Design 4, all directions
Figure 1: Second order influence function of the proposed Wald-type test statistics for testing (50) under the
normal regression model with fixed designs 1 – 4 and contamination in the direction i0 = 10, 40 or in all
directions at t = t1 [solid line: τ = 0; dash-dotted line: τ = 0.1; dotted line: τ = 0.3; dashed line: τ = 0.5].
∗
indicates that the values for τ = 0 (solid line) has been shown in multiple of 10−2 for this graph only.
equations given only by (41) with K1 (yi , xTi β, φ) = yi − xTi β , and the required asymptotic variance matrix
Στ can be obtained in terms of Ωn,τ and Ψn,τ as defined in (44).
Here, as our second illustration of the proposed Wald-type testing procedures, we will consider the problem
of testing for the significance of any predictor (say, h-th predictor) in the model. For a fixed integer h between
17
(a) Design 1
(b) Design 2
(c) Design 3
(d) Design 4
Figure 2: Power influence function of the proposed Wald-type test statistics for testing (50) under the normal
regression model with fixed designs 1 – 4 at contamination point t = t1 [solid line: τ = 0; dash-dotted line:
τ = 0.1; dotted line: τ = 0.3; dashed line: τ = 0.5].
1 to k, the corresponding hypothesis is given by
H0 : β h = 0 versus H1 : β h 6= 0,
(54)
where β h denotes the h-th component of the regression vector β. Clearly this important hypothesis in (54) is
a special case of the general linear hypotheses in (45) with r = 1, LT being an k-vector with all entries zero
except the h-th entry as 1 and l0 = 0. So, our proposed Wald-type test statistics for this problem, following the
general theory of Section 4, can be simplified as
2
b )=
Wn ( β
τ
b
nβ
h,τ
,
2
b )
σ hh,τ (β
τ
(55)
2
b
b
where β
h,τ is the h-th entry of β τ denoting the MDPDE of β h and σ hh,τ denote the h-th diagonal element of the
√ b
asymptotic covariance matrix Στ at the null parameter values denoting the null asymptotic variance of nβ
h,τ .
Following Section 4, the test statistic in (55) is asymptotically distributed as a χ²₁ random variable under the
null hypothesis and is consistent at any fixed alternative. Further, denoting the null parameter value by β0,
with h-th entry βh,0 = 0, the asymptotic distribution of the proposed Wald-type test statistic under the contiguous alternatives

H1,n : βn with βh,n = n^{−1/2} d,  βl,n = βl,0 for l ≠ h,  d ∈ R − {0},

is a non-central χ² distribution with one degree of freedom and non-centrality parameter

δ = d² / σ²hh,τ(β0).        (56)
Note that σ²hh,τ(β0) has no closed form expression in this case, but it can be estimated numerically, for any fixed
sample size and any given design matrix, by σ̂²hh,τ(β0), the h-th diagonal entry of the matrix

Ψn,τ(β0)^{−1} Ωn,τ(β0) Ψn,τ(β0)^{−1}
estimating Στ(β0). Therefore, the effect of the given design points cannot be separated out explicitly from the
form of the asymptotic contiguous power based on this non-central distribution, as was possible for the normal
model considered earlier. We again consider the four designs 1–4 from Section 5.1 and numerically compute the asymptotic
contiguous power of the proposed Wald-type tests for testing (54) for different values of h, d and τ, assuming
n = 50, βl,0 = 1 for all l ≠ h and a 5% level of significance; the results are shown in Table 2. Once again, the
power loss is not significant for small positive values of τ. Also, larger values of d are needed to attain
any fixed power by the proposed Wald-type test statistics with a fixed tuning parameter whenever the values of
the fixed design variables increase.
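Concretely, each power entry in Table 2 is obtained from the standard non-central chi-square formula (a routine computation consistent with the description above): at the 5% level of significance, the asymptotic contiguous power corresponding to a non-centrality parameter δ as in (56) is

P( χ²₁(δ) > χ²_{1,0.95} ) = 1 − F_{χ²₁(δ)}( χ²_{1,0.95} ),

where χ²_{1,0.95} ≈ 3.841 is the upper 5% critical value of the central χ²₁ distribution and F_{χ²₁(δ)} denotes the distribution function of the non-central χ²₁(δ) law. In particular, d = 0 gives δ = 0, so the power equals the nominal level 0.05, as seen in the first row of every block of Table 2.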
Table 2: Contiguous power of the proposed Wald-type test for testing (54) under the Poisson regression model
(for each design, the left block refers to the first tested coefficient and the right block to the second).

Design 1                       h = 1                                          h = 2
  d      τ=0    τ=0.1  τ=0.3  τ=0.5  τ=0.7  τ=1      τ=0    τ=0.1  τ=0.3  τ=0.5  τ=0.7  τ=1
  0     0.050  0.050  0.050  0.050  0.050  0.050    0.050  0.050  0.050  0.050  0.050  0.050
  2     0.200  0.199  0.189  0.178  0.167  0.152    0.378  0.375  0.355  0.331  0.308  0.276
  3     0.388  0.383  0.364  0.339  0.315  0.282    0.696  0.691  0.663  0.627  0.589  0.534
  5     0.796  0.792  0.766  0.730  0.692  0.634    0.985  0.984  0.978  0.968  0.954  0.926
  7     0.974  0.973  0.964  0.950  0.932  0.897    1.000  1.000  1.000  1.000  0.999  0.998
  10    1.000  1.000  1.000  0.999  0.999  0.996    1.000  1.000  1.000  1.000  1.000  1.000

Design 2                       h = 1                                          h = 2
  0     0.050  0.050  0.050  0.050  0.050  0.050    0.050  0.050  0.050  0.050  0.050  0.050
  1     0.140  0.139  0.132  0.124  0.117  0.110    0.320  0.316  0.300  0.280  0.261  0.234
  2     0.414  0.408  0.381  0.351  0.326  0.299    0.846  0.842  0.819  0.786  0.749  0.693
  3     0.743  0.735  0.700  0.658  0.619  0.574    0.994  0.994  0.991  0.985  0.977  0.959
  5     0.992  0.991  0.985  0.976  0.965  0.947    1.000  1.000  1.000  1.000  1.000  1.000
  7     1.000  1.000  1.000  1.000  1.000  0.999    1.000  1.000  1.000  1.000  1.000  1.000

Design 3                       h = 1                                          h = 2
  0     0.050  0.050  0.050  0.050  0.050  0.050    0.050  0.050  0.050  0.050  0.050  0.050
  0.01  1.000  1.000  1.000  1.000  1.000  1.000    0.057  0.056  0.056  0.056  0.055  0.055
  0.05  1.000  1.000  1.000  1.000  1.000  1.000    0.221  0.219  0.209  0.196  0.183  0.166
  0.1   1.000  1.000  1.000  1.000  1.000  1.000    0.662  0.657  0.630  0.593  0.557  0.503
  0.2   1.000  1.000  1.000  1.000  1.000  1.000    0.997  0.997  0.996  0.993  0.988  0.976
  0.5   1.000  1.000  1.000  1.000  1.000  1.000    1.000  1.000  1.000  1.000  1.000  1.000

Design 4                       h = 2                                          h = 3
  0     0.050  0.050  0.050  0.050  0.050  0.050    0.050  0.050  0.050  0.050  0.050  0.050
  10    0.153  0.152  0.145  0.137  0.129  0.119    0.168  0.167  0.159  0.149  0.140  0.128
  20    0.459  0.455  0.431  0.402  0.373  0.333    0.510  0.506  0.479  0.445  0.412  0.366
  30    0.795  0.792  0.765  0.728  0.689  0.630    0.845  0.841  0.816  0.781  0.740  0.679
  50    0.996  0.996  0.994  0.990  0.983  0.968    0.999  0.999  0.998  0.995  0.992  0.981
  70    1.000  1.000  1.000  1.000  1.000  1.000    1.000  1.000  1.000  1.000  1.000  1.000
Further, the robustness of our proposed Wald-type tests can also be verified by examining the second order
influence functions of the test statistics and the power influence functions obtained from their general expressions
derived in Section 4. In the present case of the Poisson regression model, no further simplified explicit
expressions are available beyond substituting the particular form of K1(yi, x_i^T β, φ) given above (there is no
K2(yi, x_i^T β, φ) here); however, they can easily be computed numerically for any given fixed design. We again present
the numerical values of the second order influence functions of the proposed Wald-type test statistics for testing
the significance of the first slope parameter β2 (h = 2 in (54)) at different values of τ under the four designs
considered in Section 5.1, with n = 50 and βl,0 = 1 for all l ≠ 2; these are presented in Figure 3. The redescending
nature of all the influence functions with increasing τ is again quite clear from the figures, which indicates the
increasing robustness of our proposed Wald-type tests as τ > 0 increases over the non-robust choice τ = 0
(which has unbounded influence functions). The nature of the power influence functions in this case is also seen
to be very similar, implying the robustness of our proposal at τ > 0; so we skip them for brevity.
[Figure 3: twelve panels, (a)–(l), showing Designs 1–4, each with contamination in the direction i0 = 10, i0 = 40, or in all directions.]
Figure 3: Second order influence function of the proposed Wald-type test statistics for testing (54) under Poisson
regression model with fixed designs 1–4 and contamination in the direction i0 = 10, 40 or in all directions at
t = t1 [solid line: τ = 0; dash-dotted line: τ = 0.1; dotted line: τ = 0.3; dashed line: τ = 0.5].
6
Concluding remarks and the Choice of τ
We have proposed a robust parametric hypothesis testing approach for general non-homogeneous observations
involving a common model parameter. The test statistics have been constructed by generalizing the Wald
test statistics using the robust minimum density power divergence estimator (with parameter τ ≥ 0) of the
underlying common parameter in place of its non-robust maximum likelihood estimator. The properties of the
proposed tests have been studied for both simple and composite hypotheses under the general non-homogeneous
set-up and applied to several fixed-design GLMs. In particular, it has been observed that the proposed tests have
a simple chi-square asymptotic limit under the null hypothesis, in contrast to the linear combination of chi-squares
limit for the robust tests of Ghosh and Basu (2017), which makes their application much easier under complex
models. Also, the tests are always consistent at any fixed alternative and, for τ > 0, have bounded second order
influence functions of the test statistics and bounded power influence functions, implying the robustness of our
proposal.
Further, in each of the examples considered, we have seen that the asymptotic power of the proposed
Wald-type tests under contiguous alternatives, as well as the extent of robustness, depends on the tuning
parameter τ. In particular, as τ increases, the contiguous power decreases slightly from its highest value at τ = 0,
corresponding to the non-robust classical MLE based Wald test, but the robustness increases significantly. Thus,
the tuning parameter τ yields a trade-off between asymptotic contiguous power and robustness of these Wald-type
tests; note the similarity with the trade-off between asymptotic efficiency and robustness of the underlying
MDPDE as studied in Ghosh and Basu (2013). In fact, one can see explicitly from the theoretical results
derived here that the power and robustness of the proposed Wald-type tests depend on τ directly
through the corresponding dependence of the efficiency and robustness of the MDPDE used in constructing the test
statistics. Hence a proper choice of the tuning parameter τ, balancing asymptotic power and robustness,
can equivalently be obtained by balancing the corresponding trade-off for the underlying MDPDE. This latter
problem under the non-homogeneous set-up has been proposed and studied by Ghosh and Basu (2013, 2015,
2016), where it is proposed that a data-driven estimate of the mean square error of the MDPDE be minimized
to obtain the optimum tuning parameter for any given practical dataset. The same optimum τ can also be used
for applying our proposed Wald-type tests to any practical hypothesis testing problem. However, a more
detailed investigation of this issue could be an interesting topic of future research.
Another possible future extension of the present paper is to construct similar robust testing procedures
for two independent samples of non-homogeneous data. This problem is of high practical relevance, as one
could then use the construction to test between two regression lines from fixed-design clinical trials, which occur
frequently in medical sciences and epidemiology. The corresponding problem with homogeneous samples has
been recently tackled by Ghosh et al. (2017), and it should be extended to the case of non-homogeneous data
and fixed-design regressions as in the present paper. We hope to pursue some such extensions in the future.
A
Assumptions
Assumptions required for Asymptotic distributions of the MDPDE under non-homogeneous data
(Ghosh and Basu, 2013):
(A1) For all i = 1, . . . , n, the support χ = {y | fi,θ(y) > 0} of the model distribution is independent of i
and θ, and is the same as the support of the true distribution Gi for each i.
(A2) There exists an open subset ω ⊆ Θ that contains the best fitting parameter θ g = T τ (G) and, at each
θ ∈ ω, the model density fi,θ (y) is thrice continuously differentiable with respect to θ for almost all y ∈ χ
and all i = 1, . . . , n.
(A3) The integrals ∫ fi,θ(y)^{1+α} dy and ∫ fi,θ(y)^α gi(y) dy can be differentiated three times with respect to θ, and the derivatives
can be taken under the integral sign, for any i = 1, . . . , n.
(A4) The matrix Ji,τ is positive definite for any i = 1, . . . , n, and inf_n [min eigenvalue of Ψn,τ] > 0.
(A5) Define Vi,θ(y) = ∫ fi,θ(y)^{1+α} dy − (1 + 1/α) fi,θ(y)^α. For all θ ∈ Θ, i = 1, . . . , n, and all j, h, l = 1, . . . , k,
the (j, h, l)-th (third order) partial derivative of Vi,θ(y) is bounded in absolute value by some function
M^{(i)}_{jhl}(y) satisfying (1/n) Σ_{i=1}^{n} E_{gi}[ M^{(i)}_{jhl}(Y) ] = O(1).
(A6) For all j, h = 1, . . . , k, define N^{(1)}_{ijh}(y) = ∇_j Vi,θ(y) and N^{(2)}_{ijh}(y) = ∇_{jh} Vi,θ(y) − E_{gi}(∇_{jh} Vi,θ(y)). Then
we have

lim_{N→∞} sup_{n>1} (1/n) Σ_{i=1}^{n} E_{gi}[ |N^{(l)}_{ijh}(y)| I(|N^{(l)}_{ijh}(y)| > N) ] = 0,   l = 1, 2.
(A7) For any ε > 0,

lim_{n→∞} (1/n) Σ_{i=1}^{n} E_{gi}[ ||Ω_n^{−1/2} ∇Vi,θ(y)||² I(||Ω_n^{−1/2} ∇Vi,θ(y)|| > ε√n) ] = 0.        (57)
Assumptions required for Asymptotic distributions of the MDPDE under Normal Fixed-Design
Linear Model (Ghosh and Basu, 2013):
The values of the given design points xi = (x1i, . . . , xki)^T are such that

(R1) sup_{n>1} max_{1≤i≤n} |xji| = O(1), sup_{n>1} max_{1≤i≤n} |xji xli| = O(1), and (1/n) Σ_{i=1}^{n} |xji xli xhi| = O(1), for all j, l, h = 1, . . . , k.

(R2) inf_n [min eigenvalue of (1/n)(X^T X)] > 0.
References
Aerts, S. and Haesbroeck, G. (2016). Robust asymptotic tests for the equality of multivariate coefficients of
variation. Test, doi: 10.1007/s11749-016-0504-4.
Basu, A., Harris, I. R., Hjort, N. L., and Jones, M. C. (1998). Robust and efficient estimation by minimising a
density power divergence. Biometrika, 85, 549–559.
Basu, A., Shioya, H. and Park, C. (2011). Statistical Inference: The Minimum Distance Approach. Chapman &
Hall/CRC, Boca Raton.
Basu, A., Mandal, A., Martin, N. and Pardo, L. (2016) Generalized Wald-type tests based on minimum density
power divergence estimators. Statistics, 50(1), 1–26.
Basu, A., Ghosh, A., Mandal, A., Martin, N. and Pardo, L. (2017a). A Wald-type test statistic for testing linear
hypothesis in logistic regression models based on minimum density power divergence estimator. Electronic
Journal of Statistics, 11, 2741–2772.
Basu, A., Ghosh, A., Mandal, A., Martin, N. and Pardo, L. (2017b) Robust Wald-type tests in GLM with
random design based on minimum density power divergence estimators. Pre-print.
Beran, R. (1982). Robust estimation in models for independent non-identically distributed data. The Annals of
Statistics, 10, 2, 418-428.
Cochran, W. G. (1952). The χ2 test of goodness of fit. Annals of Mathematical Statistics, 23, 15-28.
Ghosh, A., and Basu, A. (2013). Robust estimation for independent non-homogeneous observations using density
power divergence with applications to linear regression. Electronic Journal of Statistics, 7, 2420–2456.
Ghosh, A., and Basu, A. (2015). Robust Estimation for Non-Homogeneous Data and the Selection of the Optimal
Tuning Parameter: The DPD Approach. Journal of Applied Statistics, 42(9), 2056–2072.
Ghosh, A., and Basu, A. (2016). Robust Estimation in Generalized Linear Models: The Density Power Divergence Approach. Test, 25(2), 269–290.
Ghosh, A., and Basu, A. (2017). Robust Bounded Influence Tests for Independent but Non-Homogeneous
Observations. Statistica Sinica, doi:10.5705/ss.202015.0320.
Ghosh, A., Basu, A., and Pardo, L. (2015). On the robustness of a divergence based test of simple statistical
hypotheses. Journal of Statistical Planning and Inference, 116, 91–108.
Ghosh, A., Mandal, A., Martin, N. and Pardo, L. (2016). Influence analysis of robust Wald-type tests. Journal
of Multivariate Analysis, 147, 102–126.
Ghosh, A., Martin, N., Basu, A., and Pardo, L. (2017). A New Class of Robust Two-Sample Wald-Type Tests.
ArXiv Pre-print, arXiv:1702.04552 [stat.ME].
Hampel, F. R., Ronchetti, E., Rousseeuw, P. J., and Stahel, W. (1986). Robust Statistics: The Approach Based
on Influence Functions. New York, USA: John Wiley & Sons.
Heritier, S. and Ronchetti, E. (1994). Robust bounded-influence tests in general parametric models. Journal of
the American Statistical Association, 89, 897–904.
Huber, P. J. (1983). Minimax aspects of bounded-influence regression (with discussion). Journal of the American
Statistical Association, 69, 383-393.
Muller, C. (1998). Optimum robust testing in linear models. The Annals of Statistics, 26, 3, 1126-1146.
Rousseeuw, P. J. and Ronchetti, E. (1979). The influence curve for tests. Research Report 21, Fachgruppe für
Statistik, ETH, Zurich.
Rousseeuw, P. J. and Ronchetti, E. (1981). Influence curves for general statistics. J. Comput. Appl. Math., 7,
161–166.
Toma, A. and Broniatowski, M. (2010). Dual divergence estimators and tests: robustness results. Journal of
Multivariate Analysis, 102, 20–36.
Under consideration for publication in Theory and Practice of Logic Programming
arXiv:cs/0005018v2 [cs.LO] 30 Jul 2001
On Modular Termination Proofs of
General Logic Programs
ANNALISA BOSSI, NICOLETTA COCCO, SABINA ROSSI
Dipartimento di Informatica, Università Ca’ Foscari di Venezia
via Torino 155, 30172 Venezia, Italy
SANDRO ETALLE
Department of Computer Science, University of Twente
P.O. Box 217, 7500 AE Enschede, The Netherlands
and
CWI – Center for Mathematics and Computer Science,
P.O. Box 94079, 1090 GB Amsterdam, The Netherlands
Abstract
We propose a modular method for proving termination of general logic programs (i.e.,
logic programs with negation). It is based on the notion of acceptable programs, but it
allows us to prove termination in a truly modular way. We consider programs consisting of
a hierarchy of modules and supply a general result for proving termination by dealing with
each module separately. For programs which are in a certain sense well-behaved, namely
well-moded or well-typed programs, we derive both a simple verification technique and an
iterative proof method. Some examples show how our system allows for greatly simplified
proofs.
1 Introduction
It is standard practice to tackle a large proof by decomposing it into more manageable pieces (lemmata or modules) and proving them separately. By appropriately composing these simpler results, one can then obtain the final proof. This methodology
has also been recognized as an important one when proving termination of logic programs. Moreover, most practical logic programs are engineered by assembling different modules and libraries, some of which might be pre-compiled or written in a
different programming language. In such a situation, a compositional methodology
for proving termination is of crucial importance.
The first approach to modular termination proofs of logic programs has been
proposed by Apt and Pedreschi in (Apt and Pedreschi 1994). It extends the seminal
work on acceptable programs (Apt and Pedreschi 1993) which provides an algebraic
characterization of programs terminating under Prolog left-to-right selection rule.
The class of acceptable programs contains programs which terminate on ground
queries. To prove acceptability one needs to determine a measure on literals (level
mapping) such that, in any clause, the measure of the head is greater than the
measure of each body literal. This implies that the measures of the literals resolved during any computation
starting from a ground or bounded query decrease, and hence termination.
The significance of a modular approach to termination of logic programs has been
recognized also by other authors; more recent proposals can be found in (Pedreschi
and Ruggieri 1996, Marchiori 1996, Verbaeten, Sagonas and De Schreye 1999, Etalle,
Bossi and Cocco 1999, Verbaeten, Sagonas and De Schreye 2001).
All previous proposals (with the exception of (Verbaeten et al. 1999, Etalle et al.
1999)) require the existence of a relation between the level mappings used to prove
acceptability of distinct modules. This is not completely satisfactory: it would be
nice to be able to put together modules which were independently proved terminating, and be sure that the resulting program is still terminating.
We propose a modular approach to termination which allows one to reason independently on each single module and get a termination result on the whole program.
We consider general logic programs, i.e., logic programs with negation, employing
SLDNF-resolution together with the leftmost selection rule (also called LDNF-resolution) as the computational mechanism. We consider programs which can be divided into modules in a hierarchical way, so that each module is an extension of the
previous ones. We show that in this context the termination proof of the entire program can be given in terms of separate proofs for each module, which are naturally
much simpler than a proof for the whole program. While assuming a hierarchy still
allows one to tackle most real-life programs, it leads to termination proofs which,
in most cases, are extremely simple.
We characterize the class of queries terminating for the whole program by introducing a new notion of boundedness, namely strong boundedness. Intuitively, strong
boundedness captures the queries which preserve (standard) boundedness through
the computation. By proving acceptability of each module wrt. a level mapping
which measures only the predicates defined in that module, we get a termination
result for the whole program which is valid for any strongly bounded query. Whenever the original program is decomposed into a hierarchy of small modules, the
termination proof can be drastically simplified with respect to previous modular
approaches. Moreover strong boundedness can be naturally guaranteed by common
persistent properties of programs and queries, namely properties preserved through
LDNF-resolution such as well-modedness (Dembiński and Maluszyński 1985) or
well-typedness (Bronsard, Lakshman and Reddy 1992).
The paper is organized as follows. Section 2 contains some preliminaries. In particular we briefly recall the key concepts of LDNF-resolution, acceptability, boundedness and program extension. Section 3 contains our main results which show how
termination proofs of separate programs can be combined to obtain proofs of larger
programs. In particular we define the concept of strongly bounded query and we
prove that for general programs composed by a hierarchy of n modules, each one
independently acceptable wrt. its own level mapping, any strongly bounded query
terminates. In Section 4 we show how strong boundedness is naturally ensured
by some program properties which are preserved through LDNF-resolution such as
well-modedness and well-typedness. In Section 5 we show how these properties allow
us to apply our general results also for proving termination of modular programs
in an iterative way. In Section 6 we compare our work with Apt and Pedreschi’s
approach. Other related works and concluding remarks are discussed in Section 7.
2 Preliminaries
We use standard notation and terminology of logic programming (Lloyd 1987, Apt
1990, Apt 1997). Just note that general logic programs are called in (Lloyd 1987)
normal logic programs.
2.1 General Programs and LDNF-Resolution
A general clause is a construct of the form
H ← L1 , . . . , Ln
with (n ≥ 0), where H is an atom and L1 , . . . , Ln are literals (i.e., either atoms or
the negation of atoms). In turn, a general query is a possibly empty finite sequence
of literals L1 , . . . , Ln , with (n ≥ 0). A general program is a finite set of general
clauses1 . Given a query Q := L1 , . . . , Ln , a non-empty prefix of Q is any query
L1 , . . . , Li with i ∈ {1, . . . , n}. For a literal L, we denote by rel(L) the predicate
symbol of L.
Following the convention adopted in (Apt 1997), we use bold characters to denote
sequences of objects (so that L indicates a sequence of literals L1 , . . . , Ln , while t
indicates a sequence of terms t1 , . . . , tn ).
For a given program P , we use the following notations: BP for the Herbrand base
of P , ground (P ) for the set of all ground instances of clauses from P , comp(P ) for
the Clark’s completion of P (Clark 1978).
Since in this paper we deal with general queries, clauses and programs, we omit
from now on the qualification “general”, unless some confusion might arise.
We consider LDNF-resolution, and following Apt and Pedreschi’s approach in
studying the termination of general programs (Apt and Pedreschi 1993), we view
LDNF-resolution as a top-down interpreter which, given a general program P and
a general query Q, attempts to build a search tree for P ∪ {Q} by constructing
its branches in parallel. The branches in this tree are called LDNF-derivations of
P ∪ {Q} and the tree itself is called LDNF-tree of P ∪ {Q}. Negative literals are
resolved using the negation-as-failure rule which calls for the construction of a subsidiary LDNF-tree. If during this subsidiary construction the interpreter diverges,
the (main) LDNF-derivation is considered to be infinite. An LDNF-derivation is
finite also if during its construction the interpreter encounters a query with the
first literal being negative and non-ground. In such a case we say that the LDNF-derivation flounders.
1
In the examples throughout the paper, we will adopt the syntactic conventions of Prolog so that
each query and clause ends with the period “.” and “←” is omitted in the unit clauses.
By termination of a general program we actually mean termination of the underlying interpreter. Hence in order to ensure termination of a query Q in a program
P , we require that all LDNF-derivations of P ∪ {Q} are finite.
By an LDNF-descendant of P ∪ {Q} we mean any query occurring during the
LDNF-resolution of P ∪ {Q}, including Q and all the queries occurring during the
construction of the subsidiary LDNF-trees for P ∪ {Q}.
For a non-empty query Q, we denote by first(Q) the first literal of Q. Moreover
we define Call P (Q) = {first(Q′ ) | Q′ is an LDNF-descendant of P ∪ {Q}}. It is
worth noting that if ¬A ∈ Call P (Q) and A is a ground atom, then A ∈ Call P (Q)
too. Notice that, for definite programs, the set Call P (Q) coincides with the call
set Call (P, {Q}) in (De Schreye, Verschaetse and Bruynooghe 1992, Decorte, De
Schreye and Vandecasteele 1999).
The following trivial proposition holds.
Proposition 1
Let P be a program and Q be a query. All LDNF-derivations of P ∪ {Q} are finite
iff for all positive literals A ∈ Call P (Q), all LDNF-derivations of P ∪ {A} are finite.
2.2 Acceptability and Boundedness
The method we are going to use for proving termination of modular programs is
based on the concept of acceptable program (Apt and Pedreschi 1993). In order to
introduce it, we start by the following definition, originally due to (Bezem 1993)
and (Cavedon 1989).
Definition 2 (Level Mapping)
A level mapping for a program P is a function | | : BP → N of ground atoms to
natural numbers. By convention, this definition is extended in a natural way to
ground literals by putting |¬A| = |A|. For a ground literal L, |L| is called the level
of L.
We will use the following notations. Let P be a program and p and q be relations.
We say that p refers to q if there is a clause in P that uses p in its head and q in
its body; p depends on q if (p, q) is in the reflexive, transitive closure of the relation
refers to. We say that p and q are mutually recursive and write p ≃ q, if p depends
on q and q depends on p. We also write p ❂ q, when p depends on q but q does not
depend on p.
We denote by Neg P the set of relations in P which occur in a negative literal in
a clause of P and by Neg ∗P the set of relations in P on which the relations in Neg P
depend. P − denotes the set of clauses in P defining a relation of Neg ∗P .
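As a small additional illustration of these notions (a toy program, not one of the examples used later), consider the program P consisting of the clauses

s1: p(X) ← ¬q(X).
s2: q(a).
s3: r(b).

Here Neg P = {q}, since q is the only relation occurring in a negative literal; Neg ∗P = {q}, because q depends (reflexively) only on itself; and P − = {s2}, while the clauses defining p and r play no role in comp(P − ).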
In the sequel we refer to the standard definition of model of a program and model
of the completion of a program, see (Apt 1990, Apt 1997) for details. In particular
we need the following notion of complete model for a program.
Definition 3 (Complete Model )
A model M of a program P is called complete if its restriction to the relations from
Neg ∗P is a model of comp(P − ).
Notice that if I is a model of comp(P ) then its restriction to the relations in Neg ∗P
is a model of comp(P − ); hence I is a complete model of P .
The following notion of acceptable program was introduced in (Apt and Pedreschi 1993). Apt and Pedreschi proved that such a notion fully characterizes left-termination, namely termination wrt. any ground query, both for definite programs
and for general programs which have no LDNF-derivations which flounder.
Definition 4 (Acceptable Program)
Let P be a program, | | be a level mapping for P and M be a complete model of P .
P is called acceptable wrt. | | and M if for every clause A ← A, B, B in ground (P )
the following implication holds:
if M |= A then |A| > |B|.
Note that if P is a definite program, then both P − and Neg ∗P are empty and M
can be any model of P .
We also need the notion of bounded atom.
Definition 5 (Bounded Atom)
Let P be a program and | | be a level mapping for P . An atom A is called bounded
wrt. | | if the set of all |A′ |, where A′ is a ground instance of A, is finite. In this
case we denote by max |A| the maximum value in this set.
Notice that if an atom A is bounded then, by definition of level mapping, also
the corresponding negative literal, ¬A, is bounded.
Note also that, for atomic queries, this definition coincides with the definition
of bounded query introduced in (Apt and Pedreschi 1993) in order to characterize
terminating queries for acceptable programs. In fact, in case of atomic queries the
notion of boundedness does not depend on a model.
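For instance, with a level mapping such as |length(ls, n)| = |n|size (the mapping used for length in Example 12 below), the non-ground atom length(Ls, s(s(0))) is bounded, since all its ground instances have the same level, whereas length(Ls, N) is not bounded, because its ground instances take arbitrarily large levels as the size of the term filling the second position grows.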
2.3 Extension of a Program
In this paper we consider a hierarchical situation where a program uses another one
as a subprogram. The following definition formalizes this situation.
Definition 6 (Extension)
Let P and R be two programs. A relation p is defined in P if p occurs in a head
of a clause of P ; a literal L is defined in P if rel (L) is defined in P ; P extends R,
denoted P ❂ R, if no relation defined in P occurs in R.
Informally, P extends R if P defines new relations with respect to R. Note that
P and R are independent if no relation defined in P occurs in R and no relation
defined in R occurs in P , i.e. P ❂ R and R ❂ P .
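For example, consider the toy modules

P :  p(X) ← q(X).
R :  q(a).
     q(b).

Then P extends R, since the only relation defined in P , namely p, does not occur in R; conversely, R does not extend P , because q is defined in R and occurs in (the body of) P .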
In the sequel we will study termination in a hierarchy of programs.
Definition 7 (Hierarchy of Programs)
Let P1 , . . . , Pn be programs such that for all i ∈ {1, . . . , n−1}, Pi+1 ❂ (P1 ∪· · ·∪Pi ).
Then we call Pn ❂ · · · ❂ P1 a hierarchy of programs.
3 Hierarchical Termination
This section contains our main results which show how termination proofs of separate programs can be combined to obtain proofs of larger programs. We start with
a technical result, dealing with the case in which a program consists of a hierarchical combination of two modules. This is the base both of a generalization to a
hierarchy of n programs and of an iterative proof method for termination presented
in Section 5. Let us first introduce the following notion of P -closed class of queries.
Definition 8 (P-closed Class)
Let C be a class of queries and P be a program. We say that C is P -closed if it
is closed under non-empty prefix (i.e., it contains all the non-empty prefixes of
its elements) and for each query Q ∈ C, every LDNF-descendant of P ∪ {Q} is
contained in C.
Note that if C is P -closed, then for each query Q ∈ C, Call P (Q) ⊆ C.
We can now state our first general theorem. Notice that if P extends R and P
is acceptable wrt. some level mapping | | and model M , then P is acceptable also
wrt. the level mapping | |′ and M , where | |′ is defined on the Herbrand base of the
union of the two programs BP ∪R and it takes the value 0 on the literals which are
not defined in P (and hence, in particular, on the literals which occur in P but are
defined in R). This shows that in each module it is sufficient to compare only the
level of the literals defined inside it, while we can ignore literals defined outside the
module. In the following we make use of this observation in order to associate to
each module in a hierarchy a level mapping which is independent from the context.
Theorem 9
Let P and R be two programs such that P extends R, M be a complete model of
P ∪ R and C be a (P ∪ R)-closed class of queries. Suppose that
• P is acceptable wrt. a level mapping | | and M ,
• for all queries Q ∈ C, all LDNF-derivations of R ∪ {Q} are finite,
• for all atoms A ∈ C, if A is defined in P then A is bounded wrt. | |.
Then for all queries Q ∈ C, all LDNF-derivations of (P ∪ R) ∪ {Q} are finite.
Proof
By the fact that C is (P ∪ R)-closed and Proposition 1, it is sufficient to prove that
for all positive literals A ∈ C, all LDNF-derivations of (P ∪ R) ∪ {A} are finite. Let
us consider an atom A ∈ C.
If A is defined in R, then the thesis trivially holds by hypothesis.
If A is defined in P , A is bounded wrt. | | by hypothesis and thus max |A| is
defined. The proof proceeds by induction on max |A|.
Base. Let max |A| = 0. In this case, by acceptability of P , there are no clauses
in P whose head unifies with A and whose body is non-empty. Hence, the thesis
holds.
Induction step. Let max |A| > 0. It is sufficient to prove that for all direct descendants (L1 , . . . , Ln ) in the LDNF-tree of (P ∪ R) ∪ {A}, if θi is a computed answer
for P ∪ {L1 , . . . , Li−1 } then all LDNF-derivations of (P ∪ R) ∪ {Li θi } are finite.
Let c : H ′ ← L′1 , . . . , L′n be a clause of P such that σ = mgu(H ′ , A). Let H = H ′ σ
and for all i ∈ {1, . . . , n}, let Li = L′i σ and θi be a substitution such that θi is a
computed answer of L1 , . . . , Li−1 in P ∪ R.
We distinguish two cases. If Li is defined in R then the thesis follows by hypothesis.
Suppose that Li is defined in P . We prove that Li θi is bounded and max |A| >
max |Li θi |. The thesis will follow by the induction hypothesis.
Let γ be a substitution such that Li θi γ is ground. By soundness of LDNF-resolution (Clark 1978), there exists γ ′ such that M |= (L1 , . . . , Li−1 )γ ′ and cσγ ′
is a ground instance of c and Li γ ′ = Li θi γ. Therefore
|Li θi γ| = |Li γ ′ |
          = |L′i σγ ′ |   (since Li = L′i σ)
          < |H ′ σγ ′ |   (since P is acceptable)
          = |Aσγ ′ |      (since σ = mgu(H ′ , A)).
Since A is bounded, we can conclude that Li θi is bounded and also that max |A| >
max |Li θi |.
We are going to extend the above theorem in order to handle the presence of
more than two modules. We need to introduce more notation. Let us consider the
case of a program P consisting of a hierarchy Rn ❂ . . . ❂ R1 of distinct modules,
and satisfying the property that each module, Ri , is acceptable wrt. a distinct
level mapping, | |i , and a complete model, M , of the whole program. Under these
assumptions we identify a specific class of queries which terminate in the whole
program. We characterize the class of terminating queries in terms of the following
notion of strong boundedness. This class enjoys the property of being P -closed.
Definition 10 (Strongly Bounded Query)
Let the program P := R1 ∪ . . . ∪ Rn be a hierarchy Rn ❂ . . . ❂ R1 and | |1 , . . . , | |n
be level mappings for R1 , . . . , Rn , respectively. A query Q is called strongly bounded
wrt. P and | |1 , . . . , | |n if
• for all atoms A ∈ Call P (Q), if A is defined in Ri (with i ∈ {1, . . . , n}) then
A is bounded wrt. | |i .
Notice that the notion of boundedness for an atom (see Definition 5) does not
depend on the choice of a particular model of P . As a consequence, also the definition of strong boundedness does not refer to any model of P ; however, it refers
to the LDNF-derivations of P . For this reason, a ground atom is always bounded
but not necessarily strongly bounded. On the other hand, if A is strongly bounded
then it is bounded too.
The following remark follows immediately.
Remark 11
Let the query Q be strongly bounded wrt. P and | |1 , . . . , | |n , where P is a hierarchy
Rn ❂ · · · ❂ R1 . Let i ∈ {1, . . . , n}. If Q is defined in R1 ∪. . .∪Ri then Q is strongly
bounded wrt. R1 ∪ . . . ∪ Ri and | |1 , . . . , | |i .
In order to verify whether a query Q is strongly bounded wrt. a given program P
one can perform a call-pattern analysis (Janssen and Bruynooghe 1992, Gabbrielli
and Giacobazzi 1994, Codish and Demoen 1995) which allows us to infer information
about the form of the call-patterns, i.e., the atoms that will be possibly called
during the execution of P ∪ {Q}. However this is not the only way for guaranteeing
strong boundedness. There are classes of programs and queries for which strong
boundedness can be proved in a straightforward way. This is shown in the following
section.
Let us illustrate the notion of strong boundedness through an example.
Example 12
Let LIST01 be the following program which defines the proper lists of 0’s and 1’s,
i.e. lists containing only 0’s and 1’s and at least two distinct elements, as follows:
r1: list01([ ],0,0).
r2: list01([0|Xs],s(N0),N1) ← list01(Xs,N0,N1).
r3: list01([1|Xs],N0,s(N1)) ← list01(Xs,N0,N1).
r4: length([ ],0).
r5: length([X|Xs],s(N)) ← length(Xs,N).
r6: plist01(Ls) ← list01(Ls,N0,N1),
¬length(Ls,N0), ¬length(Ls,N1).
Let us distinguish two modules in LIST01: R1 = {r1 , r2 , r3 , r4 , r5 } and R2 = {r6 }
(R2 extends R1 ). Let | |1 be the natural level mapping for R1 defined by:
|list01(ls, n0 , n1 )|1 = |ls|length
|length(ls, n)|1 = |n|size
where for a term t , if t is a list then |t |length is equal to the length of the list,
otherwise it is 0, while |t |size is the number of function symbols occurring in the
term t . Let also | |2 be the trivial level mapping for R2 defined by:
|plist01(ls)|2 = 1
and assume that |L|2 = 0, if L is not defined in R2 .
Let us consider the following sets of atomic queries for LIST01 := R1 ∪ R2 :
Q1 = {list01(ls, n0 , n1 )| ls is a list, possibly non-ground, of a fixed length};
Q2 = {length(ls, n)| n is a ground term of the form either 0 or s(s(...(0)))};
Q3 = {plist01(ls)| ls is a list, possibly non-ground, of a fixed length}.
By definition of | |1 , all the atoms in Q1 and Q2 are bounded wrt. | |1 . Analogously,
all the atoms in Q3 are bounded wrt. | |2 . Notice that for all atoms A ∈ Call P (Qj ),
with j ∈ {1, 2, 3}, there exists k ∈ {1, 2, 3} such that A ∈ Qk . Hence, if A is defined
in Ri then A is bounded wrt. | |i . This proves that the set of queries Q1 , Q2 and
Q3 are strongly bounded wrt. LIST01 and | |1 , | |2 .
Here we introduce our main result.
Theorem 13
Let P := R1 ∪ . . . ∪ Rn be a program such that Rn ❂ . . . ❂ R1 is a hierarchy,
| |1 , . . . , | |n be level mappings for R1 , . . . , Rn , respectively, and M be a complete
model of P . Suppose that
• Ri is acceptable wrt. | |i and M , for all i ∈ {1, . . . , n}.
• Q is a query strongly bounded wrt. P and | |1 , . . . , | |n .
Then all LDNF-derivations of P ∪ {Q} are finite.
Proof
Let Q be a query strongly bounded wrt. P and | |1 , . . . , | |n . We prove the theorem
by induction on n.
Base. Let n = 1. This case follows immediately by Theorem 9, where P = R1 , R
is empty and C is the class of strongly bounded queries wrt. R1 and | |1 , and the
fact that a strongly bounded atom is also bounded.
Induction step. Let n > 1. Also this case follows by Theorem 9, where P = Rn ,
R = R1 ∪. . .∪Rn−1 and C is the class of strongly bounded queries wrt. R1 ∪. . .∪Rn
and | |1 , . . . , | |n . In fact,
• Rn is acceptable wrt. | |n and M ;
• for all queries Q ∈ C, all LDNF-derivations of (R1 ∪ . . . ∪ Rn−1 ) ∪ {Q} are
finite, by Remark 11 and the inductive hypothesis;
• for all atoms A ∈ C, if A is defined in Rn then A is bounded wrt. | |n , by
definition of strong boundedness.
Here are a few examples applying Theorem 13.
Example 14
Let us reconsider the program of Example 12. In the program LIST01, R1 and
R2 are acceptable wrt. any complete model and the level mappings | |1 and | |2 ,
respectively. We already showed that Q1 , Q2 and Q3 are strongly bounded wrt.
LIST01 and | |1 , | |2 . Hence, by Theorem 13, all LDNF-derivations of LIST01∪ {Q},
where Q is a query in Q1 , Q2 or Q3 , are finite.
Notice that in the previous example the top module in the hierarchy, R2 , contains
no recursion. Hence it is intuitively clear that any problem for termination cannot
depend on it. This is reflected by the fact that the level mapping for R2 is completely
trivial. This shows how the hierarchical decomposition of the program can simplify
the termination proof.
Example 15
Consider the sorting program MERGESORT (Apt 1997):
c1: mergesort([ ],[ ]).
c2: mergesort([X],[X]).
c3: mergesort([X,Y|Xs],Ys) ←
split([X,Y|Xs],X1s,X2s),
mergesort(X1s,Y1s),
mergesort(X2s,Y2s),
merge(Y1s,Y2s,Ys).
c4: split([ ],[ ],[ ]).
c5: split([X|Xs],[X|Ys],Zs) ← split(Xs,Zs,Ys).
c6: merge([ ],Xs,Xs).
c7: merge(Xs,[ ],Xs).
c8: merge([X|Xs],[Y|Ys],[X|Zs]) ← X<=Y, merge(Xs,[Y|Ys],Zs).
c9: merge([X|Xs],[Y|Ys],[Y|Zs]) ← X>Y, merge([X|Xs],Ys,Zs).
Let us divide the program MERGESORT into three modules, R1 , R2 , R3 , such that
R3 ❂ R2 ❂ R1 as follows:
• R3 := {c1, c2, c3}, it defines the relation mergesort,
• R2 := {c4, c5}, it defines the relation split,
• R1 := {c6, c7, c8, c9}, it defines the relation merge.
Let us consider the natural level mappings
|merge(xs, ys, zs)|1 = |xs|length + |ys|length
|split(xs, ys, zs)|2 = |xs|length
|mergesort(xs, ys)|3 = |xs|length
and assume that for all i ∈ {1, 2, 3}, |L|i = 0 if L is not defined in Ri .
All ground queries are strongly bounded wrt. the program MERGESORT and the
level mappings | |1 , | |2 , | |3 . Moreover, since the program is a definite one, R1 and
R2 are acceptable wrt. any model and the level mappings | |1 and | |2 , respectively,
while R3 is acceptable wrt. the level mapping | |3 and the model M below:
M =[mergesort(Xs, Ys)] ∪ [merge(Xs, Ys, Zs)]∪
{split([ ], [ ], [ ])}∪
{split([x ], [ ], [x ])| x is any ground term}∪
{split([x ], [x ], [ ])| x is any ground term}∪
{split(xs, ys, zs)| xs, ys, zs are ground terms and
|xs|length ≥ 2, |xs|length > |ys|length , |xs|length > |zs|length }
where we denote by [A] the set of all ground instances of an atom A.
Hence, by Theorem 13, all LDNF-derivations of MERGESORT ∪ {Q}, where Q is a
ground query, are finite.
Note that by exchanging the roles of R1 and R2 we would obtain the same result.
In fact the definition of merge and split are independent from each other.
4 Well-Behaving Programs
In this section we consider the problem of how to prove that a query is strongly
bounded. In fact one could argue that checking strong boundedness is more difficult
and less abstract than checking boundedness itself in the sense of (Apt and Pedreschi
1993): we have to refer to all LDNF-derivations instead of referring to a model,
which might well look like a step backwards in the proof of termination of a program.
This is only partly true: in order to check strong boundedness we can either employ
tools based on abstract interpretation or concentrate our attention only on programs
which exhibit useful persistence properties wrt. LDNF-resolution.
We now show how the well-established notions of well-moded and well-typed
programs can be employed in order to verify strong boundedness and how they can
lead to simple termination proofs.
4.1 Well-Moded Programs
The concept of a well-moded program is due to (Dembiński and Maluszyński 1985).
The formulation we use here is from (Rosenblueth 1991), and it is equivalent to that
in (Drabent 1987). The original definition was given for definite programs (i.e.,
programs without negation), however it applies to general programs as well, just
by considering literals instead of atoms. It relies on the concept of mode, which is
a function that labels the positions of each predicate in order to indicate how the
arguments of a predicate should be used.
Definition 16 (Mode)
Consider an n-ary predicate symbol p. By a mode for p we mean a function mp
from {1, . . . , n} to the set {+, −}. If mp (i) = + then we call i an input position
of p; if mp (i) = − then we call i an output position of p. By a moding we mean a
collection of modes, one for each predicate symbol.
In a moded program, we assume that each predicate symbol has a unique mode
associated to it. Multiple moding may be obtained by simply renaming the predicates. We use the notation p(mp (1), . . . , mp (n)) to denote the moding associated
with a predicate p (e.g., append(+, +, −)). Without loss of generality, we assume,
when writing a literal as p(s, t), that we are indicating with s the sequence of terms
filling in the input positions of p and with t the sequence of terms filling in the
output positions of p. Moreover, we adopt the convention that p(s, t) could denote
both negative and positive literals.
Definition 17 (Well-Moded )
• A query p1 (s1 , t1 ), . . . , pn (sn , tn ) is called well-moded if for all i ∈ {1, . . . , n}
  Var(si ) ⊆ ⋃_{j=1}^{i−1} Var(tj ).
• A clause p(t0 , sn+1 ) ← p1 (s1 , t1 ), . . . , pn (sn , tn ) is called well-moded if for all i ∈ {1, . . . , n + 1}
  Var(si ) ⊆ ⋃_{j=0}^{i−1} Var(tj ).
• A program is called well-moded if all of its clauses are well-moded.
Note that well-modedness can be syntactically checked in a time which is linear
wrt. the size of the program (query).
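As a standard illustration (not spelled out in the text), consider append with moding append(+, +, −):

append([ ],Ys,Ys).
append([X|Xs],Ys,[X|Zs]) ← append(Xs,Ys,Zs).

Both clauses are well-moded: in the recursive clause the input variables of the body atom, Xs and Ys, occur in the input positions of the head, and the output variables of the head, X and Zs, occur among the input variables of the head together with the output variables of the body. Accordingly, the query append([1,2],[3],Zs) is well-moded, whereas append(Xs,[3],Zs) is not, since the variable Xs fills an input position without being produced by any earlier output.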
Remark 18
If Q is a well-moded query then all its prefixes are well-moded.
The following lemma states that well-moded queries are closed under LDNF-resolution. This result has been proved in (Apt and Pellegrini 1994) for LD-derivations
and definite programs.
Lemma 19
Let P and Q be a well-moded program and query, respectively. Then all LDNF-descendants of P ∪ {Q} are well-moded.
Proof
It is sufficient to extend the proof in (Apt and Pellegrini 1994) by showing that if
a query ¬A, L1 , . . . , Ln is well-moded and A is ground then both A and L1 , . . . , Ln
are well-moded. This follows immediately by definition of well-modedness. If A is
non-ground then the query above has no descendant.
When considering well-moded programs, it is natural to measure atoms only in
their input positions (Etalle et al. 1999).
Definition 20 (Moded Level Mapping)
Let P be a moded program. A function | | is a moded level mapping for P if it is a
level mapping for P such that
• for any s, t and u, |p(s, t)| = |p(s, u)|.
Hence in a moded level mapping the level of an atom is independent from the
terms in its output positions.
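For instance, with the moding append(+, +, −) above, the level mapping |append(xs, ys, zs)| = |xs|length is a moded level mapping, since it depends only on a term filling an input position, whereas |append(xs, ys, zs)| = |zs|length is not, because two atoms differing only in their output position would receive different levels.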
The following Remark and Proposition allow us to exploit well-modedness for
applying Theorem 13.
Remark 21
Let P be a well-moded program. If Q is well-moded, then first(Q) is ground in
its input position and hence it is bounded wrt. any moded level mapping for P .
Moreover, by Lemma 19, every well-moded query is strongly bounded wrt. P and
any moded level mapping for P .
Proposition 22
Let P := R1 ∪ . . . ∪ Rn be a well-moded program and Rn ❂ . . . ❂ R1 a hierarchy,
and | |1 , . . . , | |n be moded level mappings for R1 , . . . , Rn , respectively.
Then every well-moded query is strongly bounded wrt. P and | |1 , . . . , | |n .
Example 23
Let MOVE be the following program which defines a permutation between two lists
such that only one element is moved. We introduce modes and we distinguish the
two uses of append by renaming it as append1 and append2.
mode delete(+, −, −).
mode append1(−, −, +).
mode append2(+, +, −).
mode move(+, −).
r1: delete([X|Xs],X,Xs).
r2: delete([X|Xs],Y,[X|Ys]) ← delete(Xs,Y,Ys).
r3: append1([ ],Ys,Ys).
r4: append1([X|Xs],Ys,[X|Zs]) ← append1(Xs,Ys,Zs).
r5: append2([ ],Ys,Ys).
r6: append2([X|Xs],Ys,[X|Zs]) ← append2(Xs,Ys,Zs).
r7: move(Xs,Ys) ← append1(X1s,X2s,Xs),
delete(X1s,X,Y1s), append2(Y1s,[X|X2s],Ys).
Let us partition MOVE into the modules R1 = {r1 , r2 , r3 , r4 , r5 , r6 } and R2 = {r7 }
(R2 extends R1 ). Let | |1 be the natural level mapping for R1 defined by:
|append1(xs, ys, zs)|1 = |zs|length
|append2(xs, ys, zs)|1 = |xs|length .
|delete(xs, x , ys)|1 = |xs|length .
R2 does not contain any recursive definition hence let | |2 be the trivial level mapping
defined by:
|move(xs, ys)|2 = 1
and assume that |L|2 = 0, if L is not defined in R2 .
The program MOVE := R1 ∪ R2 is well-moded and hence by Proposition 22 every
well-moded query is strongly bounded wrt. MOVE and | |1 , | |2 .
Example 24
Let R1 be the program which defines the relations member and is, R2 be the
program defining the relation count and R3 be the program defining the relation
diff with the moding and the definitions below.
mode member(+, +).
mode is(−, +).
mode diff(+, +, +, −).
mode count(+, +, −).
r1: member(X,[X|Xs]).
r2: member(X,[Y|Xs]) ← member(X,Xs).
r3: diff(Ls,I1,I2,N) ← count(Ls,I1,N1), count(Ls,I2,N2),
N is N1-N2.
r4: count([ ],I,0).
r5: count([H|Ts],I,M) ← member(H,I), count(Ts,I,M1),
M is M1+1.
r6: count([H|Ts],I,M) ← ¬ member(H,I), count(Ts,I,M).
The relation diff(ls, i1 , i2 , n), given a list ls and two check-lists i1 and i2 , defines
the difference n between the number of elements of ls occurring in i1 and the
number of elements of ls occurring in i2 . Clearly R3 ❂ R2 ❂ R1 . It is easy to see
that R1 is acceptable wrt. any complete model and the moded level mapping
|member(e, ls)|1 = |ls|length
R2 is acceptable wrt. any complete model and the moded level mapping:
|count(ls, i, n)|2 = |ls|length
and R3 is acceptable wrt. any complete model and the trivial moded level mapping:
|diff(ls, i1 , i2 , n)|3 = 1
where |L|i = 0, if L is not defined in Ri .
The program DIFF := R1 ∪ R2 ∪ R3 is well-moded. Hence, by Proposition 22,
every well-moded query is strongly bounded wrt. DIFF and | |1 , | |2 , | |3 .
Note that the class of strongly bounded queries is generally larger than the class
of well-moded queries. Consider for instance the program MOVE and the query Q :=
move([X1, X2], Ys), delete(Ys, Y, Zs) which is not well-moded since it is not ground
in the input position of the first atom. However Q can be easily recognized to be
strongly bounded wrt. MOVE and | |1 , | |2 defined in Example 23. We will come back
to this query later.
4.2 Well-Typed Programs
A more refined well-behavior property of programs, namely well-typedness, can also
be useful in order to ensure the strong boundedness property.
The notion of well-typedness relies both on the concepts of mode and type. The
following very general definition of a type is sufficient for our purposes.
Definition 25 (Type)
A type is a set of terms closed under substitution.
Assume as given a specific set of types, denoted by Types, which includes Any,
the set of all terms, and Ground the set of all ground terms.
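For instance, the set List of all (nil-terminated) lists is a type, since every substitution instance of a list is again a list, and similarly for Ground; on the other hand, the set of all variables is not a type, since a variable may be instantiated to a non-variable term.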
Definition 26 (Type Associated with a Position)
A type for an n-ary predicate symbol p is a function tp from {1, . . . , n} to the set
Types. If tp (i) = T , we call T the type associated with the position i of p. Assuming
a type tp for the predicate p, we say that a literal p(s1 , . . . , sn ) is correctly typed in
position i if si ∈ tp (i).
In a typed program we assume that every predicate p has a fixed mode mp and
a fixed type tp associated with it and we denote it by
p(mp (1) : tp (1), . . . , mp (n) : tp (n)).
So, for instance, we write
append(+ : List , + : List , − : List )
to denote the moded atom append(+, +, −) where the type associated with each
argument position is List , i.e., the set of all lists.
We can then talk about types of input and of output positions of an atom.
The notion of well-typed queries and programs relies on the following concept of
type judgement.
Definition 27 (Type Judgement )
By a type judgement we mean a statement of the form s : S ⇒ t : T. We say that
a type judgement s : S ⇒ t : T is true, and write |= s : S ⇒ t : T, if for all substitutions θ, sθ ∈ S implies tθ ∈ T.
For example, the type judgements (x : Nat, l : ListNat ) ⇒ ([x|l] : ListNat ) and
([x|l] : ListNat ) ⇒ (l : ListNat ) are both true.
A notion of well-typed program has been first introduced in (Bronsard et al.
1992) and also studied in (Apt and Etalle 1993) and in (Apt and Luitjes 1995).
Similarly to well-moding, the notion was developed for definite programs. Here we
extend it to general programs.
In the following definition, we assume that is : Is is the sequence of typed terms
filling in the input positions of Ls and os : Os is the sequence of typed terms filling
in the output positions of Ls .
Definition 28 (Well-Typed )
• A query L1 , . . . , Ln is called well-typed if for all j ∈ {1, . . . , n}
|= oj1 : Oj1 , . . . , ojk : Ojk ⇒ ij : Ij
where Lj1 , . . . , Ljk are all the positive literals in L1 , . . . , Lj−1 .
• A clause L0 ← L1 , . . . , Ln is called well-typed if for all j ∈ {1, . . . , n}
|= i0 : I0 , oj1 : Oj1 , . . . , ojk : Ojk ⇒ ij : Ij
where Lj1 , . . . , Ljk are all the positive literals in L1 , . . . , Lj−1 , and
|= i0 : I0 , oj1 : Oj1 , . . . , ojh : Ojh ⇒ o0 : O0
where Lj1 , . . . , Ljh are all the positive literals in L1 , . . . , Ln .
• A program is called well-typed if all of its clauses are well-typed.
Note that an atomic query is well-typed iff it is correctly typed in its input positions
and a unit clause p(s : S, t : T) ← is well-typed if |= s : S ⇒ t : T.
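For example, with the typing append(+ : List , + : List , − : List ) given above, the unit clause append([ ],Ys,Ys) is well-typed, since |= ([ ] : List , Ys : List ) ⇒ (Ys : List ) clearly holds, and the atomic query append([1],[2,3],Zs) is well-typed because its input arguments are lists, whereas append(f(a),[2,3],Zs) is not.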
The difference between Definition 28 and the one usually given for definite programs is that the correctness of the terms filling in the output positions of negative
literals cannot be used to deduce the correctness of the terms filling in the input
positions of a literal to the right (or the output positions of the head in a clause).
The two definitions coincide either for definite programs or for general programs
whose negative literals have only input positions.
As an example, let us consider the trivial program
p(− : List ).
q(+ : List ).
p([]).
q([]).
By adopting a straightforward extension of well-typedness to normal programs
which considers also the outputs of negative literals, we would have that the query
¬p(a), q(a) is well-typed even if a is not a list. Moreover well-typedness would not
be persistent wrt. LDNF-resolution since q(a), which is the first LDNF-resolvent of
the previous query, is no longer well-typed. Our extended definition and the classical
one coincide either for definite programs or for general programs whose negative
literals have only input positions.
For definite programs, well-modedness can be viewed as a special case of well-typedness if we consider only one type: Ground. With our extended definitions of
well-moded and well-typed general programs this is no longer true. We could have
given a more complicated definition of well-typedness in order to also capture well-modedness as a special case. For the sake of simplicity, we prefer to give two distinct
and simpler definitions.
Remark 29
If Q is a well-typed query, then all its non-empty prefixes are well-typed. In particular, first(Q) is well-typed.
The following lemma shows that well-typed queries are closed under LDNF-resolution. It has been proved in (Bronsard et al. 1992) for definite programs.
Lemma 30
Let P and Q be a well-typed program and query, respectively. Then all LDNF-descendants of P ∪ {Q} are well-typed.
Proof
Similarly to the case of well-moded programs, to extend the result to general programs it is sufficient to show that if a query Q := ¬A, L1 , . . . , Ln is well-typed then
both A and L1 , . . . , Ln are well-typed. In fact, by Remark 29, ¬A = first (Q) is
well-typed and by Definition 28, if the first literal in a well-typed query is negative,
then it is not used to deduce well-typedness of the rest of the query.
It is now natural to exploit well-typedness in order to check strong boundedness.
Analogously to well-moded programs, there are level mappings that are more natural in the presence of type information. They are the level mappings for which every
well-typed atom is bounded. By Lemma 30 we have that a well-typed query Q is
strongly bounded wrt. a well-typed program P and any such level mapping. This
is stated by the next proposition.
Proposition 31
Let P := R1 ∪ . . . ∪ Rn be a well-typed program and Rn ❂ . . . ❂ R1 be a hierarchy,
and | |1 , . . . , | |n be level mappings for R1 , . . . , Rn , respectively. Suppose that for
every well-typed atom A, if A is defined in Ri then A is bounded wrt. | |i , for i ∈
{1, . . . , n}. Then every well-typed query is strongly bounded wrt. P and | |1 , . . . , | |n .
Example 32
Let us consider again the modular proof of termination for MOVE := R1 ∪ R2 , where
R1 defines the relations append1, append2 and delete, while R2 , which extends
R1 , defines the relation move. We consider the moding of Example 23 with the
following types:
delete(+ : List , − : Any, − : List )
append1(− : List, − : List , + : List )
append2(+ : List, + : List , − : List )
move(+ : List , − : List ).
Program MOVE is well-typed in the assumed modes and types.
Let us consider the same level mappings as used in Example 23. We have already
seen that R2 is acceptable wrt. | |2 and any model, and R1 is acceptable wrt. | |1
and any model. By definition of | |2 and | |1 , one can easily see that
• every well-typed atom A defined in Ri is bounded wrt. | |i .
Hence, by Proposition 31,
• every well-typed query is strongly bounded wrt. MOVE and | |1 , | |2 .
Let us consider again the query Q := move([X1, X2], Ys), delete(Ys, Y, Zs) which
is not well-moded but it is well-typed. We have that Q is strongly bounded wrt.
MOVE and | |1 , | |2 , and consequently, by Theorem 13, that all LDNF-derivations of
MOVE ∪ {Q} are finite.
Example 33
Consider the program COLOR MAP from (Sterling and Shapiro 1986) which generates
a coloring of a map in such a way that no two neighbors have the same color. The
map is represented as a list of regions and colors as a list of available colors. In
turn, each region is determined by its name, color and the colors of its neighbors,
so it is represented as a term region(name,color,neighbors), where neighbors
is a list of colors of the neighboring regions.
c1: color map([ ],Colors).
c2: color map([Region|Regions],Colors) ←
color region(Region,Colors),
color map(Regions,Colors).
c3: color region(region(Name,Color,Neighbors),Colors) ←
select(Color,Colors,Colors1),
subset(Neighbors,Colors1).
c4: select(X,[X|Xs],Xs).
c5: select(X,[Y|Xs],[Y|Zs]) ← select(X,Xs,Zs).
c6: subset([ ],Ys).
c7: subset([X|Xs],Ys) ← member(X,Ys), subset(Xs,Ys).
c8: member(X,[X|Xs]).
c9: member(X,[Y|Xs]) ← member(X,Xs).
Consider the following modes and types for the program COLOR MAP:
color map(+ : ListRegion, + : List )
color region(+ : Region, + : List )
select(+ : Any, + : List , − : List )
subset(+ : List , + : List)
member(+ : Any, + : List )
where
• Region is the set of all terms of the form region(name,color,neighbors)
with name, color ∈ Any and neighbors ∈ List ,
• ListRegion is the set of all lists of regions.
We can check that COLOR MAP is well-typed in the assumed modes and types.
We can divide the program COLOR MAP into four distinct modules, R1 , R2 , R3 , R4 ,
in the hierarchy R4 ❂ R3 ❂ R2 ❂ R1 as follows:
• R4 := {c1, c2} defines the relation color map,
• R3 := {c3} defines the relation color region,
• R2 := {c4, c5, c6, c7} defines the relations select and subset,
• R1 := {c8, c9} defines the relation member.
Each Ri is trivially acceptable wrt. any model M and the simple level mapping
| |i defined below:
|color map(xs, ys)|4 = |xs|length
|color region(x , xs)|3 = 1
|select(x , xs, ys)|2 = |xs|length
|subset(xs, ys)|2 = |xs|length
|member(x , xs)|1 = |xs|length
where for all i ∈ {1, 2, 3, 4}, |L|i = 0, if L is not defined in Ri .
Moreover, for every well-typed atom A and i ∈ {1, 2, 3, 4}, if A is defined in Ri
then A is bounded wrt. | |i . Hence, by Proposition 31,
• every well-typed query is strongly bounded wrt. the program COLOR MAP and
| |1 , . . . , | | 4 .
This proves that all LDNF-derivations of the program COLOR MAP starting in a well-typed query are finite. In particular, all the LDNF-derivations starting in a query
of the form color map(xs, ys), where xs is a list of regions and ys is a list, are finite.
Note that in proving termination of such queries the choice of a model is irrelevant.
Moreover, since such queries are well-typed, their input arguments are required to
have a specified structure, but they are not required to be ground terms as in the
case of well-moded queries. Hence, well-typedness allows us to reason about a larger
class of queries with respect to well-modedness.
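For instance, assuming the predicate names are written as in the clauses above, the query Q := color map([region(a,Ca,[Cb]), region(b,Cb,[Ca])], [red,green]) is well-typed: its first argument is a list of regions whose color fields are still variables, and its second argument is a list of colors. Hence, by the above, all LDNF-derivations of COLOR MAP ∪ {Q} are finite, although Q is not ground and not well-moded.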
This example is also discussed in (Apt and Pedreschi 1994). In order to prove
its termination they define a particular level mapping | |, obtained by combining
the level mappings of each module, and a special model M wrt. which the whole
program COLOR MAP is acceptable. Both the level mapping | | and the model M are
non-trivial.
5 Iterative Proof Method
In the previous section we have seen how we can exploit properties which are
preserved by LDNF-resolution, such as well-modedness and well-typedness, for developing a modular proof of termination in a hierarchy of programs. In this section
we show how these properties allow us to apply our general result, i.e., Theorem 9,
also in an iterative way.
Corollary 34
Let P and R be two programs such that P ∪ R is well-moded and P extends R,
and M be a complete model of P ∪ R. Suppose that
• P is acceptable wrt. a moded level mapping | | and M ,
• for all well-moded queries Q, all LDNF-derivations of R ∪ {Q} are finite.
Then for all well-moded queries Q, all LDNF-derivations of (P ∪R)∪{Q} are finite.
Proof
Let C be the class of well-moded queries of P ∪ R. By Remark 18 and Lemma 19,
C is (P ∪ R)-closed. Moreover
• P is acceptable wrt. a moded level mapping | | and M , by hypothesis;
• for all well-moded queries Q, all LDNF-derivations of R ∪ {Q} are finite, by
hypothesis;
• for all well-moded atoms A, if A is defined in P then A is bounded wrt. | |,
by Remark 21, since | | is a moded level mapping.
Hence by Theorem 9 we get the thesis.
Note that this result allows one to incrementally prove well-termination for general programs thus extending the result given in (Etalle et al. 1999) for definite
programs.
A similar result can be stated also for well-typed programs and queries, provided
that there exists a level mapping for P implying boundedness of atomic well-typed
queries.
Corollary 35
Let P and R be two programs such that P ∪ R is well-typed and P extends R, and
M be a complete model of P ∪ R. Suppose that
• P is acceptable wrt. a level mapping | | and M ,
• every well-typed atom defined in P is bounded wrt. | |,
• for all well-typed queries Q, all LDNF-derivations of R ∪ {Q} are finite.
Then for all well-typed queries Q, all LDNF-derivations of (P ∪ R) ∪ {Q} are finite.
Proof
Let C be the class of well-typed queries of P ∪ R. By Remark 29 and Lemma 30, C
is (P ∪ R)-closed. Moreover
• P is acceptable wrt. a level mapping | | and M , by hypothesis;
• for all well-typed queries Q, all LDNF-derivations of R ∪ {Q} are finite, by
hypothesis;
• for all well-typed atoms A, if A is defined in P then A is bounded wrt. | |, by
hypothesis.
Hence by Theorem 9 we have the thesis.
Example 36
Let us consider again the program COLOR MAP with the same modes and types as
in Example 33. We apply the iterative termination proof given by Corollary 35 to
COLOR MAP.
First step. We can consider at first two trivial modules, R1 := {c8, c9} which
defines the relation member, and R0 := ∅. We already know that
• R1 is acceptable wrt. any model M and the level mapping | |1 already defined;
• all well-typed atoms A, defined in R1 , are bounded wrt. | |1 ;
• for all well-typed queries Q, all LDNF-derivations of R0 ∪ {Q} are trivially
finite.
Hence, by Corollary 35, for all well-typed queries Q, all LDNF-derivations of (R1 ∪
R0 ) ∪ {Q} are finite.
Second step. We can now iterate the process one level up. Let us consider the
two modules, R2 := {c4, c5, c6, c7} which defines the relations select and subset,
and R1 := {c8, c9} which defines the relation member and is equal to (R1 ∪ R0 )
of the previous step. We already showed in Example 33 that
• R2 is acceptable wrt. any model M and the level mapping | |2 already defined;
• all well-typed atoms A, defined in R2 , are bounded wrt. | |2 ;
• for all well-typed queries Q, all LDNF-derivations of R1 ∪ {Q} are finite.
Hence, by Corollary 35, for all well-typed queries Q, all LDNF-derivations of (R2 ∪
R1 ) ∪ {Q} are finite.
By iterating the same reasoning for two more steps, we can prove that all LDNF-derivations of the program COLOR MAP starting in a well-typed query are finite.
Our iterative method applies to a hierarchy of programs where, on the lowest module R, we only require termination wrt. a particular class of queries. This can be a weaker
requirement on R than acceptability, as shown in the following contrived example.
Example 37
Let R define the predicate lcount which counts the number of natural numbers in
a list.
lcount(+ : List , − : Nat)
nat(+ : Any).
r1: lcount([ ],0).
r2: lcount([X|Xs],s(N)) ← nat(X), lcount(Xs,N).
r3: lcount([X|Xs],N) ← ¬ nat(X), lcount(Xs,N).
r4: lcount(0,N) ← lcount(0,s(N)).
r5: nat(0).
r6: nat(s(N)) ← nat(N).
R is well-typed wrt. the specified modes and types. Note that R cannot be acceptable due to the presence of clause r4. On the other hand, the program terminates
for all well-typed queries.
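For instance, the well-typed query lcount([0,a,s(0)],N) has only finite LDNF-derivations and yields the answer N = s(s(0)): clause r4 is never selected, since in such derivations the first argument of lcount is always a list and cannot unify with 0. By contrast, the atom lcount(0,N), which is not well-typed, resolves only with clause r4 and produces an infinite derivation.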
Consider now the following program P which extends R. The predicate split,
given a list of lists, separates the list elements containing more than max natural
numbers from the other lists:
split(+ : ListList , − : ListList , − : ListList )
>(+ : Nat , + : Nat )
<=(+ : Nat, + : Nat)
p1: split([ ],[ ],[ ]).
p2: split([L|Ls],[L|L1],L2) ← lcount(L,N), N > max,
split(Ls,L1,L2).
p3: split([L|Ls],L1,[L|L2]) ← lcount(L,N), N <= max,
split(Ls,L1,L2).
where ListList denotes the set of all lists of lists, and max is a natural number.
The program P ∪ R is well-typed. Let us consider the simple level mapping | | for
P defined by:
|split(ls, l1 , l2 )| = |ls|length
and which assigns level 0 to any literal not defined in P . Note that
• P is acceptable wrt. the level mapping | | and any complete model M ,
• all well-typed atoms defined in P are bounded wrt. | |,
• for all well-typed queries Q, all LDNF-derivations of R ∪ {Q} are finite.
Hence, by Corollary 35, for all well-typed queries Q, all LDNF-derivations of (P ∪
R) ∪ {Q} are finite.
This example shows that well-typedness could be useful to exclude what might
be called “dead code”.
6 Comparing with Apt and Pedreschi’s Approach
Our work can be seen as an extension of a proposal in (Apt and Pedreschi 1994).
Hence we devote this section to a comparison with their approach.
On one hand, since our approach applies to general programs, it clearly covers
cases which cannot be treated with the method proposed in (Apt and Pedreschi
1994), which was developed for definite programs. On the other hand, for definite
programs the classes of queries and programs which can be treated by Apt and
Pedreschi’s approach are properly included in those which can be treated by our
method as we show in this section.
We first recall the notions of semi-acceptability and bounded query used in (Apt
and Pedreschi 1994).
Definition 38 (Semi-acceptable Program)
Let P be a definite program, | | be a level mapping for P and M be a model of
P . P is called semi-acceptable wrt. | | and M if for every clause A ← A, B, B in
ground(P ) such that M |= A
• |A| > |B|, if rel(A) ≃ rel(B),
• |A| ≥ |B|, if rel(A) ❂ rel(B).
Definition 39 (Bounded Query)
Let P be a definite program, | | be a level mapping for P , and M be a model of P .
• With each query Q := L1 , . . . , Ln we associate n sets of natural numbers defined as follows: for i ∈ {1, . . . , n},
|Q|^M_i = { |L′i | | L′1 , . . . , L′n is a ground instance of Q and M |= L′1 , . . . , L′i−1 }.
• A query Q is called bounded wrt. | | and M if |Q|^M_i is finite (i.e., if |Q|^M_i has a maximum in N) for all i ∈ {1, . . . , n}.
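For instance, with the level mapping |member(x , xs)| = |xs|length of Example 33 and any model M, the query Q := member(X,[a,b]) satisfies |Q|^M_1 = {2}, hence it is bounded, whereas member(X,Ys) is not bounded, since its ground instances have second arguments of arbitrary length.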
Lemma 40
Let P be a definite program which is semi-acceptable wrt. | | and M . If Q is a query
bounded wrt. | | and M then all LD-descendants of P ∪ {Q} are bounded wrt. | |
and M .
Proof
It is a consequence of Lemma 3.6 in (Apt and Pedreschi 1994) and (the proof of)
Lemma 5.4 in (Apt and Pedreschi 1994).
We can always decompose a definite program P into a hierarchy of n ≥ 1 programs P := R1 ∪ . . . ∪ Rn , where Rn ❂ . . . ❂ R1 in such a way that for every
i ∈ {1, . . . , n} if the predicate symbols pi and qi are both defined in Ri then neither
pi ❂ qi nor qi ❂ pi (either they are mutually recursive or independent). We call
such a hierarchy a finest decomposition of P .
The following property has two main applications. First it allows us to compare
our approach with (Apt and Pedreschi 1994), then it provides an extension of
Theorem 13 to hierarchies of semi-acceptable programs.
Proposition 41
Let P be a semi-acceptable program wrt. a level mapping | | and a model M and
Q be a query strongly bounded wrt. P and | |. Let P := R1 ∪ . . . ∪ Rn be a finest
decomposition of P into a hierarchy of modules. Let | |i , with i ∈ {1, . . . , n}, be
defined in the following way: if A is defined in Ri then |A|i = |A| else |A|i = 0.
Then
• every Ri is acceptable wrt. | |i and M (with i ∈ {1, . . . , n}),
• Q is strongly bounded wrt. R1 ∪ . . . ∪ Rn and | |1 , . . . , | |n .
Proof
Immediate by the definitions of semi-acceptability and strong boundedness, since
we are considering a finest decomposition.
In order to compare our approach to the one presented in (Apt and Pedreschi
1994) we consider only Theorem 5.8 in (Apt and Pedreschi 1994), since this is
their most general result which implies the other ones, namely Theorem 5.6 and
Theorem 5.7.
Theorem 42 (Theorem 5.8 in (Apt and Pedreschi 1994))
Let P and R be two definite programs such that P extends R, and let M be a
model of P ∪ R. Suppose that
• R is semi-acceptable wrt. | |R and M ∩ BR ,
• P is semi-acceptable wrt. | |P and M ,
• there exists a level mapping || ||P such that for every ground instance of a
clause from P , A ← A, B, B, such that M |= A
— ||A||P ≥ ||B||P , if rel (B) is defined in P ,
— ||A||P ≥ |B|R , if rel (B) is defined in R.
Then P ∪ R is semi-acceptable wrt. | | and M , where | | is defined as follows:
|A| = |A|P + ||A||P , if rel (A) is defined in P ,
|A| = |A|R , if rel (A) is defined in R.
The following remark follows from Lemma 5.4 in (Apt and Pedreschi 1994) and
Corollary 3.7 in (Apt and Pedreschi 1994). Together with Theorem 42, it implies
termination of bounded queries in (Apt and Pedreschi 1994).
Remark 43
If P ∪ R is semi-acceptable wrt. | | and M and Q is bounded wrt. | | and M then
all LD-derivations of (P ∪ R) ∪ {Q} are finite.
We now show that whenever Theorem 42 can be applied to prove termination of
all the queries bounded wrt. | | and M , then also our method can be used to prove
termination of the same class of queries with no need of || ||P for relating the proofs
of the two modules.
In the following theorem for the sake of simplicity we assume that P ❂ R is a
finest decomposition of P ∪ R. We discuss later how to extend the result to the
general case.
Theorem 44
Let P and R be two programs such that P extends R, and let M be a model of
P ∪ R. Suppose that
• R is semi-acceptable wrt. | |R and M ∩ BR ,
• P is semi-acceptable wrt. | |P and M ,
• there exists a level mapping || ||P defined as in Theorem 42.
Let | | be the level mapping defined by Theorem 42. Moreover, suppose P ❂ R is a
finest decomposition of P ∪ R. If Q is bounded wrt. | |, then Q is strongly bounded
wrt. P ∪ R and | |P and | |R .
Proof
Since we are considering a finest decomposition of P ∪ R, by Proposition 41, R is
acceptable wrt. | |R , while P is acceptable wrt. | |′P such that if A is defined in P
then |A|′P = |A|P else |A|′P = 0.
By Lemma 40 all LD-descendants of (P ∪ R) ∪ {Q} are bounded wrt. | | and M .
By definition of boundedness, for all LD-descendants Q′ of (P ∪ R) ∪ {Q}, first(Q′ )
is bounded wrt. | |. By definition of | |, for all atoms A bounded wrt. | | we have
that: if A is defined in R then A is bounded wrt. | |R , while if A is defined in P
then A is bounded wrt. | |P and hence wrt. | |′P (since |A|′P = |A|P ). Hence the
thesis follows.
If the hierarchy P ❂ R is not a finest one and | |P and | |R are the level mappings
corresponding to P and R respectively, then we can decompose P into a finest
decomposition, P := Pn ❂ . . . ❂ P1 , and consider instead of | |P the derived level
mappings | |Pi defined in the following way: if A is defined in Pi then |A|Pi = |A|P
else |A|Pi = 0. Similarly we can decompose R := Rm ❂ . . . ❂ R1 and define the
corresponding level mappings. The derived level mappings satisfy all the properties
we need for proving that if Q is bounded wrt. | |, then Q is strongly bounded wrt.
P ∪ R and | |P1 , . . . , | |Pn , | |R1 , . . . , | |Rm .
To complete the comparison with (Apt and Pedreschi 1994), we can observe
that our method is applicable also for proving termination of queries in modular
programs which are not (semi-)acceptable. Such programs clearly cannot be dealt
with by Apt and Pedreschi’s method. The program of Example 37 is a non-acceptable
program for which we proved termination of all well-typed queries by applying
Corollary 35. The following is a simple example of a non-acceptable program to
which we can apply the general Theorem 13.
Example 45
Let R be the following trivial program:
r1: q(0).
r2: q(s(Y)) ← q(Y).
The program R is acceptable wrt. the following natural level mapping | |R and
any model M :
|q(t )|R = |t |size .
Let P be a program, which extends R, defined as follows:
p1: r(0,0).
p2: r(s(X),Y).
p3: p(X) ← r(X,Y), q(Y).
The program P is acceptable wrt. the following trivial level mapping | |P and
any model M :
|q(y)|P = 0,
|r(x , y)|P = 0,
|p(x )|P = 1.
Note that, even if each module is acceptable, P ∪ R cannot be acceptable wrt.
any level mapping and model. In fact P ∪ R is not left-terminating: for example
it does not terminate for the ground query p(s(0)). As a consequence Apt and
Pedreschi’s method does not apply to P ∪ R. On the other hand, there are ground
queries, such as p(0), which terminate in P ∪ R. We can prove it as follows.
• By Theorem 13, for all strongly bounded queries Q wrt. P ∪ R and | |R , | |P ,
all LD-derivations of (P ∪ R) ∪ {Q} are finite.
• p(0) is strongly bounded wrt. P ∪ R and | |R , | |P . In fact, CallP ∪R (p(0)) =
{p(0), r(0,Y), q(0)} and all these atoms are bounded wrt. their corresponding level mapping.
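Indeed, in the LDNF-derivation of p(0) the atom r(0,Y) resolves only with clause p1, which instantiates Y to 0, and then q(0) succeeds by r1, so every derivation is finite. For p(s(0)), instead, clause p2 leaves Y unbound, so the subsequent call q(Y) is unbounded wrt. | |R and clause r2 gives rise to an infinite derivation.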
7 Conclusions
In this paper we propose a modular approach to termination proofs of general programs by following the proof style introduced by Apt and Pedreschi. Our technique
allows one to give simple proofs in hierarchically structured programs, namely programs which can be partitioned into n modules, R1 ∪ . . . ∪ Rn , such that for all
i ∈ {1, . . . , n − 1}, Ri+1 extends R1 ∪ . . . ∪ Ri .
We supply the general Theorem 9 which can be iteratively applied to a hierarchy
of two programs and a class of queries enjoying persistence properties through
LDNF-resolution. We then use such a result to deal with a general hierarchy of
acceptable programs, by introducing an extension of the concept of boundedness
for hierarchical programs, namely strong boundedness. Strong boundedness is a
property on queries which can be easily ensured for hierarchies of programs behaving
well, such as well-moded or well-typed programs. We show how specific and simple
hierarchical termination proofs can be derived for such classes of programs and
queries. We believe this is a valuable proof technique since realistic programs are
typically well-moded and well-typed.
The simplifications in the termination proof derive from the fact that for proving
the termination of a modular program, we simply prove acceptability of each module
by choosing a level mapping which focuses only on the predicates defined in it, with
no concern of the module context. Generally this can be done by using very simple
and natural level mappings which are completely independent from one module
to another. A complicated level mapping is generally required when we prove the
termination of a program as a whole and we have to consider a level mapping which
appropriately relates all the predicates defined in the program. Hence the finer the
modularization of the program the simpler the level mappings. Obviously we cannot
completely ignore how predicates defined in different modules relate to each other.
On one hand, when we prove acceptability for each module, we consider a model for
the whole program. This guarantees the compatibility among the definitions in the
hierarchy. On the other hand, for queries we use the notion of strong boundedness.
The intuition is that we consider only what may influence the evaluation of queries
in the considered class.
The proof method of Theorem 9 can be applied also to programs which are not
acceptable. In fact, the condition on the lower module is just that it terminates on
all the queries in the considered class and not on all ground queries as required for
acceptable programs. From Theorem 9 we could also derive a method to deal with
pre-compiled modules (or even modules written in a different language) provided
that we already know termination properties and we have a complete specification.
For the sake of simplicity, in the first part of the paper we consider the notion of
acceptability instead of the less demanding notion of semi-acceptability. This choice
makes proofs of our results much simpler. On the other hand, as we show in Section
6, our results can be applied also to hierarchies of semi-acceptable programs.
We have compared our proposal with the one in (Apt and Pedreschi 1994). They
propose a modular approach to left-termination proofs in a hierarchy of two definite
programs P ❂ R. They require both the (semi)-acceptability of the two modules R
and P wrt. their respective level mappings and a condition relating the two level
mappings which is meant to connect the two termination proofs.
Our method is more powerful both because we consider also general programs
and because we capture definite programs and queries which cannot be treated by
the method developed in (Apt and Pedreschi 1994). In fact there are non-acceptable
programs for which we can single out a class of terminating queries.
For the previous reasons our method improves also with respect to (Pedreschi
and Ruggieri 1996, Pedreschi and Ruggieri 1999) where hierarchies of modules are
considered. In (Pedreschi and Ruggieri 1996, Pedreschi and Ruggieri 1999) a unifying framework for the verification of total correctness of logic programs is provided.
The authors consider modular termination by following the approach in (Apt and
Pedreschi 1994).
In (Marchiori 1996) a methodology for proving termination of general logic programs is proposed which is based on modularization. In this approach, the acyclic
modules, namely modules that terminate independently from the selection rule, play
a distinctive role. For such modules, the termination proof does not require a model.
In combination with appropriate notions of up-acceptability and low-acceptability for
the modules which are not acyclic, this provides a practical technique for proving
termination of the whole program. Analogously to (Apt and Pedreschi 1994), also in
(Marchiori 1996) a relation between the level mappings of all modules is required. It
is interesting to note that the idea of exploiting acyclicity is completely orthogonal
to our approach: we could integrate it into our framework.
Another related work is (Decorte et al. 1999), even if it does not aim explicitly at
modularity. In fact they propose a technique for automatic termination analysis of
definite programs which is highly efficient also because they use a rather operational
notion of acceptability with respect to a set of queries, where decreasing levels are
required only on (mutually) recursive calls as in (De Schreye et al. 1992). Effectively,
this corresponds to considering a finest decomposition of the program and having
independent level mappings for each module. However, their notion of acceptability
is defined and verified on call-patterns instead of program clauses. In a sense, such
an acceptability with respect to a set of queries combines the concepts of strong
boundedness and (standard) acceptability. They start from a class of queries and
try to derive automatically a termination proof for such a class, while we start from
the program and derive a class of queries for which it terminates.
In (Verbaeten et al. 1999) termination in the context of tabled execution is considered. Also in this case modular results are inspired by (De Schreye et al. 1992)
by adapting the notion of acceptability wrt. call-patterns to tabled executions. This
work is further developed in (Verbaeten et al. 2001) where their modular termination conditions are refined following the approach by (Apt and Pedreschi 1994).
In (Etalle et al. 1999) a method for modular termination proofs for well-moded
definite programs is proposed. Our present work generalizes such result to general
programs.
Our method may help in designing more powerful automatic systems for verifying termination (De Schreye et al. 1992, Speirs, Somogyi and Søndergaard 1997,
Decorte et al. 1999, Codish and Taboch 1999). We see two directions which could
be pursued for a fruitful integration with existing automatic tools. The first one exploits the fact that in each single module it is sufficient to synthesize a level mapping
which does not need to measure atoms defined in other modules. The second one
concerns tools based on call-patterns analysis (De Schreye et al. 1992, Gabbrielli
and Giacobazzi 1994, Codish and Demoen 1995). They can take advantage of the
concept of strong boundedness which, as we show, can be implied by well-behavior
of programs (Debray and Warren 1988, Debray 1989).
Acknowledgements. This work has been partially supported by MURST with the
National Research Project “Certificazione automatica di programmi mediante interpretazione astratta”.
References
Apt, K. R. (1990). Introduction to Logic Programming, in J. van Leeuwen (ed.), Handbook of Theoretical Computer Science, Vol. B: Formal Models and Semantics, Elsevier,
Amsterdam and The MIT Press, Cambridge, pp. 495–574.
Apt, K. R. (1997). From Logic Programming to Prolog, Prentice Hall.
Apt, K. R. and Etalle, S. (1993).
On the unification free Prolog programs, in
A. Borzyszkowski and S. Sokolowski (eds), Proceedings of the Conference on Mathematical Foundations of Computer Science (MFCS 93), Vol. 711 of Lecture Notes in
Computer Science, Springer-Verlag, pp. 1–19.
Apt, K. R. and Luitjes, I. (1995). Verification of logic programs with delay declarations,
in A. Borzyszkowski and S. Sokolowski (eds), Proceedings of the Fourth International
Conference on Algebraic Methodology and Software Technology, (AMAST’95), Vol. 936
of Lecture Notes in Computer Science, Springer-Verlag, pp. 1–19.
Apt, K. R. and Pedreschi, D. (1993). Reasoning about termination of pure Prolog programs,
Information and Computation 106(1): 109–157.
Apt, K. R. and Pedreschi, D. (1994). Modular termination proofs for logic and pure Prolog
programs, in G. Levi (ed.), Advances in Logic Programming Theory, Oxford University
Press, pp. 183–229.
Apt, K. R. and Pellegrini, A. (1994). On the occur-check free Prolog programs, ACM
Transactions on Programming Languages and Systems 16(3): 687–726.
Bezem, M. (1993). Strong termination of logic programs, Journal of Logic Programming
15(1&2): 79–97.
Bronsard, F., Lakshman, T. K. and Reddy, U. S. (1992). A framework of directionality
for proving termination of logic programs, in K. R. Apt (ed.), Proceedings of the Joint
International Conference and Symposium on Logic Programming, MIT Press, pp. 321–
335.
Cavedon, L. (1989). Continuity, consistency and completeness properties for logic programs,
in G. Levi and M. Martelli (eds), Proceedings of the Sixth International Conference on
Logic Programming, The MIT press, pp. 571–584.
Clark, K. L. (1978). Negation as failure rule, in H. Gallaire and J. Minker (eds), Logic and
Data Bases, Plenum Press, pp. 293–322.
Codish, M. and Demoen, B. (1995). Analyzing logic programs using ”prop”-ositional logic
programs and a magic wand, Journal of Logic Programming 25(3): 249–274.
Codish, M. and Taboch, C. (1999). A semantic basis for the termination analysis of logic
programs, Journal of Logic Programming 41(1): 103–123.
De Schreye, D., Verschaetse, K. and Bruynooghe, M. (1992). A Framework for Analyzing
the Termination of Definite Logic programs with respect to Call Patterns, in I. Staff
(ed.), Proceedings of the International Conference on Fifth Generation Computer Systems (FGCS’92), Tokyo, ICOT, pp. 481–488.
Debray, S. K. (1989). Static inference of modes and data dependencies in logic programs,
ACM Transactions on Programming Languages and Systems 11(3): 418–450.
Debray, S. K. and Warren, D. S. (1988). Automatic mode inference for logic programs,
Journal of Logic Programming 5(3): 207–229.
Decorte, S., De Schreye, D. and Vandecasteele, H. (1999). Constraint-based termination
analysis of logic programs, ACM Transactions on Programming Languages and Systems
21(6): 1137–1195.
Dembiński, P. and Maluszyński, J. (1985). AND-parallelism with intelligent backtracking
for annotated logic programs, Proceedings of the International Symposium on Logic
Programming, Boston, pp. 29–38.
Drabent, W. (1987). Do logic programs resemble programs in conventional languages?,
in E. Wada (ed.), Proceedings International Symposium on Logic Programming, IEEE
Computer Society, pp. 389–396.
Etalle, S., Bossi, A. and Cocco, N. (1999). Termination of well-moded programs, Journal
of Logic Programming 38(2): 243–257.
Gabbrielli, M. and Giacobazzi, R. (1994). Goal independency and call patterns in the analysis of logic programs, Proceedings of the Ninth ACM Symposium on Applied Computing,
ACM Press, pp. 394–399.
Janssen, G. and Bruynooghe, M. (1992). Deriving descriptions of possible values of program
variables by means of abstract interpretation, Journal of Logic Programming 13(2–
3): 205–258.
Lloyd, J. W. (1987). Foundations of Logic Programming, Symbolic Computation – Artificial
Intelligence, Springer-Verlag. Second edition.
Marchiori, E. (1996). Practical methods for proving termination of general logic programs,
Journal of Artificial Intelligence Research 4: 179–208.
Pedreschi, D. and Ruggieri, S. (1996). Modular verification of Logic Programs, in F. de Boer
and M. Gabbrielli (eds), Proceedings of the W2 Post-Conference Workshop of the 1996
JLCSLP,Bonn, Germany. http://www.di.unipi.it/∼gabbri/w2.html.
Pedreschi, D. and Ruggieri, S. (1999). Verification of Logic Programs, Journal of Logic
Programming 39(1–3): 125–176.
Rosenblueth, D. (1991). Using program transformation to obtain methods for eliminating
backtracking in fixed-mode logic programs, Technical Report 7, Universidad Nacional
Autonoma de Mexico, Instituto de Investigaciones en Matematicas Aplicadas y en Sistemas.
Speirs, C., Somogyi, Z. and Søndergaard, H. (1997). Termination analysis for Mercury, in
P. V. Hentenryck (ed.), Proceedings of the Fourth International Static Analysis Symposium, (SAS’97), Vol. 1302 of Lecture Notes in Computer Science, Springer-Verlag,
pp. 157–171.
Sterling, L. and Shapiro, E. (1986). The Art of Prolog, The MIT Press.
Verbaeten, S., Sagonas, K. F. and De Schreye, D. (1999). Modular termination proofs for
Prolog with tabling, in G. Nadathur (ed.), Proceedings of International Conference on
Principles and Practice of Declarative Programming (PPDP’99), Vol. 1702 of Lecture
Notes in Computer Science, pp. 342–359.
Verbaeten, S., Sagonas, K. F. and De Schreye, D. (2001). Termination proofs for logic
programs with tabling, ACM Transactions on Computational Logic 2(1): 57–92.
arXiv:1610.10023v1 [] 31 Oct 2016
QUASI-PRÜFER EXTENSIONS OF RINGS
GABRIEL PICAVET AND MARTINE PICAVET-L’HERMITTE
Abstract. We introduce quasi-Prüfer ring extensions, in order
to relativize quasi-Prüfer domains and to take also into account
some contexts in recent papers, where such extensions appear in a
hidden form. An extension is quasi-Prüfer if and only if it is an INC
pair. The class of these extensions has nice stability properties.
We also define almost-Prüfer extensions that are quasi-Prüfer, the
converse being not true. Quasi-Prüfer extensions are closely linked
to finiteness properties of fibers. Applications are given for FMC
extensions, because they are quasi-Prüfer.
1. Introduction and Notation
We consider the category of commutative and unital rings. An epimorphism is an epimorphism of this category. Let R ⊆ S be a (ring)
extension. The set of all R-subalgebras of S is denoted by [R, S]. The
extension R ⊆ S is said to have FIP (for the “finitely many intermediate algebras property”) if [R, S] is finite. A chain of R-subalgebras of S
is a set of elements of [R, S] that are pairwise comparable with respect
to inclusion. We say that the extension R ⊆ S has FCP (for the “finite
chain property”) if each chain in [R, S] is finite. Dobbs and the authors
characterized FCP and FIP extensions [10]. Clearly, an extension that
satisfies FIP must also satisfy FCP. An extension R ⊆ S is called FMC
if there is a finite maximal chain of extensions from R to S.
We begin by explaining our motivations and aims. The reader who is
not familiar with the notions used will find some Scholia in the sequel,
as well as necessary definitions that exist in the literature. Knebusch
and Zang introduced Prüfer extensions in their book [26]. Actually,
these extensions are nothing but normal pairs, that are intensively
studied in the literature. We do not intend to give an extensive list of
recent papers, written by Ayache, Ben Nasr, Dobbs, Jaballah, Jarboui
and some others. We are indebted to these authors because their papers are a rich source of suggestions. We observed that some of them
2010 Mathematics Subject Classification. Primary:13B22; Secondary:
Key words and phrases. flat epimorphism, FIP, FCP extension, minimal extension, integral extension, Morita, Prüfer hull, support of a module.
are dealing with FCP (FIP, FMC) extensions, followed by a Prüfer
extension, perhaps under a hidden form. These extensions reminded
us of quasi-Prüfer domains (see [18]). Therefore, we introduced in [38]
quasi-Prüfer extensions R ⊆ S as extensions that can be factored
R ⊆ R′ ⊆ S, where the first extension is integral and the second is
Prüfer. Note that FMC extensions are quasi-Prüfer.
We give a systematic study of quasi-Prüfer extensions in Section 2
and Section 3. The class of quasi-Prüfer extensions has a nice behavior
with respect to the classical operations of commutative algebra. An important result is that quasi-Prüfer extensions coincide with INC-pairs.
Another one is that this class is stable under forming subextensions
and composition. A striking result is the stability of the class of quasi-Prüfer extensions under absolutely flat base change, like localizations and
Henselizations. Any ring extension R ⊆ S admits a quasi-Prüfer closure, contained in S. Examples are provided by Laskerian pairs, open
pairs and the pseudo-Prüfer pairs of Dobbs-Shapiro [15].
Section 4 deals with almost-Prüfer extensions, a special kind of quasi-Prüfer extensions. They are of the form R ⊆ T ⊆ S, where the first
extension is Prüfer and the second is integral. Any ring extension
admits an almost-Prüfer closure, contained in S. The class of almost-Prüfer extensions seems to have fewer properties than the class of quasi-Prüfer extensions but has the advantage of the commutation of Prüfer
closures with localizations at prime ideals. We examine the transfer of
the quasi (almost)-Prüfer properties to subextensions.
Section 5 studies the transfer of the quasi (almost)-Prüfer properties
to Nagata extensions.
In section 6, we complete and generalize the results of Ayache-Dobbs
in [5], with respect to the finiteness of fibers. These authors have evidently considered particular cases of quasi-Prüfer extensions. A main
result is that if R ⊆ S is quasi-Prüfer with finite fibers, then so is
R ⊆ T for T ∈ [R, S]. In particular, we recover a result of [5] about
FMC extensions.
Now Section 7 gives calculations of |[R, S]| with respect to its Prüfer
closure, quasi-Prüfer (almost-Prüfer) closure in case R ⊆ S has FCP.
1.1. Recalls about some results and definitions. The reader is
warned that we will mostly use the definition of Prüfer extensions by
flat epimorphic subextensions investigated in [26]. The results needed
may be found in Scholium A for flat epimorphic extensions and some
results of [26] are summarized in Scholium B. Their powers give quick
proofs of results that are generalizations of results of the literature.
As long as FCP or FMC extensions are concerned, we use minimal
(ring) extensions, a concept introduced by Ferrand-Olivier [17]. An
extension R ⊂ S is called minimal if [R, S] = {R, S}. It is known that
a minimal extension is either module-finite or a flat epimorphism [17]
and these conditions are mutually exclusive. There are three types of
integral minimal (module-finite) extensions: ramified, decomposed or
inert [36, Theorem 3.3]. A minimal extension R ⊂ S admits a crucial
ideal C(R, S) =: M which is maximal in R and such that RP = SP
for each P ≠ M, P ∈ Spec(R). Moreover, C(R, S) = (R : S) when
R ⊂ S is an integral minimal extension. The key connection between
the above ideas is that if R ⊆ S has FCP or FMC, then any maximal
(necessarily finite) chain of R-subalgebras of S, R = R0 ⊂ R1 ⊂ · · · ⊂
Rn−1 ⊂ Rn = S, with length n < ∞, results from juxtaposing n
minimal extensions Ri ⊂ Ri+1 , 0 ≤ i ≤ n − 1.
Following [24], we define the length ℓ[R, S] of [R, S] as the supremum
of the lengths of chains in [R, S]. In particular, if ℓ[R, S] = r, for some
integer r, there exists a maximal chain in [R, S] with length r.
As usual, Spec(R), Max(R), Min(R), U(R), Tot(R) are respectively
the set of prime ideals, maximal ideals, minimal prime ideals, units,
total ring of fractions of a ring R and κ(P ) = RP /P RP is the residual
field of R at P ∈ Spec(R).
If R ⊆ S is an extension, then (R : S) is its conductor and if P ∈
Spec(R), then SP is the localization SR\P . We denote the integral closure of R in S by R̄S (or simply R̄).
A local ring is here what is called elsewhere a quasi-local ring. The
support of an R-module E is SuppR (E) := {P ∈ Spec(R) | EP ≠ 0}
and MSuppR (E) := SuppR (E) ∩ Max(R). Finally, ⊂ denotes proper
inclusion and |X| the cardinality of a set X.
Scholium A We give some recalls about flat epimorphisms (see [27,
Chapitre IV], except (2) which is [31, Proposition 2]).
(1) R → S is a flat epimorphism ⇔ for all P ∈ Spec(R), either
RP → SP is an isomorphism or S = P S ⇔ RP ⊆ SP is a flat
epimorphism for all P ∈ Spec(R).
(2) (S) A flat epimorphism, with a zero-dimensional domain, is surjective.
(3) If f : A → B and g : B → C are ring morphisms such that g ◦ f
is injective and f is a flat epimorphism, then g is injective.
(4) Let R ⊆ T ⊆ S be a tower of extensions, such that R ⊆ S is a
flat epimorphism. Then T ⊆ S is a flat epimorphism but R ⊆ T
need not be one. A Prüfer extension remedies this defect.
(5) (L) A faithfully flat epimorphism is an isomorphism. Hence,
R = S if R ⊆ S is an integral flat epimorphism.
(6) If f : R → S is a flat epimorphism and J an ideal of S, then
J = f −1 (J)S.
(7) If f : R → S is an epimorphism, then f is spectrally injective
and its residual extensions are isomorphisms.
(8) Flat epimorphisms remain flat epimorphisms under base change
(in particular, after a localization with respect to a multiplicatively closed subset).
(9) Flat epimorphisms are descended by faithfully flat morphisms.
1.2. Recalls and results on Prüfer extensions. We recall some
definitions and properties of ring extensions R ⊆ S and rings R. There
are a lot of characterizations of Prüfer extensions. We keep only those
that are useful in this paper. We give the two definitions that are dual
and emphasize some characterizations in the local case.
Scholium B
(1) [26] R ⊆ S is called Prüfer if R ⊆ T is a flat epimorphism for
each T ∈ [R, S].
(2) R ⊆ S is called a normal pair if T ⊆ S is integrally closed for
each T ∈ [R, S].
(3) R ⊆ S is Prüfer if and only if it is a normal pair [26, Theorem
5.2(4)].
(4) R is called Prüfer if its finitely generated regular ideals are
invertible, or equivalently, R ⊆ Tot(R) is Prüfer [20, Theorem
13((5)(9))].
Hence Prüfer extensions are a relativization of Prüfer rings. Clearly,
a minimal extension is a flat epimorphism if and only if it is Prüfer.
We will then use for such extensions the terminology: Prüfer minimal
extensions. The reader may find some properties of Prüfer minimal
extensions in [36, Proposition 3.2, Lemma 3.4 and Proposition 3.5], asserted by L. Dechene in her dissertation, but where in addition R must
be supposed local. The reason is that this word surprisingly disappeared during the printing process of [36].
We will need the next two results. Some of them do not explicitly
appear in [26] but deserve to be emphasized. We refer to [26, Definition
1, p.22] for a definition of Manis extensions.
Proposition 1.1. Let R ⊆ S be a ring extension.
(1) R ⊆ S is Prüfer if and only if RP ⊆ SP is Prüfer for each
P ∈ Spec(R) (respectively, P ∈ Supp(S/R)).
(2) R ⊆ S is Prüfer if and only if RM ⊆ SM is Manis for each
M ∈ Max(R).
Proof. (1) The class of Prüfer extensions is stable under localization [26,
Proposition 5.1(ii), p.46-47]. To get the converse, use Scholium A(1).
(2) follows from [26, Proposition 2.10, p.28, Definition 1, p.46].
Proposition 1.2. Let R ⊆ S be a ring extension, where R is local.
(1) R ⊆ S is Manis if and only if S \ R ⊆ U(S) and x ∈ S \ R ⇒
x−1 ∈ R. In that case, R ⊆ S is integrally closed.
(2) R ⊆ S is Manis if and only if R ⊆ S is Prüfer.
(3) R ⊆ S is Prüfer if and only if there exists P ∈ Spec(R) such
that S = RP , P = SP and R/P is a valuation domain. Under
these conditions, S/P is the quotient field of R/P .
Proof. (1) is [26, Theorem 2.5, p.24]. (2) is [26, Scholium 10.4, p.147].
Then (3) is [10, Theorem 6.8].
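For instance, a valuation domain R with quotient field K satisfies condition (3) with P = (0): K = RP , P = KP and R/P = R is a valuation domain, so that R ⊆ K is a Manis, hence Prüfer, extension. In particular, for each prime number p, the localization Z(p) ⊆ Q is a Prüfer extension.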
Next result shows that Prüfer FCP extensions can be described in a
special manner.
Proposition 1.3. Let R ⊂ S be a ring extension.
(1) If R ⊂ S has FCP, then R ⊂ S is integrally closed ⇔ R ⊂ S is
Prüfer ⇔ R ⊂ S is a composite of Prüfer minimal extensions.
(2) If R ⊂ S is integrally closed, then R ⊂ S has FCP ⇔ R ⊂ S is
Prüfer and Supp(S/R) is finite.
Proof. (1) Assume that R ⊂ S has FCP. If R ⊂ S is integrally closed,
then, R ⊂ S is composed of Prüfer minimal extensions by [10, Lemma
3.10]. Conversely, if R ⊂ S is composed of Prüfer minimal extensions,
R ⊂ S is integrally closed, since so is each Prüfer minimal extension. A
Prüfer extension is obviously integrally closed, and an FCP integrally
closed extension is Prüfer by [10, Theorem 6.3].
(2) The logical equivalence is [10, Theorem 6.3].
Definition 1.4. [26] A ring extension R ⊆ S has:
(1) a greatest flat epimorphic subextension R ⊆ R̂S , called the Morita hull of R in S.
(2) a greatest Prüfer subextension R ⊆ R̃S , called the Prüfer hull of R in S.
We set R̂ := R̂S and R̃ := R̃S , if no confusion can occur. R ⊆ S is called Prüfer-closed if R = R̃.
Note that R̃S is denoted by P(R, S) in [26] and R̂S is the weakly surjective hull M(R, S) of [26]. Our terminology is justified because
Morita’s work is earlier [30, Corollary 3.4]. The Morita hull can be
computed by using a (transfinite) induction [30]. Let S ′ be the set of
all s ∈ S such that there is some ideal I of R, such that IS = S and
Is ⊆ R. Then R ⊆ S ′ is a subextension of R ⊆ S. We set S1 := S ′
and Si+1 := (Si )′ ⊆ Si . By [30, p.36], if R ⊂ S is an FCP extension,
then R̂ = Sn for some integer n.
At this stage it is interesting to point out a result showing again
that integral closedness and Prüfer extensions are closely related.
Proposition 1.5. Olivier [33, Corollary, p.56] An extension R ⊆ S is
integrally closed if and only if there is a pullback square:
R −−−→ S
↓         ↓
V −−−→ K
where V is a semi-hereditary ring and K its total quotient ring.
In that case V ⊆ K is a Prüfer extension, since V is a Prüfer ring,
whose localizations at prime ideals are valuation domains and K is an
absolutely flat ring. As there exist integrally closed extensions that
are not Prüfer, we see in passing that the pullback construction may
not descend Prüfer extensions. The above result has a companion for
minimal extensions that are Prüfer [21, Proposition 3.2].
Proposition 1.6. Let R ⊆ S be an extension and T ∈ [R, S], then R̃T = R̃ ∩ T . Therefore, for T, U ∈ [R, S] with T ⊆ U, we have R̃T ⊆ R̃U .
Proof. Obvious, since the Prüfer hull R̃T is the greatest Prüfer extension R ⊆ V contained in T .
We will show later that in some cases T̃ ⊆ Ũ if R ⊆ S has FCP.
2. Quasi-Prüfer extensions
We introduced the following definition in [38, p.10].
Definition 2.1. An extension of rings R ⊆ S is called quasi-Prüfer if
one of the following equivalent statements holds:
(1) R̄ ⊆ S is a Prüfer extension;
(2) R ⊆ S can be factored R ⊆ T ⊆ S, where R ⊆ T is integral and T ⊆ S is Prüfer. In that case R̄ = T .
To see that (2) ⇒ (1) observe that if (2) holds, then T ⊆ R̄ is integral and a flat injective epimorphism, so that R̄ = T by (L) (Scholium A(5)).
We observe that quasi-Prüfer extensions are akin to quasi-finite extensions if we refer to Zariski Main Theorem. This will be explored in
Section 6, see for example Theorem 6.2.
Hence integral or Prüfer extensions are quasi-Prüfer. An extension
is clearly Prüfer if and only if it is quasi-Prüfer and integrally closed.
Quasi-Prüfer extensions allow us to avoid FCP hypotheses.
We give some other definitions involved in ring extensions R ⊆ S.
The fiber at P ∈ Spec(R) of R ⊆ S is FibR,S (P ) := {Q ∈ Spec(S) |
Q ∩ R = P }. The subspace FibR,S (P ) of Spec(S) is homeomorphic
to the spectrum of the fiber ring FR,S (P ) := κ(P ) ⊗R S at P . The
homeomorphism is given by the spectral map of S → κ(P ) ⊗R S and
κ(P ) → κ(P ) ⊗R S is the fiber morphism at P .
Definition 2.2. A ring extension R ⊆ S is called:
(1) incomparable if for each pair Q ⊆ Q′ of prime ideals of S, then
Q ∩ R = Q′ ∩ R ⇒ Q = Q′ , or equivalently, κ(P ) ⊗R T is a
zero-dimensional ring for each T ∈ [R, S] and P ∈ Spec(R),
such that κ(P ) ⊗R T ≠ 0.
(2) an INC-pair if R ⊆ T is incomparable for each T ∈ [R, S].
(3) residually algebraic if R/(Q ∩ R) ⊆ S/Q is algebraic for each
Q ∈ Spec(S).
(4) a residually algebraic pair if the extension R ⊆ T is residually
algebraic for each T ∈ [R, S].
The following characterization was announced in [38]. We were unaware, when we presented it in ArXiv, that this result is also proved in [7, Corollary 1]. However, our proof is much shorter because we use the
powerful results of [26].
Theorem 2.3. An extension R ⊆ S is quasi-Prüfer if and only if
R ⊆ S is an INC-pair and, if and only if, R ⊆ S is a residually
algebraic pair.
Proof. Suppose that R ⊆ S is quasi-Prüfer and let T ∈ [R, S]. We
set U := R̄T . Then R̄ ⊆ U is a flat epimorphism by definition of a
Prüfer extension and hence is incomparable, as is R ⊆ R̄. It follows
that R ⊆ U is incomparable. Since T ⊆ U is integral, it has going-up.
It follows that R ⊆ T is incomparable. Conversely, if R ⊆ S is an
INC-pair, then so is R̄ ⊆ S. Since R̄ ⊆ S is integrally closed, R̄ ⊆ S
is Prüfer [26, Theorem 5.2,(9’), p.48]. The second equivalence is [14,
Proposition 2.1] or [18, Theorem 6.5.6].
Corollary 2.4. An extension R ⊆ S is quasi-Prüfer if and only if
R ⊆ T is Prüfer for each T ∈ [R, S].
It follows that most of the properties described in [6] for integrally
closed INC-pairs of domains are valid for arbitrary ring extensions.
Moreover, a result of Dobbs is easily gotten: an INC-pair R ⊆ S is
an integral extension if and only if R ⊆ S is spectrally surjective [14,
Theorem 2.2]. This follows from Scholium A, Property (L).
Example 2.5. Quasi-Prüfer domains R with quotient fields K can be
characterized by the property that R ⊆ K is quasi-Prüfer. The reader may consult [9,
Theorem 1.1] or [18]. In view of [2, Theorem 2.7], R is a quasi-Prüfer
domain if and only if Spec(R(X)) → Spec(R) is bijective.
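For instance, any order in an algebraic number field is a quasi-Prüfer domain: for R := Z[√−3] with quotient field K := Q(√−3), the extension R ⊆ K factors through the integral closure Z[(1 + √−3)/2], which is a Dedekind, hence Prüfer, domain, so the second step is Prüfer; on the other hand R ⊆ K is not Prüfer, since R is not integrally closed.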
We give here another example of quasi-Prüfer extension. An extension R ⊂ S is called a going-down pair if each of its subextensions has
the going-down property. For such a pair, R ⊆ T has incomparability
for each T ∈ [R, S], at each non-maximal prime ideal of R [3, Lemma 5.8(ii)]. Now let M be a maximal ideal of R, whose fiber is not void in
T . Then R ⊆ T is a going-down pair, and so is R/M ⊆ T /MT because
MT ∩ R = M. By [3, Corollary 5.6], the dimension of T /MT is ≤ 1.
Therefore, if R ⊂ S is a going-down pair, then R ⊂ S is quasi-Prüfer
if and only if dim(T /MT ) ≠ 1 for each T ∈ [R, S] and M ∈ Max(R).
Also open-ring pairs R ⊂ S are quasi-Prüfer by [8, Proposition 2.13].
An i-pair is an extension R ⊆ S such that Spec(T ) → Spec(R) is
injective for each T ∈ [R, S], equivalently, if and only if R ⊆ S is
quasi-Prüfer and R ⊆ R̄ is spectrally injective [38, Proposition 5.8].
These extensions appear frequently in the integral domains context.
Other examples are given by some extensions R ⊆ S, such that
Spec(S) = Spec(R) as sets, as we will see later.
3. Properties of quasi-Prüfer extensions
We now develop the machinery of quasi-Prüfer extensions.
Proposition 3.1. An extension R ⊂ S is (quasi-)Prüfer if and only if
RP ⊆ SP is (quasi-)Prüfer for any P ∈ Spec(R) (P ∈ MSupp(S/R)).
Proof. The proof is easy if we use the INC-pair property definition of
quasi-Prüfer extension (see also [6, Proposition 2.4]).
Proposition 3.2. Let R ⊆ S be a quasi-Prüfer extension and ϕ : S →
S ′ an integral ring morphism. Then ϕ(R) ⊆ S ′ is quasi-Prüfer and
S ′ is the product of ϕ(S) with the integral closure of ϕ(R) in S ′ .
Proof. It is enough to apply [26, Theorem 5.9] to the Prüfer extension
R̄ ⊆ S and to use Definition 2.1.
This result applies with S ′ := S ⊗R R′ , where R → R′ is an integral
morphism. Therefore integrality ascends the quasi-Prüfer property.
We know that a composite of Prüfer extensions is Prüfer [26, Theorem 5.6, p.51]. The following Corollary 3.3 contains [7, Theorem 3].
Corollary 3.3. Let R ⊆ T ⊆ S be a tower of extensions. Then R ⊆ S
is quasi-Prüfer if and only if R ⊆ T and T ⊆ S are quasi-Prüfer. It
follows that R ⊆ T is quasi-Prüfer if and only if R ⊆ R̄T is quasi-Prüfer.
Proof. Consider a tower (T ) of extensions R ⊆ R̄ ⊆ S := R′ ⊆ R̄′ ⊆ S ′
(a composite of two quasi-Prüfer extensions). By using Proposition 3.2
we see that R̄ ⊆ S = R′ ⊆ R̄′ is quasi-Prüfer. Then (T ) is obtained
by writing on the left an integral extension and on the right a Prüfer
extension. Therefore, (T ) is quasi-Prüfer. We prove the converse.
If R ⊆ T ⊆ S is a tower of extensions, then R ⊆ T and T ⊆ S
are INC-pairs whenever R ⊆ S is an INC-pair. The converse is then a
consequence of Theorem 2.3.
The last statement is [7, Corollary 4].
Using the above corollary, we can exhibit new examples of quasiPrüfer extensions. We recall that a ring R is called Laskerian if each of
its ideals is a finite intersection of primary ideals and a ring extension
R ⊂ S a Laskerian pair if each T ∈ [R, S] is a Laskerian ring. Then [42,
Proposition 2.1] shows that if R is an integral domain with quotient
field F ≠ R and F ⊂ K is a field extension, then R ⊂ K is a Laskerian
pair if and only if K is algebraic over R and R (in K) is a Laskerian
Prüfer domain. It follows easily that R ⊂ K is quasi-Prüfer.
Next result generalizes [25, Proposition 1].
Corollary 3.4. An FMC extension R ⊂ S is quasi-Prüfer.
Proof. Because R ⊂ S is a composite of finitely many minimal extensions, by Corollary 3.3, it is enough to observe that a minimal extension
is either Prüfer or integral.
Corollary 3.5. Let R ⊆ S be a quasi-Prüfer extension and a tower
R ⊆ T ⊆ S, where R ⊆ T is integrally closed. Then R ⊆ T is Prüfer.
Proof. Observe that R ⊆ T is quasi-Prüfer and then that R = R̄T .
Next result deals with the Dobbs-Shapiro pseudo-Prüfer extensions
of integral domains [15], which they called pseudo-normal pairs. Suppose
that R is local; we call here pseudo-Prüfer an extension R ⊆ S such that
there exists T ∈ [R, S] with Spec(R) = Spec(T ) and T ⊆ S is Prüfer
[15, Corollary 2.5]. If R is arbitrary, the extension R ⊆ S is called
pseudo-Prüfer if RM ⊆ SM is pseudo-Prüfer for each M ∈ Max(R).
In view of the Corollary 3.3, it is enough to characterize quasi-Prüfer
extensions of the type R ⊆ T with Spec(R) = Spec(T ).
Corollary 3.6. Let R ⊆ T be an extension with Spec(R) = Spec(T )
and (R, M) local. Then R ⊆ T is quasi-Prüfer if and only if Spec(R) =
Spec(U) for each U ∈ [R, T ] and, if and only if R/M ⊆ T /M is an
algebraic field extension. In such a case, R ⊆ T is Prüfer-closed.
Proof. It follows from [1] that M ∈ Max(T ). Part of the proof is gotten
by observing that R ⊆ U is an INC extension if Spec(R) = Spec(U).
Another one is proved in [1, Corollary 3.26]. Now R ⊆ R̃ is a spectrally surjective flat epimorphism and then, by Scholium A, R = R̃.
Let R ⊆ S be an extension and I an ideal shared with R and S. It
is easy to show that R ⊆ S is quasi-Prüfer if and only if R/I ⊆ S/I is
quasi-Prüfer by using [26, Proposition 5.8] in the Prüfer case. We are
able to give a more general statement.
Lemma 3.7. Let R ⊆ S be a (quasi-)Prüfer extension and J an ideal
of S with I = J ∩ R. Then R/I ⊆ S/J is a (quasi-)Prüfer extension.
If R ⊆ S is Prüfer and N is a maximal ideal of S, then R/(N ∩ R) is
a valuation domain with quotient field S/N.
Proof. Assume first that R ⊆ S is Prüfer. We have J = IS by
Scholium A(6), because R ⊆ S is a flat epimorphism. Therefore, any
D ∈ [R/I, S/J] is of the form C/J where C ∈ [R, S]. We can write
C/IS = (C + I)/IS ≅ C/(C ∩ IS). As R ⊆ C is a flat epimorphism,
C ∩ IS = IC. It then follows that D = C ⊗R R/I and we get easily
that R/I ⊆ S/J is Prüfer, since R/I ⊆ D is a flat epimorphism. The
quasi-Prüfer case is an easy consequence.
With this lemma we generalize and complete [23, Proposition 1.1].
Proposition 3.8. Let R ⊆ S be an extension of rings. The following
statements are equivalent:
(1) R ⊆ S is quasi-Prüfer;
(2) R/(Q ∩ R) ⊆ S/Q is quasi-Prüfer for each Q ∈ Spec(S) ;
(3) (X − s)S[X] ∩ R[X] ⊈ M[X] for each s ∈ S and M ∈ Max(R);
(4) For each T ∈ [R, S], the fiber morphisms of R ⊆ T are integral.
Proof. (1) ⇒ (2) is entailed by Lemma 3.7. Assume that (2) holds and
let M ∈ Max(R) that contains a minimal prime ideal P , lain over by a
minimal prime ideal Q of S. Then (2) ⇒ (3) follows from [23, Proposition 1.1(1)], applied to R/(Q ∩ R) ⊆ S/Q. If (3) holds, argue as in
the paragraph before [23, Proposition 1.1] to get that R ⊆ S is a P-extension, whence an INC-extension by [14, Proposition 2.1]. Because
integral extensions have incomparability, we see that (4) ⇒ (1). Corollary 3.3 shows that the reverse implication holds, if any quasi-Prüfer
extension R ⊆ S has integral fiber morphisms. For P ∈ Spec(R), the
extension RP /P RP ⊆ SP /P SP is quasi-Prüfer by Lemma 3.7. The
ring RP /P RP is zero-dimensional and RP /P RP → SP /P SP , being a
flat epimorphism, is therefore surjective by Scholium A (S). It follows
that the fiber morphism at P is integral.
Remark 3.9. The logical equivalence (1) ⇔ (2) is still valid if we
replace quasi-Prüfer with integral in the above proposition. It is enough
to show that an extension R ⊆ S is integral when R/P ⊆ S/Q is
integral for each Q ∈ Spec(S) and P := Q ∩ R. We can suppose that
S = R[s] ≅ R[X]/I, where X is an indeterminate, I an ideal of R[X]
and Q varies in Min(S), because for an extension A ⊆ B, any element
of Min(A) is lain over by some element of Min(B). If Σ is the set of
unitary polynomials of R[X], the assumptions show that any element
of Spec(R[X]), containing I, meets Σ. As Σ is a multiplicatively closed
subset, I ∩ Σ ≠ ∅, whence s is integral over R.
But a similar result does not hold if we replace quasi-Prüfer with
Prüfer, except if we suppose that R ⊆ S is integrally closed. To see
this, apply the above proposition to get a quasi-Prüfer extension R ⊆ S
if each R/P ⊆ S/Q is Prüfer. Actually, this situation already occurs
for Prüfer rings and their factor domains, as Lucas’s paper [29] shows.
More precisely, [29, Proposition 2.7] and the third paragraph of [29, p.
336] shows that if R is a ring with Tot(R) absolutely flat, then R is
a quasi-Prüfer ring if R/P is a Prüfer domain for each P ∈ Spec(R).
Now example [29, Example 2.4] shows that R is not necessarily Prüfer.
We observe that if R ⊆ S is quasi-Prüfer, then R/M is a quasi-Prüfer domain for each N ∈ Max(S) and M := N ∩ R (in case R ⊆ S
is integral, R/M is a field). To prove this, observe that R/M ⊆ S/N
can be factored R/M ⊆ κ(M) ⊆ S/N. As we will see, R/M ⊆ κ(M)
is quasi-Prüfer because R/M ⊆ S/N is quasi-Prüfer.
The class of Prüfer extensions is not stable by (flat) base change.
For example, let V be a valuation domain with quotient field K. Then
V [X] ⊆ K[X] is not Prüfer [26, Example 5.12, p.53]. Thus if we
consider an ideal I of R and J := IS, R ⊆ S Prüfer may not imply
R/I ⊆ S/IS Prüfer except if IS ∩ R = I. This happens for instance
for a prime ideal I of R that is lain over by a prime ideal of S.
Proposition 3.10. Let R ⊆ S be a (quasi)-Prüfer extension and R →
T a flat epimorphism, then T ⊆ S ⊗R T is (quasi)-Prüfer. If in addition
S and T are both subrings of some ring and R ⊆ T is an extension,
then T ⊆ T S is (quasi)-Prüfer.
Proof. For the first part, it is enough to consider the Prüfer case. It is
well known that the following diagram is a pushout if Q ∈ Spec(T ) is
lying over P in R:
RP −−−→ SP
↓            ↓
TQ −−−→ (T ⊗R S)Q
As RP → TQ is an isomorphism since R → T is a flat epimorphism
by Scholium A, it follows that RP ⊆ SP identifies to TQ → (T ⊗R S)Q .
The result follows because Prüfer extensions localize and globalize.
In case R → T is a flat epimorphic extension, the surjective maps
T ⊗R S → T S and R ⊗R T → RT are isomorphisms because R → T R
(resp. S → ST ) is injective and R → T ⊗T R (resp. S → S ⊗R T ) is a
flat epimorphism. Then it is enough to use Scholium A.
The reader may find in [26, Corollary 5.11, p.53] that if R ⊆ A ⊆ S
and R ⊆ B ⊆ S are extensions and R ⊆ A and R ⊆ B are both Prüfer,
then R ⊆ AB is Prüfer.
Proposition 3.11. Let R ⊆ A and R ⊆ B be two extensions, where
A and B are subrings of a ring S. If they are both quasi-Prüfer, then
R ⊆ AB is quasi-Prüfer.
Proof. Let U and V be the integral closures of R in A and B. Then R ⊆
A ⊆ AV is quasi-Prüfer because A ⊆ AV is integral and Corollary 3.3
applies. Using again Corollary 3.3 with R ⊆ V ⊆ AV , we find that
V ⊆ AV is quasi-Prüfer. Now Proposition 3.10 entails that B ⊆ AB
is quasi-Prüfer because V ⊆ B is a flat epimorphism. Finally R ⊆ AB
is quasi-Prüfer, since a composite of quasi-Prüfer extensions.
It is known that an arbitrary product of extensions is Prüfer if and
only if each of its components is Prüfer [26, Proposition 5.20, p.56].
The following result is an easy consequence.
Proposition 3.12. Let {Ri ⊆ Si |i = 1, . . . , n} be a finite family of
quasi-Prüfer extensions, then R1 × · · · × Rn ⊆ S1 × · · · × Sn is quasiPrüfer. In particular, if {R ⊆ Si |i = 1, . . . , n} is a finite family of
quasi-Prüfer extensions, then R ⊆ S1 × · · · × Sn is quasi-Prüfer.
In the same way we have the following result deduced from [26,
Remark 5.14, p.54].
Proposition 3.13. Let R ⊆ S be an extension of rings and an upward
directed family {Rα |α ∈ I} of elements of [R, S] such that R ⊆ Rα is
quasi-Prüfer for each α ∈ I. Then R ⊆ ∪[Rα |α ∈ I] is quasi-Prüfer.
Proof. It is enough to use [26, Proposition 5.13, p.54] where Aα is the
integral closure of R in Rα .
A ring morphism R → T preserves the integral closure of ring morphisms R → S if, for every ring morphism R → S, the integral closure
of T in T ⊗R S is T ⊗R R̄, where R̄ denotes the integral closure of R in S. An
absolutely flat morphism R → T (R → T and T ⊗R T → T are both
flat) preserves integral closure [33, Theorem 5.1]. Flat epimorphisms,
Henselizations and étale morphisms are absolutely flat. Other examples are morphisms R → T that are essentially of finite type and
(absolutely) reduced [37, Proposition 5.19](2). Such morphisms are flat
if R is reduced [28, Proposition 3.2].
We will prove an ascent result for absolutely flat ring morphisms.
This will be done by using base changes. For this we need to introduce
some concepts. A ring A is called an AIC ring if each monic polynomial
of A[X] has a zero in A. We recalled in [35, p.4662] that any ring A
has a faithfully flat integral extension A → A∗ , where A∗ is an AIC
ring. Moreover, if A is an AIC ring, each localization AP at a prime
ideal P of A is a strict Henselian ring [35, Lemma II.2].
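As a quick illustration (ours, not taken from the text): any algebraically closed field is an AIC ring, whereas Z is not, as the monic polynomial below shows.
  % hypothetical illustration of the AIC property, not from the paper
  \[ X^{2}+1 \in \mathbb{Z}[X] \quad\text{is monic and has no zero in } \mathbb{Z}. \]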
Theorem 3.14. Let R ⊆ S be a (quasi-) Prüfer extension and R → T
an absolutely flat ring morphism. Then T → T ⊗R S is a (quasi-)
Prüfer extension.
Proof. We can suppose that R is an AIC ring. To see this, it is enough
to use the base change R → R∗ . We set T ∗ := T ⊗R R∗ , S ∗ := S ⊗R R∗ .
We first observe that R∗ ⊆ S ∗ is quasi-Prüfer for the following reason:
the composite extension R ⊆ S ⊆ S ∗ is quasi-Prüfer because the last
extension is integral. Moreover, R∗ → T ∗ is absolutely flat. In case
T ∗ ⊆ T ∗ ⊗R∗ S ∗ is quasi-Prüfer, so is T ⊆ T ⊗R S, because T → T ∗ =
T ⊗R R∗ is faithfully flat and T ∗ ⊆ T ∗ ⊗R∗ S ∗ is deduced from T ⊆ T ⊗R S
by the faithfully flat base change T → T ⊗R R∗ . It is then enough to
apply Proposition 3.17.
We thus assume from now on that R is an AIC ring.
Let N ∈ Spec(T ) be lying over M in R. Then RM → TN is absolutely
flat [32, Proposition f] and RM ⊆ SM is quasi-Prüfer. Now observe
that (T ⊗R S)N ≅ TN ⊗RM SM . Therefore, we can suppose that R and
T are local and R → T is local and injective. We deduce from [33,
Theorem 5.2], that RM → TN is an isomorphism. Therefore the proof
is complete in the quasi-Prüfer case. For the Prüfer case, we need only
to observe that absolutely flat morphisms preserve integral closure and
a quasi-Prüfer extension is Prüfer if it is integrally closed.
Proposition 3.15. Let R ⊆ S be an extension of rings and R → T a
base change which preserves integral closure. If T ⊆ T ⊗R S has FCP
and R ⊆ S is Prüfer, then T ⊆ T ⊗R S is Prüfer.
Proof. The result holds because an FCP extension is Prüfer if and only
if it is integrally closed.
We observe that T ⊗R R̃ → T̃ need not be an isomorphism, since
this property may fail even for a localization R → RP , where P is a
prime ideal of R.
Proposition 3.16. Let R ⊆ S be an extension of rings, R → R′ a
faithfully flat ring morphism and set S ′ := R′ ⊗R S. If R′ ⊆ S ′ is
(quasi-) Prüfer (respectively, FCP), then so is R ⊆ S.
Proof. The Prüfer case is clear, because faithfully flat morphisms descend flat epimorphisms (Scholium A (9)). For the quasi-Prüfer case,
we use the INC-pair characterization and the fact that FR,S (P ) →
FR′ ,S ′ (P ′ ) is faithfully flat for P ′ ∈ Spec(R′ ) lying over P in R [22,
Corollaire 3.4.9]. The FCP case is proved in [11, Theorem 2.2].
Proposition 3.17. Let R ⊆ S be a ring extension and R → R′ a
spectrally surjective ring morphism (for example, either faithfully flat
or injective and integral). Then R ⊆ S is quasi-Prüfer if R′ → R′ ⊗R S
is injective (for example, if R → R′ is faithfully flat) and quasi-Prüfer.
Proof. Let T ∈ [R, S] and P ∈ Spec(R) and set T ′ := T ⊗R R′ . There
is some P ′ ∈ Spec(R′ ) lying over P , because R → R′ is spectrally
surjective. There is a faithfully flat morphism FR,T (P ) → FR′ ,T ′ (P ′) ∼
=
′
FR,T (P ) ⊗k(P ) κ(P ) [22, Corollaire 3.4.9]. By Theorem 2.3, the result
follows from the faithful flatness of FR,T (P ) → FR′ ,T ⊗R R′ (P ′ ).
Theorem 3.18. Let R ⊆ S be a ring extension.
(1) R ⊆ S has a greatest quasi-Prüfer subextension R ⊆ ⇒R, and ⇒R is the Prüfer hull of R̄ in S.
(2) R ⊆ R̄R̃ =: R⃗ is quasi-Prüfer, and then R⃗ ⊆ ⇒R.
(3) The integral closure of R in ⇒R is R̄ and the Prüfer hull of R in ⇒R is R̃.
Proof. To see (1), use Proposition 3.13, which tells us that the set of all
quasi-Prüfer subextensions is upward directed, and then use Proposition 3.12 to prove the existence of ⇒R. Then let R ⊆ T ⊆ ⇒R be a tower
with R ⊆ T integral and T ⊆ ⇒R Prüfer. From T ⊆ R̄ and the fact that the
Prüfer hull of R̄ is contained in ⇒R, we deduce that T = R̄ and then that
⇒R is the Prüfer hull of R̄.
(2) Now R ⊆ R̄R̃ can be factored R ⊆ R̃ ⊆ R̄R̃ and is a tower of
quasi-Prüfer extensions, because R̃ → R̄R̃ is integral.
(3) Clearly, the integral closure and the Prüfer closure of R in ⇒R are
the respective intersections of R̄ and R̃ with ⇒R, and R̄, R̃ ⊆ ⇒R.
This last result means that, as long as integral closures and Prüfer
closures of subsets of ⇒R are concerned, we can suppose that R ⊆ S is
quasi-Prüfer.
4. Almost-Prüfer extensions
We next give a definition “dual” of the definition of a quasi-Prüfer
extension.
4.1. Arbitrary extensions.
Definition 4.1. A ring extension R ⊆ S is called an almost-Prüfer
extension if it can be factored R ⊆ T ⊆ S, where R ⊆ T is Prüfer and
T ⊆ S is integral.
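As a concrete illustration (our example, not taken from the paper): Z ⊆ Q(i) is almost-Prüfer, via the factorization displayed below, since Z ⊆ Q is Prüfer (Z is a Prüfer domain with total quotient ring Q) and Q ⊆ Q(i) is integral.
  % hypothetical example of an almost-Prüfer factorization
  \[ \mathbb{Z} \;\subseteq\; \mathbb{Q} \;\subseteq\; \mathbb{Q}(i), \qquad
     \mathbb{Z}\subseteq\mathbb{Q} \ \text{Pr\"ufer}, \quad
     \mathbb{Q}\subseteq\mathbb{Q}(i) \ \text{integral}. \]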
Proposition 4.2. An extension R ⊆ S is almost-Prüfer if and only if
R̃ ⊆ S is integral. It follows that the subring T of the above definition
is R̃ = R̂ when R ⊆ S is almost-Prüfer.
Proof. If R ⊆ S is almost-Prüfer, there is a factorization R ⊆ T ⊆
R̃ ⊆ R̂ ⊆ S, where T ⊆ R̂ is both integral and a flat epimorphism by
Scholium A (4). Therefore, T = R̃ = R̂ by Scholium A (5) (L).
Corollary 4.3. Let R ⊆ S be a quasi-Prüfer extension, and let T ∈
[R, S]. Then, T ∩ R̄ ⊆ T R̄ is almost-Prüfer. Moreover, T is the Prüfer
hull of T ∩ R̄ in T R̄.
Proof. T ∩ R̄ ⊆ T is quasi-Prüfer by Corollary 3.3. Being integrally
closed, it is Prüfer by Corollary 3.5. Moreover, T ⊆ T R̄ is an integral
extension. Then, T ∩ R̄ ⊆ T R̄ is almost-Prüfer and T is its Prüfer hull.
We note that integral extensions and Prüfer extensions are almost-Prüfer and hence minimal extensions are almost-Prüfer. There are
quasi-Prüfer extensions that are not almost-Prüfer. It is enough to
consider [39, Example 3.5(1)]. Let R ⊆ T ⊆ S be two minimal extensions, where R is local, R ⊆ T integral and T ⊆ S is Prüfer. Then
R ⊆ S is quasi-Prüfer but not almost-Prüfer, because S = R̂ and
R = R̃. The same example shows that a composite of almost-Prüfer
extensions may not be almost-Prüfer.
But the reverse implication holds.
Theorem 4.4. Let R ⊆ S be an almost-Prüfer extension. Then R ⊆ S
is quasi-Prüfer. Moreover, R̃ = R̂ and, for each P ∈ Spec(R), (R̃)P is
the Prüfer hull of RP in SP . In this case, any flat epimorphic subextension R ⊆ T is Prüfer.
Proof. Let R ⊆ R̃ ⊆ S be an almost-Prüfer extension, that is, R̃ ⊆ S is
integral. The result follows because R ⊆ R̃ is Prüfer. Now the Morita
hull and the Prüfer hull coincide by Proposition 4.2. In the same way,
the inclusion of (R̃)P into the Prüfer hull of RP in SP is a flat epimorphism and (R̃)P → SP is integral.
We could define almost-Prüfer rings as the rings R such that R ⊆
Tot(R) is almost-Prüfer. But in that case R̃ = Tot(R) (by Theorem 4.4), so that R is a Prüfer ring. The converse evidently holds.
Therefore, this concept does not define something new.
We observed in [10, Remark 2.9(c)] that there is an almost-Prüfer
FMC extension R ⊆ S ⊆ T , where R ⊆ S is a Prüfer minimal extension and S ⊆ T is minimal and integral. But R ⊆ T is not an FCP
extension.
Proposition 4.5. Let R ⊆ S be an extension verifying the hypotheses:
(i) R ⊆ S is quasi-Prüfer.
(ii) R ⊆ S can be factored R ⊆ T ⊆ S, where R ⊆ T is a flat
epimorphism.
(1) Then the following commutative diagram (D) is a pushout,
R −−−→ R̄
↓           ↓
T −−−→ T R̄
T R̄ ⊆ S is Prüfer and R ⊆ T R̄ is quasi-Prüfer. Moreover,
FR,R̄ (P ) ≅ FT,T R̄ (Q) for each Q ∈ Spec(T ) and P := Q ∩ R.
(2) If in addition R ⊆ T is integrally closed, (D) is a pullback,
T ∩ R̄ = R, (R : R̄) = (T : T R̄) ∩ R and (T : T R̄) = (R : R̄)T .
Proof. (1) Consider the injective composite map R → R ⊗R T → T R.
As R → R ⊗R T is a flat epimorphism, because deduced by a base
change of R → T , we get that the surjective map R ⊗R T → T R
is an isomorphism by Scholium A (3). By fibers transitivity, we have
FT,R̄T (Q) ≅ κ(Q) ⊗κ(P ) FR,R̄ (P ) [22, Corollaire 3.4.9]. As κ(P ) → κ(Q)
is an isomorphism by Scholium A, we get that FR,R̄ (P ) ≅ FT,R̄T (Q).
(2) As in [5, Lemma 3.5], R = T ∩ R. The first statement on the
conductors has the same proof as in [5, Lemma 3.5]. The second holds
because R ⊆ T is a flat epimorphism (see Scholium A (6)).
Theorem 4.6. Let R ⊂ S be a quasi-Prüfer extension and consider the diagram
(D’):
R −−−→ R̄
↓           ↓
R̃ −−−→ R̄R̃
(1) (D’) is a pushout and a pullback, such that R̄ ∩ R̃ = R and
(R : R̄) = (R̃ : R̄R̃) ∩ R, so that (R̃ : R̄R̃) = (R : R̄)R̃.
(2) R ⊂ S can be factored R ⊆ R̄R̃ = R⃗ ⊆ ⇒R = S, where ⇒R is the
Prüfer hull of R̄, the first extension is almost-Prüfer and the second is Prüfer.
(3) R ⊂ S is almost-Prüfer if and only if S = R̄R̃, if and only if the
integral closure of R̃ equals the Prüfer hull of R̄ (both computed in S).
(4) R ⊆ R̄R̃ = R⃗ is the greatest almost-Prüfer subextension of
R ⊆ S, and R̃ is also the Prüfer hull of R in R⃗.
(5) Supp(S/R) = Supp(R̃/R) ∪ Supp(R̄/R) if R ⊆ S is almost-Prüfer. (Supp can be replaced with MSupp.)
Proof. To show (1), (2), in view of Theorem 3.18, it is enough to apply
Proposition 4.5 with T = R̃ and S = ⇒R, because R ⊆ R̄R̃ is almost-Prüfer whence quasi-Prüfer, keeping in mind that a Prüfer extension is
integrally closed, whereas an integral Prüfer extension is trivial. Moreover, the integral closure of R̃ in S is R̄R̃, because R̄R̃ is contained in it
and this containment is both integral and integrally closed.
(3) is obvious.
(4) Now consider an almost-Prüfer subextension R ⊆ T ⊆ U, where
R ⊆ T is Prüfer and T ⊆ U is integral. Applying (3) to R ⊆ U, we see
that U is the product of the integral closure and the Prüfer hull of R
in U, so that U ⊆ R̄R̃ in view of Proposition 1.6.
(5) Obviously, Supp(R̃/R) ∪ Supp(R̄/R) ⊆ Supp(S/R). Conversely,
let M ∈ Spec(R) be such that RM ≠ SM and (R̄)M = (R̃)M = RM .
Then (3) entails that SM = (R̄)M (R̃)M = RM , which is absurd.
Corollary 4.7. Let R ⊆ S be an almost-Prüfer extension. The following conditions are equivalent:
(1) Supp(S/R̄) ∩ Supp(R̄/R) = ∅.
(2) Supp(S/R̃) ∩ Supp(R̃/R) = ∅.
(3) Supp(R̄/R) ∩ Supp(R̃/R) = ∅.
Proof. Since R ⊆ S is almost-Prüfer, (R̃)P is the Prüfer hull of RP in SP for each
P ∈ Spec(R). Moreover, Supp(S/R) = Supp(R̄/R) ∪ Supp(R̃/R) =
Supp(S/R̄) ∪ Supp(R̄/R) = Supp(S/R̃) ∪ Supp(R̃/R).
(1) ⇒ (2): Assume that there exists P ∈ Supp(S/R̃) ∩ Supp(R̃/R).
Then, (R̃)P ≠ SP , RP , so that RP ⊂ SP is neither Prüfer, nor integral.
But, P ∈ Supp(S/R) = Supp(S/R̄) ∪ Supp(R̄/R). If P ∈ Supp(S/R̄),
then P ∉ Supp(R̄/R), so that (R̄)P = RP and RP ⊂ SP is Prüfer,
a contradiction. If P ∈ Supp(R̄/R), then P ∉ Supp(S/R̄), so that
(R̄)P = SP and RP ⊂ SP is integral, a contradiction.
(2) ⇒ (3): Assume that there exists P ∈ Supp(R̄/R) ∩ Supp(R̃/R).
Then, RP ≠ (R̄)P , (R̃)P , so that RP ⊂ SP is neither Prüfer, nor integral. But, P ∈ Supp(S/R) = Supp(S/R̃) ∪ Supp(R̃/R). If P ∈
Supp(S/R̃), then P ∉ Supp(R̃/R), so that (R̃)P = RP and RP ⊂ SP
is integral, a contradiction. If P ∈ Supp(R̃/R), then P ∉ Supp(S/R̃),
so that (R̃)P = SP and RP ⊂ SP is Prüfer, a contradiction.
(3) ⇒ (1): Assume that there exists P ∈ Supp(S/R̄) ∩ Supp(R̄/R).
Then, (R̄)P ≠ RP , SP , so that RP ⊂ SP is neither Prüfer, nor integral.
But, P ∈ Supp(S/R) = Supp(R̄/R) ∪ Supp(R̃/R). If P ∈ Supp(R̃/R),
then P ∉ Supp(R̄/R), so that (R̄)P = RP and RP ⊂ SP is Prüfer,
a contradiction. If P ∈ Supp(R̄/R), then P ∉ Supp(R̃/R), so that
(R̃)P = RP and RP ⊂ SP is integral, a contradiction.
Proposition 4.5 has the following similar statement proved by Ayache
and Dobbs. It reduces to Theorem 4.6 in case R ⊆ S has FCP because
of Proposition 1.3.
Proposition 4.8. Let R ⊆ T ⊆ S be a quasi-Prüfer extension, where
T ⊆ S is an integral minimal extension and R ⊆ T is integrally closed .
Then the diagram (D) is a pullback, S = T R and (T : S) = (R : R)T .
Proof. [5, Lemma 3.5].
Proposition 4.9. Let R ⊆ U ⊆ S and R ⊆ V ⊆ S be two towers
of extensions, such that R ⊆ U and R ⊆ V are almost-Prüfer. Then
g=U
e Ve .
R ⊆ UV is almost-Prüfer and UV
Proof. Denote by U ′ , V ′ and W ′ the Prüfer hulls of R in U, V and
W = UV . We deduce from [26, Corollary 5.11, p.53], that R ⊆ U ′ V ′
is Prüfer. Moreover, U ′ V ′ ⊆ UV is clearly integral and U ′ V ′ ⊆ W ′
because the Prüfer hull is the greatest Prüfer subextension. We deduce
g=U
e Ve .
that R ⊆ UV is almost-Prüfer and that UV
Proposition 4.10. Let R ⊆ U ⊆ S and R ⊆ V ⊆ S be two towers
of extensions, such that R ⊆ U is almost-Prüfer and R ⊆ V is a flat
epimorphism. Then U ⊆ UV is almost-Prüfer.
Proof. Mimic the proof of Proposition 4.9 and use [26, Theorem 5.10,
p.53].
Proposition 4.11. Let R ⊆ S be an almost-Prüfer extension and
R → T a flat epimorphism. Then T ⊆ T ⊗R S is almost-Prüfer.
Proof. It is enough to use Proposition 3.10 and Definition 4.1.
Proposition 4.12. An extension R ⊆ S is almost-Prüfer if and only
if RP ⊆ SP is almost-Prüfer and (R̃)P is the Prüfer hull of RP in SP ,
for each P ∈ Spec(R).
Proof. For an arbitrary extension R ⊆ S, (R̃)P is contained in the
Prüfer hull of RP in SP . Suppose that R ⊆ S is almost-Prüfer; then so
is RP ⊆ SP and (R̃)P equals this Prüfer hull by Theorem 4.4. Conversely, if R ⊆ S is locally almost-Prüfer, whence
locally quasi-Prüfer, then R ⊆ S is quasi-Prüfer. If (R̃)P equals the
Prüfer hull of RP in SP for each P ∈ Spec(R), we have SP = (R̄R̃)P ,
so that S = R̄R̃ and R ⊆ S is almost-Prüfer by Theorem 4.6.
Corollary 4.13. An FCP extension R ⊆ S is almost-Prüfer if and
only if RP ⊆ SP is almost-Prüfer for each P ∈ Spec(R).
Proof. It is enough to show that R ⊆ S is almost-Prüfer if RP ⊆ SP
is almost-Prüfer for each P ∈ Spec(R) using Proposition 4.12. Any
e ⊂ R1 is integral by definition of R.
e Assume that
minimal extension R
′
′
]
e P ⊂ (R
e
e
(R)
P ), so that there exists R2 ∈ [R, S] such that (R)P ⊂ (R2 )P
e P , for
is a Prüfer minimal extension with crucial maximal ideal Q(R)
e with Q ∩ R ⊆ P . In particular, R
e ⊂ R′ is not
some Q ∈ Max(R)
2
e R′ ] such that
integral. We may assume that there exists R1′ ∈ [R,
2
e Using
R1′ ⊂ R2′ is a Prüfer minimal extension with P 6∈ Supp(R1′ /R).
e R′ ] such that R
e ⊂ R2 is a
[39, Lemma 1.10], there exists R2 ∈ [R,
2
Prüfer minimal extension with crucial maximal ideal Q, a contradic]
e P ⊂ SP is integral for each P , whence (R)
e P = (R
tion. Then, (R)
P ).
We now intend to demonstrate that our methods allow us to easily prove
some results. For instance, the next statement generalizes [5, Corollary 4.5] and can be fruitful in algebraic number theory.
Proposition 4.14. Let (R, M) be a one-dimensional local ring and
R ⊆ S a quasi-Prüfer extension. Suppose that there is a tower R ⊂
T ⊆ S, where R ⊂ T is integrally closed. Then R ⊆ S is almost-Prüfer,
T = R̃ and S is zero-dimensional.
Proof. Because R ⊂ T is quasi-Prüfer and integrally closed, it is Prüfer.
If some prime ideal of T is lying over M, R ⊂ T is a faithfully flat
epimorphism, whence an isomorphism by Scholium A, which is absurd.
Now let N be a prime ideal of T and P := N ∩ R. Then RP is zero-dimensional and isomorphic to TN . Therefore, T is zero-dimensional.
It follows that T R is zero-dimensional. Since RT ⊆ S is Prüfer, we
deduce from Scholium A, that RT = S. The proof is now complete.
We also generalize [5, Proposition 5.2] as follows.
Proposition 4.15. Let R ⊂ S be a quasi-Prüfer extension, such that
R is local with maximal ideal N := √(R : R̄). Then R̄ is local and
[R, S] = [R, R̄] ∪ [R̄, S]. If in addition R is one-dimensional, then
either R ⊂ S is integral or there is some minimal prime ideal P of R,
such that S = (R)P , P = SP and R/P is a one-dimensional valuation
domain with quotient field S/P .
Proof. R is obviously local. Let T ∈ [R, S] \ [R, R] and s ∈ T \ R.
Then s ∈ U(S) and s−1 ∈ R by Proposition 1.2 (1). But s−1 6∈ U(R),
so that s−1 ∈ N. It follows that there exists some integer n such that
s−n ∈ (R : R), giving s−n R ⊆ R, or, equivalently, R ⊆ Rsn ⊆ T .
Then, T ∈ [R, S] and we obtain [R, S] = [R, R] ∪ [R, S].
Assume that R is one-dimensional. If R ⊂ S is not integral then
R ⊂ S is Prüfer and R is one-dimensional. To complete the proof, use
Proposition 1.2 (3).
4.2. FCP extensions. In case we consider only FCP extensions, we
obtain more results.
Proposition 4.16. Let R ⊆ S be an FCP extension. The following
statements are equivalent:
(1) R ⊆ S is almost-Prüfer.
(2) RP ⊆ SP is either integral or Prüfer for each P ∈ Spec(R).
(3) RP ⊆ SP is almost-Prüfer and Supp(S/R̃) ∩ Supp(R̃/R) = ∅.
(4) Supp(R̄/R) ∩ Supp(S/R̄) = ∅.
Proof. The equivalence of Proposition 4.12 shows that (2) ⇔ (1) holds
because Tb = Te and over a local ring T , an almost-Prüfer FCP extension
T ⊆ U is either integral or Prüfer [39, Proposition 2.4] . Moreover when
e P =R
fP
RP ⊆ SP is either integral or Prüfer, it is easy to show that (R)
Next we show that (3) is equivalent to (2) of Proposition 4.12.
e ∩ Supp(R/R)
e
Let P ∈ Supp(S/R)
be such that RP ⊆ SP is almoste
e P ⊂ SP . Since R ⊂ R
e
Prüfer. Then, (R)P 6= RP , SP , so that RP ⊂ (R)
e P , giving (R)
e P ⊆R
fP and RP 6= R
fP . It follows
is Prüfer, so is RP ⊂ (R)
fP = SP in view of the dichotomy principle [39, Proposition 3.3]
that R
fP 6= (R)
e P.
since RP is a local ring, and then R
fP 6= (R)
e P , i.e. P ∈ Supp(S/R). Then,
Conversely, assume that R
fP , so that R
fP = SP , as we have just seen. Hence RP ⊂ SP
RP 6= R
is integrally closed. It follows that RP = RP = RP , so that P 6∈
e
eP 6=
by Theorem 4.6(5). Moreover, R
Supp(R/R) and P ∈ Supp(R/R)
e
e ∩
SP implies that P ∈ Supp(S/R).
To conclude, P ∈ Supp(S/R)
e
Supp(R/R).
(1) ⇔ (4) An FCP extension is quasi-Prüfer by Corollary 3.4. Supe we
pose that R ⊆ S is almost-Prüfer. By Theorem 4.6, letting U := R,
get that U ∩R = R and S = RU. We deduce from [39, Proposition 3.6]
that Supp(R/R) ∩ Supp(S/R) = ∅. Suppose that this last condition
holds. Then by [39, Proposition 3.6] R ⊆ S can be factored R ⊆ U ⊆ S,
where R ⊆ U is integrally closed, whence Prüfer by Proposition 1.3,
and U ⊆ S is integral. Therefore, R ⊆ S is almost-Prüfer.
Lemma 4.17. Let B ⊂ D and C ⊂ D be two integral minimal extensions and A := B ∩ C. If A ⊂ D has FCP, then, A ⊂ D is integral.
Proof. Set M := (B : D) and N := (C : D).
If M 6= N, then, A ⊂ D is integral by [13, Proposition 6.6].
Assume that M = N. Then, M ∈ Max(A) by [13, Proposition 5.7].
Let B ′ be the integral closure of A in B. Then M is also an ideal of
B ′ , which is prime in B ′ , and then maximal in B ′ . If A ⊂ D is an
FCP extension, so is B ′ ⊆ B, which is a flat epimorphism, and so is
B ′ /M ⊆ B/M. Then, B ′ = B since B ′ /M is a field. It follows that
A ⊆ B is an integral extension, and so is A ⊂ D.
Proposition 4.18. Let R ⊂ S be an FCP almost-Prüfer extension.
Then, R̃ = R̂ is the least T ∈ [R, S] such that T ⊆ S is integral.
Proof. We may assume that R ⊂ S is not integral. If there is some
e such that U ⊆ R
e is integral, then U = R.
e Set X := {T ∈
U ∈ [R, R]
e is a minimal element of X.
[R, S] | T ⊆ S integral}. It follows that R
e is the least element of X.
We are going to show that R
e S] ≥ 1 and let R
e = R0 ⊂ R1 ⊂ · · · ⊂ Rn−1 ⊂ Rn = S
Set n := ℓ[R,
e S], with length n. There does not exist a
be a maximal chain of [R,
e
maximal chain of R-subalgebras
of S with length > n. Let T ∈ X. We
e
intend to show that T ∈ [R, S]. It is enough to choose T such that T
is a minimal element of X. Consider the induction hypothesis: (Hn ):
e S] when n := ℓ[R,
e S].
X ⊆ [R,
e ⊂ S is minimal. Let T ∈ X and
We first show (H1 ). If n = 1, R
e
T1 ∈ [T, S] be such that T1 ⊂ S is minimal. Assume that T1 6= R.
e⊂R
e is integral, which contradicts the
Lemma 4.17 shows that T1 ∩ R
e so that T = R
e for the same
beginning of the proof. Then, T1 = R,
contradiction and (H1 ) is proved.
Assume that n > 1 and that (Hk ) holds for any k < n. Let T ∈ X
e S], then
and T1 ∈ [T, S] be such that T1 ⊂ S is minimal. If T1 ∈ [R,
e T1 ] ≤ n−1. But we get that T ∈ [R, T1 ], with T ⊆ T1 integral.
k := ℓ[R,
e is also the Prüfer hull of R ⊆ T1 , with k := ℓ[R,
e T1 ] ≤ n−1.
Moreover, R
e T1 ] ⊂ [R,
e S].
Since (Hk ) holds, we get that T ∈ [R,
e
If T1 6∈ [R, S], set U := T1 ∩Rn−1 . We get that T1 ⊂ S and Rn−1 ⊂ S
are minimal and integral. Using again Lemma 4.17, we get that U ⊂ S
e Rn−1 ] = n − 1 and U ∈ [R, Rn−1 ]. As before, R
e is
is integral, with ℓ[R,
e Rn−1 ],
also the Prüfer hull of R ⊆ Rn−1 . Since (Hn−1 ) holds, U ∈ [R,
e
so that T1 ∈ [R, S], a contradiction. Therefore, (Hn ) is proved.
We will need a relative version of the support. Let f : R → T be a
ring morphism and E a T -module. The relative support of E over R is
SR (E) := ᵃf (SuppT (E)), where ᵃf : Spec(T ) → Spec(R) is the spectral map, and MSR (E) := SR (E) ∩ Max(R). In particular, for a ring extension R ⊂ S, we have SR (S/R) = SuppR (S/R).
Proposition 4.19. Let R ⊆ S be an FCP extension. The following
statements hold:
e
(1) Supp(R/R)
∩ Supp(R/R) = ∅.
e
e R)
e ∩ Supp(R/R)
e
(2) Supp(R/R)
∩ Supp(R/R) = Supp(R/
= ∅.
e
(3) MSupp(S/R) = MSupp(R/R) ∪ MSupp(R/R).
e is
Proof. (1) is a consequence of Proposition 4.16(4) because R ⊆ R
almost-Prüfer.
e
We prove the first part of (2). If some M ∈ Supp(R/R)∩Supp(R/R),
′
e M , T := (R)M
it can be supposed in Max(R). Set R := RM , U := (R)
and M ′ := MRM . Then, R′ 6= U, T , with R′ ⊂ U FCP Prüfer and
R′ ⊂ T FCP integral, an absurdity [39, Proposition 3.3].
e R)
e ∩
To show the second part, assume that some P ∈ Supp(R/
e
Supp(R/R).
Then, P 6∈ Supp(R/R) by the first part of (2), so that
e P = RP R
eP = R
eP , a contradiction.
RP = RP , giving (R)
S
S
(3) Obviously, MSupp(S/R) = MS (S/R) = MS (S/T )∪MS (T /T )
T
T
∪MS (T /U )∪MS (U /U)∪MS (U/R). By [39, Propositions 2.3 and
S
S
T
3.2], we have MS (S/T ) ⊆ S (T /T ) = S (R/R ) = MS (R/R) =
T
T
MSupp(R/R), MS (T /U ) = S (R /R) ⊆ S (R/R) = Supp(R/R)
T
T
and MS (U /U) = S (R /R) = Supp(R/R). To conclude, MSupp(S/R) =
e
MSupp(R/R)
∪ MSupp(R/R).
Proposition 4.20. Let R ⊂ S be an FCP extension and M ∈ MSupp(S/R),
g
e
e
e
then R
M = (R)M if and only if M 6∈ MSupp(S/R) ∩ MSupp(R/R).
g
e
Proof. In fact, we are going to show that R
M 6= (R)M if and only if
e ∩ MSupp(R/R).
e
M ∈ MSupp(S/R)
e ∩ MSupp(R/R).
e
g
Let M ∈ MSupp(S/R)
Then, R
M 6= RM , SM and
g
e
g
then RM ⊂ R
⊂
S
.
Since
R
⊂
R
is
Prüfer,
so is RM ⊂ R
M
M
M
e M ⊆ R
g
g
by Proposition 1.2, giving (R)
and
R
=
6
R
.
Therefore,
M
M
M
g
g
e
R
=
S
[39,
Proposition
3.3]
since
R
is
local,
and
then
R
M
M
M
M 6= (R)M .
g
e
g
g
Conversely, if R
M 6= (R)M , then, RM 6= RM , so that RM = SM , as
we have just seen and then RM ⊂ SM is integrally closed. It follows
that RM = RM = RM , so that M 6∈ MSupp(R/R). Hence, M ∈
e
eM 6= SM ⇒ M ∈
MSupp(R/R)
by Proposition 4.19(3). Moreover, R
e To conclude, M ∈ MSupp(S/R)
e ∩ MSupp(R/R).
e
MSupp(S/R).
g
e
If R ⊆ S is any ring extension, with dim(R) = 0, then R
M = (R)M
for any M ∈ Max(R). Indeed by Scholium A (2), the flat epimorphism
e is bijective as well as RM → (R)
e M . This conclusion is still
R → R
valid in another context.
Corollary 4.21. Let R ⊂ S be an FCP extension. Assume that one
of the following conditions is satisfied:
e ∩ MSupp(R/R)
e
(1) MSupp(S/R)
= ∅.
e
(2) S = RR, or equivalently, R ⊆ S is almost-Prüfer.
g
e
Then, R
M = (R)M for any M ∈ Max(R).
Proof. (1) is Proposition 4.20. (2) is Proposition 4.12.
Proposition 4.22. Let R ⊂ S be an almost-Prüfer FCP extension.
e in T R.
e
Then, any T ∈ [R, S] is the integral closure of T ∩ R
e and V := T R.
e Since R ⊂ S is almost-Prüfer,
Proof. Set U := T ∩ R
e
e
e is also the Prüfer hull of
U ⊆ R is Prüfer and R ⊆ V is integral and R
U ⊆ V . Because R ⊂ S is almost-Prüfer, for each M ∈ MSuppR (S/R),
RM ⊆ SM is either integral, or Prüfer by Proposition 4.16, and so is
g
e
UM ⊆ VM . But R
M = (R)M by Corollary 4.21 is also the Prüfer hull
′
′
of UM ⊆ VM . Let T be the integral closure of U in V . Then, TM
is
the integral closure of UM in VM .
′
e M,
Assume that UM ⊆ VM is integral. Then VM = TM
and UM = (R)
e M = TM , giving TM = T ′ .
so that VM = TM (R)
M
′
e M,
Assume that UM ⊆ VM is Prüfer. Then UM = TM
and VM = (R)
′
e M = TM , giving TM = T .
so that UM = TM ∩ (R)
M
′
To conclude, we get that TM = TM
for each M ∈ MSuppR (S/R).
′
Since RM = SM , with TM = TM
for each M ∈ Max(R)\MSuppR (S/R),
′
we get T = T , whence T is the integral closure of U ⊆ V .
We build an example of an FCP extension R ⊂ S where we have
g
e M for some M ∈ Max(R). In particular, R ⊂ S is not
RM 6= (R)
almost-Prüfer.
Example 4.23. Let R be an integral domain with quotient field S
and Spec(R) := {M1 , M2 , P, 0}, where M1 6= M2 are two maximal
ideals and P a prime ideal satisfying P ⊂ M1 ∩ M2 . Assume that
there are R1 , R2 and R3 such that R ⊂ R1 is Prüfer minimal, with
C (R, R1 ) = M1 , R ⊂ R2 is integral minimal, with C (R, R2 ) = M2
and R2 ⊂ R3 is Prüfer minimal, with C (R2 , R3 ) = M3 ∈ Max(R2 )
such that M3 ∩ R = M2 and M2 R3 = R3 . This last condition is
satisfied when R ⊂ R2 is either ramified or inert. Indeed, in both
cases, M3 R3 = R3 ; moreover, in the ramified case, we have M32 ⊆ M2
and in the inert case, M3 = M2 [36, Theorem 3.3]. We apply [13,
Proposition 7.10] and [10, Lemma 2.4] several times. Set R2′ := R1 R2 .
Then, R1 ⊂ R2′ is integral minimal, with C (R1 , R2′ ) =: M2′ = M2 R1
and R2 ⊂ R2′ is Prüfer minimal, with C (R2 , R2′ ) =: M1′ = M1 R2 ∈
Max(R2 ). Moreover, M1′ 6= M3 , Spec(R1 ) = {M2′ , P1 , 0}, where P1
is the only prime ideal of R1 lying over P . But, P = (R : R1 ) by
[17, Proposition 3.3], so that P = P1 . Set R3′ := R3 R2′ . Then, R2′ ⊂
R3′ is Prüfer minimal, with C (R2′ , R3′ ) =: M3′ = M3 R2′ ∈ Max(R2′ )
and R3 ⊂ R3′ is Prüfer minimal, with C (R3 , R3′ ) = M1′′ = M1 R3 ∈
Max(R3 ). It follows that we have Spec(R3′ ) = {P ′ , 0} where P ′ is the
only prime ideal of R3′ lying over P . To end, assume that R3′ ⊂ S is
Prüfer minimal, with C (R3′ , S) = P ′. Hence, R2 is the integral closure
of R in S. In particular, R ⊂ S has FCP [10, Theorems 6.3 and
3.13] and is quasi-Prüfer. Since R ⊂ R1 is integrally closed, we have
e Assume that R1 6= R.
e Then, there exists T ∈ [R1 , S] such
R1 ⊆ R.
that R1 ⊂ T is Prüfer minimal and C (R1 , T ) = M2′ , a contradiction
by Proposition 4.16 since M2′ = C (R1 , R2′ ), with R1 ⊂ R2′ integral
e It follows that M1 ∈ MSupp(R/R).
e
minimal. Then, R1 = R.
But,
′
e
e
P = C (R3 , S) ∩ R ∈ Supp(S/R) and P ⊂ M1 give M1 ∈ MSupp(S/R),
g
e
so that R
M1 6= (R)M1 by Proposition 4.20 giving that R ⊂ S is not
almost-Prüfer.
We now intend to refine Theorem 4.6, following the scheme used in
[4, Proposition 4] for extensions of integral domains.
Proposition 4.24. Let R ⊆ S and U, T ∈ [R, S] be such that R ⊆ U is
integral and R ⊆ T is Prüfer. Then U ⊆ UT is Prüfer in the following
cases and R ⊆ UT is almost-Prüfer.
e
(1) Supp(R/R)∩Supp(R/R)
= ∅ (for example, if R ⊆ S has FCP).
(2) R ⊆ U preserves integral closure.
Proof. (1) We have ∅ = MSupp(U/R)∩MSupp(T /R), since U ⊆ R and
e Let M ∈ MSupp((UT )/R). For M ∈ MSupp(U/R), we have
T ⊆ R.
RM = TM and (UT )M = UM . If M ∈
/ MSupp(U/R), then UM = RM
and (UT )M = TM , so that UM ⊆ (UT )M identifies to RM ⊆ TM .
Let N ∈ Max(U) and set M := N ∩ R ∈ Max(R) since R ⊆ U is
integral. If M 6∈ Supp(R/R), then RM = RM = UM and N is the
only maximal ideal of U lying over M. It follows that UM = UN and
(UT )M = (UT )N by [10, Lemma 2.4]. Then, UN ⊆ (UT )N identifies
e
to RM ⊆ TM which is Prüfer. If M 6∈ Supp(R/R),
then RM = TM
gives UM = (UT )M , so that UN = (UT )N by localizing the precedent
equality and UN ⊆ (UT )N is still Prüfer. Therefore, U ⊆ UT is locally
Prüfer, whence Prüfer by Proposition 1.1.
(2) The usual reasoning shows that U ⊗R T ∼
= UT , so that U ⊆ UT
UT
UT
is integrally closed. Since U is contained in R , we get that U = R .
Now observe that R ⊆ UT is almost-Prüfer, whence quasi-Prüfer. It
follows that U ⊆ UT is Prüfer.
Next propositions generalize Ayache’s results of [4, Proposition 11].
Proposition 4.25. Let R ⊆ S be a quasi-Prüfer extension, T, T ′ ∈
[R, S] and U := T ∩ T ′ . The following statements hold:
Te = (T^
∩ R) for each T ∈ [R, S].
′
e
e
T ∩ T ⊆ T^
∩ T ′.
Let Supp(T /T ) ∩ Supp(Te/T ) = ∅ (this assumption holds if R ⊆
S has FCP). Then, T ⊆ T ′ ⇒ Te ⊆ Te′ .
e
(4) If Supp(U /U) ∩ Supp(U/U)
= ∅, then Te ∩ Te′ = T^
∩ T ′.
(1)
(2)
(3)
Proof. (1) We observe that R ⊆ T is quasi-Prüfer by Corollary 3.3.
Since T ∩ R is the integral closure of R in T , we get that T ∩ R ⊆ T is
∩ R.
Prüfer. It follows that T ∩ R ⊆ Te is Prüfer. We thus have Te ⊆ T^
e
To prove the reverse inclusion, we set V := T ∩ R and W := V ∩ T .
We have W ∩ R = Ve ∩ R = V , because V ⊆ Ve ∩ R is integral and
Prüfer since we have a tower V ⊆ Ve ∩ R ⊆ Ve . Therefore, V ⊆ W
is Prüfer because W ∈ [V, Ve ]. Moreover, T ⊆ Te ⊆ Ve , since V ⊆ Te
is Prüfer. Then, T ⊆ W is integral because W ∈ [T, T ], and we have
V ⊆ T ⊆ W . This entails that T = W = Ve ∩ T , so that T ⊆ Ve is
Prüfer. It follows that Ve ⊆ Te since T ∈ [V, Ve ].
(2) A quasi-Prüfer extension is Prüfer if and only if it is integrally
closed. We observe that T ∩ T ′ ⊆ Te ∩ Te′ is integrally closed, whence
Prüfer. It follows that Te ∩ Te′ ⊆ T^
∩ T ′.
(3) Set U = T ∩ R and U ′ = T ′ ∩ R, so that U, U ′ ∈ [R, R] with
U ⊆ U ′ . In view of (1), we thus can suppose that T, T ′ ∈ [R, R]. It
follows that T ⊆ T ′ is integral and T ⊆ Te is Prüfer. We deduce from
Proposition 4.24(1) that T ′ ⊆ T ′ Te is Prüfer, so that TeT ′ ⊆ Te′ , because
Supp(T /T ) ∩ Supp(Te/T ) = ∅ and T = R. Therefore, we have Te ⊆ Te′ .
e /U) = ∅. Then, T ∩T ′ ⊂ T, T ′
(4) Assume that Supp(U /U)∩Supp(U
gives T^
∩ T ′ ⊆ Te ∩ Te′ in view of (3), so that T^
∩ T ′ = Te ∩ Te′ by (2).
Proposition 4.26. Let R ⊆ S be a quasi-Prüfer extension and T ⊆ T ′
a subextension of R ⊆ S. Set U := T ∩ R, U ′ := T ′ ∩ R, V := T R and
V ′ := T ′ R. The following statements hold:
(1) T ⊆ T ′ is integral if and only if V = V ′ .
(2) T ⊆ T ′ is Prüfer if and only if U = U ′ .
(3) Assume that U ⊂ U ′ is integral minimal and V = V ′ . Then,
T ⊂ T ′ is integral minimal, of the same type as U ⊂ U ′ .
(4) Assume that V ⊂ V ′ is Prüfer minimal and U = U ′ . Then,
T ⊂ T ′ is Prüfer minimal.
(5) Assume that T ⊂ T ′ is minimal and set P := C(T, T ′ ).
(a) If T ⊂ T ′ is integral, then U ⊂ U ′ is integral minimal if and
only if P ∩ U ∈ Max(U).
(b) If T ⊂ T ′ is Prüfer, then V ⊂ V ′ is Prüfer minimal if and
only if there is exactly one prime ideal in V lying over P .
Proof. In [R, S] we have the integral extensions U ⊆ U ′ , T ⊆ V, T ′ ⊆
V ′ and the Prüfer extensions V ⊆ V ′ , U ⊆ T, U ′ ⊆ T ′ . Moreover, R
is also the integral closure of U ⊆ V ′ .
(1) is gotten by considering the extension T ⊆ V ′ , which is both
T ⊆ V ⊆ V ′ and T ⊆ T ′ ⊆ V ′ .
(2) is gotten by considering the extension U ⊆ T ′ , which is both
U ⊆ T ⊆ T ′ and U ⊆ U ′ ⊆ T ′ .
(3) Assume that U ⊂ U ′ is integral minimal and V = V ′ . Then,
T ⊂ T ′ is integral by (1) and T 6= T ′ because of (2). Set M := (U :
U ′ ) ∈ SuppU (U ′ /U). For any M ′ ∈ Max(U) such that M ′ 6= M, we
′
′
′
have UM ′ = UM
′ , so that TM ′ = T M ′ because UM ′ ⊆ TM ′ is Prüfer.
′
′
′
But, U ⊆ T is almost-Prüfer, giving T = T U . By Theorem 4.6,
(T : T ′ ) = (U : U ′ )T = MT 6= T because T 6= T ′ . We get that U ⊆ T
Prüfer implies that M 6∈ SuppU (T /U) and UM = TM . It follows that
T ′ M = TM U ′ M = U ′ M . Therefore, TM ⊆ T ′ M identifies to UM ⊆ U ′ M ,
which is minimal of the same type as U ⊂ U ′ by [13, Proposition 4.6].
Then, T ⊂ T ′ is integral minimal, of the same type as U ⊂ U ′ .
(4) Assume that V ⊂ V ′ is Prüfer minimal and U = U ′ . Then,
T ⊂ T ′ is Prüfer by (2) and T 6= T ′ because of (1). Set Q := C(V, V ′ )
and P := Q ∩ T ∈ Max(T ) since Q ∈ Max(V ). For any P ′ ∈ Max(T )
such that P ′ 6= P , and Q′ ∈ Max(V ) lying above P ′, we have VQ′ = VQ′ ′ ,
so that VP ′ = V ′ P ′ . It follows that T ′ P ′ ⊆ V ′ P ′ is integral, so that
TP ′ = T ′ P ′ and P ′ 6∈ SuppT (T ′/T ). We get that T ⊂ T ′ is Prüfer
minimal in view of [10, Proposition 6.12].
(5) Assume that T ⊂ T ′ is a minimal extension and set P := C(T, T ′ ).
(a) Assume that T ⊂ T ′ is integral. Then, V = V ′ and U 6= U ′
by (1) and (2). We can use Proposition 4.5 getting that P = (U :
U ′ )T ∈ Max(T ) and Q := (U : U ′ ) = P ∩ U ∈ Spec(U). It follows that
Q 6∈ SuppU (T /U), so that UQ = TQ and UQ′ = TQ′ . Then, UQ ⊂ UQ′ is
integral minimal, with Q ∈ SuppU (U ′ /U).
If Q 6∈ Max(U), then U ⊂ U ′ is not minimal by the properties of the
crucial maximal ideal.
Assume that Q ∈ Max(U) and let M ∈ Max(U), with M 6= Q.
′
Then, UM = UM
because M + Q = U, so that U ⊂ U ′ is a minimal
extension and (a) is gotten.
(b) Assume that T ⊂ T ′ is Prüfer. Then, V 6= V ′ and U = U ′ by (1)
and (2). Moreover, P T ′ = T ′ gives P V ′ = V ′ . Let Q ∈ Max(V ) lying
over P . Then, QV ′ = V ′ gives that Q ∈ SuppV (V ′ /V ). Moreover, we
have V ′ = V T ′ . Let P ′ ∈ Max(T ), P ′ 6= P . Then, TP ′ = TP′ ′ gives
VP ′ = VP′ ′ . It follows that SuppT (V ′ /V ) = {P } and SuppV (V ′ /V ) =
{Q ∈ Max(V ) | Q ∩ T = P }. But, by [10, Proposition 6.12], V ⊂ V ′ is
Prüfer minimal if and only if |SuppV (V ′ /V )| = 1, and then if and only
if there is exactly one prime ideal in V lying over P .
Lemma 4.27. Let R ⊆ S be an FCP almost-Prüfer extension and
U ∈ [R, R], V ∈ [R, S]. Then U ⊆ V has FCP and is almost-Prüfer.
Proof. Obviously, U ⊆ V has FCP and R is the integral closure of U
in V . Proposition 4.16 entails that SuppR (R/R) ∩ SuppR (S/R) = ∅.
We claim that SuppU (R/U) ∩ SuppU (V /R) = ∅. Deny and let Q ∈
SuppU (R/U) ∩ SuppU (V /R). Then, RQ 6= UQ , VQ . If P := Q ∩ R. we
get that RP 6= UP , VP , giving RP 6= RP , SP , a contradiction. Another
use of Proposition 4.16 shows that U ⊆ V is almost-Prüfer.
Proposition 4.28. Let R ⊆ S be an FCP almost-Prüfer extension and
T ⊆ T ′ a subextension of R ⊆ S. Set U := T ∩ R and V ′ := T ′ R. Let
W be the Prüfer hull of U ⊆ V ′ . Then, W is also the Prüfer hull of
T ⊆ T ′ and T ⊆ T ′ is an FCP almost-Prüfer extension.
Proof. By Lemma 4.27, we get that U ⊆ V ′ is an FCP almost-Prüfer
extension. Let Te be the Prüfer hull of T ⊆ T ′ . Since U ⊆ T and T ⊆ Te
are Prüfer, so is U ⊆ Te and Te ⊆ V ′ gives that Te ⊆ W . Then, T ⊆ W
is Prüfer as a subextension of U ⊆ W .
Moreover, in view of Proposition 4.18, W is the least U-subalgebra
of V ′ over which V ′ is integral. Since T ′ ⊆ V ′ is integral, we get that
W ⊆ T ′ , so that W ∈ [T, T ′ ], with W ⊆ T ′ integral as a subextension
of W ⊆ V ′ . It follows that W is also the Prüfer hull of T ⊆ T ′ and
T ⊆ T ′ is an FCP almost-Prüfer extension.
5. The case of Nagata extensions
In this section we transfer the quasi-Prüfer (and almost-Prüfer) properties to Nagata extensions.
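Recall (a standard definition, restated here only for convenience) that the Nagata ring of R is the localization
  % standard definition of the Nagata ring, recalled as background
  \[ R(X) \;=\; R[X]_{\Sigma}, \qquad \Sigma \;=\; \{\, f \in R[X] \mid c(f) = R \,\}, \]
where c(f) denotes the content ideal of f; a ring extension R ⊆ S then induces R(X) ⊆ S(X).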
Proposition 5.1. Let R ⊆ S be a Prüfer (and FCP) extension, then
R(X) ⊆ S(X) is a Prüfer (and FCP) extension.
Proof. We can suppose that (R, M) is local, in order to use Proposition 1.2(3). Then it is enough to know the following facts: V (X) is
a valuation domain if so is V ; R[X]P [X] ∼
= RP (X) where
= R(X)P (X) ∼
∼
P (X) = P R(X) and R(X)/P (X) = (R/P )(X) for P ∈ Spec(R). If in
addition R ⊆ S is FCP, it is enough to use [11, Theorem 3.9]: R ⊂ S
has FCP if and only if R(X) ⊂ S(X) has FCP.
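For instance (our illustration): if V is a valuation domain with quotient field K, then V ⊆ K is Prüfer and Proposition 5.1 yields that the induced Nagata extension
  % hypothetical illustration of Proposition 5.1
  \[ V(X) \;\subseteq\; K(X) \]
is again Prüfer; here K(X) is the usual field of rational functions, since every nonzero polynomial over a field has unit content.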
Proposition 5.2. If R ⊆ S is quasi-Prüfer, then so is R(X) ⊆ S(X);
moreover, the integral closure of R(X) in S(X) is R̄(X) ≅ R̄ ⊗R R(X)
and S(X) ≅ S ⊗R R(X).
Proof. It is enough to use Proposition 5.1, because the integral closure of R(X) in S(X) is R̄(X). The
third assertion results from [34, Proposition 4 and Proposition 7].
Proposition 5.3. If R ⊆ S is almost-Prüfer, then so is R(X) ⊆ S(X).
It follows that the Prüfer hull of R(X) in S(X) is R̃(X) for an almost-Prüfer extension R ⊆ S.
Proof. If R ⊆ S is almost-Prüfer, then R ⊆ R̃ is Prüfer and R̃ ⊆ S is
integral; hence R(X) ⊆ R̃(X) is Prüfer and R̃(X) ⊆ S(X) is integral,
whence R(X) ⊆ S(X) is almost-Prüfer with Prüfer hull R̃(X).
Lemma 5.4. Let R ⊂ S be an FCP ring extension such that R̃ = R.
Then the Prüfer hull of R(X) in S(X) is R(X).
^ there is some T ′ ∈ [R(X), R(X)]
^ such that
Proof. If R(X) 6= R(X),
′
′
R(X) ⊂ T is Prüfer minimal. Set C (R(X), T ) ∈ MSupp(S(X)/R(X))
=: M ′ . There is M ∈ MSupp(S/R) such that M ′ = MR(X) [11,
Lemma 3.3]. But, M ′ 6∈ MSupp(R(X)/R(X)) = MSupp(R(X)/R(X))
by Proposition 4.19(2), giving that M 6∈ MSupp(R/R) = S (R/R).
Then [39, Proposition 1.7(3)] entails that M ∈ S (S/R). By [39,
Proposition 1.7(4)], there are some T1 , T2 ∈ [R, S] with T1 ⊂ T2 Prüfer
minimal (an FCP extension is quasi-Prüfer), with M = C (T1 , T2 ) ∩ R.
We can choose for T1 ⊂ T2 the first minimal extension verifying the preceding property. Therefore, M 6∈ S (T1 /R), so that M 6∈ S (T1 /R) =
Supp(T1 /R). By [39, Lemma 1.10], we get that there exists T ∈ [R, T2 ]
such that R ⊂ T is Prüfer minimal, a contradiction.
^
e
Proposition 5.5. If R ⊂ S is an FCP extension, then, R(X)
= R(X).
e is Prüfer, R(X) ⊆ R(X)
e
Proof. Because R ⊆ R
is Prüfer by Corol^ Assume that R(X)
^ and set
e
e
lary 5.1. Then, R(X)
⊆ R(X).
6= R(X)
^
e so that T = Te, giving T
e
T := R,
(X) = T (X) = R(X)
by Lemma 5.4.
^
^ is a Prüfer extension, contradicting the definition
Hence T
(X) ⊂ R(X)
^
^
e
of T
(X). So, R(X)
= R(X).
Proposition 5.6. Let R ⊆ S be an almost-Prüfer FCP extension, then
\ = R(X).
^
b
R(X)
= R(X)
\ = R(X)
^ = R(X)
e
b
Proof. We have a tower R(X) ⊆ R(X)
= R(X),
where the first and the third equalities come from Theorem 4.4 and
the second from Proposition 5.5.
We end this section with a special result.
Proposition 5.7. Let R ⊆ S be an extension such that R(X) ⊆ S(X)
\
b
has FIP, then R(X)
= R(X).
Proof. The map [R, S] → [R(X), S(X)] defined by T 7→ T (X) =
\ = T (X) for
R(X) ⊗R T is bijective [12, Theorem 32], whence R(X)
\ is a flat epimorphism.
b
some T ∈ [R, S]. Moreover, R(X)
→ R(X)
b = T and the result follows.
Since R → R(X) is faithfully flat, R
6. Fibers of quasi-Prüfer extensions
We intend to complete some results of Ayache-Dobbs [5]. We begin
by recalling some features about quasi-finite ring morphisms. A ring
morphism R → S is called quasi-finite by [40] if it is of finite type
and κ(P ) → κ(P ) ⊗R S is finite (as a κ(P )-vector space), for each
P ∈ Spec(R) [40, Proposition 3, p.40].
Proposition 6.1. A ring morphism of finite type is incomparable if
and only if it is quasi-finite and, if and only if its fibers are finite.
Proof. Use [41, Corollary 1.8] and the above definition.
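As an elementary illustration (ours): Z ⊆ Z[i] is quasi-finite, since it is of finite type and every fiber is a two-dimensional algebra over the corresponding residue field:
  % hypothetical computation of the fibers of Z ⊆ Z[i]
  \[ \kappa(p)\otimes_{\mathbb{Z}}\mathbb{Z}[i] \;\cong\; \mathbb{F}_{p}[X]/(X^{2}+1)
     \ \ (p \ \text{a prime number}), \qquad
     \kappa(0)\otimes_{\mathbb{Z}}\mathbb{Z}[i] \;\cong\; \mathbb{Q}(i). \]
In particular all fibers are finite, in accordance with Proposition 6.1.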
Theorem 6.2. An extension R ⊆ S is quasi-Prüfer if and only if
R ⊆ T is quasi-finite (respectively, has finite fibers) for each T ∈ [R, S]
such that T is of finite type over R, if and only if R ⊆ T has integral
fiber morphisms for each T ∈ [R, S].
Proof. It is clear that R ⊆ S is an INC-pair implies the condition
because of Proposition 6.1. To prove the converse, let T ∈ [R, S] and
write T as the union of its finite type R-subalgebras Tα . Now let
Q ⊆ Q′ be prime ideals of T , lying over a prime ideal P of R and
set Qα := Q ∩ Tα and Q′α := Q′ ∩ Tα . If R ⊆ Tα is quasi-finite, then
Qα = Q′α , so that Q = Q′ and then R ⊆ T is incomparable. The last
statement is Proposition 3.8.
Corollary 6.3. An integrally closed extension is Prüfer if and only if
each of its subextensions R ⊆ T of finite type has finite fibers.
Proof. It is enough to observe that the fibers of a (flat) epimorphism
have cardinality at most 1, because an epimorphism is spectrally injective.
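As an illustration (our example): the integrally closed extension Z ⊆ Q is Prüfer, and indeed each of its finite type subextensions is a localization Z ⊆ Z[1/n], whose spectral map is injective, so its fibers have at most one point:
  % hypothetical illustration of Corollary 6.3
  \[ \mathbb{Z} \;\subseteq\; \mathbb{Z}[1/n] \;\subseteq\; \mathbb{Q}, \qquad
     \operatorname{Spec}(\mathbb{Z}[1/n]) \hookrightarrow \operatorname{Spec}(\mathbb{Z}). \]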
A ring extension R ⊆ S is called strongly affine if each of its subextensions R ⊆ T is of finite type. The above considerations show that
in this case R ⊆ S is quasi-Prüfer if and only if each of its subextensions R ⊆ T has finite fibers. For example, an FCP extension is
strongly affine and quasi-Prüfer. We also are interested in extensions
R ⊆ S that are not necessarily strongly affine and such that each of its
subextensions R ⊆ T have finite fibers.
The next lemma will be useful; its proof is obvious.
Lemma 6.4. Let R ⊆ S be an extension and T ∈ [R, S]
(1) If T ⊆ S is spectrally injective and R ⊆ T has finite fibers, then
R ⊆ S has finite fibers.
(2) If R ⊆ T is spectrally injective, then T ⊆ S has finite fibers if
and only if R ⊆ S has finite fibers.
Remark 6.5. Let R ⊆ S be an almost-Prüfer extension, such that the
e ⊆ S has finite fibers and let P ∈ Spec(R).
integral extension T := R
The study of the finiteness of FibR,S (P ) can be reduced as follows. As
R ⊆ S is an epimorphism, because it is Prüfer, it is spectrally injective
(see Scholium A). The hypotheses of Proposition 4.5 hold. We examine
three cases. In case (R : R) 6⊆ P , it is well known that RP = (R)P so
that |FibR,S (P )| = 1, because R → S is spectrally injective. Suppose
now that (R : R) = P . From (R : R) = (T : S)∩R, we deduce that P is
lain over by some Q ∈ Spec(T ) and then FibR,R (P ) ∼
= FibT,S (Q). The
conclusion follows as above. Thus the remaining case is (R : R) ⊂ P
and we can assume that P T = T for if not FibR,R (P ) ∼
= FibT,S (Q) for
some Q ∈ Spec(T ) by Scholium A (1).
Proposition 6.6. Let R ⊆ S be an almost-Prüfer extension. If R̃ ⊆ S
has finite fiber morphisms and ((R̃)P : SP ) is a maximal ideal of (R̃)P for
each P ∈ SuppR (S/R̃), then R ⊆ R̄ and R ⊆ S have finite fibers.
Proof. The Prüfer closure commutes with the localization at prime
e Let P be a prime ideal
ideals by Proposition 4.12. We set T := R.
of R and ϕ : R → RP the canonical morphism. We clearly have
FibR,. (P ) = a ϕ(FibRP ,.P (P RP )). Therefore, we can localize the data at
P and we can assume that R is local.
In case (T : S) = T , we get a factorization R → R → T . Since
R → T is Prüfer so is R → R and it follows that R = R because a
Prüfer extension is integrally closed.
From Proposition 1.2 applied to R ⊆ T , we get that there is some
P ∈ Spec(R) such that T = RP , R/P is a valuation ring with quotient
field T /P and P = PT . It follows that (T : S) = PT = P ⊆ R, and
hence (T : S) = (T : S) ∩ R = (R : R). We have therefore a pushout
diagram by Theorem 4.6:
R′ := R/P −−−→ R̄/P =: R̄′
      ↓                  ↓
T ′ := T /P −−−→ S/P =: S ′
where R/P is a valuation domain, T /P is its quotient field and R/P →
S/P is Prüfer by [26, Proposition 5.8, p. 52].
Because R′ → S ′ is injective and a flat epimorphism, there is a bijective map Min(S ′ ) → Min(R′ ). But T ′ → S ′ is the fiber at P of T → S
and is therefore finite. Therefore, Min(S ′ ) is a finite set {N1 , . . . , Nn }
of maximal ideals lying over the minimal prime ideals {M1 , . . . , Mn } of
R′ lying over 0 in R′ . We infer from Lemma 3.7 that R′ /Mi → S ′ /Ni
is Prüfer, whence integrally closed. Therefore, R′ /Mi is an integral domain and the integral closure of R′ in S ′ /Ni . Any maximal ideal M of
R′ contains some Mi . To conclude it is enough to use a result of Gilmer
[19, Corollary 20.3] because the number of maximal ideals in R′ /Mi is
less than the separable degree of the extension of fields T ′ ⊆ S ′ /Ni .
e : S) is a maximal ideal of R.
e We
Remark 6.7. (1) Suppose that (R
e : S)P ⊆ (R
eP : SP ) and the hypotheses on (R
e : S) of
clearly have (R
the above proposition hold.
e ⊆ S is a tower of finitely many integral minimal
(2) In case R
e =
extensions Ri−1 ⊆ Ri with Mi = (Ri−1 : Ri ), then SuppRe (S/R)
e where Ni = Mi ∩ R. If the ideals Ni are dif{N1 , . . . , Nn } ⊆ Max(R)
e ⊆ S is integral minimal and the
ferent, each localization at Ni of R
above result may apply. This generalizes the Ayache-Dobbs result [5,
e ⊆ S is supposed to be integral minimal.
Lemma 3.6], where R
Proposition 6.8. Let R ⊆ S be a quasi-Prüfer ring extension.
(1) R ⊆ S has finite fibers if and only if R ⊆ R has finite fibers.
(2) R ⊆ R has finite fibers if and only if each extension R ⊆ T ,
where T ∈ [R, S] has finite fibers.
Proof. (1) Let P ∈ Spec(R) and the morphisms κ(P ) → κ(P ) ⊗R R →
κ(P )⊗R S. The first (second) morphism is integral (a flat epimorphism)
because deduced by base change from the integral morphism R → R
(the flat epimorphism R → S). Therefore, the ring κ(P ) ⊗R R is zero
dimensional, so that the second morphism is surjective by Scholium
A (2). Set A := κ(P ) ⊗R R and B := κ(P ) ⊗R S, we thus have a
module finite flat ring morphism A → B. Hence, AQ → BQ is free for
each Q ∈ Spec(A) [16, Proposition 9] and BQ 6= 0 because it contains
κ(P ) 6= 0. Therefore, AQ → BQ is injective and it follows that A ∼
= B.
(2) Suppose that R ⊆ R has finite fibers and let T ∈ [R, S], then
R ⊆ RT is a flat epimorphism by Proposition 4.5(1) and so is κ(P ) ⊗R
R → κ(P ) ⊗R RT . Since Spec(κ(P ) ⊗R RT ) → Spec(κ(P ) ⊗R R)
is injective, R ⊆ RT has finite fibers. Now R ⊆ T has finite fibers
because T ⊆ RT is integral and is therefore spectrally surjective.
Remark 6.9. Actually, the statement (1) is valid if we only suppose
that R ⊆ S is a flat epimorphism.
Next result contains [5, Lemma 3.6], gotten after a long proof.
Corollary 6.10. Let R ⊆ S be an almost-Prüfer extension. Then
R ⊆ S has finite fibers if and only if R ⊆ R has finite fibers, and if
e ⊆ S has finite fibers.
and only if R
Proof. By Proposition 6.8(1) the first equivalence is clear. The second
is a consequence of Lemma 6.4(2).
The following result is then clear.
Theorem 6.11. Let R ⊆ S be a quasi-Prüfer extension with finite
fibers, then R ⊆ T has finite fibers for each T ∈ [R, S].
Corollary 6.12. If R ⊆ S is quasi-finite and quasi-Prüfer, then R ⊆ T
has finite fibers for each T ∈ [R, S] and R̃ ⊆ S is module finite.
Proof. By the Zariski Main Theorem, there is a factorization R ⊆ F ⊆
S where R ⊆ F is module finite and F ⊆ S is a flat epimorphism
[40, Corollaire 2, p.42]. To conclude, we use Scholium A in the rest of
e ⊗R F → S is injective because F → R
e ⊗R F
the proof. The map R
is a flat epimorphism and is surjective, since it is integral and a flat
e ⊗R F → S is a flat epimorphism .
epimorphism because R
Corollary 6.13. An FMC extension R ⊆ S is such that R ⊆ T has
finite fibers for each T ∈ [R, S].
Proof. Such an extension is quasi-finite and quasi-Prüfer. Then use
Corollary 6.12.
[5, Example 4.7] exhibits some FMC extension R ⊆ S, such that
R ⊆ R has not FCP. Actually, [R, R] is an infinite (maximal) chain.
Proposition 6.14. Let R ⊆ S be a quasi-Prüfer extension such that
R ⊆ R has finite fibers and R is semi-local. Then T is semi-local for
each T ∈ [R, S].
Proof. Obviously R is semi-local. From the tower R ⊆ T R ⊆ S we
deduce that R ⊆ T R is Prüfer. It follows that T R is semi-local [5,
Lemma 2.5 (f)]. As T ⊆ T R is integral, we get that T is semi-local.
The following proposition gives a kind of converse.
Proposition 6.15. Let R ⊆ S be an extension with R semi-local. Then
R ⊆ S is quasi-Prüfer if and only if T is semi-local for each T ∈ [R, S].
Proof. If R ⊆ S is quasi-Prüfer, R ⊆ S is Prüfer. Let T ∈ [R, S]
and set T ′ := T R, so that T ⊆ T ′ is integral, and R ⊆ T ′ is Prüfer
(and then a normal pair). It follows from [5, Lemma 2.5 (f)] that T ′ is
semi-local, and so is T .
If T is semi-local for each T ∈ [R, S], so is any T ∈ [R, S]. Then,
(R, S) is a residually algebraic pair [6, Theorem 3.10] (generalized to
arbitrary extensions) and so is RM ⊆ SM for each M ∈ Max(R),
whence is Prüfer [6, Theorem 2.5] (same remark) and Proposition 1.2.
Then, R ⊆ S is Prüfer by Proposition 1.1 and R ⊆ S is quasi-Prüfer.
7. Numerical properties of FCP extensions
Lemma 7.1. Let R ⊂ S be an FCP extension. The map ϕ : [R, S] →
{(T ′, T ′′ ) ∈ [R, R]×[R, S] | SuppT ′ (R/T ′)∩SuppT ′ (T ′′ /R) = ∅}, defined
by ϕ(T ) := (T ∩ R, RT ) for each T ∈ [R, S], is bijective. In particular,
if R ⊂ S has FIP, then |[R, S]| ≤ |[R, R]||[R, S]|.
Proof. Let (T ′ , T ′′) ∈ [R, R]×[R, S]. Then, R is also the integral closure
of T ′ in T ′′ (and in S).
Let T ∈ [R, S]. Set T ′ := T ∩ R and T ′′ := RT . Then (T ′ , T ′′ ) ∈
[R, R] × [R, S]. Assume that T ′ = T ′′ , so that T ′ = T ′′ = R, giving T =
R and SuppT ′ (R/T ′ ) = SuppT ′ (T ′′ /R) = ∅. Assume that T ′ 6= T ′′ . In
view of [39, Proposition 3.6], we get SuppT ′ (R/T ′ )∩SuppT ′ (T ′′ /R) = ∅.
Hence ϕ is well defined.
Now, let T1 , T2 ∈ [R, S] be such that ϕ(T1 ) = ϕ(T2 ) = (T ′ , T ′′).
Assume T ′ 6= T ′′ . Another use of [39, Proposition 3.6] gives that T1 =
T2 . If T ′ = T ′′ , then, T ′ = T ′′ = R, so that T1 = T2 = R. It follows
that ϕ is injective. The same reference gives that ϕ is bijective.
Proposition 7.2. Let R ⊂ S be a FCP extension. We define two
order-isomorphisms ϕ′ and ψ as follows:
~ → [R, R] × [R, R]
~ defined by ϕ′ (T ) := (T ∩ R, T R)
ϕ′ : [R, R]
~ → [R, R]
e × [R,
e R]
~ defined by ψ(T ) := (T ∩ R,
e T R).
e
ψ : [R, R]
Proof. This follows from [39, Lemma 3.7] and Proposition 4.19. (We
~ = R.)
e
recall that R
~ R)
e = Supp(R/R),
Corollary 7.3. If R ⊆ S has FCP, then Supp(R/
e
~
e
~
= Supp(R/R)
and Supp(R/R)
= Supp(R/R)∪Supp(R/R).
Supp(R/R)
e R),
e B := Supp(R/R),
e
e
Proof. Set A := Supp(R/
C := Supp(R/R)
e
with
and D := Supp(R/R). Then, A ∪ B = C ∪ D = Supp(R/R),
A ∩ B = C ∩ D = B ∩ D = ∅ by Proposition 4.19.
Assume that A ∪ B 6= B ∪ D and let P ∈ (A ∪ B) \ (B ∪ D).
e P = (R)P (R)
e P = RP , a contradiction. It follows that
Then, RP 6= (R)
A ∪ B = B ∪ D. Intersecting the two members of this equality with
A and D, we get A = A ∩ D = D. In the same way, intersecting the
equality A ∪ B = C ∪ D = C ∪ A by B and C, we get B = C.
Corollary 7.4. Let R ⊂ S be an FCP extension. We define two orderisomorphisms
~ by ϕ1 (T ) := T R
e → [R, R]
ϕ1 : [R, R]
e R]
~ by ψ1 (T ) := T R.
e
ψ1 : [R, R] → [R,
Proof. We use notation of Proposition 4.19. We begin to remark that
e play symmetric roles.
R and R
e be such that ϕ1 (T ) = ϕ1 (T ′ ). Since T ∩ R =
Let T, T ′ ∈ [R, R]
′
T ∩ R = R by Proposition 4.19, we get ϕ(T ) = ϕ(T ′ ), so that T = T ′
and ϕ1 is injective. A similar argument shows that ψ1 is injective.
e There exists T ∈ [R, R]
e such that ϕ(T ) = (R, U), so
Let U ∈ [R, R].
e
e
that R = T ∩ R and U = T R. Let M ∈ Supp(R/R)
= Supp(R/R)
∪
e
Supp(R/R) by Corollary 7.3. If M ∈ Supp(R/R),
then M 6∈ Supp(R/R)
e M = RM R
eM = R
eM . If M ∈
by Proposition 4.19, giving TM ⊆ R
Supp(R/R), the same reasoning gives TM ⊆ RM , so that RM = TM ∩
eM . Then, TM = R
eM . It follows that T ⊆ R,
e
RM = TM , but RM = R
e
giving T ∈ [R, R] and ϕ1 is surjective, hence bijective. A similar argument shows that ψ1 is surjective, hence bijective.
~
e × [R, R] → [R, R]
Corollary 7.5. If R ⊂ S has FCP, then θ : [R, R]
′
′
defined by θ(T, T ) := T T , is an order-isomorphism. In particular, if
e
~ ≤ |[R, S]|.
R ⊂ S has FIP, then |[R, R]||[R,
R] = |[R, R]k
Proof. Using notation of Proposition 7.2 and Corollary 7.4, we may
remark that ψ◦θ = Id×ψ1 . Since ψ and Id×ψ1 are order-isomorphisms,
so is θ. The FIP case is obvious.
Gathering the previous results, we get the following theorem.
Theorem 7.6. If R ⊂ S has FCP, the next statements are equivalent:
(1) Supp(R/R) ∩ Supp(S/R) = ∅.
(2) The map ϕ : [R, S] → [R, R] × [R, S] defined by ϕ(T ) := (T ∩
R, T R) is an order-isomorphism.
(3) R ⊆ S is almost-Prüfer.
e
(4) Supp(S/R) = Supp(R/R).
e → [R, S] defined by ϕ1 (T ) := T R is an
(5) The map ϕ1 : [R, R]
order-isomorphism.
e S] defined by ψ1 (T ) := T R
e is an
(6) The map ψ1 : [R, R] → [R,
order-isomorphism.
e × [R, R] → [R, S] defined by θ(T, T ′ ) := T T ′
(7) The map θ : [R, R]
is an order-isomorphism.
e = Supp(R/R).
If one of these conditions holds, then Supp(S/R)
If R ⊂ S has FIP, the former conditions are equivalent to each of
the following conditions:
e
R]|.
(8) |[R, S]| = |[R, R]||[R,
(9) k[R, S]| = |[R, R]||[R, S]|.
e = |[R, S]|.
(10) |[R, R]|
e S]|.
(11) |[R, R]| = |[R,
Proof. (1) ⇒ (2) by [39, Lemma 3.7].
(2) ⇒ (1). If the statement (2) holds, there exists T ∈ [R, S] such
that T ∩ R = R and T R = S. Then, [39, Proposition 3.6] gives that
Supp(R/R) ∩ Supp(S/R) = ∅.
(1) ⇒ (3) by [39, Proposition 3.6].
(3) ⇒ (4), (5), (6) and (7): Use Corollary 7.3 to get (4), Corollary 7.4
to get (5) and (6), and Corollary 7.5 to get (7). Moreover, (3) and
e = Supp(R/R).
Corollary 7.3 give Supp(S/R)
(4) ⇒ (1) by Proposition 4.19(2).
e
(5), (6) or (7) ⇒ (3) because, in each case, we have S = RR.
Assume now that R ⊂ S has FIP.
Then, obviously, (7) ⇒ (8), (2) ⇒ (9), (5) ⇒ (10) and (6) ⇒ (11).
e so
e
R]| = |[R, R]|,
(9) ⇒ (3) by Corollary 7.5, which gives |[R, R]||[R,
e and then S = R.
e
that |[R, S]| = |[R, R]|,
(8) ⇒ (1): Using the map ϕ of Lemma 7.1, we get that {(T ′ , T ′′) ∈
[R, R] × [R, S] | SuppT ′ (R/T ′ ) ∩ SuppT ′ (T ′′ /R) = ∅} = [R, R] × [R, S],
so that SuppR (R/R) ∩ SuppR (S/R) = ∅.
(10) ⇒ (3) and (11) ⇒ (3) by Corollary 7.4.
Example 7.7. We give an example where the results of Theorem 7.6
do not hold if R ⊆ S has not FCP. Set R := ZP and S := Q[X]/(X 2 ),
e = Q because R ⊂ R
e is Prüfer (minimal)
where P ∈ Max(Z). Then, R
e ⊂ S is integral minimal. Set M := P RP ∈ Max(R) with (R, M)
and R
a local ring. It follows that M ∈ Supp(R/R) ∩ Supp(S/R) because
R ⊂ S is neither integral, nor Prüfer. Similarly, M ∈ Supp(R/R) ∩
e
Supp(R/R).
Indeed, R ⊂ R has not FCP.
We end the paper by some length computations in the FCP case.
Proposition 7.8. Let R ⊆ S be an FCP extension. The following
statements hold:
e = ℓ[R, R]
~ and ℓ[R, R] = ℓ[R,
e R]
~
(1) ℓ[R, R]
~ = ℓ[R, R]
e + ℓ[R,
e R]
~ = ℓ[R, R] + ℓ[R, R]
~
(2) ℓ[R, R]
~ = |Supp (R/R)|
e = |SuppR (R/R)|.
e
~
(3) ℓ[R, R]
= ℓ[R, R]
R
Proof. To prove (1), use the maps ϕ1 and ψ1 of Corollary 7.4. Then (2)
follows from [11, Theorem 4.11] and (3) from [10, Proposition 6.12].
References
[1] D.F. Anderson and D.E. Dobbs, Pairs of rings with the same prime ideals,
Can. J. Math. XXXII, (1980), 362–384.
[2] D. F. Anderson, D. E. Dobbs and M. Fontana, On treed Nagata rings, J.
Pure Appl. Algebra, 61, (1989), 107–122.
[3] A. Ayache, M. Ben Nasr, O. Echi and N. Jarboui, Universally catenarian
and going-down pairs of rings, Math. Z, 238, (2001), 695–731.
[4] A. Ayache, A constructive study about the set of intermediate rings, Comm.
Algebra, 41 (2013), 4637–4661.
[5] A. Ayache and D. E. Dobbs, Finite maximal chains of commutative rings,
JAA, 14, (2015), 14500751–1450075-27.
[6] A. Ayache and A. Jaballah, Residually algebraic pairs of rings, Math. Z,
225, (1997), 49–65.
[7] M. Ben Nasr and N. Jarboui, New results about normal pairs of rings with
zero-divisors, Ricerche mat. 63 (2014), 149–155.
[8] R.D. Chatham, Going-down pairs of commutative rings, Rendiconti del
Circolo Matematico di Palermo, Serie II, Tomo L, (2001), 509–542.
[9] G. W. Chang and M. Fontana, Uppers to 0 in polynomial rings and Prüfer
-like domains, Comm. Algebra, 37 (2009), 164–192.
[10] D. E. Dobbs, G. Picavet and M. Picavet-L’Hermitte, Characterizing the
ring extensions that satisfy FIP or FCP, J. Algebra, 371 (2012), 391–429.
[11] D. E. Dobbs, G. Picavet and M. Picavet-L’Hermitte, Transfer results for
the FIP and FCP properties of ring extensions, Comm. Algebra, 43 (2015),
1279–1316.
[12] D. E. Dobbs, G. Picavet and M. Picavet-L’Hermitte, When an extension
of Nagata rings has only finitely many intermediate rings, each of those is
a Nagata ring?, Int. J. Math. Math. Sci., 2014 (2014), Article ID315919,
13 pp.
[13] D. E. Dobbs, G. Picavet, M. Picavet-L’Hermitte and J. Shapiro, On intersections and composites of minimal ring extensions, J P J. Algebra, Number
Theory Appl., 26 (2012), 103–158.
[14] D. E. Dobbs, On characterizations of integrality involving the lying-over
and incomparability properties, J. Comm. Algebra, 1 (2009), 227–235.
[15] D.E. Dobbs and J. Shapiro, Pseudo-normal pairs of integral domains, Houston J. Math., 40 (2014), 1–9.
[16] S. Endo, On semi-hereditary rings, J. Math. Soc. Japan, 13 (1961), 109–
119.
[17] D. Ferrand and J.-P. Olivier, Homomorphismes minimaux d’anneaux, J.
Algebra, 16 (1970), 461–471.
[18] M. Fontana, J. A. Huckaba and I. J. Papick, Prüfer domains, Dekker, New
York, 1997.
[19] R. Gilmer, Multiplicative Ideal Theory, Dekker, New York, 1972.
[20] M. Griffin, Prüfer rings with zero divisors, Journal fur die reine and angewande Mathematik, 239, (1969), 55–67.
[21] M. Grandet, Une caratérisation des morphismes minimaux non entiers,
C.R. Acad. Sc. Paris, 271, (1970), Série A 581–583.
[22] A. Grothendieck and J. Dieudonné, Eléments de Géométrie Algébrique,
Springer Verlag, Berlin, (1971).
[23] E. Houston, Uppers to zero in polynomial rings, pp. 243–261, in: Multiplicative Ideal Theory in Commutative Algebra, Springer-Verlag, New York,
2006.
[24] A. Jaballah, Finiteness of the set of intermediary rings in normal pairs,
Saitama Math. J., 17 (1999), 59–61.
[25] N. Jarboui and E. Massaoud, On finite saturated chains of overrings,
Comm. Algebra, 40, (2012), 1563–1569.
[26] M. Knebusch and D. Zhang, Manis Valuations and Prüfer Extensions I,
Springer, Berlin, 2002.
[27] D. Lazard, Autour de la platitude, Bull. Soc. Math. France, 97, (1969),
81–128.
[28] M. Lazarus, Fermeture intégrale et changement de base, Ann. Fac. Sci.
Toulouse, 6, (1984), 103–120.
[29] T. G. Lucas, Some results on Prüfer rings, Pacific J. Math., 124, (1986),
333–343.
[30] K. Morita, Flat modules, Injective modules and quotient rings, Math. Z.,
120 (1971), 25–40.
[31] J.P. Olivier, Anneaux absolument plats universels et épimorphismes à buts
réduits, Séminaire Samuel. Algèbre Commutative, Tome 2 (1967-1968),
exp. no 6, p. 1–12.
[32] J. P. Olivier, Montée des propriétés par morphismes absolument plats, J.
Alg. Pure Appl., Université des Sciences et Techniques du Languedoc, Montpellier, France (1971).
[33] J.P. Olivier, Going up along absolutely flat morphisms, J. Pure Appl. Algebra, 30 (1983), 47–59.
[34] G. Picavet, Propriétés et applications de la notion de contenu, Comm.
Algebra,13, (1985), 2231–2265.
[35] G. Picavet, Universally going-down rings, 1-split rings, and absolute integral closure, Comm. Algebra, 31, (2003), 4655–4685.
[36] G. Picavet and M. Picavet-L’Hermitte, About minimal morphisms, pp.
369–386, in: Multiplicative Ideal Theory in Commutative Algebra, Springer,
New York, 2006.
[37] G. Picavet, Seminormal or t-closed schemes and Rees rings, Algebra Repr.
Theory, 1, (1998), 255–309.
[38] G. Picavet and M. Picavet-L’Hermitte, Some more combinatorics results
on Nagata extensions, Palestine J. Math., 1, (Spec.1), (2016), 49–62.
[39] G. Picavet and M. Picavet-L’Hermitte, Prüfer and Morita hulls of FCP
extensions, Comm. Algebra, 43, (2015), 102-119.
[40] M. Raynaud, Anneaux locaux Henséliens, Lect. Notes in Math., Springer,
Vol. 169, (1970).
[41] H. Uda, Incomparability in ring extensions, Hiroshima Math. J., 9, (1979),
451–463.
[42] S. Visweswaran, Laskerian pairs, J. Pure Appl. Algebra, 59, (1989), 87–110.
Université Blaise Pascal, Laboratoire de Mathématiques, UMR6620
CNRS, 24, avenue des Landais, BP 80026, 63177 Aubière CEDEX, France
E-mail address: [email protected]
E-mail address: picavet.gm(at)wanadoo.fr
Automatic segmenting teeth in X-ray images: Trends, a
novel data set, benchmarking and future perspectives
Gil Silva1 , Luciano Oliveira2
arXiv:1802.03086v1 [] 9 Feb 2018
Ivision Lab, Federal University of Bahia, Brazil
Matheus Pithon3
Southeast State University of Bahia, Brazil
Abstract
This review presents an in-depth study of the literature on segmentation
methods applied in dental imaging. Ten segmentation methods were studied
and categorized according to the type of the segmentation method (region-based,
threshold-based, cluster-based, boundary-based or watershed-based), type of X-ray images used (intra-oral or extra-oral) and characteristics of the dataset used
to evaluate the methods in the state-of-the-art works. We found that the literature has primarily focused on threshold-based segmentation methods (54%).
80% of the reviewed papers have used intra-oral X-ray images in their experiments, demonstrating preference to perform segmentation on images of already
isolated parts of the teeth, rather than using extra-oral X-rays, which show
tooth structure of the mouth and bones of the face. To fill a scientific gap in
the field, a novel data set based on extra-oral X-ray images is proposed here.
A statistical comparison of the results found with the 10 image segmentation
methods over our proposed data set comprised of 1,500 images is also carried
out, providing a more comprehensive source of performance assessment. Discussion on limitations of the methods conceived over the past years as well as
future perspectives on exploiting learning-based segmentation methods to improve performance are also provided.
Keywords: image segmentation, dental X-ray, orthopantomography
1. Introduction
In dentistry, radiographic images are fundamental data sources to aid diagnosis. Radiography is the photographic record of an image produced by the
1 [email protected]
2 [email protected]
3 [email protected]
passage of an X-ray source through an object (Quinn and Sigl, 1980). X-ray
images are used in dental medicine to check the condition of the teeth, gums,
jaws and bone structure of a mouth (Quinn and Sigl, 1980). Without X-rays,
dentists would not be able to detect many dental problems until they become severe. Radiographic examination thus helps the dentist to discover the cause of a problem at an early stage, allowing them to outline the best treatment plan for the patient. Another application of dental X-rays is in the field
of forensic identification, especially of cadavers (Paewinsky et al., 2005).
Forensic dentistry aims to identify individuals based on their dental characteristics. In recent years, the forensic literature has also provided automatic methods for assessing a person's age from degenerative changes in teeth (Willems et al., 2002).
These age-related changes can be assessed by digital radiography (Paewinsky
et al., 2005). With the advancement of artificial intelligence and pattern recognition algorithms, X-ray images have been increasingly used as an input to these
intelligent algorithms. In this context, we highlight here an in-depth study over
some segmentation methods in the literature that are regarded to the recognition of image patterns in dental X-rays.
1.1. Overview of dental image segmentation
In dentistry, X-rays are divided into two categories: (i) Intra-oral radiographic examinations are techniques performed with the film positioned in the
buccal cavity (the X-ray image is obtained inside the patient’s mouth); and (ii)
extra-oral radiographic examinations are the techniques in which the patient is
positioned between the radiographic film and the source of X-rays (the X-ray
image is obtained outside the patient’s mouth) (Association, 1987).
In this paper, some works that use segmentation methods applied to the
following types of X-ray images are analyzed: bitewing and periapical (intraoral), and panoramic (extra-oral). The bitewing X-ray images are used to show
details of the upper and lower teeth in a mouth region, while the periapical
X-ray images are used to monitor the entire tooth (Wang et al., 2016). On the
other hand, panoramic radiography, also known as orthopantomography, is
one of the radiological exams capable of obtaining fundamental information for
the diagnosis of anomalies in dental medicine (Amer and Aqel, 2015), (Wang
et al., 2016). Orthopantomographic examination allows for the visualization of
dental irregularities, such as: teeth included, bone abnormalities, cysts, tumors,
cancers, infections, post-accident fractures, temporomandibular joint disorders
that cause pain in the ear, face, neck and head (Oliveira and Proença, 2011).
X-ray images are pervasively used by dentists to analyze the dental structure
and to define patient’s treatment plan. However, due to the lack of adequate
automated resources to aid the analysis of dental X-ray images, X-ray analysis
relies mostly on the dentist's experience and visual perception (Wang et al.,
2016). Other details that make dental X-rays difficult to analyze are: tooth variations from patient to patient, artifacts used for restorations and prostheses, poor image quality caused by certain conditions (such as noise, low contrast, and homogeneity in regions close to the objects of interest), the space left by a missing tooth, and limitations of the acquisition methods; all these challenges hamper the development of automated computer tools to aid dental diagnosis and prevent fully automatic analysis (Amer and Aqel, 2015).
Image segmentation is the process of partitioning a digital image into multiple regions (pixel set) or objects, in order to make an image representation
simpler, and to facilitate its analysis. The present work is being carried out
to help in finding advances in the state-of-the art of methods for segmenting
dental X-ray images that are able, for example, to isolate teeth from other parts
of the image (jaws, temporomandibular regions, details of nasal, face and gums)
towards facilitating the automatic analysis of X-rays. With that, we are capable
to discuss limitations in the current proposed methods and future perspectives
for breakthroughs in this research field.
1.2. Contributions
This paper provides an in-depth review of the literature in dental X-ray image segmentation. A comparative evaluation of ten methods to segment extraoral dental images over a novel data set is also addressed. The proposed data
set was gathered specially for the present study, and contains 1,500 annotated
panoramic X-ray images 4 . The present study aims to answer the following questions (see Section 3): Which category of segmentation method is most used in the reviewed works? Do the public data sets used to evaluate dental segmentation methods present sufficient variability to assess the progress of the field? It also addresses the following questions (see Section 4): Which segmentation method obtains the best performance in extracting characteristics of radiographic images (panoramic X-rays), so that it is possible to perfectly isolate the teeth? What are the gaps in dental X-rays that can benefit from the application of image segmentation methods? Finally, we discuss recent advances in pattern
recognition methods that could be applied in tooth segmentation (see Section
5).
To answer the list of questions, the present review follows the steps: (i)
analysis of the current state-of-the-art, observing the trends of the segmentation methods in dental X-ray images, (ii) identification of which image segmentation methods are the most used among the reviewed works, (iii) analysis
of the amount and variety of images used in the experiments of the reviewed
works, (iv) identification of which type of dental X-ray image has been most
used among the reviewed works, (v) introduction of a novel annotated data set
with a high variability and a great number of images, and, finally, (vi) a comparative evaluation of dental segmentation methods applied to our data set. These
steps are followed from the classification of the papers found in the literature,
considering: segmentation methods, X-ray image types, size and variety of the
data sets used.
It is noteworthy that the reviewed articles mostly work with small data sets,
ranging from 1 to 100 images on average, and the only work with more than one thousand images is either not publicly available or contains images that vary
4 Our data set will be publicly available upon acceptance of the paper.
only in relation to the number of teeth. To tackle this limitation, the proposed
data set comprises 1,500 annotated images, which allow the classification of the X-rays into 10 categories according to the following general characteristics:
Structural variations in relation to the teeth, number of teeth, existence of
restorations, existence of dental implants, existence of dental appliances, and
existence of dental images with more than 32 teeth. The images represent the
most diverse situations found among patients in dental offices. In this sense,
a comparative evaluation of 10 segmentation methods was performed to verify
which method can more accurately identify each individual tooth, in panoramic
X-ray images. Metrics, such as accuracy, specificity, precision, recall (sensitivity)
and F-score, were used to assess the performance of each segmentation method
analyzed here.
2. Research methodology
This review has followed the methodological steps: (A) select the digital
libraries and articles (Section 2.1), (B) review the selected articles (Section
2.2), (C) define relevant categories to classify the articles and classify the articles into the categories defined (Section 3), (D) evaluate the segmentation methods studied over the proposed data set (Section 4), and (E) discuss the evaluated methods and future directions towards more robust and efficient segmentation methods (Section 5). Steps (B) and (C) were repeated until final results were obtained, and step (D) was repeated until the evaluation of all the segmentation methods studied was finalized.
2.1. Research sources and selection of the articles
Our review is based on the state-of-the-art articles found in the following digital libraries: IEEE Xplore1 , ScienceDirect2 , Google Scholar3 and Scopus4 . The
choice of these four digital libraries relies on the fact that they include articles
presented in all other digital libraries related to either Computer Science or Dentistry. Only articles written in English were considered. The articles were selected in two phases: In phase I, a total of
94 articles were found in these four digital libraries. In Phase II, articles such as
calendars, book chapter, publisher’s notes, subject index, volume content, and
from symposiums were excluded from the present study. Only peer-reviewed
international conferences and journal articles were considered; among
those, studies that corresponded to some of the following cases were considered
as non-relevant and excluded from the analysis: (1) did not answer any of our
questions in this research, (2) duplicated, (3) not peer-reviewed, and (4) did
not apply segmentation methods on at least one of the following types of dental
X-rays: Bitewing, periapical or panoramic. The final number of articles selected
1 http://ieeexplore.ieee.org/Xplore
2 http://www.sciencedirect.com
3 https://scholar.google.com
4 http://www.scopus.com/
was reduced to 41, at the end. The number of articles initially found in each
digital library is summarized in Table 1.
As shown in Table 1, only three of the four libraries surveyed have found
relevant studies. In addition, forty-nine percent (49%) of the articles selected
as relevant were found in the Science Direct digital library. Table 2 shows the
initial statistics obtained in the review stage of the present study, containing
the distribution of articles by digital library and year of publication. The data
presented in Table 2 show the largest number of articles found in the IEEE
Xplore digital library with 34 articles (36%). The results in Table 2 also show
the increasing trend in the number of articles published in recent years. Sixty-six percent (66%) of the articles found were published in the last five years (61
articles).
2.2. Selection of the relevant articles
In the second stage, the goal was to ensure the correct classification of the
articles selected only as relevant. The review of the articles follows a categorization phase (presented in the next section), since it was necessary to re-read
articles to classify them in each of the respective categories.
Table 1: Total number of studies found by digital library.

Source           Results   Not Relevant   Repeated   Incomplete   Relevant Studies
IEEE Xplore         34          16            0           5              13
Science Direct      28           8            0           0              20
Google Scholar      23           7            1           7               8
Scopus               9           0            1           8               0
TOTAL               94          31            2          20              41
Table 2: Distribution of articles by digital library and year of publication.

YEAR    Sum     %     IEEE Xplore   Science Direct   Google Scholar   Scopus
2016     11    12%         3              4                2             2
2015     12    13%         5              6                1             0
2014     12    13%         5              2                3             2
2013     13    14%         8              1                1             3
2012     13    14%         5              4                4             0
2011      5     5%         2              0                3             0
2010      6     6%         1              3                2             0
2009      5     5%         1              1                2             1
2008      5     5%         1              1                2             1
2007      8     9%         1              4                3             0
2006      3     3%         2              1                0             0
2005      0     0%         0              0                0             0
2004      1     1%         0              1                0             0
Total    94   100%        34             28               23             9
3. Taxonomy of the relevant works
Each article selected as relevant was classified among categories defined in
the present study, according to: the segmentation method used, the type of dental X-ray images used, and the size and variety of the data set used. It is noteworthy
that the segmentation methods discussed and benchmarked in this review are
strictly from the state-of-the-art works.
3.1. Segmentation categories
Chen and Leung (2004) categorize segmentation methods according to the
characteristics (shape, histogram, threshold, region, entropy, spatial correlation
of pixels, among others) searched in a variety of source images (X-ray, thermal, ultrasonic, etc.) to generate the cut-off point (the value that determines what the
objects of interest in the analyzed image are). We adapted the general classification found in (Chen and Leung, 2004) to the classification of the works
studied in the field of dental image segmentation as follows: (1) Region-based,
(2) threshold-based, (3) cluster-based, (4) boundary-based, (5) watershed-based.
The categories were defined based on the characteristics that the relevant articles explore in the images analyzed to carry out the segmentation. Table 3 shows
the relevant works, classified into the categories of the segmentation methods.
Important details about each segmentation method presented in each category
are addressed in Section 4.2.
Region-based. The goal of the region-based method is to divide an image into
regions, based on discontinuities in pixel intensity levels. Among the relevant
articles selected, only Lurie et al. (2012) and Modi and Desai (2011) used the
region-based segmentation. The aim of the study in (Lurie et al., 2012) was to
segment panoramic X-ray images of the teeth to assist the Dentist in procedures
for detection of osteopenia and osteoporosis. Modi and Desai (2011) used region
growing approach to segment bitewing X-ray images.
Threshold-based. The rationale of the intensity threshold application in image
segmentation starts from the choice of a threshold value. Pixels whose values
exceed the threshold are placed into a region, while pixels with values below
the threshold are placed into an adjacent region. Most of the articles selected
as relevant (54%) use the threshold-based segmentation approach (Abaza et al.,
2009), (Ajaz and Kathirvelu, 2013) (Cameriere et al., 2015), (Jain and Chen,
2004), (Lin et al., 2014), (Dighe and Revati, 2012), (Huang and Hsu, 2008), (Lin
et al., 2015), (Bruellmann et al., 2016), (Amer and Aqel, 2015), (Tikhe et al.,
2016)), ((Said et al., 2006), (Geraets et al., 2007), (Lin et al., 2010), (Wang et al.,
2016), (Nomir and Abdel-Mottaleb, 2008b), (Kaur and Kaur, 2016), (Nomir
and Abdel-Mottaleb, 2008a), (Keshtkar and Gueaieb, 2007), (Lin et al., 2013),
(Indraswari et al., 2015), (Mohamed Razali et al., 2014).
In certain cases, pixel gray levels, which belongs to the objects of interest,
are substantially different from the gray levels of the pixels in the background.
In those cases, threshold segmentation based on the histogram of the image is
Table 3: Works grouped by segmentation methods.
Category
Segmentation method (Related works)
Region-based
Region growing ((Lurie et al., 2012), (Modi and Desai, 2011))
Threshold-based
Histogram-based threshold ((Abaza et al., 2009), (Ajaz and
Kathirvelu, 2013) (Cameriere et al., 2015), (Jain and Chen,
2004), (Lin et al., 2014), (Dighe and Revati, 2012), (Huang
and Hsu, 2008), (Lin et al., 2015), (Bruellmann et al.,
2016), (Amer and Aqel, 2015), (Tikhe et al., 2016)) / Variable threshold ((Said et al., 2006), (Geraets et al., 2007),
(Lin et al., 2010), (Wang et al., 2016), (Nomir and AbdelMottaleb, 2008b), (Kaur and Kaur, 2016), (Nomir and
Abdel-Mottaleb, 2008a), (Keshtkar and Gueaieb, 2007), (Lin
et al., 2013), (Indraswari et al., 2015), (Mohamed Razali
et al., 2014))
Cluster-based
Fuzzy-C-means (Alsmadi (2015), Son and Tuan (2016))
Boundary-based
Level set method ((Ehsani Rad et al., 2013), (Li et al.,
2006), (Li et al., 2007), (An et al., 2012)) / Active contour ((Ali et al., 2015), (Niroshika et al., 2013), (Hasan
et al., 2016)) / Edge detection ((Senthilkumaran, 2012b),
(Lin et al., 2012), (Razali et al., 2015), (Senthilkumaran,
2012a), (Gráfová et al., 2013), (Trivedi et al., 2015)) / Point
detection ((Economopoulos et al., 2008))
Watershed-based
Watershed ((Li et al., 2012))
usually used to separate objects of interest from the background. This way,
histograms can be used in situations, where objects and background have intensity levels grouped into two dominant modes. The present research identified
that seven out of the relevant papers used histogram-based threshold as the
main stage of segmentation (Abaza et al., 2009), (Ajaz and Kathirvelu, 2013)
(Cameriere et al., 2015), (Jain and Chen, 2004), (Lin et al., 2014), (Dighe and
Revati, 2012), (Huang and Hsu, 2008), (Lin et al., 2015), (Bruellmann et al.,
2016), (Amer and Aqel, 2015), (Tikhe et al., 2016).
Thresholding simply based on the histogram of the image usually fails when
the image exhibits considerable variation in contrast and illumination, resulting
in many pixels that can not be easily classified as first or second plane. One
solution to this problem is to try to estimate a ”shading function”, and then use
it to compensate for the pattern of non-uniform intensities. The commonly used
approach to compensate for irregularities, or when there is a lot of variation of
the intensity of the pixels related to the dominant object (in which case the
histogram-based thresholding has difficulties) is the use of variable threshold
based on local statistics of the pixels of the image. The studies in (Said et al.,
2006), (Geraets et al., 2007), (Lin et al., 2010), (Wang et al., 2016), (Nomir and
Abdel-Mottaleb, 2008b), (Kaur and Kaur, 2016), (Nomir and Abdel-Mottaleb,
2008a), (Keshtkar and Gueaieb, 2007), (Lin et al., 2013), (Indraswari et al.,
2015), (Mohamed Razali et al., 2014) applied local variable thresholding as
the main step for segmentation of the dental X-ray images.
Cluster-based. Clustering is a method used to make automatic grouping of
data according to a certain degree of similarity between the data. The criterion
of similarity depends on the problem to be solved. In general, the number
of groups to be detected must be informed as the initial parameter for the
algorithm to perform data clustering. Among the relevant papers, Alsmadi
(2015) used clustering to perform the segmentation of panoramic X-ray images,
while Son and Tuan (2016) proposed a clustering-based method to segment
X-rays of bitewing and periapical types.
Boundary-based. Boundary-based methods are used to search for discontinuities (point and edge detection) in the gray levels of the image. Thirty-four
percent (34%) of the relevant papers used boundary-based segmentation methods.
The classical boundary-based approach performs the search for points and
edges in images by detecting discontinuity in color or pixel intensities in images.
Among the works that used boundary-based methods, (Senthilkumaran, 2012b),
(Lin et al., 2012), (Razali et al., 2015), (Senthilkumaran, 2012a), (Gráfová et al.,
2013), (Trivedi et al., 2015) and (Economopoulos et al., 2008) used the classical
approach for point and edge detection to segment the images. A more recent
approach on boundary-based segmentation is known as active contour model
(Ali et al., 2015), (Niroshika et al., 2013), (Hasan et al., 2016), also called snakes,
which performs segmentation by delineating an object outline from an image.
The goal is to minimize the initialization of energy functions, and the stop
criterion is when the minimum energy is detected. The region that represents
the minimum energy value corresponds to the contour that best approaches
the perimeter of an object. Another recent boundary-based detection approach
is a variation of the active contour model known as level set method (LSM).
The LSM performs segmentation by means of geometric operations to detect
contours with topology changes. The studies found in (Ehsani Rad et al., 2013),
(Li et al., 2006), (Li et al., 2007), (An et al., 2012) used LSM to segment the
X-ray images.
Watershed-based. Watershed is a transformation defined in a grayscale image. The watershed transformation uses mathematical morphology to segment
an image in adjacent regions. Among the relevant articles selected, only (Li
et al., 2012) used the watershed-based segmentation to segment bitewing X-ray
images.
3.2. Type of the X-ray images
Approximately eighty percent (80%) of the reviewed papers used intra-oral
X-ray images. Only three of the reviewed papers used extra-oral panoramic
X-ray images. The studies addressed in (Geraets et al., 2007), (Son and Tuan,
2016) and (Trivedi et al., 2015) perform experiments with intra-oral and extraoral images. Table 4 summarizes the relevant papers grouped by the type of
X-ray image.
Table 4: Works grouped by X-ray images.
X-ray
Related works
Bitewing
(Jain and Chen, 2004), (Ehsani Rad et al.,
2013), (Senthilkumaran, 2012b), (Nomir and AbdelMottaleb, 2008a), (Lin et al., 2010), (Lin et al.,
2012), (Wang et al., 2016), (Keshtkar and Gueaieb,
2007), (Nomir and Abdel-Mottaleb, 2008b), (Modi
and Desai, 2011), (Ali et al., 2015), (Li et al., 2012),
(Kaur and Kaur, 2016)
Periapical
Cameriere et al. (2015), (Lin et al., 2014), (Li et al.,
2006), (Dighe and Revati, 2012), (Li et al., 2007),
(Huang and Hsu, 2008), (Lin et al., 2015), (Bruellmann et al., 2016), (Lin et al., 2013), (Niroshika
et al., 2013), (Tikhe et al., 2016), (Senthilkumaran,
2012a), (An et al., 2012), (Economopoulos et al.,
2008)
Panoramic
(Alsmadi, 2015), (Amer and Aqel, 2015), (Lurie
et al., 2012), (Ajaz and Kathirvelu, 2013), (Indraswari et al., 2015), (Mohamed Razali et al.,
2014), (Razali et al., 2015), (Hasan et al., 2016),
(Gráfová et al., 2013)
Bitewing / Periapical
(Said et al., 2006), (Abaza et al., 2009)
Bitewing / Panoramic
(Son and Tuan, 2016)
Periapical / Panoramic
(Geraets et al., 2007)
Bitewing / Periapical / Panoramic
(Trivedi et al., 2015)
3.3. Characteristics of the data sets used in the reviewed works
Sixty-one percent (61%) of the relevant papers used data sets containing
between 1 and 100 X-ray images ((Lin et al., 2012), (Li et al., 2007), (Son and
Tuan, 2016), (Alsmadi, 2015), (Lin et al., 2015), (Lin et al., 2010), (Cameriere
et al., 2015), (Jain and Chen, 2004), (Dighe and Revati, 2012), (Lin et al., 2014),
(Bruellmann et al., 2016), (Economopoulos et al., 2008), (Gráfová et al., 2013),
(Kaur and Kaur, 2016), (An et al., 2012), (Ehsani Rad et al., 2013), (Tikhe
et al., 2016), (Lin et al., 2013), (Ajaz and Kathirvelu, 2013), (Mohamed Razali
et al., 2014), (Indraswari et al., 2015), (Modi and Desai, 2011), (Amer and
Aqel, 2015), (Li et al., 2012), (Senthilkumaran, 2012a)). Eight of the reviewed
articles did not present information about the data set used ((Li et al., 2006),
(Senthilkumaran, 2012b), (Trivedi et al., 2015), (Niroshika et al., 2013), (Ali
et al., 2015), (Razali et al., 2015), (Keshtkar and Gueaieb, 2007), (Geraets et al.,
2007)). Four of the papers reviewed used between 101 and 200 images ((Wang
et al., 2016), (Nomir and Abdel-Mottaleb, 2008b), (Lurie et al., 2012), (Nomir
and Abdel-Mottaleb, 2008a)). Three used between 201 and 500 images ((Huang
and Hsu, 2008), (Abaza et al., 2009), (Hasan et al., 2016)). Only one among the
papers reviewed used more than 500 images in their experiments (Said et al.,
Figure 1: Number of works by number of images
2006). In general, the reviewed articles exploited data sets containing X-ray
images with small variations (i.e., varying only with respect to the number of
teeth). In addition, as shown in the previous section, there is a predominance of
intra-oral radiographs (which show only a part of the teeth) rather than extraoral radiographs (which present the entire dental structure in a single image).
Figure 1 depicts the number of works versus number of images used in each
group of work.
4. Evaluation of the segmentation methods
In our work, to evaluate the segmentation methods studied, we created a
methodology that consists of six stages. In the first stage, we started with
the acquisition of images through the orthopantomograph (device used for the
generation of orthopantomography images), and the collected images were classified into 10 categories according to the variety of structural characteristics of
the teeth. The second stage consists of annotating the images (obtaining the
binary images), which correspond to the demarcations of the objects of interest
in each analyzed image. After finishing the tooth annotation process, in the
third stage, the buccal region is annotated, as the region of interest (ROI) to
determine the actual image of the teeth. In the fourth stage, the statistics of
the gathered data set are calculated. The fifth and sixth stages consist of analyzing
the performance of the segmentation algorithms, using the metrics summarized
in Table 7, and in evaluating the results achieved by each segmentation method
studied.
4.1. Construction of the data set
The images used in our data set were acquired from the X-ray camera model:
ORTHOPHOS XG 5 / XG 5 DS / Ceph, manufactured by Sirona Dental Systems GmbH. X-rays were acquired at the Diagnostic Imaging Center of the
Southwest State University of Bahia (UESB). The radiographic images used for
this research were coded so as not to identify the patients in the study5 .
The gathered data set consists of 1,500 annotated panoramic X-ray images.
The images have significant structural variations in relation to: the teeth, the
number of teeth, existence of restorations, existence of implants, existence of
appliances, existence of supernumerary teeth (referring to patients with more
than 32 teeth), and the size of the mouth and jaws. All images originally
obtained by the ORTHOPHOS XG 5 / XG 5 DS / Ceph orthopantomograph had dimensions of 2440 × 1292 pixels. The images were captured in gray level. Working with panoramic X-ray images is more challenging for reasons of heterogeneity, among which the following stand out: 1) different levels of noise generated by the orthopantomograph; 2) the image of the vertebral column, which covers the front teeth in some cases; and 3) low contrast, which makes the morphological properties complex.
To thoroughly benchmark the methods studied here, the 1,500 images were
distributed among 10 categories. The images were named, using whole numbers,
in sequential order by category, aiming at not identifying the patients in the
study. The process of categorizing the images was performed manually, selecting
images individually, counting tooth by tooth, as well as verifying structural
characteristics of the teeth. The images were classified according to the variety
of structural characteristics of the teeth (see Table 5). Finally, the images were
cropped to discard non-relevant information (the white border around the images and part of the spine) generated by the orthopantomograph device. After the cropping process, the size of the images changed to 1991 × 1127 pixels, without affecting the objects of interest (teeth), as shown in Figure 2. The cropped images were saved at the new dimensions to be used in the following stages, which are presented in the next sections. Figure 3 shows
an X-ray image corresponding to each of the categories of our data set.
Image annotation. The process of annotating the images of our proposed
data set occurred in two parts. First, the upper jaw was annotated, starting at the third right upper molar and annotating all the teeth of the upper arch up to the third left upper molar. Then, the same process was performed on the lower jaw, in the same direction, starting at the third right lower molar and annotating all the teeth of the lower arch up to the third left lower molar. Figure 4 illustrates the tooth annotation process through a
panoramic X-ray image of the data set.
Determining ROI. For each image, after the annotation of the teeth, the
buccal region was also annotated, covering the whole region delineated by the
5 The use of the radiographs in the research was authorized by the National Commission for
Research Ethics (CONEP) and by the Research Ethics Committee (CEP), under the report
number 646,050, approved on 05/13/2014.
Figure 2: Example of the cropping and resizing of the data set images of the present work.
contour of the jaws. This process was carried out so as to preserve the area containing all the teeth (objects of interest). Finally, the region of interest
(ROI) was determined by multiplying the values of the pixel array elements,
representing the original panoramic X-ray image, by its corresponding binary
matrix, resulting from the process of oral annotation. Figure 5 illustrates the
whole process to determine the ROI of the images.
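As an illustration, this masking step amounts to an element-wise product between the grayscale X-ray and its binary mouth annotation; the sketch below assumes both are available as arrays of the same size, and the file names are hypothetical:

```python
import numpy as np
from skimage import io

# Hypothetical file names; any panoramic X-ray and its binary mouth annotation will do.
xray = io.imread("panoramic_0001.png")                # grayscale panoramic image, shape (H, W)
mouth = io.imread("mouth_annotation_0001.png") > 0    # binary annotation of the buccal region

# Element-wise multiplication keeps only the pixels inside the annotated mouth region.
roi = xray * mouth.astype(xray.dtype)
io.imsave("roi_0001.png", roi)
```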
Data set statistics. Table 5 presents the statistics of our data set: The categorization of the images, the total number of images used, the total of images
by category and the average of teeth of the images by category.
Statistics of the image ROIs. For all statistics, only the pixels in the image
ROIs were considered. The results of the statistical operations were used as
a parameter to run the segmentation algorithms studied. The statistics computed over the image ROIs were the highest pixel value, the lowest pixel value, the mean intensity, and the entropy (see Table 6).
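A sketch of how these per-image statistics can be obtained with NumPy; the entropy is computed here as the Shannon entropy of the 256-bin grey-level histogram of the ROI pixels, which is an assumption on our part, since the estimator is not spelled out above:

```python
import numpy as np

def roi_statistics(xray, mouth_mask):
    """Highest value, lowest value, mean and entropy of the pixels inside the ROI."""
    pixels = xray[mouth_mask > 0].ravel()
    hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))     # Shannon entropy in bits
    return pixels.max(), pixels.min(), pixels.mean(), entropy
```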
The image statistics per category were organized in a single table to better
analyze the results found among the categories, as shown in Table 6. From
the analysis of that table, it was possible to compare the characteristics of
each data set category. For instance, category 5 is formed by images with
dental implants, which correspond to regions of high luminosity in the images,
resulting in pixels with greater intensity than those found in the images of the
other categories.
4.2. Performance analysis of the segmentation methods
The following metrics were used to evaluate the segmentation methods studied: accuracy, specificity, precision, recall (sensitivity) and F-score, which are commonly used in the field of computer vision for the performance analysis of segmentation. Table 7 presents a summary of these metrics.
Figure 3: Examples of images from the data set categories of the present work: (a) Category 1; (b) Category 2; (c) Category 3; (d) Category 4; (e) Category 5; (f) Category 6; (g) Category 7; (h) Category 8; (i) Category 9; (j) Category 10.
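For reference, the five measures of Table 7 can be computed from a predicted binary mask and its ground-truth annotation restricted to the ROI, as in the following sketch (function and variable names are ours):

```python
import numpy as np

def segmentation_metrics(pred, truth, roi):
    """Accuracy, specificity, precision, recall and F-score over the ROI pixels."""
    pred, truth = pred[roi > 0] > 0, truth[roi > 0] > 0
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (fp + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * recall * precision / (recall + precision)
    return accuracy, specificity, precision, recall, f_score
```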
4.2.1. Methodology of the performance analysis
Only the image ROIs were considered to calculate the metrics for the evaluation of the segmentation methods. The process presented in Figure 6 was carried
Figure 4: Annotation of the teeth.
Figure 5: Determining the ROI of the images.
Table 5: Categorization of data set images and average number of teeth per category.

Number  Category                                                                                      Images  Average teeth
1       Images with all the teeth, containing teeth with restoration and with dental appliance            73      32
2       Images with all the teeth, containing teeth with restoration and without dental appliance        220      32
3       Images with all the teeth, containing teeth without restoration and with dental appliance         45      32
4       Images with all the teeth, containing teeth without restoration and without dental appliance     140      32
5       Images containing dental implant                                                                 120      18
6       Images containing more than 32 teeth                                                             170      37
7       Images missing teeth, containing teeth with restoration and dental appliance                     115      27
8       Images missing teeth, containing teeth with restoration and without dental appliance             457      29
9       Images missing teeth, containing teeth without restoration and with dental appliance              45      28
10      Images missing teeth, containing teeth without restoration and without dental appliance          115      28
Table 6: Image statistics by category.

Category      Highest value  Lowest value    Mean    Entropy
Category 1         253            10        108.30    6.93
Category 2         250            16        108.29    6.83
Category 3         248            13        107.25    6.88
Category 4         215            20        107.31    6.82
Category 5         254             5        109.36    6.94
Category 6         230            18        100.43    6.86
Category 7         255             7        108.50    6.88
Category 8         253            11        106.72    6.89
Category 9         251             9        107.33    6.89
Category 10        214            20        105.94    6.70
out on all the segmented images obtained by each one of the 10 segmentation
methods analyzed. Figure 7 illustrates the steps of the performance evaluation over 10 segmentation methods (see also Table 9 for a list of the evaluated
methods).
4.2.2. Computing the metrics over the data set
Table 8 summarizes the process to calculate the accuracy for all images in
each category, using only one segmentation method. Parameters of each method
were optimized for best performance. For each category, the average accuracy was computed. After that, to find the accuracy over all images in the data set, the average accuracy was multiplied by the number of images in each category, obtaining a weighted sum over all images in the data set (1,500 images). By dividing by the number of images in the whole data set, we were able to find the
Table 7: Metrics used to evaluate the segmentation methods studied.

Initial measures
Positive (P): the pixel is in a class of interest.
Negative (N): the pixel is not in a class of interest.
True Positive (TP): the pixel in the ground truth is positive, and the method ranks the pixel as positive.
True Negative (TN): the pixel in the ground truth is negative, and the method ranks the pixel as negative.
False Positive (FP): the pixel in the ground truth is negative, while the method ranks the pixel as positive.
False Negative (FN): the pixel in the ground truth is positive, while the method ranks the pixel as negative.

Metrics used for performance evaluation
Accuracy: relation between the total of hits and the total set of errors and hits, calculated as (TP + TN)/(TP + FN + FP + TN).
Specificity: percentage of negative samples correctly identified over the total of negative samples, calculated as TN/(FP + TN).
Precision: percentage of positive samples correctly classified over the total of samples classified as positive, calculated as TP/(TP + FP).
Sensitivity/Recall: percentage of positive samples correctly classified over the total of positive samples, calculated as TP/(TP + FN).
F-score: harmonic mean of precision and sensitivity, calculated as 2 * Recall * Precision/(Recall + Precision).
Table 8: Average accuracy per category.

Category (Images)    Accuracy
Category 1 (73)      0.7585 x 73 = 55.3709
Category 2 (220)     0.7471 x 220 = 164.3572
Category 3 (45)      0.7682 x 45 = 34.5702
Category 4 (140)     0.7909 x 140 = 110.7238
Category 5 (120)     0.7506 x 120 = 90.0745
Category 6 (170)     0.8295 x 170 = 141.0067
Category 7 (115)     0.7401 x 115 = 85.1139
Category 8 (457)     0.8351 x 457 = 381.6530
Category 9 (45)      0.7646 x 45 = 34.4083
Category 10 (115)    0.8005 x 115 = 92.0603
SUM                  1189.34
Overall accuracy     0.7929
Overall accuracy = SUM/1,500
average accuracy of the data set for one segmentation method. The same process
was performed to calculate all the other metrics (specificity, recall, precision and F-score) over each segmentation method. The segmentation methods evaluated in this work are summarized in Table 9. Next, each one of the methods is
discussed and evaluated.
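The weighted averaging of Table 8 amounts to the short computation sketched below (the per-category accuracies shown are those of the table; any other metric is averaged in the same way):

```python
# Number of images per category (Table 8) and the per-category mean accuracies.
sizes = [73, 220, 45, 140, 120, 170, 115, 457, 45, 115]            # sums to 1,500
accuracies = [0.7585, 0.7471, 0.7682, 0.7909, 0.7506,
              0.8295, 0.7401, 0.8351, 0.7646, 0.8005]

overall = sum(n * a for n, a in zip(sizes, accuracies)) / sum(sizes)
print(round(overall, 4))   # 0.7929, the overall accuracy reported in Table 8
```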
Figure 6: Obtaining the values of the metrics used over each image of the data set.
Table 9: Segmentation methods evaluated in the present study.

Category             Segmentation methods
Region-based         1) Region growing; 2) Region splitting and merging
Thresholding-based   3) Basic global thresholding; 4) Niblack
Clustering-based     5) Fuzzy C-means clustering
Boundary-based       6) Sobel; 7) Canny; 8) Active contour without edges; 9) Level set
Watershed            10) Marker-controlled watershed
1) Region growing. Region growing is a method that groups pixels based
on a predefined criterion to create larger regions. A standard approach for
the region growing method is to compute sets of pixel values whose properties place them close to the centers (centroids) of the objects we are looking for, and to use these values as seeds. Region growing needs two parameters to perform the segmentation, as follows (a sketch of the procedure is given after the list):
• Seeds - Initial points to start the growth of regions. In the present work,
the cells containing the X and Y coordinates of the centroids (center of
the objects of interest) of the tooth regions (objects of interest in the
images) were selected manually to serve as seeds. The seed values were
grouped into vectors for each corresponding image, which served as the
initial points for the execution of the region growing method;
Figure 7: Steps of the performance analysis of segmentation algorithms.
• Dist - Threshold used to indicate whether a pixel is similar enough to be added to a region. It also corresponds to the conditional stop value of the algorithm: it is used to verify when the difference between the mean intensity of the pixels of a region and the intensity of a new pixel becomes larger than the informed parameter, in which case there are no more pixels to be inserted in the region. The best value found for the dist parameter was 0.1.
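A compact sketch of the seeded growth described above (breadth-first growth with 4-connectivity; intensities are assumed to be normalized to [0, 1], which is consistent with dist = 0.1, and the seed coordinates are whatever centroids were marked manually):

```python
import numpy as np
from collections import deque

def region_growing(image, seeds, dist=0.1):
    """Grow one region per seed; a pixel joins a region while the absolute difference
    between its intensity and the current region mean stays below `dist`."""
    labels = np.zeros(image.shape, dtype=int)
    for region_id, (sy, sx) in enumerate(seeds, start=1):
        labels[sy, sx] = region_id
        total, count = float(image[sy, sx]), 1
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                        and labels[ny, nx] == 0
                        and abs(float(image[ny, nx]) - total / count) < dist):
                    labels[ny, nx] = region_id
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return labels
```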
2) Region splitting and merging. Segmentation based on division and union
of regions is generally performed in four basic steps: 1) The image as a whole is
considered as the area of initial interest; 2) The area of interest is examined to
decide which of the pixels satisfy some criteria of similarity; 3) if true, the area
of interest becomes part of a region in the image, receiving a label; 4) otherwise,
the area of interest is divided and each one is successively considered as area
of interest. After each division, a joining process is used to compare adjacent
regions, putting them together if necessary. This process continues until no
further division or no further union of regions are possible. The most granular
level of division that can occur is when there are areas that contain only one
pixel. Using this approach, all the regions that satisfy the similarity criterion are
filled with 1’s. Likewise, the regions that do not satisfy the similarity criterion
are filled with 0’s, thus creating a segmented image. The method needs two
parameters:
• qtdecomp - Minimum block size for decomposition (this parameter must
be a positive integer), and set to 1 in our evaluation;
• splitmerge - Similarity criterion used to indicate whether the region
(block) should be divided or not. In the present work, we compared the
standard deviation of the intensity of the pixels in the analyzed region.
If the standard deviation is greater than the lowest intensity value of the
pixels, then the region is divided. A sketch of the splitting pass is given below.
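The sketch below implements only the recursive quadtree splitting under the criterion above; the minimum block size plays the role of qtdecomp, and the subsequent merging of adjacent homogeneous blocks is omitted for brevity:

```python
import numpy as np

def split(image, mask, y0, x0, h, w, min_size=2):
    """Mark a block as foreground when it is homogeneous (std <= lowest intensity);
    otherwise divide it into four quadrants and recurse."""
    block = image[y0:y0 + h, x0:x0 + w]
    if block.std() <= block.min():                 # similarity criterion described above
        mask[y0:y0 + h, x0:x0 + w] = 1
    elif h <= min_size or w <= min_size:
        mask[y0:y0 + h, x0:x0 + w] = 0             # too small to divide any further
    else:
        h2, w2 = h // 2, w // 2
        for dy, dx, bh, bw in ((0, 0, h2, w2), (0, w2, h2, w - w2),
                               (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)):
            split(image, mask, y0 + dy, x0 + dx, bh, bw, min_size)

# usage: mask = np.zeros(roi.shape, np.uint8); split(roi.astype(float), mask, 0, 0, *roi.shape)
```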
3) Basic global thresholding. This method performs segmentation based on
the histogram of the image. Assuming that f (x, y) corresponds to the histogram
of an image, then to separate objects of interest from the background, an initial
threshold (T ) is chosen. Then any pixel of the image, represented by (x, y),
that is greater than T is marked as an object of interest, otherwise the pixel is
marked as the background. In our work, we used the following steps:
1. Estimate an initial value for the global limit, T (we used the average pixel
intensity of the ROI of each image analyzed);
2. Segment the image through the threshold (T ). Then, two groups of pixels
appear: G1 , referring to pixels with values greater than T and G2 , referring
to pixels with values less than or equal to T ;
3. Calculate the mean intensity values, m1 and m2 , of the pixels in G1 and
G2 , respectively;
4. Calculate (m1 + m2 )/2 to obtain a new threshold (T ) value;
5. Repeat steps 2 to 4 until the change in the value of T between successive iterations is less than a positive value predefined by a parameter ∆T. The larger the ∆T, the fewer iterations the method will perform. For the experiments, 0.5 was the best value found for ∆T in our work;
6. Finally, convert the grayscale image into a binary image using the threshold T / den, where T is the threshold obtained in the previous steps and den denotes an integer value (e.g., the maximum grey level) that scales the ratio T / den to at most 1. The output image is binarized by replacing all pixels of the input image whose (scaled) intensity is greater than the threshold T / den by the value 1 and all other pixels by the value 0.
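The iterative procedure of steps 1–5 can be written compactly as follows (a sketch operating on the ROI pixels of one image, with ∆T = 0.5 as above):

```python
import numpy as np

def basic_global_threshold(pixels, delta_t=0.5):
    """Steps 1-5: start from the mean intensity and refine T as the midpoint of the
    two group means until T changes by less than delta_t between iterations."""
    t = pixels.mean()
    while True:
        g1, g2 = pixels[pixels > t], pixels[pixels <= t]
        new_t = (g1.mean() + g2.mean()) / 2.0
        if abs(new_t - t) < delta_t:
            return new_t
        t = new_t

# t = basic_global_threshold(roi[roi > 0].astype(float)); binary = roi > t
```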
4) Niblack method. Based on two measures of local statistics: mean and standard deviation within a neighborhood block of size n×n, a threshold T (x, y) for
each pixel is calculated. Then, as the neighborhood block moves, it involves
different neighborhoods, obtaining new thresholds, T , at each location (x, y).
Local standard deviation and mean are useful to determine local thresholds,
because they are descriptors of contrast and luminosity. When contrast or luminosity vary strongly across the image, they hinder the segmentation process in methods that use a single global threshold, such as
the basic global thresholding. Adaptations of segmentation using local variable
thresholding have been proposed in the literature. However, the method originally proposed in (Niblack, 1985) was evaluated here. The local threshold is
calculated with a block of size n×n, according to
T (x, y) = m(x, y) + k ∗ σ(x, y) ,
(1)
where m(x,y) and σ(x,y) represent the mean and local standard deviation of the
local block of size n×n, respectively. k is an imbalance constant (also called bias)
that modifies the local value obtained by the local standard deviation. k equal
to 1 was adopted to avoid modifying the standard deviation locally calculated.
For each pixel, the following process for all the images was performed:
1. Calculate mean and standard deviation within the local block (x, y);
2. Calculate the threshold T (x, y);
3. If the value of the pixel is greater than T (x, y), one is assigned.
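A sketch of Eq. (1) with the local mean and standard deviation computed by box filtering; the window size n is not reported above, so n = 25 is purely illustrative (scikit-image also ships a ready-made skimage.filters.threshold_niblack):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(image, n=25, k=1.0):
    """Per-pixel threshold T(x, y) = m(x, y) + k * sigma(x, y) over an n x n block."""
    img = image.astype(float)
    mean = uniform_filter(img, size=n)
    sq_mean = uniform_filter(img * img, size=n)
    sigma = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return mean + k * sigma

# binary = image > niblack_threshold(image, n=25, k=1.0)
```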
5) Fuzzy C-means clustering. Fuzzy C-means starts from a set X = {X1, X2, X3, ..., Xn} ⊂ Rp, where each Xk is a characteristic (feature) vector, k ∈ {1, 2, ..., n}, and Rp is the p-dimensional space (Bezdek, 1981). A fuzzy pseudopartition, denoted by P = {U1, U2, ..., Uc}, must satisfy

Σ_{i=1}^{c} U_i(X_k) = 1 ,     (2)

for every k ∈ {1, 2, ..., n}, where n denotes the number of elements of the set X; that is, the sum of the membership degrees of an element over all families must be equal to one. In addition,

0 < Σ_{k=1}^{n} U_i(X_k) < n ,     (3)
for every i ∈ {1, 2, ..., c}, where c represents the number of classes (clusters); that is, the sum of the membership degrees over all the elements of a family must be positive and smaller than the number of elements in the set X. U_i(X_k) is the degree of relevance of X_k in cluster i. Whether the algorithm based on the fuzzy C-means method finds an optimal fuzzy partition is determined by the objective function

J_m = Σ_{k=1}^{n} Σ_{i=1}^{c} (U_{ik})^m ||X_k − V_i||^2 ,     (4)

where V = (V_1, ..., V_c) contains the centers of the clusters and m is the fuzzy coefficient responsible for the degree of fuzzification. The objective function is used to obtain the clusters by calculating the Euclidean distance between the image data and the cluster centers. The center V_i of each cluster i (i = 1, ..., c) at iteration t is given by

V_i^{(t)} = Σ_{k=1}^{n} (U_{ik}^{(t)})^m X_k / Σ_{k=1}^{n} (U_{ik}^{(t)})^m .     (5)
To perform the segmentation using the Fuzzy C-means method, the following
parameters were used:
• Fuzzy coefficient m equal to 2, responsible for the degree of fuzzification;
• Stop criterion using a number of iterations (100), or if the method reaches
the minimum error rate (0.00001).
The formation of clusters continues until the maximum number of iterations
is completed, or when the minimum error rate is reached.
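A sketch of fuzzy C-means on the grey levels of the ROI pixels, following Eqs. (2)–(5) with the parameters above; the use of two clusters and the choice of the brightest centre as the tooth cluster are our own illustrative assumptions:

```python
import numpy as np

def fuzzy_c_means(values, c=2, m=2.0, max_iter=100, tol=1e-5):
    """values: 1-D array of pixel intensities; returns (memberships U, centres V)."""
    rng = np.random.default_rng(0)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                               # Eq. (2): memberships sum to one
    for _ in range(max_iter):
        um = u ** m
        v = (um @ values) / um.sum(axis=1)           # Eq. (5): cluster centres
        d = np.abs(values[None, :] - v[:, None]) + 1e-12
        new_u = d ** (-2.0 / (m - 1))
        new_u /= new_u.sum(axis=0)                   # membership update minimizing Eq. (4)
        converged = np.abs(new_u - u).max() < tol
        u = new_u
        if converged:
            break
    return u, v

# u, v = fuzzy_c_means(roi[roi > 0].astype(float)); teeth = cluster with the largest centre
```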
6) Sobel. Sobel works as an operator that calculates finite differences to identify the edges of the image. To perform the segmentation using the Sobel edge
detector, we used an automatically computed threshold T, calculated from the image pixels.
7) Canny. In the Canny detector (Canny, 1986), edges are identified by locating the local maxima of the gradient of f(x, y). Canny performs the following steps:
1. Smooth the image with a Gaussian filter first;
2. The local gradient, [g_x^2 + g_y^2]^{1/2}, and the edge direction, tan^{-1}(g_x/g_y), are computed at each point;
3. The edge candidates are identified from step (2). Knowing the directions of the edges, non-maximum suppression is performed; this is done by tracing the border and suppressing pixel values (setting them to zero) that are not considered edge pixels. Two thresholds, T1 and T2, with T1 < T2, are then applied (automatically calculated for each image);
4. Finally, edge detection of the image is performed considering the pixels that have values greater than T2.
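Both detectors are available off the shelf; the sketch below uses scikit-image, with an Otsu threshold on the Sobel gradient magnitude and Canny's built-in hysteresis standing in for the automatically computed thresholds mentioned above (the sigma value is illustrative):

```python
from skimage import filters, feature

def edge_maps(roi):
    """Binary edge maps produced by the Sobel and Canny detectors on a grayscale ROI."""
    grad = filters.sobel(roi)                               # Sobel gradient magnitude
    sobel_edges = grad > filters.threshold_otsu(grad)
    canny_edges = feature.canny(roi.astype(float), sigma=2.0)  # Gaussian smoothing + hysteresis
    return sobel_edges, canny_edges
```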
8) Active contour without edges. This method is a variation of the model
originally proposed by Chan and Vese (Chan and Vese, 2001), and works differently from the classical edge detection methods: it is able to detect objects whose boundaries are not necessarily defined by the gradient.
To perform the segmentation using the active contour without edges method,
the following parameters were used:
• Initial mask (a matrix of 0's and 1's), where the mask corresponds to 75% of the image to be segmented;
• Total number of iterations (500, in our case);
• A stopping term equal to 0.1, in our work.
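A sketch of this configuration with the Chan–Vese implementation of scikit-image; the central rectangle below stands in for the 75% initial mask, and the tolerance plays the role of the stopping term (parameter values are illustrative, not the exact ones used in the experiments):

```python
import numpy as np
from skimage.segmentation import chan_vese

def acwe_segment(roi):
    """Active contour without edges (Chan-Vese) with a rectangular initial level set."""
    h, w = roi.shape
    init = -np.ones(roi.shape)
    init[int(0.125 * h):int(0.875 * h), int(0.125 * w):int(0.875 * w)] = 1.0  # central region
    return chan_vese(roi.astype(float), init_level_set=init, tol=1e-3)
```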
9) Level set method. This method is a variation of the active contour method.
The level set method (LSM) was originally proposed by Osher (1988). LSM can perform numerical computations on evolving curves and surfaces without requiring their explicit parameterization. The objective of LSM is to represent the boundary of an object using a level set function, usually represented mathematically by the variable α. The
curves of the objects are obtained by calculating the value of γ through the set
of levels of α, given by:
γ = {(x, y)|α(x, y) = 0} ,
(6)
and the α function has positive values within the region delimited by the γ curve
and negative values outside the curve.
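In practice this evolution can be approximated with the morphological level-set variant shipped in scikit-image, as sketched below (the number of iterations and the mean-based initial level set are illustrative assumptions, not the exact settings of the experiments):

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def level_set_segment(roi, iterations=100):
    """Morphological approximation of the level-set evolution; returns a binary mask."""
    init = np.zeros(roi.shape, dtype=np.int8)
    init[roi > roi.mean()] = 1                 # simple data-driven initial level set
    return morphological_chan_vese(roi.astype(float), iterations, init_level_set=init)
```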
10) Marker-controlled watershed. The goal of the watershed segmentation
method is to separate adjacent regions that present abrupt changes in the gradient values. Assuming that the gradient values form a topographic surface with valleys and mountains, brighter pixels (e.g., teeth in X-ray images) correspond to those with the highest gradient, while darker ones (e.g., the valleys between teeth in X-ray images) correspond to those with the lowest gradient. A variation of the watershed method is the marker-controlled watershed, which prevents the phenomenon known as over-segmentation (an excessive number of small regions that cannot be attached to any other part of the image) by using morphological opening and closing operations to adjust the gray levels of the image.
To segment the images in our data set by using the marker-controlled watershed method, the following steps are performed:
1. Compute a segmentation function, which tries to identify dark regions;
2. Compute markers of the target objects;
3. Compute markers of the segments outside the target objects;
4. Calculate the transformation of the segmentation function to obtain the positions of the target objects and the positions of the background markers.
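A sketch of such a marker-controlled pipeline with SciPy/scikit-image (foreground markers are derived from a distance transform of a rough Otsu mask; the 0.5 distance fraction and the other choices are illustrative and not those of the original experiment):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def marker_watershed(roi):
    """Marker-controlled watershed on a grayscale ROI; returns a label image."""
    binary = roi > threshold_otsu(roi[roi > 0])               # rough foreground (teeth) mask
    distance = ndi.distance_transform_edt(binary)
    markers, _ = ndi.label(distance > 0.5 * distance.max())   # sure-foreground markers
    return watershed(-distance, markers, mask=binary)         # flood from the markers
```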
4.3. Result analysis
Tables 10 and 11 present samples of the results obtained by each of the segmentation methods evaluated, in each category. Table 12 summarizes the overall
averages obtained by calculating the metrics, which were applied to evaluate the
performance of each segmentation method. From Table 12, we can observe that
the splitting and merging-based and the Sobel methods achieved almost perfect results in specificity (which corresponds to the true negatives). This can be explained by the fact that both methods focus on edges. Similarly, the fuzzy C-means and Canny methods also scored over 90% with respect to specificity. In contrast, these four segmentation methods obtained poor results in relation to the recall metric (which privileges the true positives in the evaluation).
Table 10: Samples of the results of the segmentation methods evaluated (PART 1). Rows: region growing, splitting and merging, basic global threshold, Niblack method, fuzzy C-means, Canny, Sobel, active contour without edges, level set method, watershed. Columns: Categories 1 to 5.
Table 11: Samples of the results of the segmentation methods evaluated (PART 2). Rows: region growing, splitting and merging, basic global threshold, Niblack method, fuzzy C-means, Canny, Sobel, active contour without edges, level set method, watershed. Columns: Categories 6 to 10.
Thus, the images segmented by the algorithms based on splitting and merging, fuzzy C-means, Canny and Sobel showed a predominance of true negatives in their results. However, when the segmentation results in a predominance of elements of one class (for example, true negatives), it indicates that, for a binary classification problem, the result of the segmentation algorithm is close to simply guessing a single class. Therefore, the algorithms based on splitting and merging, fuzzy C-means, Canny and Sobel presented poor results when applied to the data set images used in the present work.
Table 12: Overall average results.

Method                Accuracy  Specificity  Precision  Recall   F-score
Region growing         0.6810     0.6948      0.3553    0.6341   0.4419
Splitting/merging      0.8107     0.9958      0.8156    0.0807   0.1429
Global thresholding    0.7929     0.8191      0.5202    0.6931   0.5621
Niblack method         0.8182     0.8174      0.5129    0.8257   0.6138
Fuzzy C-means          0.8234     0.9159      0.6186    0.4525   0.4939
Canny                  0.7927     0.9637      0.4502    0.1122   0.1751
Sobel                  0.8025     0.9954      0.6663    0.0360   0.0677
Without edges          0.8020     0.8576      0.5111    0.5750   0.5209
Level set method       0.7637     0.7842      0.4776    0.6808   0.5224
Watershed              0.7658     0.7531      0.4782    0.8157   0.5782
The results presented in Table 12 also show that the Niblack method reached the highest value of the recall metric (approximately 83%), indicating that the images segmented by Niblack presented the highest number of true positives (pixels corresponding to the objects of interest in the analyzed images) and, therefore, few false negatives with respect to the other segmentation methods evaluated. Niblack also obtained approximately 82% in relation to the specificity metric, which corresponds to the true negatives that were correctly identified. Besides Niblack, only the marker-controlled watershed method reached above 80% in the recall metric. The marker-controlled watershed obtained lower results than the Niblack method in all the other analyzed metrics. The active contour without edges and the level set segmentation methods obtained less than 70% in the recall metric; these methods also achieved poorer results when compared to the Niblack and marker-controlled watershed methods.
Considering the results, one can state that the segmentation of panoramic X-ray images of the teeth based on thresholding achieved a significant performance improvement when a local threshold (Niblack method) was used instead of a single global threshold (basic global thresholding). Niblack was the method that presented the best performance for segmenting the teeth.
5. Discussion and conclusions
From the images obtained with the X-ray, the dentist can analyze the entire dental structure and construct (if necessary) the patient’s treatment plan.
However, due to the lack of adequate automated resources to aid the analysis
of dental X-ray images, the evaluation of these images occurs empirically, that
is to say, only using the experience of the dentist. The difficulty of analyzing
the dental X-ray images is still great when dealing with extra-oral radiographs,
because these images are not restricted to only an isolated part of the teeth,
as happens in the intra-oral images. In addition to the teeth, extra-oral X-rays
also show the temporomandibular regions (jaw joints with the skull) and details originated by the bones of the nasal and facial areas. Other information
on dental radiographs that make it difficult to analyze these images are: variations of patient-to-patient teeth, artifacts used for restorations and prostheses,
poor image qualities caused by some conditions, such as noise, low contrast,
homogeneity in regions that represent teeth and not teeth, space existing for a
missing tooth, and limitation of acquisition methods, which sometimes result in
unsuccessful segmentation. Therefore, the literature review carried out in the
present study has revealed that there is still room to find an adequate method
for segmentation of dental X-ray images that can be used as a basis for the
construction of specialized systems to aid in dental diagnosis.
It is noteworthy that the reviewed literature has been increasingly focused on
research to segment dental X-ray images based on thresholding (see Section 3.1
and Table 3). Eighty percent (80%) of the analyzed articles used intra-oral X-ray
images in their experiments. Public data sets used in the works analyzed in our
study indicate a preference in the literature for segmenting images that present
only an isolated part of the teeth, rather than using extra-oral X-rays that show
the entire dental structure of the mouth and face bones in a single image. The
results of the present study also show that the majority of the papers reviewed
(61%) worked with data sets containing between 1 and 100 dental X-ray images
(with only one work exploiting a data set with more than 500 images, containing
500 bitewing and 130 periapical dental radiographic films). This finding seems
to indicate that most of the methods are not thoroughly evaluated.
One contribution of the present study was the construction of a data set
with 1,500 panoramic X-ray images, a significant differential when
compared to other works. Our data set contains a diversity of images with different
characteristics, distributed into the 10 categories that we defined. With this
large and diverse data set, we performed an in-depth evaluation of the
segmentation methods applied to the teeth in panoramic X-ray images,
using the following metrics to measure the performance of the segmentation algorithms studied: accuracy, specificity, precision, recall, and
F-score. After evaluating the performance of 10 segmentation methods over our proposed
data set, we concluded that none of the algorithms studied was able to
completely isolate the objects of interest (the teeth) in the images used in
the present work, failing mainly because of the bone parts.
5.1. Future perspectives on exploiting learning-based segmentation methods
We can state that panoramic X-ray images of the teeth present characteristics that make the segmentation process difficult. A possible
way forward for segmentation in this domain is the use of methods based
on machine learning. Recent research has shown interest in object recognition in images using segments (Lowe, 2004; Arbeláez et al., 2012),
that is, techniques that combine segmentation and object recognition in images
(also known as semantic segmentation). The work carried out in (Carreira
et al., 2012), for example, shows how to explore grouping methods that compute second-order statistics in the form of symmetric matrices to combine image
recognition and segmentation. The method efficiently performs second-order statistical operations over a large number of image regions,
finding clusters in areas shared by several overlapping regions. The role of this aggregation is to produce an overall description of a region of the image.
Thus, a single descriptor can summarize the local characteristics within a region
and can be used as input for a standard classifier. Most current pooling techniques compute first-order statistics, for example by taking the maximum or average value of the features extracted from a cluster (Boureau et al., 2010). The work proposed by (Carreira et al., 2012) uses
different types of second-order statistics, highlighting the scale-invariant feature transform (SIFT) local descriptors (Lowe, 2004) and Fisher
coding (Perronnin et al., 2010), which also uses second-order statistics to recognize objects in images, to perform semantic segmentation. Second-order
statistics could be exploited to segment dental X-ray images by learning the
shape of the teeth while performing the segmentation.
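As a rough illustration of the idea (not the implementation of Carreira et al., 2012), a region descriptor based on second-order statistics can be obtained by averaging outer products of local descriptors sampled inside a candidate region; all names and parameter choices below are hypothetical:

```python
import numpy as np

def second_order_pool(descriptors, eps=1e-6):
    """Second-order (outer-product) pooling of local descriptors for one region.

    descriptors: array of shape (n_points, d), e.g. SIFT vectors sampled inside
    a candidate tooth region. Returns the upper triangle of the matrix logarithm
    of the averaged outer products, usable as input to a standard classifier.
    """
    d = descriptors.shape[1]
    pooled = descriptors.T @ descriptors / len(descriptors)  # d x d average outer product
    pooled += eps * np.eye(d)                                # keep it positive definite
    eigval, eigvec = np.linalg.eigh(pooled)                  # log-Euclidean mapping
    log_pooled = eigvec @ np.diag(np.log(eigval)) @ eigvec.T
    return log_pooled[np.triu_indices(d)]
```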
With no a priori information, finding the segments that correspond to objects in an image is a remarkable skill of human vision. When we look at an
image, not all hypotheses are perceived equally by different people: some people may recognize objects that are compact in their projection
in the image, while others may not perceive these objects at all. From a computational point of view, the work in Carreira and Sminchisescu (2012) presents
a proposal to generate and rank object hypotheses in an image using bottom-up and mid-level processes. The objects are segmented
without prior knowledge of their locations. The authors show how to
rank the object hypotheses with a model that predicts which image segments are likely to correspond to objects, considering the properties of the regions that compose the objects in the
images studied. According to Carreira and Sminchisescu (2012), real-world object
statistics are not easy to capture in segmentation algorithms, which sometimes leads
to unsuccessful segmentation. One possibility to address this problem would be to
learn the parameters of the segmentation algorithm from large amounts of annotated data. However, the local scope
and inherently combinatorial nature of image annotations decrease the
effectiveness of segmentation. While energy-minimization formulations of image segmentation
generate multiple regions among objects, the work done in (Carreira
and Sminchisescu, 2012) separated regions into individually connected components.
Some characteristics harm the segmentation process: for example, in an image
containing people hugging, segmenting the people and their arms may require prior
knowledge of the number of arms displayed in the image and of where
the people may be. It is then necessary to deal with such scenarios in a bottom-up way, that is, based on strong cues such as the continuity of the pixels
that compose the objects of interest, which can be explored in the analyzed image.
Still in the field of energy minimization, one observation that must be taken into
account is that the minimization problem can be used to
identify the regions of image objects (Tu and Zhu, 2006). In this sense, a promising
direction towards improving segmentation in dental X-ray images is the development of operations that minimize energy functions
in order to highlight the regions that represent objects of interest in orthopantomography images. A major concern here is the overlap of the image characteristics of the
teeth with those of the jaw and skull in extra-oral X-ray images.
5.2. Deep learning-based segmentation
The work found in (Garcia-Garcia et al., 2017) provides a review of deep
learning methods on semantic segmentation in various application areas. First,
the authors describe the terminology of this field, as well as the mandatory
background concepts. Then the key data sets and challenges are set out to help
researchers on how to decide which ones best fit their needs and goals. The
existing methods are reviewed, highlighting their contributions and their significance in the field. The authors conclude that semantic segmentation has been
addressed with many success stories, although it still remains an open problem.
Given the recent success of deep learning methods in several areas of image
pattern recognition, the use of these methods on a huge data set such as ours could
be a breakthrough in the field. As presented in (Garcia-Garcia et al., 2017),
deep learning has proven to be extremely powerful for addressing the most diverse
segmentation problems, so we can expect a flurry of innovation and new lines of research over the next few years, with studies that apply deep learning and
semantic segmentation to dental X-ray images.
References
Abaza, A., Ross, A., and Ammar, H. (2009). Retrieving dental radiographs
for post-mortem identification. In Intl. Conf. on Image Processing, pages
2537–2540.
Ajaz, A. and Kathirvelu, D. (2013). Dental biometrics: Computer aided human
identification system using the dental panoramic radiographs. In Intl. Conf.
on Communication and Signal Processing, pages 717–721.
Ali, R. B., Ejbali, R., and Zaied, M. (2015). GPU-based segmentation of dental
X-ray images using active contours without edges. In Intl. Conf. on Intelligent
Systems Design and Applications, volume 1, pages 505–510.
Alsmadi, M. K. (2015). A hybrid fuzzy c-means and neutrosophic for jaw lesions
segmentation. Ain Shams Engineering Journal.
Amer, Y. Y. and Aqel, M. J. (2015). An efficient segmentation algorithm for
panoramic dental images. Procedia Computer Science, 65:718–725.
An, P.-l., Huang, P., Whe, P., Ang, H. U., Science, C., Engineering, I., and
Science, C. (2012). An automatic lesion detection method for dental xray images by segmentation using variational level set. In Proceedings of
the 2012 International Conference on Machine Learning and Cybernetics,
Xian, volume 1, pages 1821–1825.
Arbeláez, P. et al. (2012). Semantic segmentation using regions and parts. In
IEEE Conf. on Computer Vision and Pattern Recognition, pages 3378–3385.
Association, A. D. (1987). Dental radiographic examinations - recommendations for patient selection and limiting radiation exposure. Journal of the
American Medical Association, 257(14):1929–1936.
Bezdek, J. C. (1981). Pattern recognition with fuzzy objective function algorithms. SIAM Review, 25(3):442–442.
Boureau, Y.-L., Ponce, J., and LeCun, Y. (2010). A Theoretical Analysis of
Feature Pooling in Visual Recognition. Icml, pages 111–118.
Bruellmann, D., Sander, S., and Schmidtmann, I. (2016). The design of a fast
Fourier filter for enhancing diagnostically relevant structures – endodontic
files. Computers in Biology and Medicine, 72:212–217.
Cameriere, R., De Luca, S., Egidi, N., Bacaloni, M., Maponi, P., Ferrante, L.,
and Cingolani, M. (2015). Automatic age estimation in adults by analysis of
canine pulp/tooth ratio: Preliminary results. Journal of Forensic Radiology
and Imaging, 3(1):61–66.
Canny, J. (1986). A computational approach to edge detection. IEEE Trans.
on Patt. Analysis and Machine Intellig., PAMI-8(6):679–698.
Carreira, J., Caseiro, R., Batista, J., and Sminchisescu, C. (2012). Semantic segmentation with second-order pooling. Lecture Notes in Computer Science,
7578:430–443.
Carreira, J. and Sminchisescu, C. (2012). CPMC: Automatic object segmentation using constrained parametric min-cuts. IEEE Trans. on Pattern Analysis and Machine Intelligence,
34(7):1312–1328.
Chan, T. F. and Vese, L. A. (2001). Active contours without edges. IEEE
Trans. on Image Processing, 10(2):266–277.
Chen, S. and Leung, H. (2004). Survey over image thresholding techniques
and quantitative performance evaluation. Journal of Electronic Imaging,
13(1):220.
Dighe, S. and Revati, S. (2012). Preprocessing, Segmentation and Matching of
Dental Radiographs used in Dental Biometrics. 1(2278):52–56.
Economopoulos, T., Matsopoulos, G. K., Asvestas, P. A., Gröndahl, K., and
Gröndahl, H. G. (2008). Automatic correspondence using the enhanced
hexagonal centre-based inner search algorithm for point-based dental image
registration. Dentomaxillofacial Radiology, 37(4):185–204.
Ehsani Rad, A., Shafry, M., Rahim, M., and Norouzi, A. (2013). Digital dental
x-ray image segmentation and feature extraction. 11:3109–3114.
Garcia-Garcia, A., Orts-Escolano, S., Oprea, S., Villena-Martinez, V., and
Garcia-Rodriguez, J. (2017). A Review on Deep Learning Techniques Applied to Semantic Segmentation. pages 1–23.
Geraets, W. G. M., Verheij, J. G. C., van der Stelt, P. F., Horner, K., Lindh,
C., Nicopoulou-Karayianni, K., Jacobs, R., Harrison, E. J., Adams, J. E.,
and Devlin, H. (2007). Prediction of bone mineral density with dental
radiographs. Bone, 40(5):1217–1221.
Gráfová, L., Kašparová, M., Kakawand, S., Procházka, A., and Dostálová, T.
(2013). Study of edge detection task in dental panoramic radiographs.
Dentomaxillofacial Radiology, 42(7).
Hasan, M. M., Hassan, R., and Ismail, W. (2016). Automatic segmentation
of jaw from panoramic dental x-ray images using gvf snakes. In WAC,
volume 1, pages 1–6.
Huang, C. H. and Hsu, C. Y. (2008). Computer-assisted orientation of dental
periapical radiographs to the occlusal plane. Oral Surgery, Oral Medicine,
Oral Pathology, Oral Radiology and Endodontology, 105(5):649–653.
Indraswari, R., Arifin, A. Z., Navastara, D. A., and Jawas, N. (2015). Teeth
segmentation on dental panoramic radiographs using decimation-free directional filter bank thresholding and multistage adaptive thresholding. In
Intl. Conf. on Information, Communication Technology and System, volume 1, pages 49–54.
Jain, A. K. and Chen, H. (2004). Matching of dental X-ray images for human
identification. Pattern Recognition, 37(7):1519–1532.
Kaur, J. and Kaur, J. (2016). Dental image disease analysis using pso and backpropagation neural network classifier. Intl. Journal of Advanced Research
in Computer Science and Software Engineering, 6(4):158–160.
Keshtkar, F. and Gueaieb, W. (2007). Segmentation of dental radiographs using
a swarm intelligence approach. In Canadian Conference on Electrical and
Computer Engineering, pages 328–331.
Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. Intl.
Journal of Computer Vision, 60(2):91–110.
Li, H., Sun, G., Sun, H., and Liu, W. (2012). Watershed algorithm based on
morphology for dental x-ray images segmentation. In Intl. Conference on
Signal Processing Proceedings, volume 2, pages 877–880.
Li, S., Fevens, T., Krzyzak, A., Jin, C., and Li, S. (2007). Semi-automatic
computer aided lesion detection in dental x-rays using variational level set.
Pattern Recognition, 40(10):2861–2873.
Li, S., Fevens, T., Krzyzak, A., and Li, S. (2006). An automatic variational level
set segmentation framework for computer aided dental x-rays analysis in
clinical environments. Computerized Medical Imag. and Graph., 30(2):65–
74.
Lin, P. L., Huang, P. W., Huang, P. Y., and Hsu, H. C. (2015). Alveolar
bone-loss area localization in periodontitis radiographs based on threshold
segmentation with a hybrid feature fused of intensity and the h-value of
fractional brownian motion model. Computer Methods and Programs in
Biomedicine, 121(3):117–126.
Lin, P. L., Huang, P. Y., and Huang, P. W. (2013). An effective teeth segmentation method for dental periapical radiographs based on local singularity. In
Intl. Conf. on System Science and Engineering, volume 1, pages 407–411.
Lin, P. L., Huang, P. Y., Huang, P. W., Hsu, H. C., and Chen, C. C. (2014).
Teeth segmentation of dental periapical radiographs based on local singularity analysis. Computer Methods and Programs in Biomedicine, 113(2):433–
445.
Lin, P. L., Lai, Y. H., and Huang, P. W. (2010). An effective classification and
numbering system for dental bitewing radiographs using teeth region and
contour information. Pattern Recognition, 43(4):1380–1392.
Lin, P. L., Lai, Y. H., and Huang, P. W. (2012). Dental biometrics: Human
identification based on teeth and dental works in bitewing radiographs.
Patt. Recog., 45(3):934–946.
Lurie, A., Tosoni, G. M., Tsimikas, J., and Fitz, W. (2012). Recursive hierarchic
segmentation analysis of bone mineral density changes on digital panoramic
images. Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology,
113(4):549–558.
Modi, C. K. and Desai, N. P. (2011). A simple and novel algorithm for automatic selection of ROI for dental radiograph segmentation. In Canadian
Conference on Electrical and Computer Engineering, pages 000504–000507.
Mohamed Razali, M. R., Ahmad, N. S., Mohd Zaki, Z., and Ismail, W. (2014).
Region of adaptive threshold segmentation between mean, median and otsu
threshold for dental age assessment. In Intl. Conf. on Computer, Communications, and Control Technology, Proceedings, pages 353–356.
Niblack, W. (1985). An introduction to digital image processing. 1 edition.
Niroshika, U. A. A., Meegama, R. G. N., and Fernando, T. G. I. (2013). Active
contour model to extract boundaries of teeth in dental X-ray images. In
Proceedings of the 8th International Conference on Computer Science and
Education, ICCSE 2013, pages 396–401.
Nomir, O. and Abdel-Mottaleb, M. (2008a). Fusion of matching algorithms for
human identification using dental x-ray radiographs. IEEE Transactions
on Inform. Forensics and Security, 3(2):223–233.
Nomir, O. and Abdel-Mottaleb, M. (2008b). Hierarchical contour matching for
dental x-ray radiographs. Pattern Recognition, 41(1):130–138.
Oliveira, J. and Proença, H. (2011). Caries Detection in Panoramic Dental
X-ray Images, pages 175–190. Springer Netherlands, Dordrecht.
Osher, S. J. (1988). Fronts propagating with curvature dependent speed. Computational Physics, 79(1):1–5.
Paewinsky, E., Pfeiffer, H., and Brinkmann, B. (2005). Quantification of secondary dentine formation from orthopantomograms – a contribution to
forensic age estimation methods in adults. Intl. Journal of Legal Medicine,
119(1):27–30.
Perronnin, F., Sánchez, J., and Mensink, T. (2010). Improving the Fisher kernel
for large-scale image classification. In European Conf. on Computer Vision,
pages 143–156. Springer-Verlag Berlin Heidelberg.
Quinn, R. A. and Sigl, C. C. (1980). Radiography in Modern Industry.
Razali, M. R. M., Ahmad, N. S., Hassan, R., Zaki, Z. M., and Ismail, W. (2015).
Sobel and canny edges segmentations for the dental age assessment. In Intl.
Conference on Computer Assisted System in Health, pages 62–66.
Said, E., Nassar, D., Fahmy, G., and Ammar, H. (2006). Teeth segmentation in
digitized dental x-ray films using mathematical morphology. IEEE Transactions on Inform. Forensics and Security, 1(2):178–189.
Senthilkumaran, N. (2012a). Fuzzy logic approach to edge detection for dental
x-ray image segmentation. 3(5):5236–5238.
Senthilkumaran, N. (2012b). Genetic algorithm approach to edge detection for
dental x-ray image segmentation. Intl. Journal of Advanced Research in
Computer Science and Electronics Engineering, 1(7):5236–5238.
Son, L. H. and Tuan, T. M. (2016). A cooperative semi-supervised fuzzy clustering framework for dental x-ray image segmentation. Expert Systems with
Applications, 46:380–393.
Tikhe, S., Naik, A., Bhide, S., Saravanan, T., and Kaliyamurthie, K. (2016).
Algorithm to Identify Enamel Caries and Interproximal Caries Using Dental
Digital Radiographs. In Intl. Advanced Computing Conference, IACC 2016,
pages 225–228.
Trivedi, D. N., Kothari, A. M., Shah, S., and Nikunj, S. (2015). Dental Image
Matching By Canny Algorithm for Human Identification. International
Journal of Advanced Computer Research, 4(17):985–990.
Tu, Z. and Zhu, S. C. (2006). Parsing images into regions, curves, and curve
groups. Intl. Journal of Computer Vision, 69(2):223–249.
Wang, C. W., Huang, C. T., Lee, J. H., Li, C. H., Chang, S. W., Siao, M. J., Lai,
T. M., Ibragimov, B., Vrtovec, T., Ronneberger, O., Fischer, P., Cootes,
T. F., and Lindner, C. (2016). A benchmark for comparison of dental
radiography analysis algorithms. Medical Image Analysis, 31:63–76.
Willems, G., Moulin-Romsee, C., and Solheim, T. (2002). Non-destructive
dental-age calculation methods in adults: intra- and inter-observer effects.
Forensic Science Intl., 126(3):221–226.
arXiv:1608.05994v1 [] 21 Aug 2016
Inefficient Best Invariant Tests
Richard A. Lockhart∗
Department of Mathematics and Statistics
Simon Fraser University
Burnaby, BC
Canada V5A 1S6
November 10, 2017
Abstract
Test statistics which are invariant under various subgroups of the
orthogonal group are shown to provide tests whose powers are asymptotically equal to their level against the usual type of contiguous alternative in models where the number of parameters is allowed to grow
as the sample size increases. The result is applied to the usual analysis of variance test in the Neyman-Scott many means problem and
to an analogous problem in exponential families. Proofs are based
on a method used by Čibisov(1961) to study spacings statistics in a
goodness-of-fit problem. We review the scope of the technique in this
context.
Keywords. Asymptotic relative efficiency, Neyman-Scott many means problem, goodness-of-fit, spacings statistics, many parameter problems, permutation central limit theorem, bootstrap.
AMS 1980 Classification. Primary 62G20, Secondary 62G30.
Running title: Invariant tests
∗
This manuscript was written in 1992 but the reviewers were so caustic I never tried
again to publish it. It seems relevant today. Supported by the Natural Sciences and
Engineering Research Council of Canada.
1
Introduction
Consider the problem of data X coming from a model indexed by a parameter
space M. Suppose there is a group G which acts both on the data and on
the parameter space so that gX has the same distribution under m ∈ M as
X has under g −1 m. The problem of testing Ho : m ∈ M0 ⊂ M is invariant
under G if m ∈ M0 iff gm ∈ M0 for all g ∈ G. In what follows we shall
impose the stronger condition that gm = m for all m ∈ M0 and all g ∈ G.
In this note we use the observation that for an invariant test the power
at alternative m minus the level of the test is the covariance between the
test function and the likelihood ratio averaged over the orbit of m under
G to study the power of invariant tests under contiguous alternatives. The
technique was used in Čibisov(1961) to study the asymptotic behaviour of
tests of uniformity based on sample spacings.
Our results may be summarized as follows. When for a given point m
in the null hypothesis, the number of possible directions of departure from
m into the alternative hypothesis grows with the sample size the power of
invariant tests may be expected to be low. We will exhibit a variety of
examples in which the power minus the level of invariant tests converges to
0 uniformly in the class of invariant tests.
We begin in section 2 with a simple normal example to illustrate the
technique and set down the basic identities and inequalities. In section 3 we
extend the normal example to general exponential families using a version of
the permutation central limit theorem. In section 4 we examine the Neyman
Scott many means problem and extend our results to more general models.
In section 5 we revisit Čibisov's example to show that a variety of goodness-of-fit tests have ARE 0 over a large class of alternatives. The results of the
first 5 sections suggest that either invariance is not always desirable or that
contiguity calculations are not always the right way to compare powers of
tests in such models. Section 6 is a discussion of the relevance of contiguity
calculations in this context, together with some open problems and suggestions for further work. An appendix contains some technical steps in the
proofs.
2
Basic Results: A Normal Example
Suppose X ∈ Rn has a multivariate normal distribution with mean vector
m ∈ Rn and identity covariance matrix. The problem of testing the null
hypothesis Ho : m = 0 is invariant under the group of orthogonal transformations. Under Ho , X and PX have the same distribution for any orthogonal
matrix P. Consider a simple alternative m ≠ 0. The likelihood ratio of m to
0 is $L(X) = \exp\{m^T X - \|m\|^2/2\}$. Thus the Neyman-Pearson test rejects
when $m^T X/\|m\|$ is too large. Under Ho this statistic has a standard normal
distribution while under the alternative m the mean is shifted to $\|m\|$. Thus
as n → ∞ non-trivial limiting power (i.e. a limiting power larger than the
level and less than 1) results when $\|m\| \to \delta \ne 0$. (Throughout this paper
objects named by Roman letters depend on n; wherever possible the dependence is suppressed in the notation. Objects named by Greek letters do not
depend on n.)
We may analyse the efficiency of an invariant test relative to the Neyman
Pearson test for a given sequence of alternatives as follows. Let T be the
class of all test functions T (X) for testing Ho which are invariant under the
group of orthogonal transformations, that is, for which T (PX) = T (X) for
each orthogonal transformation P. We have the following theorem.
Theorem 1 As n → ∞,
sup{|Em (T ) − E0 (T )| : T ∈ T , kmk ≤ δ} → 0.
Since the parameter space depends on the sample size the usual definition
of relative efficiency does not make sense. Instead, given an alternative m
we will define the efficiency of a (level α where α is fixed) test relative to the
Neyman Pearson test to be 1/c2 where c is chosen so that the test under consideration has power against cm equal to the Neyman Pearson power against
the alternative m. In traditional finite dimensional parametric models this
notion agrees with the usual notion of Pitman relative efficiency (asymptotically). Thus ARE will be the limit of 1/c2 ; the alternative sequence m must
be contiguous so that the asymptotic power of the Neyman Pearson test is
not 1. For a discussion of the relevance of the Neyman-Pearson test as a
standard for the relative efficiency see section 6.
Corollary 1 The best invariant test of Ho has ARE 0.
Proof of Theorem 1
Let L(X) be the likelihood ratio for m to 0. Then
$$E_m(T(X)) = E_0(T(X)L(X)).$$
Since PX and X have the same distribution under $H_o$ we have
$$E_m(T(X)) = E_0(T(PX)L(PX)) = E_0(T(X)L(PX)) \qquad (1)$$
for all orthogonal P. Since P appears only on the right hand side of (1) we
may average over orthogonal P to obtain
$$E_m(T(X)) = \int E_0(T(X)L(PX))\,F(dP) = E_0\Big(T(X)\int L(PX)\,F(dP)\Big) \qquad (2)$$
where F is any probability measure on the compact group of orthogonal
matrices. Let
$$\bar L(X) = \int L(PX)\,F(dP).$$
Then
$$|E_m(T) - E_0(T)| = |E_0(T(\bar L - 1))| \le E_0(|\bar L - 1|). \qquad (3)$$
Since the last quantity is free of T we need only show that
$$\sup\{E_0(|\bar L - 1|) : \|m\| \le \delta\} \to 0.$$
Since $E(\bar L) = 1$ the dominated convergence theorem shows that it suffices to
prove, for an arbitrary sequence of alternatives m with $\|m\| \le \delta$ and for a
suitably chosen sequence of measures F, that $\bar L \to 1$ in probability. We take
F to be Haar measure on the compact group of orthogonal transformations;
that is, we give P a uniform distribution.
For each fixed X, when P has the distribution for which F is Haar measure
on the orthogonal group, the vector PX has the uniform distribution on the
sphere of radius $\|X\|$. Using the fact that a standard multivariate normal
vector divided by its length is also uniform on a sphere we find that
$$\bar L = H(\|m\| \cdot \|X\|)/H(0), \qquad \text{where } H(t) = \int_0^\pi \exp(t\cos\theta)\,\sin^{n-2}\theta\,d\theta.$$
Standard asymptotic expansions of Bessel functions (see, e.g., Abramowitz
and Stegun, 1965, p. 376ff) then make it easy to show that $\bar L \to 1$ in probability, finishing the proof.•
A test invariant under the group of orthogonal transformations is a function of kXk2 and the analysis above can be made directly and easily using
the fact that this statistic has a chi-squared distribution with non-centrality
parameter kmk2 . Our interest centres on the technique of proof. Equations
(1-3) and the argument following (3) use only the group structure of the
problem and the absolute continuity of the alternative with respect the null.
The remainder of the argument depends on an asymptotic approximation to
the likelihood ratio averaged over alternatives. Whenever such an approximation is available we can expect to obtain efficiency results for the family of
invariant tests. In the next section we apply the technique to a more general
model by replacing the explicit Bessel function calculation with a version of
the permutation central limit theorem.
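As a purely numerical aside (a simulation sketch, not part of the original argument; all constants are illustrative), one can compare the Neyman-Pearson test with the best invariant test based on kXk2 for a fixed alternative norm δ as n grows; the invariant test's power collapses towards its level:

```python
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(0)
delta, alpha, reps = 2.0, 0.05, 2000

for n in (10, 100, 1000, 10000):
    m = np.zeros(n); m[0] = delta                      # any m with ||m|| = delta
    X = rng.standard_normal((reps, n)) + m             # samples under the alternative
    np_power = np.mean(X @ m / delta > norm.ppf(1 - alpha))           # Neyman-Pearson test
    inv_power = np.mean(np.sum(X * X, axis=1) > chi2.ppf(1 - alpha, df=n))  # invariant test
    print(n, round(np_power, 3), round(inv_power, 3))
```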
3
Exponential Families
Suppose now that X = (X1 , . . . , Xn )T with the Xi independent and Xi having the exponential family density exp(mi xi − β(mi )) relative to some fixed
measure. We assume the mi ’s take values in Θ an open subset of R. The
parameter space is then M = Θn . Let Θ0 be a fixed compact subset of Θ.
Consider the null hypothesis Ho : m1 = · · · = mn . This problem is invariant
under the subgroup of the orthogonal group consisting of all permutation
matrices P. Let $\bar m = \sum_i m_i/n$; we also use $\bar m$ to denote the vector of length
n all of whose entries are $\bar m$. The calculations leading to (1-3) establish that
for any test T in $\mathcal{T}$, the family of all tests invariant under permutations of
the entries of X, we have
$$|E_m(T) - E_{\bar m}(T)| \le E_{\bar m}(|\bar L - 1|),$$
where now
$$\bar L = \exp\Big\{-\sum_i(\beta(m_i) - \beta(\bar m))\Big\} \sum_P \exp\big((m - \bar m)^T P X\big)\big/n!$$
is the likelihood ratio averaged over all permutations of the alternative vector
m.
3.1
Heuristic Computations
Think now of X as fixed and P as a randomly chosen permutation matrix.
Then $(m - \bar m)^T P X = (m - \bar m)^T P(X - \bar X)$ has moment generating function
$$g(s) = \sum_P \exp\{s(m - \bar m)^T P(X - \bar X)\}/n!.$$
If $\max\{|m_i - \bar m| : 1 \le i \le n\} \to 0$ the permutation central limit theorem
suggests that $(m - \bar m)^T P X$ has approximately a normal distribution with
mean 0 and variance $\sum(m_i - \bar m)^2 \sum(X_i - \bar X)^2/n$. Thus heuristically
$$g(s) \approx \exp\Big\{s^2 \sum(m_i - \bar m)^2 \sum(X_i - \bar X)^2/(2n)\Big\}.$$
Under the same condition, $\max\{|m_i - \bar m| : 1 \le i \le n\} \to 0$, we may expand
$$\sum(\beta(m_i) - \beta(\bar m)) \approx \sum(m_i - \bar m)^2 \beta''(\bar m)/2.$$
We are led to the heuristic calculation
$$\bar L(X) \approx \exp\Big\{\sum(m_i - \bar m)^2(S^2 - \beta''(\bar m))/2\Big\}$$
where $S^2 = \sum(X_i - \bar X)^2/n$ is the sample variance. Since $\mathrm{Var}_{\bar m}(X) = \beta''(\bar m)$
we see that $\bar L$ should converge to 1 provided $\sum(m_i - \bar m)^2 = o(n^{1/2})$ or
$\|m - \bar m\| = o(n^{1/4})$. In the simple normal example of the previous section
we are actually able to prove (again using asymptotic expansions of Bessel
functions) this strengthened version of Theorem 1, namely, if $a = o(n^{1/4})$
then
$$\sup\{|E_m(T) - E_0(T)| : T \in \mathcal{T}, \|m\| \le a\} \to 0.$$
In the more general exponential family problem we do not know a permutation central limit theorem which extends in a useful way to give convergence
of moment generating functions. For contiguous alternative sequences we are
able to replace the moment generating function by a characteristic function
calculation. We can then prove that invariant tests have power converging
to their level uniformly on compact subsets of Θ.
Theorem 2 As n → ∞,
sup{|Em (T ) − Em (T )| : T ∈ T , km − mk ≤ δ, mi ∈ Θ0 } → 0.
3.2
Proof of Theorem 2
Our proof uses contiguity techniques to replace the moment generating function used in the heuristics above by a corresponding characteristic function
calculation. A standard compactness argument reduces the problem to showing that
Em (T ) − Em (T ) → 0
for an arbitrary sequence of alternatives m satisfying $\sum(m_i - \bar m)^2 \le \delta^2$,
mi ∈ Θ0 , and an arbitrary sequence T ∈ T . Our proof is now in three stages.
First we prove that any such sequence is contiguous to the null sequence
indexed by m. The second step is to eliminate the particular statistics, T , as
at (3), reducing the problem to showing that the permutation characteristic
function of the log-likelihood ratio is nearly non-random. The final step is
to establish the latter fact appealing to Theorem 3.
Step 1: Contiguity of m to $\bar m$.
The log-likelihood ratio of m to $\bar m$ is
$$\ell(X) = \sum(m_i - \bar m)(X_i - \beta'(\bar m)) - \sum(\beta(m_i) - \beta(\bar m)).$$
Under $\bar m$ the mean of $\ell(X)$ is
$$-\sum(\beta(m_i) - \beta(\bar m)) = -\sum(m_i - \bar m)^2\beta''(t_i)/2$$
for some $t_i$ between $m_i$ and $\bar m$. In view of the conditions on m and the
compactness of $\Theta_0$ this sequence of means is bounded. Also under $\bar m$ the
variance of $\ell(X)$ is
$$\sum(m_i - \bar m)^2\beta''(\bar m).$$
Again this is bounded. Thus the sequence of log-likelihood ratios is tight under the null sequence m and so the sequence of alternatives, m, is contiguous
to the null sequence m.
Lemma 1 Suppose Q is a sequence of measures contiguous to a sequence of
measures P . If T is a bounded sequence of statistics such that
EP (T exp(iτ log dQ/dP )) − EP (T )EP (exp(iτ log dQ/dP )) → 0
for each real τ then
EQ (T ) − EP (T ) → 0.
Remark: In the Lemma the random variable log dQ/dP can be replaced by
any random variable S such that log dQ/dP −S tends to 0 in probability
under P .
Remark: The Lemma is very closely connected with LeCam’s third lemma
(see Hájek and Šidák, 1967, page 209, their formula 4) which could also
be applied here to the Q characteristic function of T .
Before proving the lemma we finish the theorem.
Step 2: Elimination of the sequence T .
Let $H(t, X) = \exp(it\,\ell(X))$. According to the lemma we must show
$$E_{\bar m}(T(X)H(\tau, X)) - E_{\bar m}(T)\,E_{\bar m}(H(\tau, X)) \to 0. \qquad (4)$$
Arguing as in (1-3) the quantity in (4) may be seen to be
$$E_{\bar m}(T(X)\bar H(\tau, X)) - E_{\bar m}(T(X))\,E_{\bar m}(\bar H(\tau, X))$$
where $\bar H(\tau, X) = \sum_P H(\tau, PX)/n! = E(H(\tau, PX)\,|\,X)$. Since both T and
$\bar H(\tau, X)$ are bounded it suffices to prove that
$$\bar H(\tau, X) - E_{\bar m}(H(\tau, X)) \to 0 \qquad (5)$$
in probability.
Remark: If ℓ(X) = S(X)+oP (1) under m then in view of the remark following
the lemma the random variable ℓ(X) can be replaced in the definition of
H by the random variable S(X). Moreover, if S ∗ (X) = S(X) + a then
(5) will hold for S ∗ replacing ℓ if and only if it holds for S replacing ℓ
because the sequence exp(iτ a) is bounded in modulus.
Step 3: Application of the Permutation Central Limit Theorem
Using the last remark take $S^*(X) = (m - \bar m)^T P x$. The variable $\bar H$
then becomes $\sum_P \exp(i\tau(m - \bar m)^T P x)/n!$ whereas $E_{\bar m}(H(\tau, X))$ becomes
$E_{\bar m}(\exp(i\tau(m - \bar m)^T X))$.
Now let $\hat F$ be the empirical distribution of the $X_1, \ldots, X_n$. Let $X_1^*, \ldots, X_n^*$
be independent and identically distributed according to $\hat F$. We will show in
the next section that
$$\sum_P \exp(i\tau m^T P x)/n! - E_{\hat F}\exp\Big(i\tau\sum_j m_j X_j^*\Big) \to 0 \qquad (6)$$
and
$$E_{\hat F}\exp\Big(i\tau\sum_j m_j X_j^*\Big) - E_{\bar m}\exp\Big(i\tau\sum_j m_j X_j\Big) \to 0 \qquad (7)$$
in probability for each fixed τ . This will finish the proof of Theorem 2 except
for establishing the lemma.
To prove the (undoubtedly well-known) lemma argue as follows. Letting
ℓ = log dQ/dP , the condition shows that EP (T φ(ℓ)) − EP (T )EP (φ(ℓ)) → 0
for each bounded continuous function φ. There is then a sequence a tending
to infinity so slowly that EP (T f (ℓ)) − EP (T )EP (f (ℓ)) → 0 where f (x) =
min(ex , a). In view of contiguity EQ (T ) − EP (T eℓ ) → 0 and for any sequence
a tending to infinity |EP (T (eℓ − f (ℓ)))| ≤ EP (|eℓ − f (ℓ)|) → 0. The lemma
follows.•
3.3
The Permutation Limit Theorem
Suppose $m \in R^n$ and $x \in R^n$ are two (non-random) vectors with $\bar m = 0$ for
convenience. Suppose P is a random $n \times n$ permutation matrix. The random
variable $m^T P x$ has mean $n\bar m\bar x$ and variance $s^2 = \sum(m_i - \bar m)^2\sum(x_i - \bar x)^2/(n-1)$.
An alternative description of the random variable $m^T P x$ is as follows.
Let $J_1, \ldots, J_n$ be a simple random sample drawn without replacement from
the set $\{1, \ldots, n\}$. Then $J_1, \ldots, J_n$ is a random permutation of $\{1, \ldots, n\}$
and $m^T P x$ has the same distribution as $\sum m_i x(J_i)$; we now use functional
rather than subscript notation for legibility. Hájek (1961, see his formula
3.11) shows that it is possible to construct, on a single probability space,
$J_1, \ldots, J_n$ together with $J_1^*, \ldots, J_n^*$, a random sample drawn with replacement
from $\{1, \ldots, n\}$, in such a way that
$$E\Big[\Big(\sum m_i x(J_i) - \sum m_i x(J_i^*)\Big)^2\Big] \le 3s\max_i|x_i - \bar x|/(n-1)^{1/2}. \qquad (8)$$
Since the Xi have an exponential family distribution and Θ is compact it is
straightforward to check that
S max |Xi − X|/(n − 1)1/2 = OP (log n/n1/2 ) = oP (1)
under m where S denotes the sample standard deviation. In view of the
elementary inequality
|E(exp(itW )) − E(exp(itW ′ ))| ≤ t2 E((W − W ′ )2 ) + |t|E1/2 ((W − W ′ )2 ) (9)
this establishes (6).
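A small simulation sketch (ours, purely illustrative; not part of the argument) of the coupling behind (6): for a contrast vector m with $\bar m = 0$, the permutation (without-replacement) distribution of $\sum m_i x(J_i)$ and the bootstrap (with-replacement) distribution of $\sum m_i x(J_i^*)$ are nearly indistinguishable:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 5000
x = rng.exponential(size=n)                    # any fixed sample; exponential is illustrative
m = np.zeros(n); m[:10] = 1.0; m -= m.mean()   # a contrast: entries sum to zero, bounded norm

perm = np.array([m @ rng.permutation(x) for _ in range(reps)])                   # without replacement
boot = np.array([m @ rng.choice(x, size=n, replace=True) for _ in range(reps)])  # with replacement

print("permutation:", perm.mean().round(3), perm.std().round(3))
print("bootstrap:  ", boot.mean().round(3), boot.std().round(3))
```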
It seems easiest to deal with (7) after recalling some facts about weak
convergence in the presence of moment conditions. Consider the set, ∆,
of distribution functions F on $R^\pi$ which have finite variance. Throughout
this paper we write $F \Rightarrow_2 \Phi$ if $E_F(\gamma(X)) \to E_\Phi(\gamma(X))$ for each fixed continuous function
$\gamma$ such that $\gamma(x)/(1 + \|x\|^2)$ is bounded. (The notation is
$E_F(\gamma(X)) = \int\gamma(x)\,F(dx)$.) This notion of convergence defines a topology
on ∆ which can be metrized by a metric ρ2 in such a way that ∆ becomes a
complete separable metric space. In fact the metric ρ2 may be taken to be
the Wasserstein metric; see Shorack and Wellner (1986, pp 62-65).
Suppose that ∆0 is a subset of ∆. The following are equivalent:
i. ∆0 has compact closure.
ii. for each ǫ > 0 there is a fixed compact subset Ψ of Rπ such that
EF (kXk2 1(X 6∈ Ψ)) ≤ ǫ for all F ∈ ∆0 .
iii. there is a fixed function Ψ such that EF kXk2 1(kXk ≥ t) ≤ Ψ(t) for
all F ∈ ∆0 and all t where Ψ has the property that limt→∞ Ψ(t) = 0.
iv. the family ∆0 makes kXk2 uniformly integrable.
Notice that for each fixed Ψ such that $\lim_{t\to\infty}\Psi(t) = 0$ the family of distributions F with $E_F[\|X\|^2 1(\|X\| \ge t)] \le \Psi(t)$ for all t is a compact metric space.
Finally note that $F \Rightarrow_2 \Phi$ if and only if F converges in distribution to Φ and
EF (kXk2 ) → EΦ (kXk2 ). All the foregoing results are trivial modifications
of the usual results for convergence in distribution; see Billingsley (1968) pp
31-41. It will be useful to let ρ0 denote a metric analogous to ρ2 for which
the space of all distributions on Rπ becomes a complete separable metric
space with the topology of convergence in distribution. If ∆0 is a compact
subset of ∆ for the metric ρ2 then it is a compact subset of the space of all
distributions for the metric ρ0 .
To state the result let $\hat F$ be the empirical distribution of the numbers
$X_1, \ldots, X_n$. Let $X_1^*, \ldots, X_n^*$ be independent and identically distributed according to $\hat F$. Let $L_{\hat F}$ be the (conditional given $X_1, \ldots, X_n$) law of $\sum m_i X_i^*$
and $L_F$ be the law of $\sum m_i X_i$.
Theorem 3 Let $\Delta_0$ be a fixed compact subset of ∆. Suppose F is any sequence in $\Delta_0$ and that for each n the vector $X = (X_1, \ldots, X_n)^T$ has independent entries distributed according to F. Assume m is an arbitrary sequence
satisfying $\bar m = 0$ and $\sum m_i^2 \le \delta^2$. If $\hat F$ is the empirical distribution function
of $X_1, \ldots, X_n$ then $\rho_2(L_F, L_{\hat F}) \to 0$ in probability.
Corollary 2 Under the conditions of Theorem 3
$$E_{\hat F}\exp\Big(i\tau\sum m_j X_j\Big) - E_F\exp\Big(i\tau\sum m_j X_j\Big) \to 0$$
in probability for each fixed τ.
It is straightforward to rephrase a (weakened) version of Hájek’s permutation central limit theorem in the notation of Theorem 3. Let LP be the
law of mT Px.
Theorem 4 Suppose that $m \in R^n$ and $x \in R^n$ are two sequences of vectors
such that
$$n\bar m\bar x \to 0, \qquad (10)$$
$$\sum(m_i - \bar m)^2 \le \delta^2 \qquad (11)$$
for a fixed δ and such that x satisfies
$$\sum(x_i - \bar x)^2 1(|x_i - \bar x| \ge t)/n \le \Psi(t) \qquad (12)$$
where Ψ is a fixed function such that $\lim_{t\to\infty}\Psi(t) = 0$. If $\hat F$ is the empirical
distribution of the numbers $x_1, \ldots, x_n$ then $\rho_2(L_P, L_{\hat F}) \to 0$.
The proof of Theorem 3 is in the Appendix.
Remark: Theorem 3 asserts the validity of the bootstrap approximation to
LF . Theorem 4 says that the bootstrapping carried out in Theorem 3 by
sampling from the list X1 , . . . , Xn with replacement can also be carried
out without replacement. Notice that the result applies only to contrasts
in the Xi ; the condition m = 0 is crucial to this bootstrap interpretation
of Theorem 4.
4
Extensions
4.1
Non-exponential families
The exponential family model gives the log-likelihood ratio a rather special
form. For general models, however, a Taylor expansion can be used to show
that the log-likelihood ratio has almost that form. Rather than try to discover
the weakest possible conditions on a general model permitting the conclusion
we try to illustrate the idea with assumptions which are far from best possible.
Suppose X = (X1 , . . . , Xn )T with the individual Xi independent and Xi
having density exp(φ(·; mi )) where mi ∈ Θ an open subset of R. Again
consider the null hypothesis Ho : m1 = · · · = mm . Let Θ0 denote some
fixed compact subset of Θ. The log-likelihood ratio for m to m is ℓ(X) =
P
{φ(Xi ; mi ) − φ(Xi ; m)}. Assume that φ is twice differentiable with respect
to the parameter. Let φi denote the i-th derivative of φ with respect to the
parameter. Assume that the usual identities of large sample theory of likelihood hold, namely, that φ1 (X, m) has mean 0 and finite variance ι(m) under
m and that φ2 (X, m) has mean −ι(m). Then we may write
ℓ(X) =
(mi − m)φ1 (Xi , m) − (mi − m)2 ι(m)/2
X
+ (mi − m)2 {φ2 (Xi , m) + ι(m)}/2
X
+ (mi − m)2 {φ2 (Xi , ti ) − φ2 (Xi , m)}
X
X
(13)
where ti is between mi and m.
We see that, under the usual sort of regularity conditions, there will exist
a constant α such that
|Em (φ(Xi ; m∗ ) − φ(Xi ; m))| ≤ α(m∗ − m)2
(14)
Varm (φ(Xi ; m∗ ) − φ(Xi ; m)) ≤ α(m∗ − m)2
(15)
and such that
for all m and m∗ in Θ0 . Under these two conditions we see that |E(ℓ(X))| ≤
P
P
α (mi − m)2 and Varm (ℓ(X)) ≤ α (mi − m)2 . Thus any sequence m with
P
(mi − m)2 ≤ δ 2 and all mi ∈ Θ0 is contiguous to the null sequence m.
The first two steps in the proof of Theorem 3 may therefore be seen to apply to general one-parameter models under regularity conditions, specifically
whenever (14) and (15) hold.
Further regularity assumptions are necessary in order for Step 3 of Theorem 2 to go through in the present context. To get some insight consider
the situation where $\max\{|m_i - \bar m| : 1 \le i \le n\} \to 0$. Under further regularity
conditions we will have
$$\sum(m_i - \bar m)^2\{\phi_2(X_i; t_i) - \phi_2(X_i; \bar m)\} \to 0 \qquad (16)$$
and
$$\sum(m_i - \bar m)^2\{\phi_2(X_i; \bar m) + \iota(\bar m)\} \to 0 \qquad (17)$$
in probability. Assuming that (16) and (17) hold we see
$$\ell(X) = \sum(m_i - \bar m)\phi_1(X_i; \bar m) - \sum(m_i - \bar m)^2\iota(\bar m)/2 + o_P(1). \qquad (18)$$
Define $S^*(X) = \sum(m_i - \bar m)\phi_1(X_i; \bar m)$. Step 3 of the proof of Theorem 2
may now be carried through with $X_i$ replaced by $U_i = \phi_1(X_i; \bar m)$ provided
that the map from $\Theta_0$ to ∆ which associates m with the m distribution of
$\phi_1(X_i; m)$ is continuous.
The assumption that max{|mi − m|; 1 ≤ i ≤ n} → 0 can be avoided; we
now make this assertion precise. We will need three more assumptions:
sup{Em (|φ2 (X, m) + ι(m)|1(|φ2 (X, m) + ι(m)| ≥ t)); m ∈ Θ0 } → 0
(19)
as t → ∞. Define
$$W(X, \epsilon, m) = \sup\{|\phi_2(X, m') - \phi_2(X, m)| : |m' - m| \le \epsilon,\ m' \in \Theta_0\}.$$
The second assumption will then be
$$\lim_{\epsilon\to 0}\sup\{E_m(W(X, \epsilon, m)) : m \in \Theta_0\} = 0. \qquad (20)$$
Finally we will assume that the map
$$(m, \bar m) \mapsto \begin{cases} \mathcal{L}\big((\phi(X, m) - \phi(X, \bar m))/(m - \bar m)\,\big|\,\bar m\big) & m \ne \bar m \\ \mathcal{L}(\phi_1(X, m)) & m = \bar m \end{cases} \qquad (21)$$
is continuous from $\Theta_0 \times \Theta_0$ to ∆ where $\mathcal{L}(X|m)$ denotes the law of X when
m is true.
Theorem 5 Assume conditions (14, 15, 19, 20, 21). Then as n → ∞,
sup{|Em (T ) − Em (T )| : T ∈ T , km − mk ≤ δ, mi ∈ Θ0 } → 0.
Proof
Assume, without loss of generality that the entries in m have been sorted
so that |m1 − m| ≥ · · · ≥ |mn − m|. Let k = k(n) be any sequence tending to
infinity. Then $\sum(m_i - \bar m)^2 \le \delta^2$ implies that $\max\{|m_i - \bar m| : k \le i \le n\} \to 0$.
The assumptions now imply that
$$\ell(X) = \sum_{i\le k}\{\phi(X_i, m_i) - \phi(X_i, \bar m)\} + \sum_{i>k}(m_i - \bar m)\phi_1(X_i, m_i) - \sum_{i>k}(m_i - \bar m)^2\iota(\bar m)/2 + o_P(1).$$
Define
$$S^*(X) = \sum_{i\le k}\{\phi(X_i, m_i) - \phi(X_i, \bar m) - E_{\bar m}(\phi(X_i, m_i) - \phi(X_i, \bar m))\} + \sum_{i>k}(m_i - \bar m)\phi_1(X_i, m_i).$$
Let $\bar H(\tau, X) = \sum_P \exp(i\tau S^*(PX))/n!$. As before it suffices to prove that
$$\bar H(\tau, X) - E_{\bar m}(H(\tau, X)) \to 0 \qquad (22)$$
in probability. The proof may be found in the Appendix; its length is due to
our failure to impose the condition that $\max\{|m_i - \bar m| : 1 \le i \le n\} \to 0$.
4.2
The Neyman-Scott Problem
Consider now the Neyman-Scott many means problem in the following form.
Let {Xij ; 1 ≤ j ≤ ν, 1 ≤ i ≤ n} be independent normals with mean mi and
standard deviation σ. The usual Analysis of Variance F -test of the hypothesis
that m1 = · · · = mn is invariant under permutations of the indices i. Any
level α test of this null hypothesis for this model with σ unknown is a level
α test in any submodel with a known value of σ. When σ is known the
argument of section 3 can be applied to the vector X = (X 1· , . . . , X n· ) of
cell means to conclude that the ARE of ANOVA is 0 along any contiguous
sequence of alternatives.
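A simulation sketch of this conclusion (illustrative values of ν, δ and α, not taken from the paper): with the noncentrality held fixed, the power of the usual F-test drifts down towards its level as the number of cells n grows.

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(2)
nu, alpha, delta, reps = 5, 0.05, 2.0, 1000

for n in (20, 200, 2000):
    m = np.zeros(n); m[0] = delta                              # sum of (m_i - mean)^2 is roughly delta^2
    X = rng.standard_normal((reps, n, nu)) + m[None, :, None]  # nu observations per cell
    cell = X.mean(axis=2)
    grand = cell.mean(axis=1, keepdims=True)
    ss_between = nu * np.sum((cell - grand) ** 2, axis=1)
    ss_within = np.sum((X - cell[:, :, None]) ** 2, axis=(1, 2))
    F = (ss_between / (n - 1)) / (ss_within / (n * (nu - 1)))
    print(n, round(np.mean(F > f.ppf(1 - alpha, n - 1, n * (nu - 1))), 3))
```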
This Analysis of Variance problem may be extended to the following
multiparameter exponential family setting. Suppose that for i = 1, . . . , n
the $R^\pi$-valued random variable $X_i$ has density $\exp\{m_i^T x_i - \beta(m_i)\}$ relative
to some fixed measure on $R^\pi$. The natural parameter space for a single
observation $X_i$ is some $\Theta \subset R^\pi$. Let $a(m)$ be some parameter of interest
and consider the problem of testing $H_o : a(m_1) = \cdots = a(m_n)$. The problem
is again permutation invariant.
Let M be the $n \times \pi$ matrix with ith row $m_i$ and $\bar M$ be the $n \times \pi$ matrix
with ith row $\bar m$. Let X be the $n \times \pi$ matrix with ith row $X_i^T$. The log
likelihood ratio of M to $\bar M$ is
$$\ell(X) = \mathrm{tr}((M - \bar M)^T X) - \sum(\beta(m_i) - \beta(\bar m)).$$
Denote $\|M\|^2 = \mathrm{tr}(M^T M)$. Let $\Theta_0$ be some fixed compact subset of Θ. Let
T be the family of all permutation invariant test functions, T (X).
Theorem 6 As n → ∞,
sup{|EM (T ) − EM (T )| : T ∈ T , kM − M k ≤ δ, mi ∈ Θ0 } → 0.
The proof of this theorem is entirely analogous to that of Theorem 2
needing only a multivariate extension of Theorems 3 and 4. Suppose M and
x are sequences of n × π matrices. Let M1 , . . . , Mπ and x1 , . . . , xπ be the
columns of M and x respectively. Let P be a random n × n permutation
matrix and let F̂ be the empirical distribution (measure on Rπ ) of the n
rows of x. Let X be an n × π matrix whose rows are iid according to F̂ . Let
LP denote the joint law of (M1T Px1 , . . . , MπT Pxπ ). Let LF̂ denote the law of
MiT Xi where Xi is the ith column of X.
Theorem 7 Suppose that each Mi satisfies (11) and has Mi = 0. Suppose
that each xi satisfies (12). Then ρ2 (LP , LF̂ ) → 0.
The obvious analogue of Theorem 3 also holds.
Theorem 8 Let ∆0 be a fixed compact subset of ∆. Suppose F is any sequence in ∆0 and that for each n the n × π matrix X has independent rows
distributed according to F . Assume M is an arbitrary sequence of n × π
matrices whose columns Mi each satisfy (11) and have Mi = 0. If F̂ is the
empirical distribution function of the rows of X then ρ2 (LF , LF̂ ) → 0 in
probability.
It should be noted that the actual null hypothesis plays no role in these
theorems. If the theorems are to be used to deduce that any particular
sequence of permutation invariant tests has poor power properties it is necessary that m1 = . . . = mn imply the assertion that the null hypothesis is
true and that there be some alternative sequence satisfying the conditions of
the preceding theorems.
5
Spacings Statistics
Suppose U1 ≤ · · · ≤ Un are the order statistics for a sample of size n from
a distribution on the unit interval. To test the null hypothesis that this distribution is uniform many authors have suggested tests based on the sample
spacings $D_i = U_i - U_{i-1}$, where we take $U_0 = 0$ and $U_{n+1} = 1$. Examples
of statistics include Moran's statistic $\sum\log(D_i)$ and Greenwood's statistic
$\sum D_i^2$. See Guttorp and Lockhart (1989) and the references therein for a
detailed discussion. Notice that these statistics are invariant under permutations of the Di . Also note that the joint distribution of the Di is permutation
invariant.
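For concreteness, Greenwood's and Moran's statistics can be computed from the spacings as in the following sketch (illustrative code, not from the paper):

```python
import numpy as np

def spacings(u):
    """Spacings D_1, ..., D_{n+1} of a sample u from [0, 1], with U_0 = 0 and U_{n+1} = 1."""
    return np.diff(np.concatenate(([0.0], np.sort(u), [1.0])))

def greenwood(u):
    return np.sum(spacings(u) ** 2)       # sum of squared spacings

def moran(u):
    return np.sum(np.log(spacings(u)))    # sum of log spacings

u = np.random.default_rng(3).uniform(size=100)
print(greenwood(u), moran(u))
```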
Consider a sequence of alternative densities 1 + h(x)/n1/2 . Čibisov(1961)
showed (though his proof seems to rely on convergence of the permutation
moment generating function which does not seem to me to follow from the
form of the permutation central limit theorem which he cites) under differentiability conditions on h that the power of any spacings statistic invariant
under permutations of the Di is asymptotically equal to its level using essentially the method of proof used above. We can relax the conditions on h
somewhat to achieve the following.
Theorem 9 Let ∆0 be a compact subset of L2 , the Hilbert space of square
integrable functions on the unit interval. Let T be the family of permutation
invariant test functions T (D). As n → ∞,
$$\sup\{|E_h(T) - E_0(T)| : T \in \mathcal{T},\ h \in \Delta_0\} \to 0.$$
In Guttorp and Lockhart (1988) it is established that, under the conditions of the theorem, $\ell(D)$, the log-likelihood ratio, is equal to $\sum h_i(D_i - 1/(n+1)) - \int h^2(x)\,dx/2 + O_P(1)$ for any sequence of alternatives h converging in $L^2$, where the $h_i$ are suitable constants derived from h. Using the
remark following Lemma 1, the fact that the joint distribution of the Di is
permutation invariant under the null hypothesis, and the characterization of
the spacings as n + 1 independent exponentials divided by their total, the
theorem may be proved by following the argument leading to Theorem 2.
6
Discussion
6.1
Relevance of Contiguity
All the theorems establish that permutation invariant tests are much less
powerful than the Neyman-Pearson likelihood ratio test for alternatives which
are sufficiently different from the null that the Neyman-Pearson test has nontrivial power. Thus if, in practice, it is suspected which parameters mi are
the ones most likely to be different from all the others there will be scope for
much more sensitive tests than the invariant tests.
On the other hand, some readers will argue that the analysis of variance
is often used in situations where no such prior information is available. Such
readers, I suspect, will be inclined to argue that this sort of contiguity calculation is irrelevant to practical people. Some readers may feel that a user
would need unreasonable amounts of prior knowledge to derive a better test
than the F -test. Consider the situation of the normal example in the first
section. Suppose that the mi can be sorted so that adjacent entries are rather
similar. Define h(s) = m[ns] . If the sequence of functions are reasonably close
to some square integrable limit η then, without knowing η, we can construct
a test whose power stays larger than its level if the Neyman-Pearson test has
the same property. Specifically consider the exponential family example. Let
γi ; i = 1, 2, . . . be an orthogonal basis of L2 [0, 1] with each γi continuous and
let $\lambda_i$ be any sequence of summable positive constants. Define a test statistic
of the form $T(X) = \sum_i \lambda_i\big(\sum_j \gamma_i(j/n)X_j\big)/\big(\sum_j \gamma_i^2(j/n)\big)^{1/2}$. If the sequence h
converges to some η ≠ 0 in $L^2[0, 1]$ then the asymptotic power of T will be
larger than its level. The test is the analogue of the usual sort of quadratic
goodness-of-fit test of the Cramer-von Mises type.
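A sketch of such a statistic (ours, purely illustrative): we assume each standardized component is squared, as a quadratic Cramér-von Mises-type statistic requires, and use a cosine basis with summable weights; none of these specific choices comes from the paper.

```python
import numpy as np

def quadratic_stat(X, n_terms=20):
    """Quadratic goodness-of-fit type statistic built from standardized basis components."""
    n = len(X)
    t = np.arange(1, n + 1) / n
    stat = 0.0
    for i in range(1, n_terms + 1):
        gamma = np.sqrt(2.0) * np.cos(np.pi * i * t)   # orthonormal cosine basis on [0, 1]
        z = gamma @ X / np.sqrt(np.sum(gamma ** 2))    # standardized linear component
        stat += z ** 2 / (np.pi * i) ** 2              # summable weights lambda_i
    return stat
```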
It is worth noting that the calculations compute the power function by
averaging over alternative vectors which are a permutation of a basic vector
m. Another approach to problems with large numbers of different populations (labelled here by the index i) is to model the mi themselves as an iid
sequence chosen from some measure G. In this case the null hypothesis is
that G is point mass at some unknown value m. I note that alternative
measures G which make the resulting model contiguous to the null make
VarG (mi ) = O(n−1/2 ) which means that a typical mi deviates from m by
the n−1/4 discrepancy which arises in our first example and in analysis of
spacings tests. In other words when any ordering of the mi is equally likely
vectors m differing from m by the amount we have used here are indistinguishable from the null hypothesis according to this empirical Bayes model.
It is important to note, however, that for this empirical Bayes model the
hypothesis of permutation invariance of the statistic T is unimportant: if
VarG (mi ) = o(n−1/2 ) then every test statistic, permutation invariant or not,
has power approaching its level.
The proofs hinge rather critically on the exact invariance properties of the
statistics considered. In the Neyman-Scott problem for instance if a single
sample size were to differ from all the others the whole argument would
come apart. As long as the sample sizes are bounded the ARE of ANOVA
is 0 nevertheless, as may be seen by direct calculation with the alternative
non-central F -distribution. In the spacings problem of section 5 the sample
2-spacings defined by Di = Ui+2 −Ui still provide tests with non-trivial power
only at alternatives at the n−1/4 distance; the joint distribution of these 2spacings is not permutation invariant and our ideas do not help. Our ideas
do apply, however, to the non-overlapping statistics of Del Pino(1971).
The definition of ARE offered here may well be challenged since the comparison is relative to the Neyman-Pearson test which would not be used for
a composite null versus composite alternative situation. Nevertheless there
seems to us to be a sharp distinction between procedures for which our definition yields an ARE of 0 and the quadratic tests mentioned above whose
ARE is then positive.
6.2
Open Problems and Conjectures
The results presented here lead to some open problems and obvious areas for
further work. Hájek’s proof of the permutation central limit theorem guarantees convergence of the characteristic function and moments up to order 2
of the variables mT Px. Our heuristic calculations suggest that a good deal
more information could be extracted if the characteristic function could be
replaced by the moment generating function and if convergence of the moment generating function could be established not only for fixed arguments
but for arguments growing at rates slower than n1/2 . Such an extension would
eliminate, as in the normal example, the need for considering only contiguous alternatives. Large sample theory for spacings statistics suggests that
the results presented here hold out to alternatives at such distances. If, in
addition, approximations were available to the moment generating function
for arguments actually growing at the rate n1/2 the technique might extend
to providing power calculations for alternatives so distant that permutation
invariant tests have non-trivial limiting power. Another possible extension
would use Edgeworth type expansions in the permutation central limit theorem to get approximations for the difference between the power and the level
in the situation, covered by our theorems, where this difference tends to 0.
Consider the exponential family model of section 3 for the special case of
the normal distribution. The problem of testing m1 = · · · = mn is invariant under the permutation group and under the sub-group of the orthogonal
group composed of all orthogonal transformations fixing the vector 1 all of
whose entries are 1. It is instructive to compare our results for the two different groups. The example illustrates the trade-off. For statistics invariant
under larger groups it may be easier to prove the required convergence of
the average likelihood ratio; the easier proof is balanced against applying the
conclusion to a smaller family of statistics.
For statistics invariant under the larger group of orthogonal transformations fixing 1 we can modify the argument of section 2 and extend the
conclusion described in the heuristic problem of section three to relatively
large values for km − mk. Rather than describe the details we follow a suggestion made to us by Peter Hooper. Suppose Y = Xb + se where e has a
multivariate normal distribution with mean vector m and variance covariance
matrix the identity and where s is an unknown constant, X is an n×p matrix
of regression covariate values of rank p and b is an unknown p dimensional
vector. Let H = X(X T X)−1 X T be the hat matrix. Suppose we wish to test
the hypothesis (I − H)m = 0. (If X = 1 this is equivalent to the problem
mentioned above of testing m1 = · · · = mn . The problem is identifiable only
if Hm = 0 or equivalently if X T m = 0.) The problem is invariant under the
group OX of orthogonal matrices P for which PX = X.
Suppose T (Y ) is a family of statistics such that T (PY ) = T (Y ) for any
P in OX . Consider the likelihood ratio of m to Hm (the latter is a point in
the null). Following equations (1-3) we are led to study
$$\bar L(Y) = \int L(PY)\,F(dP),$$
where now F is Haar measure on $O_X$ and $L(Y) = \exp(m^T(I - H)(Y - Xb) - \|(I - H)m\|^2/2)$. Since $PX = X$ we see that $L(PY) = \exp(m^T(I - H)P(Y - Xb) - \|(I - H)m\|^2/2)$. If P is distributed according to F and
Z is standard multivariate normal then $P^T(I - H)m/\|(I - H)m\|$ and $(I - H)Z/\|(I - H)Z\|$ have the same distribution. This fact and expansions of
Bessel functions show that $\bar L \to 1$ in probability provided $\|(I - H)m\| = o((n - p)^{1/4})$.
The family of statistics invariant under the group of permutations of the
entries of Y will be different than the family invariant under OX . When
X is simply a vector which is a non-zero multiple of 1 the family of statistics invariant under the permutation group is much larger than the family invariant under O1 . For this case we are led to study the variable
$\bar L = \sum_P \exp((m - \bar m)^T P e - \|m - \bar m\|^2/2)/n!$. We find that $E_{\bar m}(\bar L) = 1$
and $\mathrm{Var}_{\bar m}(\bar L) = \sum_P \exp((m - \bar m)^T P(m - \bar m))/n! - 1$. Just when this variance
goes to 0 depends on extending the permutation central limit theorem to
give convergence of moment generating functions. Since the random variable
(m − m)T P(m − m) has mean 0 and variance km − mk4 /n we are again led
to the heuristic rate km − mk = o(n1/4 ). However, by taking m to have
exactly one non-zero entry it is not too hard to check that this heuristic calculation cannot be made rigorous without further conditions on m to control
the largest entries.
Finally, if X is not a scale multiple of 1 the problem is not invariant under
the permutation group. Is there some natural extension of our techniques to
this context for a group smaller than OX ?
Appendix
Proof of Theorem 3
We prove below (cf Shorack and Wellner, p 63 their formula 5, except that
there the distribution F does not depend on n) that
ρ2 (F, F̂ ) → 0
(23)
in probability. If Theorem 3 were false then from any counterexample sequence we could extract a subsequence which is a counterexample and along
which the convergence in (23) is almost sure. The theorem then follows from
the assertion that
ρ2 (F, G) → 0
implies
ρ2 (LF , LG ) → 0
(24)
whenever F is any sequence of distributions with compact closure in ∆.
Assertion (24) is a consequence of Lemma 1 of Guttorp and Lockhart (1988).
To prove (23) we may assume without loss, in view of the compactness of
$\Delta_0$, that $F \Rightarrow_2 \Phi$ for some Φ. Elementary moment calculations assure that $\hat F(\tau)$
converges in probability to $\Phi(\tau)$ for each τ which is a continuity point of Φ.
This guarantees that $\rho_0(\hat F, \Phi) \to 0$ in probability. We need only show that
$E_{\hat F}(X^2) \to E_\Phi(X^2)$. But $E_{\hat F}(X^2) = \sum X_i^2/n$. The triangular array version of
the law of large numbers given in Lemma 2 of Guttorp and Lockhart (1988)
shows that $\sum X_i^2/n - E_F(X^2) \to 0$ in probability. Since $F \Rightarrow_2 \Phi$ implies that
$E_F(X^2) \to E_\Phi(X^2)$ we are done.
Proof of Theorem 5
It remains to choose a sequence k = k(n) in such a way that we can
check (22). In view of permutation invariance we may assume without
loss that |m1 − m| ≥ · · · ≥ |mn − m|. Define matrices C and D by
setting C(i, j) = φ(Xj , mi ) − φ(Xj , m) − Em (φ(Xj , mi ) − φ(Xj , m)) and
D(i, j) = (mi − m)φ1 (Xj ; m). We will eventually choose a sequence k and
put B(i, j) = C(i, j) for i ≤ k and B(i, j) = D(i, j) for i > k. Note that Σᵢ B(i, i) is simply S*(X).
If P is a random permutation matrix then in row i there is precisely 1 nonzero entry; let Jᵢ be the column where this entry occurs. Then S*(PX) = Σᵢ B(i, Jᵢ). The variables J₁, . . . , Jₙ are a random permutation of the set
{1, . . . , n}. As in the proof of Theorem 2 let J1∗ , . . . , Jn∗ be a set of independent
random variables uniformly distributed on {1, . . . , n}. We will show that for
each fixed κ
ρ₂(L(Σ_{i≤κ} C(i, Jᵢ*) | X), L(Σ_{i≤κ} C(i, i))) → 0    (25)
and
ρ₂(L(Σ_{i≤κ} C(i, Jᵢ*) | X), L(Σ_{i≤κ} C(i, Jᵢ) | X)) → 0    (26)
in probability. We will also show that for any sequence k tending to ∞ with
k² = o(n) we have
ρ₂(L(Σ_{i>k} (mᵢ − m̄)φ₁(X(Jᵢ*), m) | X), L(Σ_{i>k} D(i, i))) → 0    (27)
in probability. There is then a single sequence k tending to infinity so slowly
that (25) and (26) hold with κ replaced by k and so that (27) holds. We use
this sequence k to define B.
We will then show that
ρ₂(L(Σᵢ B(i, Jᵢ*) | X), L(Σᵢ B(i, i))) → 0    (28)
in probability and that
ρ₂(L(Σᵢ B(i, Jᵢ) | X), L(Σᵢ B(i, Jᵢ*) | X)) → 0    (29)
in probability. These two are enough to imply (22) as in Corollary 2 and the
obvious (but unstated) corresponding corollary to Theorem 4.
Proof of (25)
For each fixed i we may apply (23) with the vector (X1 , . . . , Xn ) replaced
by (C(i, 1), . . . , C(i, n)) to conclude that
ρ2 (L(C(i, Ji∗ )|X), L(C(i, i))) → 0
in probability; the condition imposed on F leading to (23) is implied by (21).
Use the independence properties to conclude
ρ2 (L(C(1, J1∗), . . . , C(κ, Jκ∗ )|X), L(C(1, 1), . . . , C(κ, κ))) → 0
for each fixed κ. Assertion (25) follows.
Proof of (26)
For each fixed i we have L(C(i, Jᵢ*)|X) = L(C(i, Jᵢ)|X). Furthermore it is possible to construct J and J* in such a way that for each fixed κ we have P(Jᵢ = Jᵢ*; 1 ≤ i ≤ κ) → 1. (For instance, draw J₁*, . . . , J_κ* first; on the event that they are distinct, which has probability tending to 1 for fixed κ, take Jᵢ = Jᵢ* for i ≤ κ and extend to a uniformly distributed permutation; otherwise draw the permutation independently.) This establishes (26).
Proof of (27)
Let m̄₋ₖ = Σ_{i>k} mᵢ/(n − k) and let U be the vector with ith entry φ₁(Xᵢ, m). Arguing as in the proof of Theorem 3 (see (24) above) we see that
ρ₂(L(Σ_{i>k} (mᵢ − m̄₋ₖ)U(Jᵢ*) | X), L(Σ_{i>k} (mᵢ − m̄₋ₖ)Uᵢ)) → 0    (30)
in probability. We need to replace m̄₋ₖ by m̄ in order to verify (27). Elementary algebra shows m̄₋ₖ − m̄ = O(k/(n − k)). Temporarily let
T₁ = Σ_{i>k} (mᵢ − m̄₋ₖ)U(Jᵢ*) − Σ_{i>k} (mᵢ − m̄)U(Jᵢ*),
and
T₂ = Σ_{i>k} (mᵢ − m̄₋ₖ)Uᵢ − Σ_{i>k} (mᵢ − m̄)Uᵢ.
In view of (21) we see that Var(U₁) = O(1). Hence
Var(T₂) = (m̄₋ₖ − m̄)²(n − k)Var(U₁) → 0.    (31)
Since the central limit theorem shows that the sequence L(Σ_{i>k} D(i, i)) has compact closure in ∆ we may use (31) to show that
ρ₂(L(Σ_{i>k} (mᵢ − m̄₋ₖ)Uᵢ), L(Σ_{i>k} D(i, i))) → 0.    (32)
Next
Var(T₁|X) = (m̄₋ₖ − m̄)²(n − k)Var(U(J₁*)|X).    (33)
Since
Var(U(J₁*)|X) = Σᵢ Uᵢ²/n − (Σᵢ Uᵢ/n)²,
we may apply the triangular array law of large numbers given in Guttorp and Lockhart (1988, Lemma 2) to conclude that Var(U(J₁*)|X) = O_P(1). Since k² = o(n) we see that the right hand side of (33) tends to 0 in probability. Hence
ρ₂(L(Σ_{i>k} (mᵢ − m̄₋ₖ)U(Jᵢ*) | X), L(Σ_{i>k} (mᵢ − m̄)U(Jᵢ*) | X)) → 0    (34)
in probability. Assembling (30), (32) and (34) we have established (27).
Proof of (28)
Given X, the variables Σ_{i≤k} C(i, Jᵢ*) and Σ_{i>k} (mᵢ − m̄)φ₁(X(Jᵢ*), m) are independent. Similarly Σ_{i≤k} C(i, i) and Σ_{i>k} D(i, i) are independent. Statement (28) then follows from (25) and (27).
Proof of (29)
To deal with (29) we must cope with the lack of independence among the Jᵢ. A random permutation of {1, . . . , n} can be generated as follows. Pick J₁, . . . , Jₖ, a simple random sample of size k from {1, . . . , n}. Let P₋ₖ, independent of J₁, . . . , Jₖ, be a random permutation of {1, . . . , n − k}. Then Σᵢ B(i, Jᵢ) has the same distribution (given X) as
Σ_{i=1}^{k} C(i, Jᵢ) + (m₋ₖ − m̄₋ₖ)ᵀ P₋ₖ U₋ₖ,
where the subscript −k on m denotes deletion of m₁, . . . , mₖ while that on U denotes deletion of the entries U(J₁), . . . , U(Jₖ).
Let Z₀ denote a random variable, independent of J and J*, whose conditional distribution given X is normal with mean 0 and variance Σ_{i>k} (mᵢ − m̄)² S, where S is the sample variance of the Uⱼ's. Statement (29) is a consequence of the following 3 assertions:
ρ₂(L(Σᵢ B(i, Jᵢ*) | X), L(Σ_{i≤k} C(i, Jᵢ*) + Z₀ | X)) → 0    (35)
in probability,
ρ₂(L(Σ_{i≤k} C(i, Jᵢ) + Z₀ | X), L(Σ_{i≤k} C(i, Jᵢ*) + Z₀ | X)) → 0    (36)
in probability, and
ρ₂(L(Σ_{i≤k} C(i, Jᵢ) + Z₀ | X), L(Σᵢ B(i, Jᵢ) | X)) → 0    (37)
in probability.
Condition (36) follows from (26), the conditional independence of Z0 and
J, J ∗ and the fact that the conditional variance of Z0 is bounded. Condition
(35) is implicit in the proof of (28) after noting that the variance S of the
entries in U is negligibly different from ι(m).
It remains to establish (37). We will condition on (J1 , . . . , Jk ) as well as
X and apply the Permutation Central Limit Theorem. The application of
the conditions of that theorem is a bit delicate since the conditions will only
be shown to hold in probability. We present the argument in the form of a
technical lemma.
Lemma 2 Suppose (W1 , W2 , W3 ) is a sequence of random variables. Suppose
g and h are sequences of measurable functions defined on the range spaces of
(W1 , W2 ) and (W1 , W2 , W3 ). Let ζ1 , ζ2 be two independent real valued random
variables. Suppose that there are functions fi for i = 0, 1, . . . (also indexed
as usual by the hidden index n) such that
f₀(w₁) → 0   implies   g(w₁, W₂) ⇒² ζ₁    (38)
and
fᵢ(w₁, w₂) → 0, i = 1, 2, . . .   implies   h(w₁, w₂, W₃) ⇒² ζ₂.    (39)
If f₀(W₁) → 0 in probability and fᵢ(W₁, W₂) → 0 in probability then
L(g(W₁, W₂) + h(W₁, W₂, W₃) | W₁) ⇒ ζ₁ + ζ₂
in probability. If in addition
E(g(W₁, W₂) + h(W₁, W₂, W₃) | W₁) → E(ζ₁ + ζ₂)    (40)
in probability,
Var(g(W₁, W₂) | W₁) → Var(ζ₁)    (41)
in probability and
Var(h(W₁, W₂, W₃) | W₁) → Var(ζ₂)    (42)
in probability then
L(g(W₁, W₂) + h(W₁, W₂, W₃) | W₁) ⇒² ζ₁ + ζ₂
in probability.
The lemma is to be applied with W₁ = X, with W₂ = (J₁, . . . , Jₖ) and with W₃ = P₋ₖ. Conclusion (37) can be reduced to the form given by a compactness argument. The sequence of laws of Σ_{i≤k} C(i, i) has compact closure in ∆ in view of (21). Then apply (25) and (26) to conclude that the sequence of laws of Σ_{i≤k} C(i, Jᵢ) also has compact closure. Let the distribution of ζ₁ be any limit point of this sequence of laws. The random variable ζ₂ will be the normal limit in distribution of (m₋ₖ − m̄)ᵀ P₋ₖ U₋ₖ.
In order to apply the lemma we must give the conditions of the Permutation Central Limit Theorem in a form in which we have only a countable
family of convergences, as required in (39), to check. Note that (12) in Theorem 4 can be replaced by the assertion that there is a sequence τ1 , . . . of real
numbers increasing to ∞ such that
max(n⁻¹ Σᵢ (xᵢ − x̄)² 1(|xᵢ − x̄| > τⱼ), Ψ(τⱼ)) − Ψ(τⱼ) → 0    (43)
for each j ≥ 1.
We now apply Theorem 4 with x replaced by U₋ₖ, with m replaced by m₋ₖ − m̄ and with P replaced by P₋ₖ. It is easy to check by conditioning on J that E(Ū₋ₖ) = 0 and Var(Ū₋ₖ) = ι(m)/(n − k). Hence (n − k)(m̄₋ₖ − m̄)Ū₋ₖ → 0 in probability. Set
T₃ = Σ (U₋ₖ,ᵢ − Ū₋ₖ)² 1(|U₋ₖ,ᵢ − Ū₋ₖ| > t)/(n − k).
Then
T₃ ≤ 2 Σ U₋ₖ,ᵢ² 1(|U₋ₖ,ᵢ − Ū₋ₖ| > t)/(n − k) + 2Ū₋ₖ² Σ 1(|U₋ₖ,ᵢ − Ū₋ₖ| > t)/(n − k).
The second term on the right is O_P(1/(n − k)) = o_P(1). Since
1(|U₋ₖ,ᵢ − Ū₋ₖ| > t) ≤ 1(|U₋ₖ,ᵢ| > t/2) + 1(|Ū₋ₖ| > t/2),
we see that
T₃ ≤ 2(n − k)⁻¹ Σᵢ Uᵢ² 1(|Uᵢ| > t/2) + o_P(1).
Take Ψ(t) = 2 sup{E_m(φ₁(X₁, m)² 1(|φ₁(X₁, m)| > t/2)); m ∈ Θ₀} and apply Lemma 2 of Guttorp and Lockhart (1988) together with (21) to check that (43) holds. To finish the proof of (37) we need to check convergence of second moments as in (40), (41) and (42). This can be done using (25), (26) and direct calculation of the conditional mean and variance given X of Σ_{i>k} D(i, Jᵢ). Theorem 5 follows.
The technical lemma itself may be proved as follows. From any counterexample sequence we may extract by a diagonalization argument a subsequence
which is still a counterexample and for which f0 (W1 ) → 0 almost surely and
fj (W1 , W2 ) → 0 almost surely for each j ≥ 1. For any sample sequence for
which all these convergences occur we have L(h(W1 , W2 , W3 )|W1 , W2 ) ⇒ ζ2
and L(g(W1, W2 )|W1 ) ⇒ ζ1 . Evaluation of the conditional characteristic
function of g(W1 , W2 ) + h(W1 , W2 , W3 ) given W1 by further conditioning on
W2 yields the convergence in distribution asserted in the lemma. The remaining conclusions concerning moments are more elementary analogues of
the same idea.
References
Abramowitz, M. and Stegun, I. A. (1965). Handbook of Mathematical Functions. New York: Dover.
Billingsley, Patrick (1968). Convergence of Probability Measures. New York:
Wiley.
Čibisov, D.M. (1961). On the tests of fit based on sample spacings. Teor.
Verojatnos. i Primenen. 6, 354–8.
Del Pino, G.E. (1979). On the asymptotic distribution of k-spacings with
applications to goodness-of-fit tests. Ann. Statist., 7, 1058-1065.
Guttorp, P. and Lockhart, R. A. (1988). On the asymptotic distribution of
quadratic forms in uniform order statistics. Ann. Statist., 16, 433-449.
Hájek, J. (1961). Some extensions of the Wald-Wolfowitz-Noether Theorem.
Ann. Math. Statist., 32, 506-523.
Hájek, J. and Šidák, Z. (1967). Theory of Rank Tests. Academic Press: New
York.
Shorack, G. R. and Wellner, J. A. (1986). Empirical Processes with Applications to Statistics. New York: Wiley.
| 10 |
Scavenger 0.1:
A Theorem Prover Based on Conflict Resolution
Daniyar Itegulov¹, John Slaney², and Bruno Woltzenlogel Paleo² ?
arXiv:1704.03275v2 [cs.LO] 31 Oct 2017
¹ ITMO University, St. Petersburg, Russia
[email protected]
² Australian National University, Canberra, Australia
[email protected]
[email protected]
Abstract. This paper introduces Scavenger, the first theorem prover for
pure first-order logic without equality based on the new conflict resolution
calculus. Conflict resolution has a restricted resolution inference rule that
resembles (a first-order generalization of) unit propagation as well as a
rule for assuming decision literals and a rule for deriving new clauses by
(a first-order generalization of) conflict-driven clause learning.
1 Introduction
The outstanding efficiency of current propositional Sat-solvers naturally raises
the question of whether it would be possible to employ similar ideas for automating first-order logical reasoning. The recent Conflict Resolution calculus1
(CR) [25] can be regarded as a crucial initial step to answer this question. From
a proof-theoretical perspective, CR generalizes (to first-order logic) the two
main mechanisms on which modern Sat-solvers are based: unit propagation and
conflict-driven clause learning. The calculus is sound and refutationally complete,
and CR derivations are isomorphic to implication graphs.
This paper goes one step further by defining proof search algorithms for CR.
Familiarity with the propositional CDCL procedure [18] is assumed, even though
it is briefly sketched in Section 2. The main challenge in lifting this procedure to
first-order logic is that, unlike in propositional logic, first-order unit propagation
does not always terminate and true clauses do not necessarily have uniformly
true literals (cf. Section 4). Our solutions to these challenges are discussed in
Section 5 and Section 6, and experimental results are presented in Section 7.
Related Work: CR’s unit-propagating resolution rule can be traced back to
unit-resulting resolution [20]. Other attempts to lift DPLL [13, 19] or CDCL [18]
to first-order logic include Model Evolution [2, 5, 3, 4], Geometric Resolution [24],
Non-Redundant Clause Learning [1] and the Semantically-Guided Goal Sensitive
procedure [6–9]. A brief summary of these approaches and a comparison with
? Author order is alphabetical by surname.
¹ Not to be confused with the homonymous calculus for linear rational inequalities [17].
CR can be found in [25]. Furthermore, many architectures [12, 15, 16, 29, 11]
for first-order and higher-order theorem proving use a Sat-solver as a black
box for propositional reasoning, without attempting to lift it; and Semantic
Resolution [26, 14] is yet another related approach that uses externally built
first-order models to guide resolution.
2 Propositional CDCL
During search in the propositional case, a Sat-solver keeps a model (a.k.a. trail)
consisting of a (conjunctive) list of decision literals and propagated literals.
Literals of unit clauses are automatically added to the trail, and whenever a
clause has only one literal that is not falsified by the current model, this literal is
added to the model (thereby satisfying that clause). This process is known as
unit-propagation. If unit propagation reaches a conflict (i.e. a situation where the
dual of a literal already contained in the model would have to be added to it),
the Sat-solver backtracks, removing from the model decision literals responsible
for the conflict (as well as propagated literals entailed by the removed decision
literals) and deriving, or learning, a conflict-driven clause consisting2 of duals of
the decision literals responsible for the conflict (or the empty clause, if there were
no decision literals). If unit propagation terminates without reaching a conflict
and all clauses are satisfied by the model, then the input clause set is satisfiable.
If some clauses are still not satisfied, the Sat-solver chooses and assigns another
decision literal, adding it to the trail, and satisfying the clauses that contain it.
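To fix ideas, the following is a minimal, self-contained Scala sketch of the propagate-until-conflict core of this loop, over propositional clauses encoded as sets of integer literals (a positive integer for an atom, its negation for the complementary literal). It is an illustration written for this description only; it does not reflect Scavenger's data structures or the optimizations of real Sat-solvers (watched literals, 1UIP, etc.).

  object UnitPropagation {
    type Literal = Int           // positive Int = atom, negative Int = its dual
    type Clause  = Set[Literal]

    // Repeatedly add to the model the single unassigned literal of any clause whose
    // remaining literals are all false. Returns Right(model) when propagation stops
    // without a conflict, or Left(falsifiedClause) when a conflict is reached.
    def propagate(clauses: List[Clause], model: Set[Literal]): Either[Clause, Set[Literal]] =
      clauses.find(c => c.forall(l => model.contains(-l))) match {
        case Some(conflict) => Left(conflict)
        case None =>
          val unit = clauses.find { c =>
            !c.exists(model.contains) &&
            c.count(l => !model.contains(l) && !model.contains(-l)) == 1
          }
          unit match {
            case Some(c) =>
              val l = c.find(l => !model.contains(l) && !model.contains(-l)).get
              propagate(clauses, model + l)
            case None => Right(model)
          }
      }
  }

For example, propagate(List(Set(1), Set(-1, 2), Set(-2, 3)), Set.empty) returns Right(Set(1, 2, 3)); a full CDCL loop wraps such a routine with decisions, conflict analysis and backtracking as described above.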
3 Conflict Resolution
The inference rules of the conflict resolution calculus CR are shown in Figure 1.
The unit propagating resolution rule is a chain of restricted resolutions with unit
clauses as left premises and a unit clause as final conclusion. Decision literals are
denoted by square brackets, and the conflict-driven clause learning rule infers a
new clause consisting of negations of instances of decision literals used to reach
a conflict (a.k.a. the empty clause ⊥). A clause learning inference is said to
discharge the decision literals that it uses. As in the resolution calculus, CR
derivations are directed acyclic graphs that are not necessarily tree-like. A CR
refutation is a CR derivation of ⊥ with no undischarged decision literals.
From a natural deduction point of view, a unit propagating resolution rule can
be regarded as a chain of implication eliminations taking unification into account,
whereas decision literals and conflict driven clause learning are reminiscent of,
respectively, assumptions and chains of negation introductions, also generalized
to first-order through unification. Therefore, CR can be considered a first-order
hybrid of resolution and natural deduction.
² In practice, optimizations (e.g. 1UIP) are used, and more sophisticated clauses, which
are not just disjunctions of duals of the decision literals involved in the conflict, can
be derived. But these optimizations are inessential to the focus of this paper.
Unit-Propagating Resolution:
    ℓ₁   . . .   ℓₙ     ℓ'₁ ∨ . . . ∨ ℓ'ₙ ∨ ℓ
    ------------------------------------------ u(σ)
                        ℓσ
where σ is a unifier of ℓₖ and ℓ'ₖ, for all k ∈ {1, . . . , n}.

Conflict:
    ℓ     ℓ'
    --------- c(σ)
        ⊥
where σ is a unifier of ℓ and ℓ'.

Conflict-Driven Clause Learning:
    [ℓ₁]¹   . . .   [ℓₙ]ⁿ
      ⋮ (σ¹₁, . . . , σ¹_{m₁})   . . .   ⋮ (σⁿ₁, . . . , σⁿ_{mₙ})
                        ⊥
    ------------------------------------------------------------ clⁱ
    (ℓ̄₁σ¹₁ ∨ . . . ∨ ℓ̄₁σ¹_{m₁}) ∨ . . . ∨ (ℓ̄ₙσⁿ₁ ∨ . . . ∨ ℓ̄ₙσⁿ_{mₙ})
where σᵏⱼ (for 1 ≤ k ≤ n and 1 ≤ j ≤ mₖ) is the composition of all substitutions used on the j-th path (*) from ℓₖ to ⊥.
(*) Since a proof DAG is not necessarily tree-like, there may be more than one path connecting ℓₖ to ⊥ in the DAG-like proof.
Fig. 1: The Conflict Resolution Calculus CR
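As a small worked instance of these rules (an illustrative example of ours, not one used elsewhere in this paper): from the unit clauses p(a) and q(b) and the clause ¬p(X) ∨ ¬q(Y) ∨ r(X, Y), unit-propagating resolution with σ = {X ↦ a, Y ↦ b} derives the unit r(a, b); if a literal ¬r(a, Z) is also available, the Conflict rule then yields ⊥ with the unifier {Z ↦ b}, and Conflict-Driven Clause Learning would negate the corresponding instances of whatever decision literals were used along the way.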
4 Lifting Challenges
First-order logic presents many new challenges for methods based on propagation
and decisions, of which the following can be singled out:
(1) non-termination of unit-propagation: In first-order logic, unit propagation
may never terminate. For example, the clause set {p(a), ¬p(X) ∨ p(f (X)), q ∨
r, ¬q ∨ r, q ∨ ¬r, ¬q ∨ ¬r} is clearly unsatisfiable, because there is no assignment of
p and q to true or false that would satisfy all the last four clauses. However, unit
propagation would derive the following infinite sequence of units, by successively
resolving ¬p(X) ∨ p(f (X)) with previously derived units, starting with p(a):
{p(f (a)), p(f (f (a))), . . . , p(f (. . . (f (a)) . . .)), . . .}. Consequently, a proof search
strategy that would wait for unit propagation to terminate before making decisions
would never be able to conclude that the given clause set is unsatisfiable.
(2) absence of uniformly true literals in satisfied clauses: While in the propositional case, a clause that is true in a model always has at least one literal
that is true in that model, this is not so in first-order logic, because shared
variables create dependencies between literals. For instance, the clause set
{p(X) ∨ q(X), ¬p(a), p(b), q(a), ¬q(b)} is satisfiable, but there is no model where
p(X) is uniformly true (i.e. true for all instances of X) or q(X) is uniformly true.
(3) propagation without satisfaction: In the propositional case, when only one
literal of a clause is not false in the model, this literal is propagated and added
to the model, and the clause necessarily becomes true in the model and does
not need to be considered in propagation anymore, at least until backtracking.
In the first-order case, on the other hand, a clause such as p(X) ∨ q(X) would
propagate the literal q(a) in a model containing ¬p(a), but p(X) ∨ q(X) does not
become true in a model where q(a) is true. It must remain available for further
propagations. If, for instance, the literal ¬p(b) is added to the model, the clause
will be used again to propagate q(b).
(4) quasi-falsification without propagation: A clause is quasi-falsified by a model
iff all but one of its literals are false in the model. In first-order logic, in contrast to
propositional logic, it is not even the case that a clause will necessarily propagate
a literal when only one of its literals is not false in the model. For instance, the
clause p(X) ∨ q(X) ∨ r(X) is quasi-falsified in a model containing ¬p(a) and
¬q(b), but no instance of r(X) can be propagated.
The first two challenges affect search in a conceptual level, and solutions are
discussed in Section 5. The last two prevent a direct first-order generalization of
the data structures (e.g. watched literals) that make unit propagation so efficient
in the propositional case. Partial solutions are discussed in Section 6.
5 First-Order Model Construction and Proof Search
Despite the fundamental differences between propositional and first-order logic
described in the previous section, the first-order algorithms presented aim to
adhere as much as possible to the propositional procedure sketched in Section 2.
As in the propositional case, the model under construction is a (conjunctive) list
of literals, but literals may now contain (universal) variables. If a literal `[X] is
in a model M , then any instance `[t] is said to be true in M . Note that checking
that a literal ` is true in a model M is more expensive in first-order logic than in
propositional logic: whereas in the latter it suffices to check that ` is in M , in
the former it is necessary to find a literal `0 in M and a substitution σ such that
` = `0 σ. A literal ` is said to be strongly true in a model M iff ` is in M .
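Because this check requires matching rather than mere membership, it is worth spelling out. The following self-contained Scala sketch (an ad-hoc illustration; the Term, Lit and ModelCheck names are ours and unrelated to Scavenger's E, Sym, App and Abs classes) implements one-sided matching and the resulting truth test:

  object ModelCheck {
    sealed trait Term
    final case class V(name: String) extends Term                        // variable
    final case class F(name: String, args: List[Term] = Nil) extends Term // function or constant
    final case class Lit(positive: Boolean, pred: String, args: List[Term])

    type Subst = Map[String, Term]

    // One-sided matching: find an extension of s with (pattern)s = target.
    def matchTerm(pattern: Term, target: Term, s: Subst): Option[Subst] = (pattern, target) match {
      case (V(x), t) =>
        s.get(x) match {
          case Some(bound) => if (bound == t) Some(s) else None
          case None        => Some(s + (x -> t))
        }
      case (F(f, as), F(g, bs)) if f == g && as.length == bs.length =>
        as.zip(bs).foldLeft(Option(s)) { case (acc, (a, b)) => acc.flatMap(matchTerm(a, b, _)) }
      case _ => None
    }

    def matches(pattern: Lit, target: Lit): Option[Subst] =
      if (pattern.positive == target.positive && pattern.pred == target.pred &&
          pattern.args.length == target.args.length)
        pattern.args.zip(target.args)
          .foldLeft(Option(Map.empty[String, Term])) { case (acc, (a, b)) => acc.flatMap(matchTerm(a, b, _)) }
      else None

    // A literal l is true in model M iff some literal of M matches l.
    def trueIn(l: Lit, model: List[Lit]): Boolean = model.exists(lp => matches(lp, l).isDefined)
  }

For instance, with the model containing p(X), trueIn(Lit(true, "p", List(F("a"))), List(Lit(true, "p", List(V("X"))))) returns true, reflecting that p(a) is an instance of the model literal p(X).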
There is a straightforward solution for the second challenge (i.e. the absence
of uniformly true literals in satisfied clauses): a clause is satisfied by a model M
iff all its relevant instances have a literal that is true in M , where an instance
is said to be relevant if it substitutes the clause’s variables by terms that occur
in M . Thus, for instance, the clause p(X) ∨ q(X) is satisfied by the model
[¬p(a), p(b), q(a), ¬q(b)], because both relevant instances p(a)∨q(a) and p(b)∨q(b)
have literals that are true in the model. However, this solution is costly, because
it requires the generation of many instances. Fortunately, in many (though not
all) cases, a satisfied clause will have a literal that is true in M , in which case
the clause is said to be uniformly satisfied. Uniform satisfaction is cheaper to
check than satisfaction. However, a drawback of uniform satisfaction is that the
model construction algorithm may repeatedly attempt to satisfy a clause that
is not uniformly satisfied, by choosing one of its literals as a decision literal.
For example, the clause p(X) ∨ q(X) is not uniformly satisfied by the model
[¬p(a), p(b), q(a), ¬q(b)]. Without knowing that this clause is already satisfied by
the model, the procedure would try to choose either p(X) or q(X) as decision
literal. But both choices are useless decisions, because they would lead to conflicts
with conflict-driven clauses equal to a previously derived clause or to a unit clause
containing a literal that is part of the current model. A clause is said to be weakly
satisfied by a model M if and only if all its literals are useless decisions.
Because of the first challenge (i.e. the non-termination of unit-propagation in
the general first-order case), it is crucial to make decisions during unit propagation.
In the example given in item 1 of Section 4, for instance, deciding q at any moment
would allow the propagation of r and ¬r (respectively due to the 4th and 6th
clauses), triggering a conflict. The learned clause would be ¬q and it would
again trigger a conflict by the propagation of r and ¬r (this time due to the 3rd
and 5th clauses). As this last conflict does not depend on any decision literal,
the empty clause is derived and thus the clause set is refuted. The question is
how to interleave decisions and propagations. One straightforward approach is
to keep track of the propagation depth 3 in the implication graph: any decision
literal or literal propagated by a unit clause has propagation depth 0; any
other literal has propagation depth k + 1, where k is the maximum propagation
depth of its predecessors. Then propagation is performed exhaustively only up
to a propagation depth threshold h. A decision literal is then chosen and the
threshold is incremented. Such eager decisions guarantee that a decision will
eventually be made, even if there is an infinite propagation path. However,
eager decisions may also lead to spurious conflicts generating useless conflict-driven clauses. For instance, the clause set {1 : p(a), 2 : ¬p(X) ∨ p(f (X)), 3 :
¬p(f (f (f (f (f (f (a))))))), 4 : ¬r(X)∨q(X), 5 : ¬q(g(X))∨¬p(X), 6 : z(X)∨r(X)}
(where clauses have been numbered for easier reference) is unsatisfiable, because
a conflict with no decisions can be obtained by propagating p(a) (by 1), and
then p(f (a)), p(f (f (a))), . . . , p(f (f (f (f (f (f (a))))))), (by 2, repeatedly), which
conflicts with ¬p(f (f (f (f (f (f (a))))))) (by 3). But the former propagation has
depth 6. If the propagation depth threshold is lower than 6, a decision literal
is chosen before that conflict is reached. If r(X) is chosen, for example, in an
attempt to satisfy the sixth clause, there are propagations (using r(X) and clauses
1, 4, 5 and 6) with depth lower than the threshold and reaching a conflict that
³ Because of the isomorphism between implication graphs and subderivations in Conflict
Resolution [25], the propagation depth is equal to the corresponding subderivation’s
height, where initial axiom clauses and learned clauses have height 0 and the height
of the conclusion of a unit-propagating resolution inference is k + 1 where k is the
maximum height of its unit premises.
generates the clause ¬r(g(a)), which is useless for showing unsatisfiability of the
whole clause set. This is not a serious issue, because useless clauses are often
generated in conflicts with non-eager decisions as well. Nevertheless, this example
suggests that the starting threshold and the strategy for increasing the threshold
have to be chosen wisely, since the performance may be sensitive to this choice.
Interestingly, the problem of non-terminating propagation does not manifest in
fragments of first-order logic where infinite unit propagation paths are impossible.
A well-known and large fragment is the effectively propositional (a.k.a. Bernays-Schönfinkel) class, consisting of sentences with prenex forms that have an ∃*∀*
quantifier prefix and no function symbols. For this fragment, a simpler proof
search strategy that only makes decisions when unit propagation terminates, as
in the propositional case, suffices. Infinite unit propagation paths do not occur in
the effectively propositional fragment because there are no function symbols and
hence the term depth4 does not increase arbitrarily. Whenever the term depth is
bounded, infinite unit propagation paths cannot occur, because there are only
finitely many literals with bounded term depth (given the finite set of constant,
function and predicate symbols with finite arity occurring in the clause set).
The insight that term depth is important naturally suggests a different
approach for the general first-order case: instead of limiting the propagation
depth, limit the term depth instead, allowing arbitrarily long propagations as long
as the term depth of the propagated literals are smaller than the current term
depth threshold. A literal is propagated only if its term depth is smaller than the
threshold. New decisions are chosen when the term-depth-bounded propagation
terminates and there are still clauses that are not uniformly satisfied. As before,
eager decisions may lead to spurious conflicts, but bounding propagation by term
depth seems intuitively more sensible than bounding it by propagation depth.
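For concreteness, the term-depth bound is easy to state in code. The sketch below (again an ad-hoc illustration with its own term encoding, not Scavenger's classes) computes term depth as defined in footnote 4 and the corresponding admissibility test for a candidate propagation:

  sealed trait Term
  case class Var(name: String) extends Term
  case class Fun(name: String, args: List[Term]) extends Term

  def termDepth(t: Term): Int = t match {
    case Var(_)        => 0
    case Fun(_, Nil)   => 0                           // constants have depth 0
    case Fun(_, args)  => 1 + args.map(termDepth).max
  }

  // A literal with argument terms `args` may be propagated only while it respects the bound:
  def withinDepthBound(args: List[Term], threshold: Int): Boolean =
    args.forall(a => termDepth(a) <= threshold)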
6 Implementation Details
Scavenger is implemented in Scala and its source code and usage instructions are
available in https://gitlab.com/aossie/Scavenger. Its packrat combinator
parsers are able to parse TPTP CNF files [28]. Although Scavenger is a first-order prover, every logical expression is converted to a simply typed lambda
expression, implemented by the abstract class E with concrete subclasses Sym,
App and Abs for, respectively, symbols, applications and abstractions. A trait
Var is used to distinguish variables from other symbols. Scala’s case classes
are used to make E behave like an algebraic datatype with (pattern-matchable)
constructors. The choice of simply typed lambda expressions is motivated by the
intention to generalize Scavenger to multi-sorted first-order logic and higher-order
logic and support TPTP TFF and THF in the future. Every clause is internally
represented as an immutable two-sided sequent consisting of a set of positive
literals (succedent) and a set of negative literals (antecedent).
⁴ The depth of constants and variables is zero and the depth of a complex term is k + 1
when k is the maximum depth of its proper subterms.
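To illustrate the expression datatype just described, here is a schematic reconstruction in Scala (the class names follow the text above, but type information and other details of the actual Scavenger source will differ):

  abstract class E
  trait Var { this: Sym => }                        // marks a symbol as a variable
  case class Sym(name: String) extends E
  case class App(function: E, argument: E) extends E
  case class Abs(variable: Sym with Var, body: E) extends E

  object Example {
    val X: Sym with Var = new Sym("X") with Var     // a variable
    val p  = Sym("p")                               // a predicate symbol
    val pX = App(p, X)                              // the atom p(X) as an application
  }

Pattern matching over these case classes is what makes E behave like an algebraic datatype in the sense mentioned above.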
When a problem is unsatisfiable, Scavenger can output a CR refutation internally represented as a collection of ProofNode objects, which can be instances
of the following immutable classes: UnitPropagatingResolution, Conflict,
ConflictDrivenClauseLearning, Axiom, Decision. The first three classes correspond directly to the rules shown in Figure 1. Axiom is used for leaf nodes
containing input clauses, and Decision represents a fictive rule holding decision literals. Each class is responsible for checking, typically through require
statements, the soundness conditions of its corresponding inference rule. The
Axiom, Decision and ConflictDrivenClauseLearning classes are less than 5
lines of code each. Conflict and UnitPropagatingResolution are respectively
15 and 35 lines of code. The code for analyzing conflicts, traversing the subderivations (conflict graphs) and finding decisions that contributed to the conflict, is
implemented in a superclass, and is 17 lines long.
The following three variants of Scavenger were implemented:
– EP-Scavenger: aiming at the effectively propositional fragment, propagation
is not bounded, and decisions are made only when propagation terminates.
– PD-Scavenger: Propagation is bounded by a propagation depth threshold
starting at 0. Input clauses are assigned depth 0. Derived clauses and propagated literals obtained while the depth threshold is k are assigned depth k + 1.
The threshold is incremented whenever every input clause that is neither
uniformly satisfied nor weakly satisfied is used to derive a new clause or to
propagate a new literal. If this is not the case, a decision literal is chosen
(and assigned depth k + 1) to uniformly satisfy one of the clauses that is
neither uniformly satisfied nor weakly satisfied.
– TD-Scavenger: Propagation is bounded by a term depth threshold starting at
0. When propagation terminates, a stochastic choice between either selecting
a decision literal or incrementing the threshold is made with probability of
50% for each option. Only uniform satisfaction of clauses is checked.
The third and fourth challenges discussed in Section 4 are critical for performance, because they prevent a direct first-order generalization of data structures
such as watched literals, which enables efficient detection of clauses that are
ready to propagate literals. Without knowing exactly which clauses are ready to
propagate, Scavenger (in its three variants) loops through all clauses with the goal
of using them for propagation. However, actually trying to use a given clause for
propagation is costly. In order to avoid this cost, Scavenger performs two quicker
tests. Firstly, it checks whether the clause is uniformly satisfied (by checking
whether one of its literals belongs to the model). If it is, then the clause is
dismissed. This is an imperfect test, however. Occasionally, some satisfied clauses
will not be dismissed, because (in first-order logic) not all satisfied clauses are
uniformly satisfied. Secondly, for every literal ` of every clause, Scavenger keeps a
set of decision literals and propagated literals that are unifiable with `. A clause c
is quasi-falsified when at most one literal of c has an empty set associated with it.
This is a rough analogue of watched literals for detecting quasi-falsified clauses.
Again, this is an imperfect test, because (in first-order logic) not all quasi-falsified
clauses are ready to propagate. Despite the imperfections of these tests, they do
reduce the number of clauses that need to be considered for propagation, and
they are quick and simple to implement.
Overall, the three variants of Scavenger listed above have been implemented
concisely. Their main classes are only 168, 342 and 176 lines long, respectively,
and no attempt has been made to increase efficiency at the expense of the code’s
readability and pedagogical value. Premature optimization would be inappropriate
for a first proof-of-concept.
Scavenger still has no sophisticated backtracking and restarting mechanism,
as propositional Sat-solvers do. When Scavenger reaches a conflict, it restarts
almost completely: all derived conflict-driven clauses are kept, but the model
under construction is reset to the empty model.
7 Experiments
Experiments were conducted5 in the StarExec cluster [27] to evaluate Scavenger’s
performance on TPTP v6.4.0 benchmarks in CNF form and without equality.
For comparison, all other 21 provers available in StarExec’s TPTP community
and suitable for CNF problems without equality were evaluated as well. For each
job pair, the timeouts were 300 CPU seconds and 600 Wallclock seconds.
Prover               Problems Solved        Prover               Problems Solved
                      EPR      All                                 EPR      All
PEPR-0.0ps            432      432          Bliksem-1.12          424     1107
GrAnDe-1.1            447      447          SOS-2.0               351     1129
Paradox-3.0           467      506          CVC4-FOF-1.5.1        452     1145
ZenonModulo-0.4.1     315      628          SNARK-20120808        417     1150
TD-Scavenger          350      695          Beagle-0.9.47         402     1153
PD-Scavenger          252      782          E-Darwin-1.5          453     1213
Geo-III-2016C         344      840          Prover9-1109a         403     1293
EP-Scavenger          349      891          Darwin-1.4.5          508     1357
Metis-2.3             404      950          iProver-2.5           551     1437
Z3-4.4.1              507     1027          ET-0.2                486     1455
Zipperpin-FOF-0.4     400     1029          E-2.0                 489     1464
Otter-3.3             362     1068          Vampire-4.1           540     1524
Table 1: Number of problems solved by each prover
Table 1 shows how many of the 1606 unsatisfiable CNF problems and 572
effectively propositional (EPR) unsatisfiable CNF problems each theorem prover
solved; and Figures 2 and 3 show the performance in more detail. For a first
implementation, the best variants of Scavenger show an acceptable performance.
All variants of Scavenger outperformed PEPR, GrAnDe, DarwinFM, Paradox
⁵ Raw experimental data are available at https://doi.org/10.5281/zenodo.293187.
and ZenonModulo; and EP-Scavenger additionally outperformed Geo-III. On
the effectively propositional problems, TD-Scavenger outperformed LEO-II,
ZenonModulo and Geo-III, and solved only 1 problem less than SOS-2.0 and 12
less than Otter-3.3. Although Otter-3.3 has long ceased to be a state-of-the-art
prover and has been replaced by Prover9, the fact that Scavenger solves almost
as many problems as Otter-3.3 is encouraging, because Otter-3.3 is a mature
prover with 15 years of development, implementing (in the C language) several
refinements of proof search for resolution and paramodulation (e.g. orderings,
set of support, splitting, demodulation, subsumption) [21, 22], whereas Scavenger
is a yet unrefined and concise implementation (in Scala) of a comparatively
straightforward search strategy for proofs in the Conflict Resolution calculus,
completed in slightly more than 3 months. Conceptually, Geo-III (based on
Geometric Resolution) and Darwin (based on Model Evolution) are the most
similar to Scavenger. While Scavenger already outperforms Geo-III, it is still
far from Darwin. This is most probably due to Scavenger’s current eagerness
to restart after every conflict, whereas Darwin backtracks more carefully (cf.
Sections 6 and 8). Scavenger and Darwin also treat variables in decision literals
differently. Consequently, Scavenger detects more (and non-ground) conflicts, but
learning conflict-driven clauses can be more expensive, because unifiers must be
collected from the conflict graph and composed.
Fig. 2: Performance on all benchmarks (provers ordered by performance)
Fig. 3: Performance on EPR benchmarks only (provers ordered by performance)
EP-Scavenger solved 28.2% more problems than TD-Scavenger and 13.9% more than PD-Scavenger. This suggests that non-termination of unit-propagation is an uncommon issue in practice: EP-Scavenger is still able to solve many problems,
even though it does not care to bound propagation, whereas the other two
variants solve fewer problems because of the overhead of bounding propagation
even when it is not necessary. Nevertheless, there were 28 problems solved only by
PD-Scavenger and 26 problems solved only by TD-Scavenger (among Scavenger’s
variants). EP-Scavenger and PD-Scavenger can solve 9 problems with TPTP
difficulty rating 0.5, all from the SYN and FLD domains. 3 of the 9 problems
were solved in less than 10 seconds.
8 Conclusions and Future Work
Scavenger is the first theorem prover based on the new Conflict Resolution calculus.
The experiments show a promising, albeit not yet competitive, performance.
A comparison of the performance of the three variants of Scavenger shows
that it is non-trivial to interleave decisions within possibly non-terminating unit propagations, and further research is needed to determine (possibly in a problem
dependent way) optimal initial depth thresholds and threshold incrementation
strategies. Alternatively, entirely different criteria could be explored for deciding
to make an eager decision before propagation is over. For instance, decisions
could be made if a fixed or dynamically adjusted amount of time elapses.
The performance bottleneck that needs to be most urgently addressed in
future work is backtracking and restarting. Currently, all variants of Scavenger
restart after every conflict, keeping derived conflict-driven clauses but throwing
away the model constructed so far. They must reconstruct models from scratch
after every conflict. This requires a lot of repeated re-computation, and therefore
a significant performance boost could be expected through a more sensible
backtracking strategy. Scavenger’s current naive unification algorithm could be
improved with term indexing [23], and there might also be room to improve
Scavenger’s rough first-order analogue for the watched literals data structure,
even though the first-order challenges make it unlikely that something as good as
the propositional watched literals data structure could ever be developed. Further
experimentation is also needed to find optimal values for the parameters used in
Scavenger for governing the initial thresholds and their incrementation policies.
Scavenger’s already acceptable performance despite the implementation improvement possibilities just discussed above indicates that automated theorem
proving based on the Conflict Resolution calculus is feasible. However, much work
remains to be done to determine whether this approach will eventually become
competitive with today’s fastest provers.
Acknowledgments: We thank Ezequiel Postan for his implementation of TPTP
parsers for Skeptik [10], which we have reused in Scavenger. We are grateful
to Albert A. V. Giegerich, Aaron Stump and Geoff Sutcliffe for all their help
in setting up our experiments in StarExec. This research was partially funded
by the Australian Government through the Australian Research Council and
by the Google Summer of Code 2016 program. Daniyar Itegulov was financially
supported by the Russian Scientific Foundation (grant 15-14-00066).
References
1. Alagi, G., Weidenbach, C.: NRCL - A model building approach to the Bernays-Schönfinkel fragment. In: Lutz, C., Ranise, S. (eds.) Frontiers of Combining Systems
- 10th International Symposium, FroCoS 2015, Wroclaw, Poland, September 2124, 2015. Proceedings. Lecture Notes in Computer Science, vol. 9322, pp. 69–84.
Springer (2015), http://dx.doi.org/10.1007/978-3-319-24246-0_5
2. Baumgartner, P.: A first order Davis-Putnam-Logemann-Loveland procedure. In:
Proceedings of the 17th International Conference on Automated Deduction (CADE).
pp. 200–219 (2000)
3. Baumgartner, P.: Model evolution-based theorem proving. IEEE Intelligent Systems
29(1), 4–10 (2014), http://dx.doi.org/10.1109/MIS.2013.124
4. Baumgartner, P., Fuchs, A., Tinelli, C.: Lemma learning in the model evolution
calculus. In: Hermann, M., Voronkov, A. (eds.) Logic for Programming, Artificial
Intelligence, and Reasoning, 13th International Conference, LPAR 2006, Phnom
Penh, Cambodia, November 13-17, 2006, Proceedings. Lecture Notes in Computer
Science, vol. 4246, pp. 572–586. Springer (2006), http://dx.doi.org/10.1007/
11916277_39
5. Baumgartner, P., Tinelli, C.: The model evolution calculus. In: Baader, F. (ed.)
Automated Deduction - CADE-19, 19th International Conference on Automated
Deduction Miami Beach, FL, USA, July 28 - August 2, 2003, Proceedings. Lecture
Notes in Computer Science, vol. 2741, pp. 350–364. Springer (2003), http://dx.
doi.org/10.1007/978-3-540-45085-6_32
6. Bonacina, M.P., Plaisted, D.A.: Constraint manipulation in SGGS. In: Kutsia, T.,
Ringeissen, C. (eds.) Proceedings of the Twenty-Eighth Workshop on Unification
(UNIF), Seventh International Joint Conference on Automated Reasoning (IJCAR)
and Sixth Federated Logic Conference (FLoC). pp. 47–54. Technical Reports of the
Research Institute for Symbolic Computation, Johannes Kepler Universität Linz
(July 2014), http://vsl2014.at/meetings/UNIF-index.html
7. Bonacina, M.P., Plaisted, D.A.: SGGS theorem proving: an exposition. In: Schulz,
S., Moura, L.D., Konev, B. (eds.) Proceedings of the Fourth Workshop on Practical
Aspects in Automated Reasoning (PAAR), Seventh International Joint Conference
on Automated Reasoning (IJCAR) and Sixth Federated Logic Conference (FLoC),
July 2014. EasyChair Proceedings in Computing (EPiC), vol. 31, pp. 25–38 (July
2015)
8. Bonacina, M.P., Plaisted, D.A.: Semantically-guided goal-sensitive reasoning: Model
representation. Journal of Automated Reasoning 56(2), 113–141 (2016), http:
//dx.doi.org/10.1007/s10817-015-9334-4
9. Bonacina, M.P., Plaisted, D.A.: Semantically-guided goal-sensitive reasoning: Inference system and completeness. Journal of Automated Reasoning pp. 1–54 (2017),
http://dx.doi.org/10.1007/s10817-016-9384-2
10. Boudou, J., Fellner, A., Woltzenlogel Paleo, B.: Skeptik: A proof compression
system. In: Demri, S., Kapur, D., Weidenbach, C. (eds.) Automated Reasoning
- 7th International Joint Conference, IJCAR 2014, Held as Part of the Vienna
Summer of Logic, VSL 2014, Vienna, Austria, July 19-22, 2014. Proceedings.
Lecture Notes in Computer Science, vol. 8562, pp. 374–380. Springer (2014), http:
//dx.doi.org/10.1007/978-3-319-08587-6_29
11. Brown, C.E.: Satallax: An automatic higher-order prover. In: Gramlich, B., Miller,
D., Sattler, U. (eds.) Automated Reasoning - 6th International Joint Conference,
IJCAR 2012, Manchester, UK, June 26-29, 2012. Proceedings. Lecture Notes in
Computer Science, vol. 7364, pp. 111–117. Springer (2012), http://dx.doi.org/
10.1007/978-3-642-31365-3_11
12. Claessen, K.: The anatomy of Equinox – an extensible automated reasoning tool for
first-order logic and beyond (talk abstract). In: Proceedings of the 23rd International
Conference on Automated Deduction (CADE-23). pp. 1–3 (2011)
13. Davis, M., Putnam, H.: A computing procedure for quantification theory. Journal
of the ACM 7, 201–215 (1960)
14. Hodgson, K., Slaney, J.K.: System description: SCOTT-5. In: Automated Reasoning,
First International Joint Conference, IJCAR 2001, Siena, Italy, June 18-23, 2001,
Proceedings. pp. 443–447 (2001), http://dx.doi.org/10.1007/3-540-45744-5_36
15. Korovin, K.: iprover - an instantiation-based theorem prover for first-order logic (system description). In: Armando, A., Baumgartner, P., Dowek, G. (eds.) Automated
Reasoning, 4th International Joint Conference, IJCAR 2008, Sydney, Australia,
August 12-15, 2008, Proceedings. Lecture Notes in Computer Science, vol. 5195, pp.
292–298. Springer (2008), http://dx.doi.org/10.1007/978-3-540-71070-7_24
16. Korovin, K.: Inst-Gen - a modular approach to instantiation-based automated
reasoning. In: Programming Logics. pp. 239–270 (2013)
17. Korovin, K., Tsiskaridze, N., Voronkov, A.: Conflict resolution. In: Gent, I.P. (ed.)
Principles and Practice of Constraint Programming - CP 2009, 15th International
Conference, CP 2009, Lisbon, Portugal, September 20-24, 2009, Proceedings. Lecture
Notes in Computer Science, vol. 5732, pp. 509–523. Springer (2009), http://dx.
doi.org/10.1007/978-3-642-04244-7_41
18. Marques-Silva, J., Lynce, I., Malik, S.: Conflict-driven clause learning SAT solvers. In:
Handbook of Satisfiability, pp. 127 – 149 (2008)
19. Davis, M., Logemann, G., Loveland, D.: A machine program for theorem proving. Communications of the ACM 5(7), 394–397 (1962)
20. McCharen, J., Overbeek, R., Wos, L.: Complexity and related enhancements for automated theorem-proving programs. Computers and Mathematics with Applications
2, 1—-16 (1976)
21. McCune, W.: OTTER 2.0. In: Stickel, M.E. (ed.) 10th International Conference
on Automated Deduction, Kaiserslautern, FRG, July 24-27, 1990, Proceedings.
Lecture Notes in Computer Science, vol. 449, pp. 663–664. Springer (1990), http:
//dx.doi.org/10.1007/3-540-52885-7_131
22. McCune, W.: OTTER 3.3 reference manual. CoRR cs.SC/0310056 (2003), http:
//arxiv.org/abs/cs.SC/0310056
23. Nieuwenhuis, R., Hillenbrand, T., Riazanov, A., Voronkov, A.: On the evaluation of
indexing techniques for theorem proving. In: Automated Reasoning, First International Joint Conference, IJCAR 2001, Siena, Italy, June 18-23, 2001, Proceedings.
pp. 257–271 (2001), http://dx.doi.org/10.1007/3-540-45744-5_19
24. de Nivelle, H., Meng, J.: Geometric resolution: A proof procedure based on finite
model search. In: Furbach, U., Shankar, N. (eds.) Automated Reasoning, Third
International Joint Conference, IJCAR 2006, Seattle, WA, USA, August 17-20,
2006, Proceedings. Lecture Notes in Computer Science, vol. 4130, pp. 303–317.
Springer (2006), http://dx.doi.org/10.1007/11814771_28
25. Slaney, J., Woltzenlogel Paleo, B.: Conflict resolution: a first-order resolution calculus with decision literals and conflict-driven clause learning. Journal of Automated
Reasoning pp. 1–24 (2017), http://dx.doi.org/10.1007/s10817-017-9408-6
26. Slaney, J.K.: SCOTT: A model-guided theorem prover. In: Bajcsy, R. (ed.) Proceedings of the 13th International Joint Conference on Artificial Intelligence. Chambéry,
France, August 28 - September 3, 1993. pp. 109–115. Morgan Kaufmann (1993),
http://ijcai.org/Proceedings/93-1/Papers/016.pdf
27. Stump, A., Sutcliffe, G., Tinelli, C.: StarExec: A cross-community infrastructure
for logic solving. In: Demri, S., Kapur, D., Weidenbach, C. (eds.) Automated
Reasoning: 7th International Joint Conference, IJCAR 2014, Held as Part of the
Vienna Summer of Logic, VSL 2014, Vienna, Austria, July 19-22, 2014. Proceedings.
pp. 367–373. Springer International Publishing, Cham (2014), http://dx.doi.org/
10.1007/978-3-319-08587-6_28
28. Sutcliffe, G.: The TPTP problem library and associated infrastructure: The FOF
and CNF parts, v3.5.0. Journal of Automated Reasoning 43(4), 337–362 (2009)
29. Voronkov, A.: AVATAR: the architecture for first-order theorem provers. In: Biere,
A., Bloem, R. (eds.) Computer Aided Verification - 26th International Conference,
CAV 2014, Held as Part of the Vienna Summer of Logic, VSL 2014, Vienna, Austria,
July 18-22, 2014. Proceedings. Lecture Notes in Computer Science, vol. 8559, pp.
696–710. Springer (2014), http://dx.doi.org/10.1007/978-3-319-08867-9_46
| 2 |
arXiv:1608.03090v1 [] 10 Aug 2016
Regression Models for Output Prediction of
Thermal Dynamics in Buildings∗
Georgios C. Chasparis†
Software Competence Center Hagenberg GmbH
Department of Data Analysis Systems
Softwarepark 21
4232 Hagenberg, Austria
Email: [email protected]
Thomas Natschlaeger
Software Competence Center Hagenberg GmbH
Department of Data Analysis Systems
Softwarepark 21
4232 Hagenberg, Austria
Email: [email protected]
Standard (black-box) regression models may not necessarily
suffice for accurate identification and prediction of thermal
dynamics in buildings. This is particularly apparent when
either the flow rate or the inlet temperature of the thermal
medium varies significantly with time. To this end, this paper analytically derives, using physical insight, and investigates linear regression models with nonlinear regressors for
system identification and prediction of thermal dynamics in
buildings. Comparison is performed with standard linear regression models with respect to both a) identification error,
and b) prediction performance within a model-predictivecontrol implementation for climate control in a residential
building. The implementation is performed through the EnergyPlus building simulator and demonstrates that a careful
consideration of the nonlinear effects may provide significant
benefits with respect to the power consumption.
1 Introduction
The increased demand for electricity power and/or fuel
consumption in residential buildings requires an efficient
control design for all heating/cooling equipment. To this end,
recently, there have been several efforts towards a more efficient climate control in residential buildings [2, 3, 4, 5]. Ef-
∗ This work is part of the mpcEnergy project which is supported within
the program Regionale Wettbewerbsfähigkeit OÖ 2007-2013 by the European Fund for Regional Development as well as the State of Upper Austria.
The research reported in this article has been (partly) supported by the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria
in the frame of the COMET center SCCH. An earlier version of part of this
paper appeared in [1].
† Address all correspondence related to this paper to this author.
ficiency of power/fuel consumption is closely related to the
ability of the heating/cooling strategy to incorporate predictions of the heat-mass transfer dynamics and exogenous thermal inputs (e.g., outdoor temperature, solar radiation, user
activities, etc.). Naturally this observation leads to modelpredictive control (MPC) implementations (see, e.g., [2, 3]).
The performance though of any such implementation will be
closely related to the performance of the prediction models
for the evolution of both the indoor temperature and the exogenous thermal inputs.
A common approach in the derivation of prediction
models for the indoor temperature evolution relies on the
assumption that the heat-mass transfer dynamics can be
approximated well by linear models. For example, in
the optimal supervisory control formulation introduced in
[2] for controlling residential buildings with a heating-ventilation-air-conditioning (HVAC) system, a linear (state-space) model is considered. As pointed out in [2], including
the actual dynamics might be complicated for the derivation
of optimal control strategies. Similar is the assumption in the
dynamic game formulation of HVAC thermally controlled
buildings in [6], where again a linear model is assumed for
the overall system. Also, in references [7, 8], detailed nonlinear representations of the heat-mass transfer dynamics (of
a vapor-compression system in [7] and for a building in [8])
are linearized about an operating point to allow for a more
straightforward controller design.
Given this difficulty in incorporating nonlinear representations in the controller design, several efforts for identification of heat-mass transfer dynamics have adopted linear
transfer functions, such as the standard ARX or ARMAX
black-box model structures (cf., [9]). Examples of such im-
plementations include the MIMO ARMAX model in [10],
the MISO ARMAX model for HVAC systems in [11] and
the ARX model structure considered in [12].
On the other hand, the nonlinear nature of the heatmass transfer dynamics in buildings has been pointed out by
several papers, including the switched linear dynamics for
modeling the intermittent operation of RH systems in [4],
the multiple-time scale analysis considered in [12, 5] and
the physics-based representation of the air-conditioning systems in [7]. In reference [12], a comparison is performed
between standard linear ARX models with two time-scale
transfer models, which according to the authors better represent thermal models. Indeed, the operation of either an
radiant-heating (RH) system or an HVAC system usually
takes place in a faster time-scale than the room dynamics,
an observation that was also utilized for the derivation of a
(nonlinear) model predictive controller in [5] using singularperturbation techniques.
The following questions naturally emerge: What is the
effect of the nonlinearities of an RH or an HVAC system in
the identification error of the indoor temperature? Can more
detailed model representations be utilized to reduce identification error and improve energy efficiency? This paper begins with the observation that thermal dynamics in buildings
are nonlinear in nature due to the operation of the RH and
HVAC systems. We avoid introducing any multiple timescale approximation as in [5] and we adopt a detailed representation of the heat-mass transfer dynamics (using standard
Bond-graph models). In principle, linear transfer models
incorporating linear regressors, such as Output-Error (OE),
ARX or ARMAX structures, may be sufficient for system
identification in buildings. However, we wish to investigate
whether this is a reasonable assumption and whether more
accurate representations of the dynamics are necessary.
To this end, linear regression models with nonlinear regressors are derived analytically using physical insight for
prediction of the indoor temperature in residential buildings
with RH and/or HVAC systems. The derivation accounts for
alternative information structures depending on the sensors
available, which is the main contribution of this paper. The
derived identification models are compared against standard
(black-box) linear regression models trained with simulated
data generated in EnergyPlus (V7-2-0) building simulator
tool developed by the U.S. Department of Energy [13]. Comparison is also performed with respect to the prediction performance of the derived models when used within a standard
MPC for climate control in a residential building. The results indicate that a more detailed design of the thermal-mass
transfer prediction models can be beneficial with respect to
the energy consumption. This paper extends prior work of
the same authors [1], since it extends the derivation of the regression models into a larger family of information structures
(sensors available), while it provides a detailed comparison
with respect to the energy consumption.
In the remainder of the paper, Section 2 provides a description of the overall framework and the objective of this
paper. Section 3 provides a short introduction to a class of
linear regression models (briefly LRM) for system identifica-
tion of heat-mass transfer dynamics. Section 4 analytically
derives using physical insight linear regression models with
nonlinear regressors (briefly NRM). In Section 5, we compare the performance of the derived NRM with LRM with
respect to both a) identification error, and b) prediction performance within a standard MPC for climate control. Finally,
Section 6 presents concluding remarks.
Notation:
• col{x1 , ..., xn }, for some real numbers x1 , ..., xn ∈ R,
denotes the column vector (x1 , ..., xn ) in Rn . Also,
row{x1 , ..., xn } denotes the corresponding row vector.
• diag{x1 , ..., xn } denotes the diagonal matrix in Rn×n
with diagonal entries x1 , ..., xn .
• for any finite set A, |A| denotes the cardinality of A.
• =̇ denotes definition.
2 Framework & Objective
When addressing system identification and control of
heat-mass transfer dynamics in buildings, the assumed model
of indoor temperature prediction should be a) accurate
enough to capture the phenomena involved, and b) simple
enough to accommodate a better control design. Given that
prior literature has primarily focused on linear state-space or
transfer models, we wish to address the following:
1. investigate the performance of standard (black-box)
linear (regression) models (LRM) in the identification/prediction of heat-mass transfer dynamics in residential buildings;
2. derive analytically, using physical insight and under alternative information structures, an accurate representation of the dynamics using linear regression models with
nonlinear regressors (NRM);
3. provide a detailed comparison between the derived
NRM with standard LRM and assess the potential energy saving.
To this end, we consider residential buildings divided
into a set of zones I . Each zone is controlled independently
in terms of heating using either a RH and/or an HVAC system. We will use the notation i, j to indicate representative
elements of the set of zones I . The temperature of a zone i is
affected by the temperature of the neighboring zones and/or
the outdoor environment, denoted Ni , i.e., those zones which
are adjacent to i, separated by some form of separator.
In the remainder of this section, we provide some standard background on heat-mass transfer dynamics (based on
which the derivation of the prediction models will be presented). We further discuss the considered assumptions with
respect to the available sensors, and finally some generic
properties of the dynamics.
2.1 Background: Heat-mass transfer dynamics
We will use standard modeling techniques of heat-mass
transfer dynamics for each thermal zone i ∈ I , following
a standard Bond-graph approach [14] and the simplified
Fourier’s law for heat conduction and Newton’s law for heat
Fig. 1: Bond-graph approximation of heat-mass transfer dynamics of a thermal zone i ∈ I .
convection (cf., [15]). Such bond-graph approximations of
thermal dynamics are equivalent to a series of electrical
resistance-capacitor (RC) connections. Figure 1 provides a
detailed bond-graph approximation for a thermal zone i ∈ I
with both RH and HVAC heating equipment.
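As a minimal illustration of this RC analogy (a generic single-zone textbook example, not one of the zone models derived below): a single zone with temperature T_r and thermal capacitance C_r, exchanging heat with the outdoor temperature T_out through one resistance R and receiving a heat input Q̇, obeys
    C_r Ṫ_r = (1/R)(T_out − T_r) + Q̇.
The models that follow refine this picture with separator states, radiator and HVAC branches, and flow-dependent resistances.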
We use C to denote the thermal capacitance of a separator (e.g., wall) or a thermal zone (e.g., room), and the symbol R to denote the heat resistance, according to an electricity-equivalent terminology. We also use the symbol Q̇ to denote heat flow and the symbol T to denote temperature.

The subscript s denotes separator-related parameters; the subscript r denotes zone-related parameters, and usually corresponds to a room; the subscript w denotes parameters related to the RH system; and the subscript a denotes HVAC-related parameters. Finally, a pair ij denotes the interconnection between two neighboring zones i, j ∈ I.

Separators have been modeled as two thermal resistances separated by a capacitance, with the superscript “+” denoting the “inside” part of the separator and the superscript “−” denoting the “outside” part of the separator. For example, R^+_{s,ij} denotes the thermal resistance that is adjacent to i. Furthermore, T_{r,i}, i ∈ I, denotes the temperature of zone i and T_{s,ij}, j ∈ N_i, denotes the (internal) temperature of the separator ij. The separator exchanges heat with the neighboring zones i and j due to heat convection, whose resistance has been incorporated within R^+_{s,ij} and R^-_{s,ij}. The heat transfer dynamics for a separator ij can be written as

Q̇^-_{s,ij} = (1/R^-_{s,ij}) (T_{r,j} − T_{s,ij}),
Q̇^+_{s,ij} = (1/R^+_{s,ij}) (T_{s,ij} − T_{r,i}),
C_{s,ij} Ṫ_{s,ij} = Q̇^-_{s,ij} − Q̇^+_{s,ij},    (1)

where Q̇^-_{s,ij} and Q̇^+_{s,ij} denote the heat input and output for separator ij, respectively.
Regarding the RH system, it is modeled by two thermal restrictors R_{w,i}(V̇_{w,i}) separated by a thermal capacitance C_{w,i} (that determines the heat stored in the thermal medium). In particular, R_{w,i} ≐ R_{w,i}(V̇_{w,i}) = 1/(ρ_w c_w V̇_{w,i}) and C_{w,i} = c_w ρ_w V_w, where ρ_w, c_w and V_w are the density, specific heat capacity and volume of the water in the RH system. Due to the capacitance of the thermal medium, a state variable T_{w,i} is introduced that represents the temperature of the water at the point of heat exchange. The radiator exchanges heat with the thermal zone due to heat convection of resistance R_{c,i}. Furthermore, the inlet water temperature in the RH system is denoted by T^+_{w,i}, while the outlet water temperature, denoted T^-_{w,i}, is considered approximately equal to T_{w,i} (if we assume a uniform temperature distribution within the radiator device). Thus, the heat transfer dynamics of the RH system are

Q̇_{w,i} = (1/R_{c,i}) (T_{w,i} − T_{r,i}),
C_{w,i} Ṫ_{w,i} = (1/R_{w,i}(V̇_{w,i})) (T^+_{w,i} − T_{w,i}) − Q̇_{w,i},    (2)

where Q̇_{w,i} is the heat input to zone i through the radiator.

Regarding the HVAC system, we consider the standard assumption that the fluid is incompressible, thus we are only interested in the thermal part of the system [16]. A thermal resistance, R_{a,i}(V̇_{a,i}), is introduced to represent a restrictor of heat flow. It is given by R_{a,i} ≐ R_{a,i}(V̇_{a,i}) = 1/(ρ_a c_a V̇_{a,i}), where ρ_a denotes the air density, c_a denotes the air specific heat capacity at constant pressure, and V̇_{a,i} is the air volume rate. The outlet temperature of air in the HVAC system is considered approximately equal to the zone temperature, while the inlet air temperature is denoted by T^+_{a,i}. Thus, the heat input attributed to the HVAC system is

Q̇_{HVAC,i} = (1/R_{a,i}(V̇_{a,i})) (T^+_{a,i} − T_{r,i}).    (3)

The disturbance Q̇_{ext,i} denotes heat inputs from external sources. It will correspond to a vector of external heat flow rates, attributed to solar radiation, human presence, etc.

Finally, the heat-mass transfer dynamics of the thermal zone i can be expressed as follows:

C_{r,i} Ṫ_{r,i} = ∑_{j∈N_i} Q̇^+_{s,ij} + Q̇_{w,i} + Q̇_{HVAC,i} + Q̇_{ext,i}.    (4)

From equations (1), (2), (3) and (4), we can derive in a straightforward manner the state-space representation of the overall system, which can be written in a generalized form as follows:

ẋ_i(t) = A_i(u_i(t)) x_i(t) + E_i(u_i(t)) d_i(t),    i ∈ I,    (5)

where

x_i ≐ col{ T_{r,i},  col{T_{s,ij}}_{j∈N_i},  T_{w,i} },
u_i ≐ col{ V̇_{w,i},  V̇_{a,i} },
d_i ≐ col{ T^+_{w,i},  T^+_{a,i},  col{T_{r,j}}_{j∈N_i},  Q̇_{ext,i} }

are the state vector, control-input vector and disturbance vector, respectively. Furthermore, we define

A_i(u_i) ≐ [ a_{r,i}(V̇_{a,i})            row{a^+_{rs,ij}}_{j∈N_i}    a_{rw,i} ;
             col{a^+_{s,ij}}_{j∈N_i}     diag{a_{s,ij}}_{j∈N_i}      0 ;
             a_{w,i}                     0                            a_{wc,i}(V̇_{w,i}) ],

E_i(u_i) ≐ [ 0                           a_{ra,i}(V̇_{a,i})   0                           a_{ext,i} ;
             0                           0                   diag{a^-_{s,ij}}_{j∈N_i}    0 ;
             a_{ww,i}(V̇_{w,i})           0                   0                           0 ],

where

a_{r,i}(V̇_{a,i}) ≐ −∑_{j∈N_i} 1/(C_{r,i} R^+_{s,ij}) − 1/(C_{r,i} R_{c,i}) − 1/(C_{r,i} R_{a,i}(V̇_{a,i})),
a^+_{rs,ij} ≐ 1/(C_{r,i} R^+_{s,ij}),      a_{ra,i}(V̇_{a,i}) ≐ 1/(C_{r,i} R_{a,i}(V̇_{a,i})),
a^+_{s,ij} ≐ 1/(C_{s,ij} R^+_{s,ij}),      a^-_{s,ij} ≐ 1/(C_{s,ij} R^-_{s,ij}),
a_{s,ij} ≐ −a^+_{s,ij} − a^-_{s,ij},       a_{ww,i}(V̇_{w,i}) ≐ 1/(C_{w,i} R_{w,i}(V̇_{w,i})),
a_{w,i} ≐ 1/(C_{w,i} R_{c,i}),             a_{ext,i} ≐ 1/C_{r,i},      a_{rw,i} ≐ 1/(C_{r,i} R_{c,i}),
a_{wc,i}(V̇_{w,i}) ≐ −a_{w,i} − a_{ww,i}(V̇_{w,i}).    (6)
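As a minimal illustrative sketch of the bilinear structure of (5)-(6), the following Python fragment assembles A_i(u_i) and E_i(u_i) for a single zone with one neighbor (|N_i| = 1) and propagates the state one forward-Euler step. All numerical parameter values are hypothetical and chosen only for illustration.

```python
import numpy as np

# Assumed (illustrative) physical parameters for one zone with a single neighbor
C_r, C_s, C_w = 2.0e6, 5.0e6, 4.0e4        # thermal capacitances [J/K]
R_s_in, R_s_out, R_c = 0.01, 0.02, 0.005   # thermal resistances [K/W]
rho_w, c_w = 1000.0, 4186.0                # water density, specific heat
rho_a, c_a = 1.2, 1005.0                   # air density, specific heat

def system_matrices(u):
    """Build A_i(u_i), E_i(u_i) of (5)-(6) for |N_i| = 1."""
    Vw, Va = u                                    # u_i = (V̇_w,i, V̇_a,i)
    R_w = 1.0 / (rho_w * c_w * max(Vw, 1e-9))     # R_w,i(V̇_w,i)
    R_a = 1.0 / (rho_a * c_a * max(Va, 1e-9))     # R_a,i(V̇_a,i)
    a_r   = -1/(C_r*R_s_in) - 1/(C_r*R_c) - 1/(C_r*R_a)
    a_rs  = 1/(C_r*R_s_in);  a_ra = 1/(C_r*R_a);  a_rw = 1/(C_r*R_c)
    a_sp  = 1/(C_s*R_s_in);  a_sm = 1/(C_s*R_s_out);  a_s = -a_sp - a_sm
    a_ww  = 1/(C_w*R_w);     a_w  = 1/(C_w*R_c);  a_wc = -a_w - a_ww
    a_ext = 1/C_r
    A = np.array([[a_r,  a_rs, a_rw],
                  [a_sp, a_s,  0.0 ],
                  [a_w,  0.0,  a_wc]])
    E = np.array([[0.0,  a_ra, 0.0,  a_ext],
                  [0.0,  0.0,  a_sm, 0.0  ],
                  [a_ww, 0.0,  0.0,  0.0  ]])
    return A, E

# One Euler step of (5): x = (T_r, T_s, T_w), d = (T_w_in, T_a_in, T_r_j, Q_ext)
x = np.array([20.0, 15.0, 30.0])
d = np.array([45.0, 22.0, 18.0, 200.0])
u = (0.05, 0.1)            # flow rates (assumed)
eps = 300.0                # sampling period [s]
A, E = system_matrices(u)
x_next = x + eps * (A @ x + E @ d)
print(x_next)
```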
2.2  Information structures

Different information structures will be considered depending on the sensors available. More specifically, for the remainder of the paper, we consider the following cases:
• Full information (FI): All state, control and disturbance variables can be measured except for the separator state variables {T_{s,ij}}_j.
• Medium information (MI): All state, control and disturbance variables can be measured except for the separator state variables {T_{s,ij}}_j and the water state variable at the point of heat exchange, T_{w,i}.
• Limited information (LI): All state, control and disturbance variables can be measured except for the separator state variables {T_{s,ij}}_j, the water state variable T_{w,i}, and the inlet temperature disturbances T^+_{w,i} and T^+_{a,i}.
The specifics are also summarized in Table 1. The introduced alternative structures are intended to demonstrate the flexibility of the proposed derivation of nonlinear regressors, since different residential buildings may not necessarily incorporate all sensors of the FI structure.
The case of limited information (LI) corresponds to the
case where only the flow rates are known in the RH and/or
the HVAC system, while the corresponding inlet/outlet temperatures of the thermal medium cannot be measured. Such an assumption may be reasonable for most of today's residential
buildings.
Alternative information structures may be considered,
such as ones where Q̇_{ext,i} (due to radiation or people's presence) is not known; however, such cases can be handled
through the above cases and with small modifications.
Variable              (FI)   (MI)   (LI)
T_{r,i}                X      X      X
{T_{s,ij}}_{j∈N_i}     ×      ×      ×
T_{w,i}                X      ×      ×
V̇_{w,i}                X      X      X
V̇_{a,i}                X      X      X
T^+_{w,i}              X      X      ×
T^+_{a,i}              X      X      ×
{T_{r,j}}_{j∈N_i}      X      X      X
Q̇_{ext,i}              X      X      X

Table 1: Information structures; X marks variables that can be measured and × marks variables that cannot.
2.3  System separation under FI

Under the full information structure (FI), a natural separation of the system dynamics can be introduced between the zone and RH dynamics. Such separation will be used at several points throughout the paper, and it is schematically presented in Figure 2.

Fig. 2: System separation architecture under FI.

In particular, the overall (heat-mass transfer) dynamics for a single zone can be written as follows. The zone dynamics are

ẋ_{r,i}(t) = A_{r,i}(u_{r,i}(t)) x_{r,i}(t) + E_{r,i}(u_{r,i}(t)) d_{r,i}(t),    (7)

where u_{r,i} ≐ V̇_{a,i} is the control-input vector and

x_{r,i} ≐ col{ T_{r,i},  col{T_{s,ij}}_{j∈N_i} },
d_{r,i} ≐ col{ T_{w,i},  T^+_{a,i},  col{T_{r,j}}_{j∈N_i},  Q̇_{ext,i} }

are the state vector and disturbance vector of the zone dynamics, respectively. Furthermore, we define

A_{r,i}(u_i) ≐ [ a_{r,i}(V̇_{a,i})          row{a^+_{rs,ij}}_{j∈N_i} ;
                 col{a^+_{s,ij}}_{j∈N_i}   diag{a_{s,ij}}_{j∈N_i} ],

E_{r,i} ≐ [ a_{rw,i}   a_{ra,i}(V̇_{a,i})   0                          a_{ext,i} ;
            0          0                   diag{a^-_{s,ij}}_{j∈N_i}   0 ].

The RH dynamics can be written as follows:

ẋ_{w,i}(t) = A_{w,i}(u_{w,i}(t)) x_{w,i}(t) + E_{w,i}(u_{w,i}(t)) d_{w,i}(t),    (8)

where

x_{w,i} ≐ T_{w,i},    u_{w,i} ≐ V̇_{w,i},    d_{w,i} ≐ col{ T^+_{w,i},  T_{r,i} }

are the state vector, control-input vector and disturbance vector of the RH system, respectively. Furthermore, we define the following system matrices:

A_{w,i}(u_i) ≐ a_{wc,i}(V̇_{w,i}),    E_{w,i} ≐ [ a_{ww,i}(V̇_{w,i})   a_{w,i} ].

3  Linear Regression Models (LRM)
We would like to focus on models for system identification which (to a great extent) retain the physical interpretation of the parameters. To this end, an output-error model
structure (cf., [9]) is considered, where a one-to-one correspondence between the identification parameters and the
discrete-time model parameters can be established. In the
remainder of this section, we provide some background on
the Output-Error (OE) model structure, while we also discuss the implicit assumptions made when a linear structure
of the dynamics is assumed.
3.1  Background: Output-Error (OE) model

If we assume that the relation between the input u and the undisturbed output of the system w can be written as a linear difference equation, and that the disturbance consists of white measurement noise v, then we obtain

w(k) + f_1 w(k−1) + ... + f_{n_f} w(k−n_f) = b_1 u(k−1) + ... + b_{n_b} u(k−n_b),    (9)

and the output is y(k) = w(k) + v(k), for some positive integers n_f ≥ n_b. The parameter vector to be determined is θ ≐ (b_1, ..., b_{n_b}, f_1, ..., f_{n_f}). Since w(k) is not observed, it should be constructed from previous inputs and it should carry an index θ.

The natural predictor (resulting from a maximum a-posteriori predictor¹) is ŷ(k|θ) = w(k, θ), and it is constructed from past inputs only. If we define the vector

ϕ(k, θ) ≐ ( u(k−1), ..., u(k−n_b), −w(k−1, θ), ..., −w(k−n_f, θ) ),    (10)

then the predictor can be written more compactly as ŷ(k|θ) = ϕ(k, θ)^T θ, leading to a linear regression model (LRM), where ϕ(k, θ) is called the regression vector. To simplify notation, in several cases we will write ϕ(k) instead of ϕ(k, θ). Note that in the above expression the w(k−j, θ), j = 1, 2, ..., n_f, are not observed, but using the above maximum a-posteriori predictor they can be computed from previous inputs as follows: w(k−j, θ) = ŷ(k−j|θ), j = 1, 2, ..., n_f.

Such output-error model predictors will be used throughout the paper. However, note that, depending on the physics of the identified system, the regression vector ϕ defined in (10) may be nonlinear with respect to the lagged input/output variables. In such cases, the resulting predictor will be referred to as a linear regression model with nonlinear regressors (NRM).

¹ i.e., the one that maximizes the probability density function of the output given observations up to the previous time instant.

3.2  Discussion

Output-error model structures do not consider any disturbance terms in the process dynamics; instead, they only consider measurement noise. Hence, such model structures are rather appropriate for providing a clear picture of the impact of the assumed (physical) model on the prediction error. On the other hand, when considering structures with process noise, such as ARMAX model structures (cf., [9, Section 4.2]), the process dynamics are perturbed by artificial terms, which are not easily motivated by physical insight. Since the goal of this paper is to evaluate the impact of the assumed (physical) dynamics, we consider OE models more appropriate for evaluating predictions. This, however, does not imply that the performance of an OE model structure is necessarily better compared to an ARMAX model; providing such a comparison is not the goal of this paper.

Furthermore, identifiability (cf. [9, Section 4.6]) of the considered OE model structures will be guaranteed by the (inherent) controllability of the heat-mass transfer dynamics.

3.3  Linear approximation & implicit assumptions

We wish to explore the utility of an OE model structure using a linear regression vector ϕ for identifying the nonlinear system dynamics of Equation (5). Note that the original dynamics (5) are bilinear in nature due to multiplications of the flow rates (V̇_{w,i}, V̇_{a,i}) with state or disturbance variables. An investigation of the original dynamics (5) reveals that when an output-error model is considered that admits a linear regression vector ϕ, we implicitly admit assumptions that may lead to significant identification errors. To see this, let us consider the full information structure (FI). In a forthcoming section, we will show that the following approximation of the RH part of the dynamics holds for sufficiently small sampling period ε > 0:

T_{w,i}(k+1) ≈ (1 + ε a_{wc,i}(V̇_{w,i}(k))) T_{w,i}(k) + ε a_{ww,i}(V̇_{w,i}(k)) T^+_{w,i}(k) + ε a_{w,i} T_{r,i}(k),    (11)

plus higher-order terms of ε. According to this (finite-response) approximation, a linear regression vector ϕ may describe well the evolution of T_{w,i} as long as either the flow rate V̇_{w,i} or the temperatures T_{w,i} and T^+_{w,i} are not varying significantly with time. However, variations in the flow rate V̇_{w,i} and in the water temperature T_{w,i} may be large over time. Similar conclusions hold when investigating the effect of the air flow rate V̇_{a,i} on the evolution of the zone temperature. The exact effect of these nonlinearities cannot be determined a priori.

4  Linear Regression Models with Nonlinear Regressors (NRM)

In this section, we explore the formulation of OE model structures when the regressors in ϕ may be nonlinear functions of lagged input/output variables. Such an investigation will be performed under the FI, MI and LI structures, extending prior work of the same authors [1] to a larger set of possible information structures (beyond the FI structure).

For the derivation, we will be using the following notation. Let q^{-1}{·} denote the one-step delay operator, i.e., q^{-1}{x(k)} = x(k−1). Note that the delay operator is linear. Let us also define the following operators:

P_{s,ij} ≐ [(1 + ε a_{s,ij}) q^{-1}],
P_{wc,i}(V̇_{w,i}) ≐ [(1 + ε a_{wc,i}(V̇_{w,i})) q^{-1}],
P_{r,i}(V̇_{a,i}) ≐ [(1 + ε a_{r,i}(V̇_{a,i})) q^{-1}],

where ε > 0 defines the sampling period. Define also

Q_{s,ij} ≐ [1 − P_{s,ij}],
Q_{wc,i}(V̇_{w,i}) ≐ [1 − P_{wc,i}(V̇_{w,i})],
Q_{r,i}(V̇_{a,i}) ≐ [1 − P_{r,i}(V̇_{a,i})].

For any operator P{·}, P^{-1}{·} will denote its inverse operator, i.e., P^{-1}{P{x(k)}} = x(k).
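As a small illustration of the OE predictor recursion ŷ(k|θ) = ϕ(k, θ)^T θ of Section 3.1, the following Python sketch simulates the predictor for a given θ, feeding w(k−j, θ) = ŷ(k−j|θ) back into the regression vector. The model order, parameter values and input signal are hypothetical and used only to show the mechanics.

```python
import numpy as np

def oe_predict(u, b, f):
    """Simulate the OE predictor y_hat(k|theta) = phi(k,theta)^T theta,
    where phi stacks lagged inputs u(k-1..k-nb) and lagged noise-free
    predictions -w(k-1..k-nf), with w(k-j,theta) = y_hat(k-j|theta)."""
    nb, nf = len(b), len(f)
    theta = np.concatenate([b, f])
    y_hat = np.zeros(len(u))
    for k in range(len(u)):
        u_lags = [u[k - m] if k - m >= 0 else 0.0 for m in range(1, nb + 1)]
        w_lags = [-y_hat[k - m] if k - m >= 0 else 0.0 for m in range(1, nf + 1)]
        phi = np.array(u_lags + w_lags)
        y_hat[k] = phi @ theta
    return y_hat

# Hypothetical first-order example: w(k) - 0.9 w(k-1) = 0.2 u(k-1), step input
b = np.array([0.2])
f = np.array([-0.9])
u = np.ones(50)
print(oe_predict(u, b, f)[:5])
```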
Property 4.1. For each zone i ∈ I, the family of operators {Q_{s,ij}}_{j∈N_i} is pairwise commutative, i.e.,

Q_{s,ij} Q_{s,ij′} {x(k)} = Q_{s,ij′} Q_{s,ij} {x(k)}

for any j, j′ ∈ N_i with j ≠ j′.

Proof. For each zone i ∈ I, and for any two neighboring zones j, j′ ∈ N_i, we have the following:

Q_{s,ij} Q_{s,ij′} {x(k)}
 = [1 − P_{s,ij}] [1 − P_{s,ij′}] {x(k)}
 = [1 − P_{s,ij}] { x(k) − (1 + ε a_{s,ij′}) x(k−1) }
 = x(k) − (1 + ε a_{s,ij′}) x(k−1) − (1 + ε a_{s,ij}) x(k−1) + (1 + ε a_{s,ij})(1 + ε a_{s,ij′}) x(k−2).

It is straightforward to check that the same expression also results if we expand Q_{s,ij′} Q_{s,ij}, due to the fact that (1 + ε a_{s,ij}) commutes with (1 + ε a_{s,ij′}). Thus, the conclusion follows. •

Another two properties that will be handy in several cases are the following:

Property 4.2. For each zone i ∈ I, we have

[Q_{s,ij} Q_{wc,i}(V̇_{w,i}(k))] = [Q_{wc,i}(V̇_{w,i}(k)) Q_{s,ij}] − (1 + ε a_{s,ij}) ε [(1 − q^{-1}) a_{wc,i}(V̇_{w,i}(k))] q^{-2}.

Proof. See Appendix A. •

Property 4.3. For some finite index set A and signal x(k),

[ ∏_{j∈A} Q_{s,ij} ] {x(k)} = x(k) + ∑_{m=1}^{|A|} α_m x(k−m),    (13)

for some constants α_1, ..., α_{|A|}.

Proof. The conclusion follows in a straightforward manner by induction and the definition of the delay operator Q_{s,ij}. •

4.1  Full-Information (FI) structure

Using the natural decomposition of the dynamics under the FI structure described in Section 2.3, in the following subsections we provide a derivation (using physical insight) of nonlinear regression vectors under an OE model structure for the two subsystems of the overall dynamics.

4.1.1  RH dynamics

By isolating the RH dynamics, we first formulate a (finite-response) prediction of the water temperature T_{w,i}, as the following proposition describes.

Proposition 4.1 (NRM for RH under FI). For sufficiently small sampling period ε > 0, the RH dynamics (8) can be approximated by

T_{w,i}(k) ≈ P_{wc,i}(V̇_{w,i}(k−1)){T_{w,i}(k)} + ε a_{ww,i}(V̇_{w,i}(k−1)) T^+_{w,i}(k−1) + ε a_{w,i} T_{r,i}(k−1),    (12)

plus higher-order terms of ε. Furthermore, the maximum a-posteriori predictor of T_{w,i}(k) can be approximated by T̂_{w,i}(k|θ_{w,i}) ≈ ϕ_{w,i}(k)^T θ_{w,i} plus higher-order terms of ε, where θ_{w,i} is a vector of unknown parameters and

ϕ_{w,i}(k) ≐ col{ T̂_{w,i}(k−1|θ_{w,i}),  V̇_{w,i}(k−1) T̂_{w,i}(k−1|θ_{w,i}),  V̇_{w,i}(k−1) T^+_{w,i}(k−1),  T_{r,i}(k−1) }.

Proof. By a Taylor-series expansion of the RH dynamics, the finite-step response of the water temperature T_{w,i}(k) is

T_{w,i}(k) ≈ (1 + ε a_{wc,i}(V̇_{w,i}(k−1))) T_{w,i}(k−1) + ε a_{ww,i}(V̇_{w,i}(k−1)) T^+_{w,i}(k−1) + ε a_{w,i} T_{r,i}(k−1),

plus higher-order terms of ε. Equation (12) then results directly from the definition of the delay operator P_{wc,i}(V̇_{w,i}). From Equation (12), a regression vector of the desired form may be derived. •

4.1.2  Zone dynamics

A similar approach to Proposition 4.1 for the derivation of a nonlinear regression vector can also be employed for the zone dynamics.

Proposition 4.2 (NRM for Zone under FI). For sufficiently small sampling period ε > 0, the maximum a-posteriori predictor of T_{r,i}(k) can be approximated by T̂_{r,i}(k|θ_{r,i}) ≈ ϕ_{r,i}(k)^T θ_{r,i} plus higher-order terms of ε, where θ_{r,i} is a vector of unknown parameters and

ϕ_{r,i}(k) ≐ col{
  col{ T̂_{r,i}(k−m|θ_{r,i}) }_{m=1}^{|N_i|+1},
  col{ col{ T_{r,j}(k−m) }_{m=2}^{|N_i|+1} }_{j∈N_i},
  col{ V̇_{a,i}(k−m) T̂_{r,i}(k−m|θ_{r,i}) }_{m=1}^{|N_i|+1},
  col{ V̇_{a,i}(k−m) T^+_{a,i}(k−m) }_{m=1}^{|N_i|+1},
  col{ T̂_{w,i}(k−m|θ_{w,i}) }_{m=1}^{|N_i|+1},
  col{ Q̇_{ext,i}(k−m) }_{m=1}^{|N_i|+1}
}.

Proof. See Appendix B. •
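To make the structure of the predictor of Proposition 4.1 concrete, the following minimal Python sketch assembles the regression vector ϕ_{w,i}(k) and runs the OE-style recursion that feeds the previous water-temperature prediction back into the regressor. The parameter values and signals are hypothetical.

```python
import numpy as np

def phi_w(Tw_hat_prev, Vw_prev, Tw_in_prev, Tr_prev):
    """Nonlinear regression vector phi_w,i(k) of Proposition 4.1 (FI case)."""
    return np.array([Tw_hat_prev,
                     Vw_prev * Tw_hat_prev,
                     Vw_prev * Tw_in_prev,
                     Tr_prev])

def predict_Tw(theta_w, Vw, Tw_in, Tr, Tw0):
    """Run T̂_w,i(k) = phi_w,i(k)^T theta_w, reusing the previous prediction."""
    N = len(Vw)
    Tw_hat = np.zeros(N)
    Tw_hat[0] = Tw0
    for k in range(1, N):
        Tw_hat[k] = phi_w(Tw_hat[k-1], Vw[k-1], Tw_in[k-1], Tr[k-1]) @ theta_w
    return Tw_hat

# Hypothetical parameters and constant excitation, for illustration only
theta_w = np.array([0.95, -2.0, 2.0, 0.05])
Vw = np.full(100, 0.05)
Tw_in = np.full(100, 45.0)
Tr = np.full(100, 21.0)
print(predict_Tw(theta_w, Vw, Tw_in, Tr, Tw0=30.0)[:5])
```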
4.2  Medium-Information (MI) structure

The case of the MI structure is slightly different from the FI structure, since the water temperature T_{w,i} at the point of heat exchange cannot be directly measured; therefore the dynamics of the RH and the zone cannot be separated as was demonstrated in Figure 2. The following proposition provides the corresponding NRM predictor for the MI structure.

Proposition 4.3 (NRM for Zone under MI). For sufficiently small sampling period ε > 0, the maximum a-posteriori predictor of T_{r,i}(k) can be approximated by T̂_{r,i}(k|θ_{r,i}) ≈ ϕ_{r,i}(k)^T θ_{r,i} plus higher-order terms of ε, where θ_{r,i} is a vector of unknown parameters and

ϕ_{r,i}(k) ≐ col{
  col{ T̂_{r,i}(k−m|θ_{r,i}) }_{m=1}^{|N_i|+2},
  col{ col{ T_{r,j}(k−m) }_{m=2}^{|N_i|+2} }_{j∈N_i},
  col{ V̇_{a,i}(k−m) T̂_{r,i}(k−m|θ_{r,i}) }_{m=1}^{|N_i|+2},
  col{ V̇_{a,i}(k−m) T^+_{a,i}(k−m) }_{m=1}^{|N_i|+2},
  col{ V̇_{a,i}(k−m) V̇_{w,i}(k−m) T^+_{a,i}(k−m) }_{m=2}^{|N_i|+2},
  col{ V̇_{w,i}(k−m−1) T̂_{r,i}(k−m|θ_{r,i}) }_{m=1}^{|N_i|+1},
  col{ V̇_{w,i}(k−m) T̂_{r,i}(k−m|θ_{r,i}) }_{m=2}^{|N_i|+2},
  col{ V̇_{w,i}(k−m) V̇_{a,i}(k−m) T̂_{r,i}(k−m|θ_{r,i}) }_{m=2}^{|N_i|+2},
  col{ V̇_{w,i}(k−m) T^+_{w,i}(k−m) }_{m=2}^{|N_i|+2},
  col{ Q̇_{ext,i}(k−m) }_{m=1}^{|N_i|+2},
  col{ V̇_{w,i}(k−m) Q̇_{ext,i}(k−m) }_{m=2}^{|N_i|+2}
}.    (14)

Proof. The proof follows reasoning similar to the proof of Proposition 4.2 and is presented in Appendix C. •
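As a minimal sketch of how the regressors of (14) could be assembled in practice, the following Python function stacks the listed lag products for the case of a single neighboring zone (|N_i| = 1, so the maximum lag is 3). The array names are illustrative placeholders for the measured and predicted time series; the sample index k is assumed large enough for all lags to exist.

```python
import numpy as np

def phi_r_MI(k, Tr_hat, Tr_nb, Va, Vw, Ta_in, Tw_in, Qext, n_nb=1):
    """Assemble the MI regression vector phi_r,i(k) of Eq. (14) for |N_i| = n_nb.
    All arguments are 1-D time series; k must satisfy k >= n_nb + 3."""
    L = n_nb + 2                      # maximum lag |N_i| + 2
    lag = lambda x, m: x[k - m]
    e = []
    e += [lag(Tr_hat, m) for m in range(1, L + 1)]                       # T̂_r,i(k-m)
    e += [lag(Tr_nb, m) for m in range(2, L + 1)]                        # T_r,j(k-m)
    e += [lag(Va, m) * lag(Tr_hat, m) for m in range(1, L + 1)]          # V̇_a T̂_r
    e += [lag(Va, m) * lag(Ta_in, m) for m in range(1, L + 1)]           # V̇_a T^+_a
    e += [lag(Va, m) * lag(Vw, m) * lag(Ta_in, m) for m in range(2, L + 1)]
    e += [Vw[k - m - 1] * lag(Tr_hat, m) for m in range(1, L)]           # V̇_w(k-m-1) T̂_r(k-m)
    e += [lag(Vw, m) * lag(Tr_hat, m) for m in range(2, L + 1)]
    e += [lag(Vw, m) * lag(Va, m) * lag(Tr_hat, m) for m in range(2, L + 1)]
    e += [lag(Vw, m) * lag(Tw_in, m) for m in range(2, L + 1)]           # V̇_w T^+_w
    e += [lag(Qext, m) for m in range(1, L + 1)]
    e += [lag(Vw, m) * lag(Qext, m) for m in range(2, L + 1)]
    return np.array(e)
```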
4.3  Limited-Information (LI) structure

Similarly to the derivation of the NRM for the FI and MI structures, we may also derive the corresponding NRM for the case of the LI structure. Note that the difference of this information structure (compared to MI) is that the inlet temperatures of the thermal media (T^+_{w,i} and T^+_{a,i}) cannot be measured. In this case, the only valid assumption is that the inlet temperatures of the thermal media remain constant with time. Then, Proposition 4.3 continues to hold when T^+_{w,i} and T^+_{a,i} are replaced with unity in (14) (i.e., they become part of the parameters in θ_{r,i}).
5
Performance Evaluation
In this section, we provide a comparison between standard linear regression models and the nonlinear ones derived
in Section 4. The comparison will be performed both with
respect to a) identification error, and b) prediction performance within a standard MPC implementation for climate
control in a residential building. The second part is considered the most representative of the performance of an identification model since even small prediction errors might lead
to significant performance degradation.
5.1
Simulation platform
In order to compare the proposed nonlinear regression
vectors with standard linear regressors, we used the EnergyPlus (V7-2-0) building simulator developed by the U.S.
Department of Energy [13]. The BCVTB simulation tool has also been used to allow data collection and the climate control developed in MATLAB to be implemented during run-time. A typical (three-storey) residential building in
Linz, Austria, was accurately modeled and simulated with
the EnergyPlus environment to allow for collecting data from
a realistic residential environment.
5.2
Data generation
The data collection for system identification was performed under normal operating conditions of the heating system during the winter months (October - April) under the
weather conditions of Linz, Austria. To replicate normal operating conditions, a standard hysteresis controller was employed, according to which the water flow V̇w,i (k) is updated
regularly at time instances tk = kTsam where Tsam = 1/12h.
The control law is as follows:

V̇_{w,i}(k) ≐ V̇_{w,max}   if p_i(k) > 0 and ( T_{r,i}(k) < T_set − ∆T  or  ( T_{r,i}(k) ≥ T_set − ∆T and T_{r,i}(k−1) ≤ T_{r,i}(k) ) ),
V̇_{w,i}(k) ≐ 0            otherwise,
where ∆T determines a small temperature range about the
desired (set) temperature, Tset , in which the control flow
maintains its value and V̇w,max denotes the maximum water
flow set to 0.0787kg/sec. Furthermore, pi (k) ∈ {0, 1} indicates whether people are present in thermal zone i. In other
words, the hysteresis controller applies only if someone is
present in the thermal zone i, which can be easily determined
through a motion detector. Furthermore, the inlet water temperature T^+_{w,i} is determined by the following heating curve:

T^+_{w,i} ≐ ρ_0 + ρ_1 · (T_set − T_out)^ζ   if T_set > T_out,    and    T^+_{w,i} ≐ ρ_0   otherwise,    (15)
where ρ0 , ρ1 and ζ are positive constants. For the simulation,
we set ∆T = 0.1, ρ0 = 29.30, ρ1 = 0.80 and ζ = 0.97. The
set temperature was set to T_set = 21 °C.
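The data-generation logic described above can be summarized in the following short Python sketch of the hysteresis water-flow command and the heating curve (15); the numerical values mirror those stated in the text, and the function names are illustrative.

```python
def heating_curve(T_set, T_out, rho0=29.30, rho1=0.80, zeta=0.97):
    """Inlet water temperature from the heating curve (15)."""
    return rho0 + rho1 * (T_set - T_out) ** zeta if T_set > T_out else rho0

def hysteresis_flow(present, Tr_k, Tr_km1, T_set=21.0, dT=0.1, Vw_max=0.0787):
    """Water-flow command of the hysteresis controller used for data generation."""
    heat = present and (Tr_k < T_set - dT or
                        (Tr_k >= T_set - dT and Tr_km1 <= Tr_k))
    return Vw_max if heat else 0.0

# Example: occupied zone slightly below the set point, outdoor temperature 2 degC
print(hysteresis_flow(True, 20.7, 20.6), heating_curve(21.0, 2.0))
```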
There exists a natural ventilation system that operates
autonomously with an intermittent flow pattern V̇a,i . Thus,
the only control parameter is V̇w,i . All parameters mentioned
in Table 1 can be measured, except for {Ts,i j } j , which allows
for evaluating all considered structures (FI, MI and LI).
5.3
Recursive system identification
To utilize the linear and nonlinear regression vectors for
system identification of heat-mass transfer dynamics of a residential building, we use an OE model structure as described
in Section 4, while we employ a regularized recursive leastsquares implementation (cf., [17, Section 12.3]) for training its parameters. The reason for implementing a recursive identification procedure is primarily due to the fact that
the size of the available data is quite large, which makes the
use of standard least-squares approaches practically infeasible. Besides, predictions for the zone temperature will be
needed continuously during run-time, demanding more efficient computational schemes. Furthermore, a recursive
least squares implementation allows for an adaptive response
to more recent data, thus capturing more recent effects.
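One standard formulation of a regularized recursive least-squares update with exponential forgetting is sketched below; the exact variant of [17, Section 12.3] used in the experiments may differ in details such as the regularization and forgetting choices, which are assumptions here.

```python
import numpy as np

class RecursiveLS:
    """Regularized recursive least squares with exponential forgetting
    (one standard form; illustrative initialization values)."""
    def __init__(self, n_params, delta=1e3, lam=0.999):
        self.theta = np.zeros(n_params)
        self.P = delta * np.eye(n_params)   # regularized initial covariance
        self.lam = lam                      # forgetting factor

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)  # gain vector
        e = y - phi @ self.theta            # one-step prediction error
        self.theta = self.theta + k * e
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return e

# Usage sketch: feed regression vectors phi(k) (LRM or NRM) and measured T_r,i(k):
#   rls = RecursiveLS(n_params=phi_k.size)
#   err = rls.update(phi_k, Tr_measured_k)
```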
5.4
Identification experiments
The experiments performed in this study involve (a) a standard linear regression model, and (b) the derived NRM
of Proposition 4.3 (which corresponds to the MI structure).
Our intention is to evaluate the benefit of the more accurate
physical representation of the models derived in Section 4.
In both regression vectors (linear or nonlinear) we consider
|Ni | = 1, i.e., we implicitly assume that there is a single
neighboring zone. This can be justified by the fact that the
building can be considered as a single thermal zone, since
the same heating schedule is employed in all rooms.
In particular, the linear regression model (LRM) implemented corresponds to the following structure:
ϕ_{r,i}(k) ≐ col{
  col{ T̂_{r,i}(k−m|θ_{r,i}) }_{m=1}^{|N_i|+2},
  col{ col{ T_{r,j}(k−m) }_{m=1}^{|N_i|+2} }_{j∈N_i},
  col{ V̇_{a,i}(k−m) }_{m=1}^{|N_i|+2},
  col{ T^+_{a,i}(k−m) }_{m=1}^{|N_i|+2},
  col{ V̇_{w,i}(k−m) }_{m=1}^{|N_i|+2},
  col{ T^+_{w,i}(k−m) }_{m=1}^{|N_i|+2},
  col{ Q̇_{ext,i}(k−m) }_{m=1}^{|N_i|+2}
},    (16)
i.e., it only considers delayed samples of the state and the
available input and disturbance parameters. The NRM model
implemented corresponds to the one of Equation (14).
5.5  Persistence of Excitation

Following the analysis of Section 4, for any thermal zone i ∈ I, the output of the zone y_i(k) ≐ T_{r,i}(k) can briefly be expressed in the following form:

y_i(k) = G_{i0}(q) y_i(k) + ∑_{l∈L} G_{il}(q) z_{il}(k),    (17)

for some linear transfer functions G_{i0}(q), G_{il}(q), l ∈ L, and some index set L ⊂ N. The terms z_{il}(k) represent time sequences of input, disturbance or output signals, or products of such terms. For example, if we consider the MI model presented in (14), the terms z_{il} may correspond to a disturbance, such as T_{r,j}, j ∈ N_i, or to a product of a control input and the output, such as V̇_{w,i}(k) T_{r,i}(k).

A model of the generic form of equation (17) is uniquely determined by its corresponding transfer functions G_{i0}(q), G_{il}(q), l ∈ L. Thus, in order for the data to be informative, the data should be such that, for any two different models of the form (17), the following condition is satisfied:²

Ē[ ∆G_{i0}(q) y_i(k) + ∑_{l∈L} ∆G_{il}(q) z_{il}(k) ]² ≠ 0,    (18)

where ∆G_{i0}(q), ∆G_{il}(q) denote the corresponding differences between the transfer functions of the two different models. Moreover, Ē ≐ lim_{N→∞} (1/N) ∑_{k=1}^{N} E[·], where E[·] denotes the expectation operator. Since the models are assumed different, either ∆G_{i0}(q) or ∆G_{il}(q), for some l ∈ L, should be nonzero. Thus, if ∆G_{il}(q) ≠ 0 for some l ∈ L, the persistence of excitation³ of z_{il}(k) will guarantee (18).

Since data are collected through a closed-loop experiment, special attention is needed for terms z_{il}(k) which involve the product of the water flow rate V̇_{w,i}(k) with the output y_i(k) of the system, such as the term V̇_{w,i}(k) y_i(k) in (14). Note, however, that this term is nonlinear in both y_i(k) and the presence/occupancy indicator, since the hysteresis controller (a) applies only when someone is present in the zone and (b) is a nonlinear function of the output y_i(k). Thus, it is sufficient for either the input signal or the occupancy indicator to be persistently exciting in order for the experiment to be informative. In Figure 3, we provide the positive spectrum generated through the discrete Fast Fourier Transform (DFFT) of the disturbance and control signals.

For either the simplified model of (16) or the more detailed model of (14), note that any transfer function (from an input/disturbance to the zone temperature) requires the identification of at most 2(|N_i|+2) parameters. Thus, in order for an experiment to be informative enough, it is sufficient that the considered inputs/disturbances are persistently exciting of order 2(|N_i|+2) or higher. In the case of |N_i| = 1 (considered in this experiment), it suffices that any input/disturbance signal is persistently exciting of order 6 (or 5 in case one of the frequencies is at zero, as is the case for all signals in Figure 3). As demonstrated in Figure 3, this condition is satisfied by the input/disturbance signals, since for all of them the positive spectrum is non-zero in at least 3 distinct frequencies.

² cf. discussion in [9, Section 13.4].
³ cf. [9, Definition 13.2].

5.6  Identification error comparison

In Figure 4, we demonstrate the resulting identification error under the LRM of (16) and the NRM of (14) for |N_i| = 1. Note that the NRM achieves a smaller identification error of about 10%. This reduced error is observed later in time (due to the larger training time required by the larger number of terms used in the nonlinear regression vector). The data used for training correspond to data collected between October and April from the simulated building; however, the same data have been reused several times for better fitting. Thus, the time axis in Figure 4 does not correspond to the actual simulation time, but to the accumulated time index of the reused data.
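A rough numerical proxy for the excitation check discussed in Section 5.5 is to count the distinct frequencies at which a signal's positive spectrum is non-negligible. The following sketch does this with NumPy's FFT; the tolerance and the synthetic occupancy-like signal are illustrative assumptions, not the experimental data.

```python
import numpy as np

def num_excited_frequencies(x, tol=1e-6):
    """Count frequencies where the positive spectrum of x is non-negligible,
    as a rough proxy for its order of persistent excitation."""
    X = np.fft.rfft(x - np.mean(x))
    power = np.abs(X) ** 2
    return int(np.sum(power > tol * power.max()))

# Example with a synthetic on/off occupancy-like signal (illustrative only)
rng = np.random.default_rng(0)
occupancy = (rng.random(2048) < 0.3).astype(float)
print(num_excited_frequencies(occupancy))
```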
Fig. 3: Positive spectrum (power vs. frequency (1/h)) of the input and disturbance signals: presence, water flow, water inlet temperature, outdoor temperature, and solar radiation.

Fig. 4: Identification performance for the zone temperature: running-average squared error (°C²) over time (hours) under the LRM and the NRM.

5.7  Prediction performance comparison

A small identification error may not necessarily guarantee a good prediction performance, especially when predictions are required for several hours ahead. On the one hand, training for system identification is based upon the one-step-ahead innovations, which does not necessarily guarantee a similarly accurate prediction several hours ahead. On the other hand, training might have been performed under operating conditions that are (possibly) different from the operating conditions under which predictions are requested (e.g., weather conditions may dramatically change within a few days). This may potentially result in a significant degradation of the prediction performance when the prediction has not taken into account the effect of the nonlinearities present in the dynamics. The effect of these nonlinearities may not be known a priori, and a thorough investigation is necessary to understand their potential economic impact.

To this end, we designed a standard MPC for the RH system of the main living area of the residential building. The goal is to evaluate the performance of the previously derived (nonlinear) regression vectors for the zone temperature prediction compared to standard (black-box) linear regression vectors. The structure of the MPC is rather simple and addresses the following optimization problem:

min over {T^+_{w,i}(k), V̇_{w,i}(k)}_k of
  α ∑_{k=0}^{N_hor} p̂_i(k) ( T̂_{r,i}(k) − T_set(k) )² / N_hor
  + ∑_{k=0}^{N_hor−1} [ β T_sam ( T^+_{w,i}(k) − T̂^-_{w,i}(k) ) + γ T_sam V̇_{w,i}(k) ]    (19a)

s.t.  T̂_{r,i}(k) ≈ ϕ_{r,i}(k)^T θ_{r,i},    (19b)
      T̂^-_{w,i}(k) ≈ T̂_{w,i}(k) ≈ ϕ_{w,i}(k)^T θ_{w,i},    (19c)
      T^+_{w,i}(k) ∈ {40 °C, 45 °C},    (19d)
      V̇_{w,i}(k) ∈ {0, V̇_{w,max}},    (19e)
      k = 0, 1, 2, ..., N_hor − 1.

Note that the first part of the objective function (19a) corresponds to a comfort cost. It measures the average squared difference of the zone temperature from the desired (or set) temperature entered by the user at time k. The set temperature was set equal to 21 °C throughout the optimization horizon. The variable p̂_i(k) ∈ {0, 1} holds our estimate of whether people are present in zone i at time instance k.

The second part of the objective function (19a) corresponds to the heating cost, while the third part corresponds to the pump-electricity cost. The nonnegative parameters β, γ were previously identified for the heating system of the simulated building and take the values β = 0.3333 kW/(°C·h) and γ = 0.5278·10³ kW·sec/(h·m³). The non-negative constant α is introduced to allow for adjusting the importance of the comfort cost compared to the energy cost. A large value, equal to 10⁶, has been assigned to enforce high comfort.

The p̂_i(k), k = 1, 2, ..., as well as the outdoor temperature, T_out, and the solar gain, Q̇_ext(k), are assumed given (i.e., predicted with perfect accuracy). This assumption is essential in order to evaluate precisely the impact of our temperature predictions (19b) on the performance of the derived optimal controller.

The sampling period was set to T_sam = 1/12 h, the optimization period was set to T_opt = 1 h, and the optimization horizon was set to T_hor = 5 h. This implies that N_hor = 5·12 = 60. Furthermore, the control variables are the inlet water temperature, which assumes two values (40 °C and 45 °C), and the water flow, which assumes only two values, the minimum 0 kg/sec and the maximum V̇_{w,max} = 0.0787 kg/sec.

For the prediction model of (19b), we used either the LRM of (16) or the NRM of (14), both trained offline using data collected during normal operation of a hysteresis controller (as described in detail in Section 5.2). For the prediction model of the outlet water temperature used in the computation of the cost function, we used the prediction model derived in (13). As in Section 5.6, where the LRM and the NRM were compared, we evaluated the value of the cost function under these two alternatives. The performance of the two models with respect to the comfort cost (which corresponds to the first term of the objective (19a)) is presented in Figure 5. The performance of the two models with respect to the energy spent (i.e., heating and pump electricity cost), which corresponds to the second and third terms of the objective (19a), is presented in Figure 6. Note that the NRM achieves a lower comfort cost using a smaller amount of energy, which is an indication of small prediction errors.
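For concreteness, the following Python sketch evaluates the objective (19a) for candidate control sequences over the horizon, given caller-supplied prediction functions for the zone and water temperatures (e.g., built from the NRM predictors). The search strategy over the binary control values is not specified in the text and is therefore left outside this sketch; the function names and array conventions are assumptions.

```python
import numpy as np

def mpc_cost(Vw_seq, Tw_in_seq, predict_Tr, predict_Tw, p_hat, T_set=21.0,
             alpha=1e6, beta=0.3333, gamma=0.5278e3, T_sam=1.0/12):
    """Evaluate objective (19a) for candidate sequences V̇_w,i(k), T^+_w,i(k),
    k = 0..N_hor-1.  predict_Tr and predict_Tw are caller-supplied predictors
    returning arrays of length N_hor+1 and N_hor, respectively; p_hat has
    length N_hor+1."""
    N_hor = len(Vw_seq)
    Tr_hat = predict_Tr(Vw_seq, Tw_in_seq)      # zone-temperature prediction
    Tw_hat = predict_Tw(Vw_seq, Tw_in_seq)      # outlet-temperature estimate
    comfort = alpha * np.sum(np.asarray(p_hat) * (Tr_hat - T_set) ** 2) / N_hor
    heating = np.sum(beta * T_sam * (np.asarray(Tw_in_seq) - Tw_hat))
    pumping = np.sum(gamma * T_sam * np.asarray(Vw_seq))
    return comfort + heating + pumping
```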
Fig. 5: Running-average comfort cost |T_{r,i}(k) − T_set| (°C) over time (h) under the LRM and the NRM.
Fig. 6: Running-average heating and pump electricity cost (kWh/T_sam) over time (h) under the LRM and the NRM.

5.8  Discussion

Note that the NRM achieves an improvement in both prediction accuracy and cost performance. The fact that the LRM may not provide a similarly good performance as the
NRM could be attributed to the fact that the operating conditions for testing the MPC controller might not be identical with the training conditions. Note, for example, that
the training phase was performed under the heating-curve
pattern of (15), while the MPC employs only two alternative water inlet temperatures. This variation in the inlet water temperature may have exhibited different patterns in the
nonlinear parts of the dynamics, thus affecting the prediction
performance.
The simulation study presented in this paper considered a regression vector generated with |N_i| = 1, which corresponds to the case of a single neighboring zone (e.g., the outdoor environment). The proposed regression models are also applicable to higher-order models (i.e., when more than one neighboring zone is present); however, it might be necessary for some of the input signals to be designed to include a larger number of frequencies. Note, however, that the input signals that can be designed (e.g., medium flow rates and inlet temperatures) appear as products with other disturbance variables in the derived regression models (e.g., equation (14)). Thus, persistence of excitation for such products of non-correlated signals is an easier task. On the other hand, some of the disturbance signals (e.g., the solar radiation or the outdoor temperature) cannot be designed. Of course, the prediction model will operate at conditions that are similar to those under which it was identified; however, the impact of such disturbance signals when they are not informative enough needs to be further investigated.
6
Conclusions
We derived analytically regression models admitting
nonlinear regressors specifically tailored for system identification and prediction for thermal dynamics in buildings. The
proposed models were compared with standard (black-box)
linear regression models derived through the OE structure,
and an improvement was observed with respect to both the
resulting prediction performance as well as the energy cost.
The greatest advantage of the proposed identification scheme
lies in the fact that it provides a richer and more accurate
representation of the underlying physical phenomena, contrary to standard black-box identification schemes.
References
[1] Chasparis, G., and Natschläger, T., 2014. “Nonlinear system identification of thermal dynamics in buildings”. In Control Conference (ECC), 2014 European,
pp. 1649–1654.
[2] Nghiem, T., and Pappas, G., 2011. “Receding-horizon
supervisory control of green buildings”. In Proc. of the
2011 American Control Conference, pp. 4416–4421.
[3] Oldewurtel, F., Parisio, A., Jones, C., Morari, M., Gyalistras, D., Gwerder, M., Stauch, V., Lehmann, B., and
Morari, M., 2012. “Use of model predictive control and
weather forecasts for energy efficient building climate
control”. Energy and Buildings, 45, pp. 15–27.
[4] Nghiem, T., Pappas, G., and Mangharam, R., 2013.
“Event-based green scheduling of radiant systems in
buildings”. In Proc. of 2013 American Control Conference (ACC), pp. 455–460.
[5] Touretzky, C., and Baldea, M., 2013. “Model reduction
and nonlinear MPC for energy management in buildings”. In Proc. of 2013 American Control Conference
(ACC), pp. 455–460.
[6] Coogan, S., Ratliff, L., Calderone, D., Tomlin, C., and
Sastry, S., 2013. “Energy management via pricing in
LQ dynamic games”. In Proc. of 2013 American Control Conference (ACC), pp. 443–448.
[7] Rasmussen, B., Alleyne, A., and Musser, A., 2005.
“Model-driven system identification of transcritical vapor compression systems”. IEEE Transactions on Control Systems Technology, 13(3), pp. 444–451.
[8] Maasoumy, M., Razmara, M., Shahbakhti, M., and Vincentelli, A. S., 2014. “Handling model uncertainty in
model predictive control for energy efficient buildings”.
Energy and Buildings, 77, pp. 377 – 392.
[9] Ljung, L., 1999. System Identification: Theory for the
User, 2nd ed. Prentice Hall Ptr, Upper Saddle River,
NJ.
[10] Yiu, J.-M., and Wang, S., 2007. “Multiple ARMAX
modeling scheme for forecasting air conditioning system performance”. Energy Conversion and Management, 48, pp. 2276–2285.
[11] Scotton, F., Huang, L., Ahmadi, S., and Wahlberg, B.,
2013. “Physics-based modeling and identification for
HVAC systems”. In Proc. of 2013 European Control
Conference (ECC), pp. 1404–1409.
[12] Malisani, P., Chaplais, F., Petit, N., and Feldmann, D.,
2010. “Thermal building model identification using
time-scaled identification models”. In Proc. of 49th
IEEE Conference on Decision and Control, pp. 308–
315.
[13] EnergyPlus. EnergyPlus Energy Simulation software
Version: 7-2-0.
[14] Karnopp, D., Margolis, D., and Rosenberg, R., 2012.
System Dynamics: Modeling, Simulation and Control
of Mechatronic Systems, 5th ed. John Wiley & Sons,
Inc, Hoboken, NJ.
[15] Thirumaleshwar, M., 2009. Fundamentals of Heat &
Mass Transfer. Dorling Kindersley (India) Pvt. Ltd,
New Dehli, India.
[16] Karnopp, D., 1978. “Pseudo bond graphs for thermal
energy transport”. ASME Journal of Dynamic Systems,
Measurement and Control, 100, pp. 165–169.
[17] Sayed, A., 2003. Fundamentals of Adaptive Filtering.
John Wiley & Sons, Inc., New Jersey.
A  Proof of Property 4.2

We can write:

[Q_{s,ij} Q_{wc,i}(V̇_{w,i}(k))]{x(k)}
 = Q_{s,ij}{ Q_{wc,i}(V̇_{w,i}(k)){x(k)} }
 = Q_{s,ij}{ x(k) − (1 + ε a_{wc,i}(V̇_{w,i}(k))) x(k−1) }
 = x(k) − (1 + ε a_{wc,i}(V̇_{w,i}(k))) x(k−1) − (1 + ε a_{s,ij}) x(k−1) + (1 + ε a_{s,ij}) x(k−2) + (1 + ε a_{s,ij}) ε a_{wc,i}(V̇_{w,i}(k−1)) x(k−2).

Similarly, we have:

[Q_{wc,i}(V̇_{w,i}(k)) Q_{s,ij}]{x(k)}
 = x(k) − (1 + ε a_{wc,i}(V̇_{w,i}(k))) x(k−1) − (1 + ε a_{s,ij}) x(k−1) + (1 + ε a_{s,ij}) x(k−2) + (1 + ε a_{s,ij}) ε a_{wc,i}(V̇_{w,i}(k)) x(k−2).

Thus, we have:

[Q_{s,ij} Q_{wc,i}(V̇_{w,i}(k))]{x(k)} = [Q_{wc,i}(V̇_{w,i}(k)) Q_{s,ij}]{x(k)} − (1 + ε a_{s,ij}) ε [ a_{wc,i}(V̇_{w,i}(k)) − a_{wc,i}(V̇_{w,i}(k−1)) ] x(k−2).

Finally, we may write:

[Q_{s,ij} Q_{wc,i}(V̇_{w,i}(k))] = [Q_{wc,i}(V̇_{w,i}(k)) Q_{s,ij}] − (1 + ε a_{s,ij}) ε [(1 − q^{-1}) a_{wc,i}(V̇_{w,i}(k))] q^{-2}.    (20)
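The operator identity (20) can also be checked numerically on a finite signal. The following Python sketch applies the delay operators to a random sequence with zero initial conditions and verifies that both sides of (20) agree; the coefficient values are arbitrary illustrative choices.

```python
import numpy as np

def Q_s(x, a_s, eps):
    """Apply Q_s,ij = 1 - (1 + eps*a_s) q^{-1} to a finite signal x."""
    xd = np.concatenate(([0.0], x[:-1]))          # q^{-1}{x}
    return x - (1 + eps * a_s) * xd

def Q_wc(x, a_wc, eps):
    """Apply Q_wc,i = 1 - (1 + eps*a_wc(k)) q^{-1} with time-varying a_wc(k)."""
    xd = np.concatenate(([0.0], x[:-1]))
    return x - (1 + eps * a_wc) * xd

rng = np.random.default_rng(1)
eps, a_s = 0.05, -0.3
x = rng.standard_normal(200)
a_wc = -0.2 + 0.1 * rng.standard_normal(200)      # plays the role of a_wc,i(V̇_w,i(k))

lhs = Q_s(Q_wc(x, a_wc, eps), a_s, eps)
# Right-hand side of (20): swapped operators minus the correction term
a_wc_prev = np.concatenate(([0.0], a_wc[:-1]))    # q^{-1}{a_wc}
x2 = np.concatenate(([0.0, 0.0], x[:-2]))         # q^{-2}{x}
rhs = Q_wc(Q_s(x, a_s, eps), a_wc, eps) - (1 + eps * a_s) * eps * (a_wc - a_wc_prev) * x2
print(np.max(np.abs(lhs - rhs)))                  # ~0 up to floating-point error
```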
B  Proof of Proposition 4.2

A Taylor-series expansion of the continuous-time zone dynamics leads to the following approximation:

T_{r,i}(k) ≈ P_{r,i}(V̇_{a,i}(k−1)){T_{r,i}(k)} + ε ∑_{j∈N_i} a^+_{rs,ij} T_{s,ij}(k−1) + ε a_{rw,i} T_{w,i}(k−1) + ε a_{ra,i}(V̇_{a,i}(k−1)) T^+_{a,i}(k−1) + ε a_{ext,i} Q̇_{ext,i}(k−1),    (21)

plus higher-order terms of ε. Furthermore, note that the finite response of the separator dynamics can be approximated by

T_{s,ij}(k) ≈ Q^{-1}_{s,ij}{ ε a^+_{s,ij} T_{r,i}(k−1) + ε a^-_{s,ij} T_{r,j}(k−1) },    (22)

plus higher-order terms of ε. Given that the family of operators Q_{s,ij}, j ∈ N_i, is pairwise commutative by Property 4.1, if we replace T_{s,ij} into (21) and we multiply both sides of the above expression by the composite operator Q_{s,ij_1} ··· Q_{s,ij_{|N_i|}}, briefly denoted by ∏_{j∈N_i} Q_{s,ij}{·}, we have:

T_{r,i}(k) ≈ [ 1 − ∏_{j∈N_i} Q_{s,ij} ]{ T_{r,i}(k) }
 + [ ∏_{j∈N_i} Q_{s,ij} ]{ P_{r,i}(V̇_{a,i}(k−1)){T_{r,i}(k)} }
 + ε² ∑_{j∈N_i} a^+_{rs,ij} [ ∏_{ℓ≠j} Q_{s,iℓ} ]{ a^+_{s,ij} T_{r,i}(k−2) }
 + ε² ∑_{j∈N_i} a^+_{rs,ij} [ ∏_{ℓ≠j} Q_{s,iℓ} ]{ a^-_{s,ij} T_{r,j}(k−2) }
 + ε a_{rw,i} [ ∏_{j∈N_i} Q_{s,ij} ]{ T_{w,i}(k−1) }
 + ε [ ∏_{j∈N_i} Q_{s,ij} ]{ a_{ra,i}(V̇_{a,i}(k−1)) T^+_{a,i}(k−1) }
 + ε a_{ext,i} [ ∏_{j∈N_i} Q_{s,ij} ]{ Q̇_{ext,i}(k−1) },

plus higher-order terms of ε. The conclusion follows directly by using Property 4.3 and expanding the terms of the above expression.
C  Proof of Proposition 4.3

Under the MI structure, the finite-response approximation of (21) continues to hold for sufficiently small sampling period ε. Note that the separator dynamics can still be approximated according to Equation (22). Furthermore, a Taylor-series expansion of the water dynamics leads to the following finite-response approximation:

T_{w,i}(k) ≈ Q^{-1}_{wc,i}(V̇_{w,i}(k−1)){ ε a_{ww,i}(V̇_{w,i}(k−1)) T^+_{w,i}(k−1) + ε a_{w,i} T_{r,i}(k−1) },

plus higher-order terms of ε. If we replace T_{s,ij} and T_{w,i} into (21) and we apply to both sides of the resulting expression the operator Q_{wc,i}(V̇_{w,i}(k−2)), we get

Q_{wc,i}(V̇_{w,i}(k−2)){T_{r,i}(k)} ≈
 Q_{wc,i}(V̇_{w,i}(k−2)){ P_{r,i}(V̇_{a,i}(k−1)){T_{r,i}(k)} }
 + ε² ∑_{j∈N_i} a^+_{rs,ij} Q_{wc,i}(V̇_{w,i}(k−2)) Q^{-1}_{s,ij}{ a^+_{s,ij} T_{r,i}(k−2) }
 + ε² ∑_{j∈N_i} a^+_{rs,ij} Q_{wc,i}(V̇_{w,i}(k−2)) Q^{-1}_{s,ij}{ a^-_{s,ij} T_{r,j}(k−2) }
 + ε² a_{rw,i} a_{ww,i}(V̇_{w,i}(k−2)) T^+_{w,i}(k−2)
 + ε² a_{rw,i} a_{w,i} T_{r,i}(k−2)
 + ε Q_{wc,i}(V̇_{w,i}(k−2)){ a_{ra,i}(V̇_{a,i}(k−1)) T^+_{a,i}(k−1) }
 + ε a_{ext,i} Q_{wc,i}(V̇_{w,i}(k−2)){ Q̇_{ext,i}(k−1) },

plus higher-order terms of ε. According to Property 4.2, we may perform the substitution

[Q_{s,ij} Q_{wc,i}(V̇_{w,i}(k−2))] = [Q_{wc,i}(V̇_{w,i}(k−2)) Q_{s,ij}] − (1 + ε a_{s,ij}) ε [(1 − q^{-1}) a_{wc,i}(V̇_{w,i}(k−2))] q^{-2}.

Thus, applying to both sides of the above expression the composite operator Q_{s,ij_1} ··· Q_{s,ij_{|N_i|}}, briefly denoted by ∏_{j∈N_i} Q_{s,ij}{·}, and making use of Property 4.1, we may write:

∏_{j∈N_i} Q_{s,ij} Q_{wc,i}(V̇_{w,i}(k−2)) Q_{r,i}(V̇_{a,i}(k−1)){T_{r,i}(k)} ≈
 ε² ∑_{j∈N_i} a^+_{rs,ij} [ ∏_{ℓ∈N_i\j} Q_{s,iℓ} ] Q_{wc,i}(V̇_{w,i}(k−2)){ a^+_{s,ij} T_{r,i}(k−2) }
 + ε² ∑_{j∈N_i} a^+_{rs,ij} [ ∏_{ℓ∈N_i\j} Q_{s,iℓ} ] Q_{wc,i}(V̇_{w,i}(k−2)){ a^-_{s,ij} T_{r,j}(k−2) }
 + ε² a_{rw,i} [ ∏_{j∈N_i} Q_{s,ij} ]{ a_{ww,i}(V̇_{w,i}(k−2)) T^+_{w,i}(k−2) }
 + ε² a_{rw,i} a_{w,i} [ ∏_{j∈N_i} Q_{s,ij} ]{ T_{r,i}(k−2) }
 + ε [ ∏_{j∈N_i} Q_{s,ij} ] Q_{wc,i}(V̇_{w,i}(k−2)){ a_{ra,i}(V̇_{a,i}(k−1)) T^+_{a,i}(k−1) }
 + ε a_{ext,i} [ ∏_{j∈N_i} Q_{s,ij} ] Q_{wc,i}(V̇_{w,i}(k−2)){ Q̇_{ext,i}(k−1) },    (23)

plus higher-order terms of ε. In the following, we approximate the terms of the above expression, neglecting terms of order ε³ or higher.

Note that

Q_{wc,i}(V̇_{w,i}(k−2)) Q_{r,i}(V̇_{a,i}(k−1)){T_{r,i}(k)} =
 T_{r,i}(k) − 2 T_{r,i}(k−1) + T_{r,i}(k−2)
 − ε a_{r,i}(V̇_{a,i}(k−1)) T_{r,i}(k−1) − ε a_{wc,i}(V̇_{w,i}(k−2)) T_{r,i}(k−1)
 + ε a_{wc,i}(V̇_{w,i}(k−2)) T_{r,i}(k−2) + ε a_{r,i}(V̇_{a,i}(k−2)) T_{r,i}(k−2)
 + ε² a_{wc,i}(V̇_{w,i}(k−2)) a_{r,i}(V̇_{a,i}(k−2)) T_{r,i}(k−2).

The following also hold:

Q_{wc,i}(V̇_{w,i}(k−2)){ a^+_{s,ij} T_{r,i}(k−2) }
 = a^+_{s,ij} T_{r,i}(k−2) − (1 + ε a_{wc,i}(V̇_{w,i}(k−2))) a^+_{s,ij} T_{r,i}(k−3)
 ≈ a^+_{s,ij} T_{r,i}(k−2) − a^+_{s,ij} T_{r,i}(k−3),

Q_{wc,i}(V̇_{w,i}(k−2)){ a_{ra,i}(V̇_{a,i}(k−1)) T^+_{a,i}(k−1) }
 = a_{ra,i}(V̇_{a,i}(k−1)) T^+_{a,i}(k−1) − (1 + ε a_{wc,i}(V̇_{w,i}(k−2))) a_{ra,i}(V̇_{a,i}(k−2)) T^+_{a,i}(k−2),

Q_{wc,i}(V̇_{w,i}(k−2)){ Q̇_{ext,i}(k−1) }
 = Q̇_{ext,i}(k−1) − (1 + ε a_{wc,i}(V̇_{w,i}(k−2))) Q̇_{ext,i}(k−2),

where the first expression ignores terms of ε since, in (23), it multiplies an expression of order ε². Using the above approximations and Property 4.3, the terms of (23) can be approximated as follows:

∏_{j∈N_i} Q_{s,ij} Q_{wc,i}(V̇_{w,i}(k−2)) Q_{r,i}(V̇_{a,i}(k−1)){T_{r,i}(k)}
 = T_{r,i}(k) + ∑_{m=1}^{|N_i|+2} α^{(1)}_{1,m} T_{r,i}(k−m) + ∑_{m=1}^{|N_i|+2} α^{(1)}_{2,m} V̇_{a,i}(k−m) T_{r,i}(k−m)
 + ∑_{m=1}^{|N_i|+1} α^{(1)}_{3,m} V̇_{w,i}(k−1−m) T_{r,i}(k−m) + ∑_{m=2}^{|N_i|+2} α^{(1)}_{4,m} V̇_{w,i}(k−m) T_{r,i}(k−m)
 + ∑_{m=2}^{|N_i|+2} α^{(1)}_{5,m} V̇_{w,i}(k−m) V̇_{a,i}(k−m) T_{r,i}(k−m),

[ ∏_{ℓ∈N_i\j} Q_{s,iℓ} ] Q_{wc,i}(V̇_{w,i}(k−2)){ a^+_{s,ij} T_{r,i}(k−2) } = ∑_{m=2}^{|N_i|+2} α^{(2)}_{1,m} T_{r,i}(k−m),

[ ∏_{ℓ∈N_i\j} Q_{s,iℓ} ] Q_{wc,i}(V̇_{w,i}(k−2)){ a^-_{s,ij} T_{r,j}(k−2) } = ∑_{m=2}^{|N_i|+2} α^{(3)}_{1,m} T_{r,j}(k−m),

[ ∏_{j∈N_i} Q_{s,ij} ]{ a_{ww,i}(V̇_{w,i}(k−2)) T^+_{w,i}(k−2) } = ∑_{m=2}^{|N_i|+2} α^{(4)}_{1,m} V̇_{w,i}(k−m) T^+_{w,i}(k−m),

[ ∏_{j∈N_i} Q_{s,ij} ] Q_{wc,i}(V̇_{w,i}(k−2)){ a_{ra,i}(V̇_{a,i}(k−1)) T^+_{a,i}(k−1) }
 = ∑_{m=1}^{|N_i|+2} α^{(5)}_{1,m} V̇_{a,i}(k−m) T^+_{a,i}(k−m) + ∑_{m=2}^{|N_i|+2} α^{(5)}_{2,m} V̇_{w,i}(k−m) V̇_{a,i}(k−m) T^+_{a,i}(k−m),

[ ∏_{j∈N_i} Q_{s,ij} ] Q_{wc,i}(V̇_{w,i}(k−2)){ Q̇_{ext,i}(k−1) }
 = ∑_{m=1}^{|N_i|+2} α^{(6)}_{1,m} Q̇_{ext,i}(k−m) + ∑_{m=2}^{|N_i|+2} α^{(6)}_{2,m} V̇_{w,i}(k−m) Q̇_{ext,i}(k−m),    (24)

for some constant parameters α^{(·)}_{·,m} ∈ R.

It is straightforward to check that, using the above approximations, Equation (23) can be written as a linear regression with a nonlinear regression vector of the form depicted in Equation (14).
Extracting sub-exposure images from a single capture
through Fourier-based optical modulation
Shah Rez Khana , Martin Feldmanb , Bahadir K. Gunturka,1,∗
arXiv:1612.08359v2 [] 13 Feb 2018
a Dept. of Electrical and Electronics Eng., Istanbul Medipol University, Istanbul, Turkey
b Div. of Electrical and Computer Eng., Louisiana State University, Baton Rouge, LA
Abstract
Through pixel-wise optical coding of images during exposure time, it is possible
to extract sub-exposure images from a single capture. Such a capability can be
used for different purposes, including high-speed imaging, high-dynamic-range
imaging and compressed sensing. In this paper, we demonstrate a sub-exposure
image extraction method, where the exposure coding pattern is inspired from
frequency division multiplexing idea of communication systems. The coding
masks modulate sub-exposure images in such a way that they are placed in
non-overlapping regions in Fourier domain. The sub-exposure image extraction
process involves digital filtering of the captured signal with proper band-pass
filters. The prototype imaging system incorporates a Liquid Crystal over Silicon
(LCoS) based spatial light modulator synchronized with a camera for pixel-wise
exposure coding.
1. Introduction
Coded aperture and coded exposure photography methods, which involve
control of aperture shape and exposure pattern during exposure period, present
new capabilities and advantages over traditional photography. In coded aperture photography, the aperture shape is designed to achieve certain goals. For
∗ Corresponding author
Email address: [email protected] (Bahadir K. Gunturk)
1 This work is supported by TUBITAK Grant 114C098.
Preprint submitted to Elsevier
February 14, 2018
example, the aperture shape can be designed to improve depth estimation accuracy as a part of depth-from-defocus technique [1], or to improve deblurring
performance through adjusting the zero crossings of point spread function [2].
Coded aperture photography may involve capture of multiple images, where
each image is captured with a different aperture shape, for instance, to acquire
light field [3], or to improve depth estimation and deblurring performance [4].
Using coded aperture, it is possible to do lensless imaging as well [5, 6].
In coded exposure photography, the exposure pattern is controlled during
exposure period. The coding can be global, where all pixels are exposed together
with a temporal pattern, or pixel-wise, where each pixel has its own exposure
pattern. An example of global exposure coding is the flutter shutter technique
[7], where the shutter is opened and closed according to a specific pattern during
exposure period to enable better recovery from motion blur. The flutter shutter
idea can also be used for high-speed imaging [8]. Pixel-wise exposure control
presents more flexibility and wider range of applications compared to global
exposure coding. An example of pixel-wise exposure control is presented in [9],
where the goal is to spatially adapt the dynamic range of the captured image.
Pixel-wise coded exposure imaging can also be used for focal stacking through
moving the lens during the exposure period [10], and for high-dynamic-range
video through per-pixel exposure offsets [11].
Pixel-wise exposure control is also used for high-speed imaging by extracting
sub-exposure images from a single capture. In [12], pixels are exposed according
to a regular non-overlapping pattern on the space-time exposure grid. Some
spatial samples are skipped in one time period to take samples in another time
period to improve temporal resolution. In other words, spatial resolution is
traded off for temporal resolution. In [13], there is also a non-overlapping spacetime exposure sampling pattern; however, unlike the global spatial-temporal
resolution trade-off approach of [12], the samples are integrated in various ways
to spatially adapt spatial and temporal resolutions according to the local motion.
For fast moving regions, fine temporal sampling is preferred; for slow moving
regions, fine spatial sampling is preferred. Instead of a regular exposure pattern,
Figure 1: Illustration of different exposure masks. (a) Non-overlapping uniform grid exposure
[13], (b) Coded rolling shutter [17], (c) Pixel-wise random exposure [15], (d) Frequency division
multiplexed imaging exposure [20].
random patterns can also be used [14, 15, 16]. In [14], pixel-wise random subexposure masks are used during exposure period. The reconstruction algorithm
utilizes the spatial correlation of natural images and the brightness constancy
assumption in temporal domain to achieve high-speed imaging. In [15, 16], the
reconstruction algorithm is based on sparse dictionary learning. While learning-based approaches may yield outstanding performance, one drawback is that the
dictionaries need to be re-trained each time a related parameter, such as target
frame rate, is changed.
Alternative to arbitrary pixel-wise exposure patterns, it is also proposed to
have row-wise control [17] and translated exposure mask [18, 19]. Row-wise
exposure pattern can be designed to achieve high-speed imaging, high-dynamicrange imaging, and adaptive auto exposure [17]. In [18], binary transmission
masks are translated during exposure period for exposure coding; sub-exposure
images are then reconstructed using an alternating projections algorithm. The
same coding scheme is also used in [19], but a different image reconstruction
approach is taken.
Pixel-wise exposure control can be implemented using regular image sensors
with the help of additional optical elements. In [9], an LCD panel is placed in
front of a camera to spatially control light attenuation. With such a system,
pixel-wise exposure control is difficult since the LCD attenuator is optically
defocused. In [21], an LCD panel is placed on the intermediate image plane,
which allows better pixel-by-pixel exposure control. One disadvantage of using
transmissive LCD panels is the low fill factor due to drive circuit elements
between the liquid crystal elements. In [22], a DMD reflector is placed on the
intermediate image plane. DMD reflectors have high fill factor and high contrast
ratio, thus they can produce sharper and higher dynamic range images compared
to LCD panels. One drawback of the DMD based design is that the micromirrors
on a DMD device reflect light at two small angles, thus the DMD plane and
the sensor plane must be properly inclined, resulting in “keystone” perspective
distortion. That is, a square DMD pixel is imaged as a trapezoid shape on
the sensor plane. As a result, pixel-to-pixel mapping between the DMD and the
sensor is difficult. In [23], a reflective LCoS spatial light modulator (SLM) is used
on the intermediate image plane. Because the drive circuits on an LCoS device
are on the back, high fill factor is possible as opposed to the transmissive LCD
devices. Compared to a DMD, one-to-one pixel correspondence is easier with an
LCoS SLM; however, the light efficiency is not as good as the DMD approach.
In [18], a lithographically patterned chrome-on-quartz binary transmission mask
is placed on the intermediate image plane, and moved during exposure period
with a piezoelectric stage for optical coding. This approach is limited in terms
of the exposure pattern that can be applied.
In this paper, we demonstrate a sub-exposure image extraction idea. The
idea, which is called frequency division multiplexed imaging (FDMI), was presented by Gunturk and Feldman as a conference paper [20]. While the FDMI
idea was demonstrated by merging two separate images with a patterned glass
based and an LCD panel based modulation in [20], it was not demonstrated
for sub-exposure image extraction. Here, we apply the FDMI idea to extract
sub-exposure images using an optical setup incorporating an LCoS SLM synchronized with a camera for exposure coding.
In Section 2, we present the problem of extracting sub-exposure images
through space-time exposure coding, and review the FDMI approach. In Section 3, we present the optical setup used in the experiments. In Section 4, we
provide experimental results with several coded image captures. In Section 5,
we conclude the paper with some future research directions.
2. Extracting Sub-Exposure Images from a Single Capture
There are various exposure coding schemes designed for extracting subexposure images from a single capture. First, we would like to present a formulation of the coding process, and then review the FDMI idea.
2.1. Space-Time Exposure Coding
Space-time exposure coding of an image can be formulated using a spatiotemporal video signal I(x, y, t), where (x, y) are the spatial coordinates and t
is the time coordinate. This signal is modulated during an exposure period T
with a mask m(x, y, t) to generate an image:
T
Z
I(x, y) =
m(x, y, t)I(x, y, t)dt.
(1)
0
The mask m(x, y, t) can be divided in time into a set of constant sub-exposure
masks: m1 (x, y) for t ∈ (t0 , t1 ), m2 (x, y) for t ∈ (t1 , t2 ), ..., mN (x, y) for t ∈
(tN −1 , tN ), where N is the number of masks, and t0 and tN are the start and end
times of the exposure period. Incorporating the sub-exposure masks mi (x, y)
into equation (1), the captured image becomes
I(x, y) = ∑_{i=1}^{N} m_i(x, y) ∫_{t_{i−1}}^{t_i} I(x, y, t) dt = ∑_{i=1}^{N} m_i(x, y) I_i(x, y),    (2)

where we define the sub-exposure image I_i(x, y) = ∫_{t_{i−1}}^{t_i} I(x, y, t) dt. The above
equation states that the sub-exposure images Ii (x, y) are modulated with the
masks mi (x, y) and added up to form the recorded image I(x, y). The goal of
sub-exposure image extraction is to estimate the images Ii (x, y) given the masks
and the recorded image.
As we have already mentioned in Section 1, the reconstruction process might
be based on different techniques, varying from simple interpolation [12] to dictionary learning [16]. The masks mi (x, y) can be chosen in different ways as
well. Some of the masks, including non-overlapping uniform grid [12], coded
Figure 2: Illustration of the FDMI idea with two images [20].
rolling shutter [17], pixel-wise random exposure [15], and frequency division
multiplexed imaging exposure [20], are illustrated in Figure 1.
2.2. Frequency Division Multiplexed Imaging
The frequency division multiplexed imaging (FDMI) idea [20] is inspired
from frequency division multiplexing method in communication systems, where
the communication channel is divided into non-overlapping sidebands, each of
which carry independent signals that are properly modulated. In case of FDMI,
sub-exposure images are modulated such that they are placed in different regions
in Fourier domain. By ensuring that the Fourier components of different subexposure images do not overlap, each sub-exposure image can be extracted from
the captured signal through band-pass filtering. The FDMI idea is illustrated
for two images in Figure 2. Two band-limited sub-exposure images I1 (x, y) and
I2 (x, y) are modulated with horizontal and vertical sinusoidal masks m1 (x, y)
and m2 (x, y) during exposure period. The masks can be chosen as raised cosines:
m1 (x, y) = a + bcos(2πu0 x) and m2 (x, y) = a + bcos(2πv0 y), where a and b
are positive constants with the condition a ≥ b so that the masks are nonnegative, that is, optically realizable, and u0 and v0 are the spatial frequencies
of the masks. The imaging system captures sum of the modulated images:
I(x, y) = m1 (x, y)I1 (x, y) + m2 (x, y)I2 (x, y).
The imaging process from the Fourier domain perspective is also illustrated
in Figure 2. Iˆ1 (u, v) and Iˆ2 (u, v) are the Fourier transforms of the sub-exposure
images I1 (x, y) and I2 (x, y), which are assumed to be band-limited. In Figure 2, the Fourier transforms Iˆ1 (u, v) and Iˆ2 (u, v) are shown as circular regions. The Fourier transforms of the masks are impulses: M̂1 (u, v) = aδ(u, v) +
(b/2)(δ(u - u0 , v) + δ(u + u0 , v)) and M̂2 (u, v) = aδ(u, v) +
ˆ v) of
(b/2)(δ(u, v − v0) + δ(u, v + v0)). As a result, the Fourier transform Î(u, v) of
Iˆ1 (u, v) + Iˆ2 (u, v) in the baseband. From the sidebands, the individual subexposure images can be recovered with proper band-pass filtering. From the
baseband, the full-exposure image I1 (x, y) + I2 (x, y) can be recovered with a
low-pass filter.
It is possible to use other periodic signals instead of a cosine wave. For example, when a square wave is used, the signal is modulated onto all harmonics of the fundamental frequency, with the weights of the harmonics decreasing according to a sinc function. Again, by applying band-pass filters at the first harmonics, the sub-exposure images can be recovered.
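As an illustration of this recovery pipeline, the following Python sketch (ours, using illustrative parameters rather than the paper's) modulates two synthetic band-limited images with raised-cosine masks and recovers one of them by demodulation followed by low-pass filtering, which is equivalent to band-pass filtering the corresponding sideband.

```python
import numpy as np

# Minimal FDMI sketch with two synthetic band-limited images (all values are
# illustrative, not the paper's actual parameters).
def lowpass(img, cutoff):
    """Ideal low-pass filter: keep spatial frequencies below `cutoff` (cycles/pixel)."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    F[np.sqrt(fx**2 + fy**2) > cutoff] = 0.0
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(1)
H, W = 256, 256
cutoff = 0.05                                   # band limit of the sub-exposure images
I1 = lowpass(rng.random((H, W)), cutoff)        # band-limited sub-exposure images
I2 = lowpass(rng.random((H, W)), cutoff)

a, b, u0, v0 = 0.5, 0.5, 0.25, 0.25             # raised-cosine mask parameters (a >= b)
x = np.arange(W)[None, :]
y = np.arange(H)[:, None]
m1 = a + b * np.cos(2 * np.pi * u0 * x)         # horizontal carrier
m2 = a + b * np.cos(2 * np.pi * v0 * y)         # vertical carrier

I = m1 * I1 + m2 * I2                           # recorded (exposure-coded) image

# Recover I1: demodulate with the horizontal carrier, then low-pass filter.
I1_rec = lowpass(2.0 * I * np.cos(2 * np.pi * u0 * x), cutoff) / b
print(np.max(np.abs(I1_rec - I1)))              # reconstruction error is near machine precision
```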
3. Prototype Design
The prototype system is based on the pixel-wise exposure control design
that is adopted in several papers [23, 15, 16]. As shown in Figure 3, the system
consists of an objective lens, three relay lenses, one polarizing beam splitter, an
LCoS SLM, and a camera. Relay lenses are 100mm aspherical doublets; the objective lens is a 75mm aspherical doublet; the SLM is a Forth Dimension Display
SXGA-3DM with 1280x1024 pixel resolution and a pixel pitch of 13.62µm; and
the camera is a Thorlabs DCU-224M monochromatic camera with 1280x1024
pixel resolution and a pixel pitch of 4.65µm. The objective lens forms an image on an intermediate image plane, which is relayed onto the SLM. The SLM controls the exposure pattern of each pixel by changing its polarization state. The image reflected from the SLM is recorded by the camera. The
camera and the SLM are synchronized using the trigger signal from the SLM.
Figure 3: Pixel-wise exposure control camera prototype. (a) Graphical illustration, (b) Actual prototype.
4. Experimental Results
We conducted several experiments to demonstrate sub-exposure image extraction with FDMI. As the first experiment, we performed a simulation to discuss the mask design and the limitations of the FDMI approach. An image is rotated and translated to create twelve images. Each image is low-pass
filtered to obtain band-limited signals. The images are then modulated, each
with a different mask, to place them in different regions of the Fourier domain.
The masks are designed such that the sidebands at the fundamental frequencies
and the harmonics do not overlap in the Fourier domain. We first decide on
where to place the sideband in the Fourier domain; form the corresponding 2D
raised cosine signal (mask) as discussed in Section 2; and discretize (sample) the signal to match the size of the input images. Since sampling is involved during the
mask creation, we should avoid aliasing. The highest frequency sinusoidal mask
(in, for instance, horizontal direction) becomes a square wave (binary pattern)
with a period of two pixels.
The original images and zoomed-in regions from the masks are shown in
Figure 4(a) and Figure 4(b), respectively. The highest frequency masks in horizontal and vertical directions are square waves. The modulated image, obtained
by multiplying each image with the corresponding mask and then averaging, is
shown in Figure 4(c). Its Fourier transform is shown in Figure 4(d), where the
placement of different sub-exposure images can be seen. The baseband and the
sidebands from which the sub-exposure images are extracted are also marked in
the figure. From these sidebands, we extract all 12 sub-exposure images. Four
of the extracted sub-exposure images are shown in Figure 4(e).
The second experiment is realized with the prototype system. As shown in
Figure 5, a planar surface with text printed on it is moved in front of the camera.
There are four sub-exposure masks: horizontal, vertical, right diagonal and left
diagonal. (These masks are illustrated in Figure 1(d).) The exposure period for
each mask is 10 milliseconds. The captured image is shown in Figure 5(a). Its Fourier transform is shown in Figure 5(b).

Figure 4: Simulation to demonstrate sub-exposure image extraction. (a) Original images, each with size 1280 × 1024 pixels. (b) Zoomed-in regions (13 × 10 pixels) from the masks applied to the original images. (c) Simulated image capture. (d) Fourier transform (magnitude) of the captured image, with marked sidebands used to extract the sub-exposure images. (e) A subset of extracted sub-exposure images. (All 12 images can be successfully extracted; only four of them are shown here.)

The recovered sub-exposure images are shown in Figures 5(c)-(f). It is seen that the blurry text in the full-exposure
capture becomes readable in the sub-exposure images.
In the experiment, the SLM pattern in horizontal and vertical directions
has the highest possible spatial frequency; that is, one-pixel-on (full reflection)
one-pixel-off (no reflection) SLM pattern. In other words, we cannot push the
sidebands further away from the center in Figure 5(b). (The reason why we
have an extended Fourier region is that one SLM pixel (13.62µm) corresponds
to about three sensor pixels (each with size 4.65µm). If we had one-to-one
correspondence, the horizontal and vertical sidebands would appear all the way
to the end of the Fourier regions because we have one-on one-off binary pattern
for them.) To encode and extract more sub-exposure images, the sub-exposure
images should be passed through an optical low-pass filter to further reduce the
frequency extent.
The spatial resolution of the optical system is controlled by several factors,
including the lens quality, SLM and sensor dimensions. Since the optical modulation is done by the SLM pixels, SLM pixel size is the main limiting factor
of the spatial resolution on the sensing side. On the computational side, the
FDMI technique requires band-limited signals, which is also a limiting factor on
spatial resolution. In the first experiment, the radius of a band-pass filter used
to recover a sub-exposure image is about one-ninth of the available bandwidth,
as seen in Figure 4; that is, the spatial frequency is reduced to one-ninth of the
maximum available resolution as a result of the FDMI process. In the second
experiment, the radius of the band-pass filters is about one-fourth of the available bandwidth; that is, the spatial resolution is reduced to one-quarter of the
maximum spatial resolution. It should, however, be noted that the bandwidth of a natural image is typically small; therefore, the effective reduction in spatial resolution is less severe in practice.
To quantify the overall spatial resolution of our system, we took two photos
of a Siemens Star, one without any optical modulation of the SLM (that is, full
reflection), and the other with an SLM pattern of maximum spatial frequency
(one-pixel-on one-pixel-off SLM pixels). The images are shown in Figure 6.
Figure 5: Sub-exposure image extraction with FDMI. (a) Recorded exposure-coded image, (b) Fourier transform (magnitude) of the recorded image, (c) Sub-exposure image recovered from the horizontal sideband, (d) Sub-exposure image recovered from the vertical sideband, (e) Sub-exposure image recovered from the right diagonal sideband, (f) Sub-exposure image recovered from the left diagonal sideband.

Figure 6: Spatial resolution of the optical system. (a) Image without any optical modulation of the SLM, (b) Extracted sub-exposure image when the SLM pattern is a vertical square wave with one-pixel-on one-pixel-off SLM pixels.
The radii beyond which the radial lines can be distinguished are marked as
red circles in both images. According to [24], the spatial resolution is (Number
of cycles for the Siemens Star) × (Image height in pixels) / (2π(Radius of the
circle)). It turns out that the resolution of the optical system (without any SLM modulation) is 138 line widths per picture height, while it is 69 line widths per picture height for the sub-exposure image. The reduction in spatial resolution is 50%, which is expected considering the spatial frequency of the SLM pattern.
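As a worked illustration of the formula from [24], the snippet below evaluates it in Python; the cycle count and radii are hypothetical placeholders chosen only to yield values of the same order as those reported, not the actual measurements.

```python
import math

# Hypothetical inputs for the Siemens-star resolution formula from [24];
# the cycle count and the measured radii below are placeholders, not the
# paper's measured values.
def lw_per_ph(num_cycles, image_height_px, radius_px):
    """Spatial resolution in line widths per picture height."""
    return num_cycles * image_height_px / (2 * math.pi * radius_px)

print(lw_per_ph(num_cycles=144, image_height_px=1024, radius_px=170))  # ~138
print(lw_per_ph(num_cycles=144, image_height_px=1024, radius_px=340))  # ~69
```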
In the third experiment, shown in Figure 7, the target object is a printout,
which is rotated as the camera captures an image. The exposure time of the
camera is set to 45 milliseconds. In the first 30 millisecond period, the SLM
pattern is a square wave in horizontal direction; in the second 15 millisecond
period, the SLM pattern is a square wave in vertical direction. The captured
image is shown in Figure 7(a). A zoomed-in region is shown in Figure 7(b),
where horizontal and vertical lines can be clearly seen. The Fourier transform
of the image is shown in Figure 7(c). The band-pass filters to be applied on the
sidebands and the low-pass filter to be applied on the baseband are also marked
on the Fourier transform. By applying the band-pass filter to the horizontal
sideband, the 30 millisecond sub-exposure image is extracted. By applying the band-pass filter to the vertical sideband, the 15 millisecond sub-exposure image
is extracted. The baseband gives the sum of the sub-exposure images, as if there
were no optical modulation.
In Figure 7(d), the baseband image is shown. The motion blur prevents
seeing the details in the image, which has a full-exposure period of 45 milliseconds. The 30 millisecond sub-exposure image is shown in Figure 7(e); and the
15 millisecond sub-exposure image is shown in Figure 7(f). The readability of
the text and the visibility of details are much improved in the 15 millisecond
sub-exposure image.
In the fourth experiment, a ball is thrown in front of the camera. The
recorded image and a 15 millisecond sub-exposure image are shown in Figure 8.
The sub-exposure image is able to capture the ball shape with much less motion
blur.
Finally, we would like to demonstrate that more than two frames can be extracted through motion estimation. The results are shown in Figure 9. We applied a horizontal grating in the first 15 milliseconds, followed by a 15 millisecond no-reflection period, and finally a vertical grating in the last 15 milliseconds. Two sub-exposure images corresponding to the first part and the last part of the exposure period are extracted, which are then used to estimate the motion field using the optical flow estimation algorithm given in [25]. The estimated flow vectors are scaled to estimate intermediate frames through image warping, resulting in 16 frames.
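The following Python sketch (not from the paper) illustrates the interpolation step under the simplifying assumption of backward warping with a linearly scaled flow field; the image and flow used here are synthetic placeholders, whereas a real system would use the flow estimated by the algorithm in [25].

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Sketch of the frame-interpolation step: given one sub-exposure image and a
# dense flow field, intermediate frames are generated by warping the image
# with scaled versions of the flow. The image and flow below are placeholders.
def warp(img, flow_x, flow_y):
    """Backward-warp `img` by the given per-pixel displacement field."""
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    coords = np.stack([yy + flow_y, xx + flow_x])
    return map_coordinates(img, coords, order=1, mode='nearest')

rng = np.random.default_rng(2)
frame0 = rng.random((128, 128))
flow_x = np.full((128, 128), 6.0)      # placeholder flow: 6-pixel horizontal shift
flow_y = np.zeros((128, 128))

num_frames = 16
frames = [warp(frame0, flow_x * k / (num_frames - 1), flow_y * k / (num_frames - 1))
          for k in range(num_frames)]
print(len(frames))                      # 16 interpolated frames
```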
5. Conclusions
In this paper, we demonstrate a sub-exposure image extraction method, which is based on allocating different regions in the Fourier domain to different sub-exposure images. While this theoretical idea is the main contribution of the paper, we also demonstrate the feasibility of the idea with real-life experiments using a prototype optical system. The optical system has physical limitations due to the low spatial resolution of the SLM and its low light efficiency, which is about 25%.
Figure 7: Sub-exposure image extraction with FDMI. (a) Recorded exposure-coded image, (b) A zoomed-in region from (a), (c) Fourier transform (magnitude) of the recorded image, (d) Image extracted from the baseband, corresponding to 45 millisecond exposure period, (e) Image extracted from the horizontal sideband, corresponding to 30 millisecond exposure period, (f) Image extracted from the vertical sideband, corresponding to 15 millisecond exposure period. Zoomed-in regions for (d), (e), and (f) are also shown.

Figure 8: Sub-exposure image extraction with FDMI. (a) Recorded exposure-coded image, (b) Extracted sub-exposure image.
Better results would be achieved in the future with the development of high-resolution sensors allowing pixel-wise exposure coding.
An advantage of the FDMI approach is its low computational complexity: the sub-exposure images from the sidebands and the full-exposure image from the baseband are easily extracted by taking the Fourier transform, applying band-pass (or low-pass) filtering, and taking the inverse Fourier transform.
Extraction of sub-exposure images can be used for different purposes. In
addition to high-speed imaging, one may try to estimate space-varying motion blur, which could be used in scene understanding, segmentation, and space-varying deblurring applications.
Figure 9: Extracting intermediate sub-exposure frames through optical flow estimation. (a) Recorded exposure-coded image, and a zoomed-in region. (b) Fourier transform (magnitude) of the image, and the estimated motion field between the sub-exposure images. (c) Sixteen sub-exposure frames generated using the estimated motion field.
References
[1] A. Levin, R. Fergus, F. Durand, W. T. Freeman, Image and depth from
a conventional camera with a coded aperture, ACM Trans. on Graphics
26 (3) (2007) Article No. 70.
[2] C. Zhou, S. Nayar, What are good apertures for defocus deblurring?, in:
IEEE Int. Conf. on Computational Photography, 2009, pp. 1–8.
[3] C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, H. H. Chen, Programmable
aperture photography: multiplexed light field acquisition, ACM Trans. on
Graphics 27 (3) (2008) Article No. 55.
[4] C. Zhou, S. Lin, S. K. Nayar, Coded aperture pairs for depth from defocus
and defocus deblurring, Int. Journal of Computer Vision 93 (1) (2011) 53–
72.
[5] Y. Sekikawa, S.-W. Leigh, K. Suzuki, Coded lens: Coded aperture for low-cost and versatile imaging, in: ACM SIGGRAPH, 2014, Article No. 59.
[6] M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, Flatcam: Replacing lenses with masks and computation, in: IEEE Int. Conf. on Computer
Vision, 2015, pp. 12–15.
[7] R. Raskar, A. Agrawal, J. Tumblin, Coded exposure photography: motion
deblurring using fluttered shutter, ACM Trans. on Graphics 25 (3) (2006)
795–804.
[8] J. Holloway, A. C. Sankaranarayanan, A. Veeraraghavan, S. Tambe, Flutter
shutter video camera for compressive sensing of videos, in: IEEE Int. Conf.
on Computational Photography, 2012, pp. 1–9.
[9] S. K. Nayar, V. Branzoi, Adaptive dynamic range imaging: Optical control
of pixel exposures over space and time, in: IEEE Int. Conf. on Computer
Vision, 2003, pp. 1168–1175.
[10] X. Lin, J. Suo, G. Wetzstein, Q. Dai, R. Raskar, Coded focal stack photography, in: IEEE Int. Conf. on Computational Photography, 2013, pp.
1–9.
[11] T. Portz, L. Zhang, H. Jiang, Random coded sampling for high-speed hdr
video, in: IEEE Int. Conf. on Computational Photography, 2015, pp. 1–8.
[12] G. Bub, M. Tecza, M. Helmes, P. Lee, P. Kohl, Temporal pixel multiplexing
for simultaneous high-speed, high-resolution imaging, Nature Methods 7 (3)
(2010) 209–211.
[13] M. Gupta, A. Agrawal, A. Veeraraghavan, S. G. Narasimhan, Flexible voxels for motion-aware videography, in: European Conf. on Computer Vision,
2010, pp. 100–114.
[14] D. Reddy, A. Veeraraghavan, R. Chellappa, P2c2: Programmable pixel
compressive camera for high speed imaging, in: IEEE Int. Conf. on Computer Vision and Pattern Recognition, 2011, pp. 329–336.
[15] Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, S. K. Nayar, Video from a
single coded exposure photograph using a learned over-complete dictionary,
in: IEEE Int. Conf. on Computer Vision, 2011, pp. 287–294.
[16] D. Liu, J. Gu, Y. Hitomi, M. Gupta, T. Mitsunaga, S. K. Nayar, Efficient
space-time sampling with pixel-wise coded exposure for high-speed imaging,
IEEE Trans. on Pattern Analysis and Machine Intelligence 36 (2) (2014)
248–260.
[17] J. Gu, Y. Hitomi, T. Mitsunaga, S. Nayar, Coded rolling shutter photography: Flexible space-time sampling, in: IEEE Int. Conf. on Computational
Photography, 2010, pp. 1–8.
[18] P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro,
D. J. Brady, Coded aperture compressive temporal imaging, Optics Express 21 (9) (2013) 10526–10545.
[19] R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, A. K. Katsaggelos, High spatio-temporal resolution video
with compressed sensing, Optics Express 23 (12) (2015) 15992–16007.
[20] B. K. Gunturk, M. Feldman, Frequency division multiplexed imaging, in:
IS&T/SPIE Electronic Imaging Conf., 2013, pp. 86600P–86600P.
[21] C. Gao, N. Ahuja, H. Hua, Active aperture control and sensor modulation
for flexible imaging, in: IEEE Int. Conf. on Computer Vision and Pattern
Recognition, 2007, pp. 1–8.
[22] S. K. Nayar, V. Branzoi, T. E. Boult, Programmable imaging: Towards a
flexible camera, Int. Journal of Computer Vision 70 (1) (2006) 7–22.
[23] H. Mannami, R. Sagawa, Y. Mukaigawa, T. Echigo, Y. Yagi, High dynamic range camera using reflective liquid crystal, in: IEEE Int. Conf. on
Computer Vision, 2007, pp. 1–8.
[24] C. Loebich, D. Wueller, B. Klingen, A. Jaeger, Digital camera resolution
measurement using sinusoidal Siemens Stars, in: SPIE Electronic Imaging,
Digital Photography III, 2007, pp. 65020N1–11.
[25] C. Liu, Beyond pixels: exploring new representations and applications for
motion analysis, MIT, 2009.
| 1 |
ON THE STRUCTURE OF INVOLUTIONS AND SYMMETRIC
SPACES OF QUASI DIHEDRAL GROUP
arXiv:1401.0112v3 [] 4 Jul 2017
ZAHID RAZA, IMRAN AND BIJAN DAVVAZ
Abstract. Let G = QD8k be the quasi-dihedral group of order 8k and θ be an automorphism of QD8k of finite order. The fixed-point set H of θ is defined as Hθ = Gθ = {x ∈ G | θ(x) = x} and the generalized symmetric space Q of θ is given by Qθ = {g ∈ G | g = xθ(x)^{−1} for some x ∈ G}.
The characteristics of the sets H and Q are calculated. It is shown that for any H and Q, H.Q ≠ QD8k. The H-orbits on Q are obtained under different conditions. Moreover, a formula for the number of v-th roots of unity in Z2k relevant to QD8k is derived. A criterion for the number of equivalence classes, denoted C4k, of involution automorphisms is also established. Finally, the set of twisted involutions R = Rθ = {x ∈ G | θ(x) = x^{−1}} is explored.
1. Introduction
Serious work on groups generated by reflections began in the 19th century. The
finite subgroups of O(3) generated by isometric reflections on the 2-sphere (or equivalently, by orthogonal linear reflections on R3 ) had been determined by Möbius in
1852. In the second half of the 19th century, work also began on finite reflection
groups on S n for n > 2 (or equivalently, finite linear reflection groups on Rn+1 ).
By the late 19th century, other groups generated by isometric reflections across the edges of polygons (with 3 or more edges) in the hyperbolic plane had been studied by Klein and Poincaré. That the group of symmetries of such a root system is a finite group generated by reflections was shown by Weyl in 1925. This intimate connection with the classification of semi-simple Lie groups cemented reflection groups into a central place in mathematics. The two lines of research were united in (Coxeter,
1930). He classified discrete groups generated by reflections on the n-dimensional
sphere or Euclidean space. The notion of an abstract reflection group was introduced by Jacques Tits, who called it a Coxeter group. He considered pairs (W, S) with W a group and S a set of involutions which generate W, so that the group has a
presentation of the following form: the set of generators is S and the set of relations
2010 Mathematics Subject Classification. 20F28 .
Key words and phrases. Automorphisms, involutions, fixed-point set, symmetric spaces, quasidihedral group.
is {(st)^{m(s,t)}}, where m(s, t) denotes the order of st and the relations range over all unordered pairs s, t ∈ S with m(s, t) ≠ ∞. The pair (W, S) is a Coxeter system and W is a Coxeter group. This was demonstrated in (Bourbaki, 1968; Bjorner and Brenti, 2005).
The early work by Helminck on symmetric spaces concerned some algebraic results about their structure (Helminck, 1988). A classification of the real symmetric spaces and their fine structure is included in (Helminck, 1994). By considering an open subgroup H of the fixed-point group Gθ of an involution θ of G, the following result was proved: “For k a field of characteristic not two, let G be a connected semi-simple algebraic k-group and θ an involution of G. Then θ is defined over k if and only if H θ is defined over k” (Helminck and Wang, 1993). They also proved that “if G is a connected semi-simple algebraic group, θi is an involution of G and Hi = Gθi (i = 1, 2), and H1θ1 and H2θ2 are the identity components of H1 and H2 respectively, then H1θ1 = H2θ2 implies θ1 = θ2.”
A mini-workshop on generalizations of symmetric spaces was organized by Ralf Kohl et al. in 2012. Mathematicians in this area were brought together to present their current research projects or trigger new collaborations via comparison, analogy, transfer, generalization and unification of methods.
The structure of the automorphism group of the dihedral group was found by Cunningham et al. (2012). They also characterized the automorphisms with θ2 = id and calculated H and Q for each automorphism for the family of finite dihedral groups. They further proved that the set of involutions is partitioned in such a way that there are at most two distinct equivalence classes of involutions in each part of the partition. In the involution case, Q is always a subgroup of G. Moreover, the infinite dihedral group is also discussed.
Let G = Dcn be the dicyclic group of order 4n and let φ be an automorphism of G of order k. Bishop et al. (2013) describe φ and the generalized symmetric space Q of G associated with φ. When φ is an involution, they describe its fixed-point group H
along with the H-orbits and G-orbits of Q corresponding to the action of φ-twisted
conjugation.
In this paper, we extend this study to the quasi-dihedral group QD8k.
2. Preliminaries
The quasi-dihedral group is denoted by QD8k, where QD8k = ⟨a, b : a^{4k} = b^2 = 1, ba = a^{2k−1}b⟩. In the rest of the paper, we denote it by QD2m, where m = 4k and n = m/2 = 2k. That is, QD2m = ⟨a, b : a^m = b^2 = 1, ba = a^{n−1}b⟩. It is clear from the presentation that a^i b^j ∈ QD2m is in normal form for integers 0 ≤ i < m and j ∈ {0, 1}. Throughout, Zn denotes the additive group of integers modulo n, Um denotes the multiplicative group of units of Zm, and r ∈ Um if and only if gcd(r, m) = 1. The set of v-th roots of unity in Zn is described by R^v_m = {r ∈ Um | r^v ≡ 1 (mod n)}.
Let G be a group, then an element a is a conjugate with b in G if and only
if there exists an element x such that a = xbx−1 . For a ∈ G, the function
φa(x) = axa^{−1} for all x ∈ G is called an inner automorphism of G. Let θ1, θ2 be two automorphisms; then θ1 ∼ θ2 if and only if they are conjugate to each other, that is, if there exists another automorphism σ such that σθ1σ^{−1} = θ2.
Definition 3. For any c ∈ Zn, we define ZDiv(c) = {y ∈ Zn | cy ≡ 0 (mod n)}, and it is easy to see that |ZDiv(c)| = gcd(c, n).
Notation 4. Let us fix the following notation, which will be used throughout the paper.
• For fixed r ∈ R^v_m, the number of equivalence classes with fixed leading coefficient r is denoted by Nr = |{θ̄ | θ = rx + s ∈ Autv(QD2m)}|.
• We consider the subset of automorphisms denoted by Autv(QD2m) and defined as Autv(QD2m) = {θ ∈ Aut(QD2m) | θ^v = id}.
The following lemma is easy to prove.
Lemma 5. Let QD2m be the quasi-dihedral group. Then
(1) bai = ai(n−1) b for all i = 0, 1, . . . , (m − 1),
(2) | a2i b |= 2 and | a2i+1 b |= 4 for all i = 0, 1, . . . , (n − 1).
Lemma 6. The automorphism group of QD2m is isomorphic to the group of affine
linear transformations of Zn . In particular, we have
Aut(QD2m) ≅ Aff(Zn) = {rx + s : Zn → Zn | r ∈ Um, s ∈ Zn},
and the action of rx + s on elements of QD2m in normal form is given by
(6.1)    (rx + s)(a^j b^t) = a^{rj+[(t−1)n−(t−2)]s} b^t.
Proof. Let θ be an automorphism, then θ(a) = ar and θ(b) = (as b), where gcd(r, m) =
1, s = 0, 2, 4, . . . , (m − 2). Consider arbitrary element aj bt ∈ QD2m , and θ being
homomorphism,
(6.2)
θ(aj bt ) = θ(aj )θ(bt ),
then θ(aj ) = arj and θ(bt ) = (as b)(as b)(as b) . . . (as b) = as (bas )(bas )b . . . (as b).
By Lemma 5, we have
θ(bt ) = as [as(n−1) b](bas )(bas )b . . . (as b) = as+s(n−1) (b2 as )(bas )b . . . (as b)
θ(bt ) = asn (b2 as )(bas )b . . . (as b) = asn (as(n−1) b2 )(bas )b . . . (as b)
θ(bt ) = asn+sn−sb3 (bas )b . . . (as b) = a2sn−s b3 (bas )b . . . (as b) after t-steps, we have
θ(bt ) = a(t−1)ns−(t−2)s bt . Put values in equation 6.2, we get
θ(a^j b^t) = a^{rj} · a^{(t−1)ns−(t−2)s} b^t = a^{rj+(t−1)ns−(t−2)s} b^t = a^{rj+[(t−1)n−(t−2)]s} b^t. Since b^2 = 1, we have t ∈ {0, 1}. If t = 0, then θ(a^j) = a^{rj+(2−n)s} ⟹ rx + (2 − n)s ∈ Aff(Zn). If
t = 1, then θ(aj b) = arj+s b =⇒ rx + s ∈ Af f (Zn ).
Conversely, For f ∈ Af f (Zn ), that is f = rx + s where r ∈ Um , s ∈ Zn and it is
given as a group action f (aj bt ) = arj+[(t−1)n−(t−2)]s bt for all aj bt ∈ QD2m .
Now, we will show that f is an automorphism. For this let us define θ : QD2m −→
QD2m by θ(aj bt ) = f (aj bt ) = arj+[(t−1)n−(t−2)]s bt . We will show that θ is a bijective
homomorphism. For this we will show that θ is one to one and onto.
one to one: Suppose that θ(aj1 bt1 ) = θ(aj2 bt2 ), then arj1 +[(t1 −1)n−(t1 −2)]s bt1 =
arj2 +[(t2 −1)n−(t2 −2)]s bt2 ⇒ arj1 +[(t1 −1)n−(t1 −2)]s−rj2−[(t2 −1)n−(t2 −2)]s bt1 −t2 = e.
Case 1 :
rj1 + [(t1 − 1)n − (t1 − 2)]s − rj2 − [(t2 − 1)n − (t2 − 2)]s = 0 and t1 − t2 = 2, since,
t ∈ {0, 1} therefore t1 − t2 6= 2 not possible.
Case 2 :
rj1 + [(t1 − 1)n − (t1 − 2)]s − rj2 − [(t2 − 1)n − (t2 − 2)]s = m and t1 − t2 = 0
⇒ r(j1 −j2 )+ns(t1 −t2 )−s(t1 −t2 ) = m ⇒ r(j1 −j2 ) = m ⇒ r | m but gcd(r, m) = 1,
not possible. Now r(j1 − j2 ) = m will be only possible when j1 − j2 = 0, r 6= 0
j1 = j2 ⇒ aj1 = aj2 ⇒ aj1 bt1 = aj2 bt2 ⇒ θ is one to one.
onto: It is clear from the definition of the group action that for every a^{rj+[(t−1)n−(t−2)]s} b^t
there exists aj bt ∈ QD2m such that θ(aj bt ) = arj+[(t−1)n−(t−2)]s bt ⇒ θ is onto.
Hence Aut(QD2m) ≅ Aff(Zn).
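The following Python sketch (not part of the paper) gives a brute-force check of this correspondence for the small case QD40: for every r ∈ Um and every even s, the map defined on the generators by a ↦ a^r and b ↦ a^s b, as in the proof above, is verified to be a bijective homomorphism.

```python
from math import gcd
from itertools import product

# Brute-force check for QD40 (a sketch): elements are encoded in normal form
# as pairs (i, t) <-> a^i b^t, with a^20 = b^2 = 1 and b a = a^9 b.
m, n = 20, 10

def mul(x, y):
    """Multiply a^i b^t * a^j b^u using b a^j = a^{j(n-1)} b (Lemma 5)."""
    (i, t), (j, u) = x, y
    j_conj = j * (n - 1) % m if t == 1 else j
    return ((i + j_conj) % m, (t + u) % 2)

def theta(x, r, s):
    """Image of a^i b^t under the map sending a -> a^r, b -> a^s b."""
    i, t = x
    return ((r * i + s * t) % m, t)

G = list(product(range(m), range(2)))
for r in [u for u in range(m) if gcd(u, m) == 1]:
    for s in range(0, m, 2):
        hom = all(theta(mul(x, y), r, s) == mul(theta(x, r, s), theta(y, r, s))
                  for x in G for y in G)
        bij = len({theta(x, r, s) for x in G}) == len(G)
        assert hom and bij
print("every map a -> a^r, b -> a^s b (r a unit, s even) is an automorphism of QD40")
```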
7. Structure of Automorphisms of QD2m
Proposition 7.1. For any integer v ≥ 1 and θ = rx + s an automorphism of QD2m, θ^v = id if and only if r ∈ R^v_m and s ∈ ZDiv(r^{v−1} + r^{v−2} + · · · + r + 1).
Proof. We will use the mathematical induction on v to show first the following
statement: θv (x) = r v x + (r v−1 + r v−2 + · · · + r + 1)s.
For v = 2, 3 θ2 (x) = θ[θ(x)] = θ(rx + s) = r(rx + s) + s = r 2 x + rs + s =
r 2 x + (r + 1)s. Similarly, θ3 (x) = r 3 x + r 2 s + rs + s = r 3 x + (r 2 + r + 1)s.
Suppose that the statement is true for v = i, that is, θ^i(x) = r^i x + (r^{i−1} + r^{i−2} + · · · + r + 1)s. We will show that the statement is true for v = i + 1. Take θ^{i+1}(x) = θ[θ^i(x)] = θ[r^i x + (r^{i−1} + r^{i−2} + · · · + r + 1)s] = r[r^i x + (r^{i−1} + r^{i−2} + · · · + r + 1)s] + s = r^{i+1} x + (r^i + r^{i−1} + · · · + r + 1)s.
Hence, the statement is true for all positive values of v. We have identified the identity automorphism with x ∈ Aff(Zn); thus θ^v = id if and only if r^v ≡ 1 (mod n) and (r^{v−1} + r^{v−2} + · · · + r + 1)s ≡ 0 (mod n), i.e., r ∈ R^v_m and s ∈ ZDiv(r^{v−1} + r^{v−2} + · · · + r + 1), as required.
Example 7.2. Let G = QD56 and v = 3. In this case R^3_28 = {1, 9, 11, 15, 23, 25}; take r = 9, then ZDiv(9^2 + 9 + 1) = ZDiv(91) = {0, 2, 4, 6, 8, 10, 12}. Now, consider θ1 = 9x + 6 ∈ Aut(QD56); then after a simple calculation we get θ1^3 = x = id. Similarly, for θ2 = 9x + 12 ∈ Aut(QD56), we have θ2^3 = x = id.
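A quick numerical check of this example, carried out at the level of affine maps on Zn (a sketch, not from the paper):

```python
# Compose theta_1 = 9x + 6 and theta_2 = 9x + 12 three times as affine maps
# on Z_14 (n = 14 for QD56) and compare with the identity map.
n = 14
for r, s in [(9, 6), (9, 12)]:
    theta = lambda x: (r * x + s) % n
    assert all(theta(theta(theta(x))) == x for x in range(n))
print("theta_1^3 = theta_2^3 = id on Z_14")
```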
Proposition 7.3. For any m and v ≥ 1, we have
|Autv(QD2m)| = Σ_{r ∈ R^v_m} gcd(r^{v−1} + r^{v−2} + · · · + r + 1, n).
Proof. This follows from the above Proposition: for any r ∈ R^v_m there are
|ZDiv(r^{v−1} + r^{v−2} + · · · + r + 1)| = gcd(r^{v−1} + r^{v−2} + · · · + r + 1, n)
elements s such that (r^{v−1} + r^{v−2} + · · · + r + 1)s ≡ 0 (mod n), and every automorphism θ ∈ Autv(QD2m) must be of this form.
Proposition 7.4. Let θ1 = r1x + s1 and θ2 = r2x + s2 be two different automorphisms of QD2m. Then θ1 ∼ θ2 if and only if r1 = r2 and f s1 − s2 ∈ ⟨r1 − 1⟩ ≤ Zn for some f ∈ Um.
Proof. Given that θ1 = r1 x + s1 and θ2 = r2 x + s2 ∈ Aut(QD2m ). Suppose that
σ = f x + g be any arbitrary automorphism of QD2m and define σ −1 = f −1 x − f −1 g.
We will check that σ ·σ −1 = x. For this take σ ·σ −1 = σ(σ −1 (x)) = σ(f −1 x−f −1 g) =
f (f −1 x − f −1 g) + g = x − g + g = x. Now, we calculate
(7.1)
σθ1 σ −1 = σ · (θ1 · σ −1 )
Take θ1 · σ −1 = θ1 [σ −1 (x)] = θ1 [f −1 x − f −1 g] = r1 (f −1 x − f −1 g) + s1 = r1 f −1 x −
r1 f −1 g + s1 put in 7.1, we get σθ1 σ −1 = σ(r1 f −1 x − r1 f −1 g + s1 ) = f (r1 f −1 x −
r1 f −1 g+s1 )+g σθ1 σ −1 = r1 x−r1 g+f s1 +g = r1 x+f s1 −g(r1 −1). Now, by definition
σθ1 σ −1 = θ2 if and only if r1 = r2 and f s1 − g(r1 − 1) = s2 ⇒ f s1 − s2 = g(r1 − 1)
⇒ f s1 − s2 ∈ hr1 − 1i.
Example 7.5. Let G = QD64 and consider θ1 = 7x + 3, θ2 = 7x + 14, θ3 = 7x + 11, three automorphisms of QD64. Then it is easy to check that θ1 ≁ θ2 because there does not exist any f ∈ U32 for which 3f − 14 ∈ ⟨6⟩. And θ1 ∼ θ3 because there exists some f ∈ U32 for which 3f − 11 ∈ ⟨6⟩.
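The following sketch (ours) evaluates the criterion of Proposition 7.4 for these three automorphisms:

```python
from math import gcd

# Conjugacy criterion of Proposition 7.4 applied to Example 7.5
# (QD64: m = 32, n = 16): theta_i = r*x + s_i are conjugate iff
# f*s1 - s2 lies in the subgroup <r - 1> of Z_n for some unit f.
m, n = 32, 16
U_m = [f for f in range(m) if gcd(f, m) == 1]

def subgroup(c):
    """Cyclic subgroup <c> of Z_n."""
    return {(c * k) % n for k in range(n)}

def conjugate(r, s1, s2):
    H = subgroup(r - 1)
    return any((f * s1 - s2) % n in H for f in U_m)

print(conjugate(7, 3, 14))   # False: theta_1 and theta_2 are not conjugate
print(conjugate(7, 3, 11))   # True:  theta_1 and theta_3 are conjugate
```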
Proposition 7.6. Suppose m is fixed and let v ≥ 1.
(1) For any r ∈ R^v_m: ⟨r − 1⟩ ≤ ZDiv(r^{v−1} + r^{v−2} + · · · + r + 1).
(2) For any r ∈ R^v_m: Um acts on the cosets ZDiv(r^{v−1} + r^{v−2} + · · · + r + 1)/⟨r − 1⟩.
(3) The set Autv(QD2m) is partitioned into equivalence classes indexed by pairs (r, A), where r ∈ R^v_m and A is an orbit of Um on ZDiv(r^{v−1} + r^{v−2} + · · · + r + 1)/⟨r − 1⟩.
Proof.
(1) We know that (r − 1).(r v−1 + r v−2 + · · · + r + 1) = r v − 1 ≡ 0 (mod n)
⇒ (r − 1).(r v−1 + r v−2 + · · · + r + 1) ≡ 0 (mod n) ⇒ (r − 1) ∈ ZDiv(r v−1 +
r v−2 + · · · + r + 1) ⇒ hr − 1i ≤ ZDiv(r v−1 + r v−2 + · · · + r + 1).
(2) We simply recognize that Um = Aut(Zn ) acts on Zn by multiplication and
since every subgroup is cyclic, Um must stabilize the subgroups of Zn .
(3) It is simply Proposition 7.4 in terms of the Um-action on ZDiv(r^{v−1} + r^{v−2} + · · · + r + 1)/⟨r − 1⟩.
Example 7.7. Let G = QD40 and v = 2. Then U20 = {1, 3, 7, 9, 11, 13, 17, 19} and Z10 = {0, 1, 2, . . . , 9}.
(1) Since v = 2, R^2_20 = {1, 9, 11, 19}. Take r = 9; then ⟨9 − 1⟩ = ⟨8⟩ = {0, 2, 4, 6, 8} and ZDiv(10) = {0, 1, 2, . . . , 9}. So {0, 2, 4, 6, 8} ⊆ {0, 1, 2, . . . , 9}.
(2) Consider the U20-action on the cosets ZDiv(r + 1)/⟨r − 1⟩. Take r = 9; then ZDiv(r + 1)/⟨r − 1⟩ = ZDiv(10)/⟨8⟩ = E/F with E = {0, 1, 2, . . . , 9} and F = {0, 2, 4, 6, 8}, and the cosets x + F, x ∈ E, provide two sets F and F1 = {1, 3, 5, 7, 9}.
(3) The action of U20 on the cosets ZDiv(10)/⟨8⟩ has two orbits: y.F = F for all y ∈ U20 and y.F1 = F1 for all y ∈ U20, so the orbits of U20 on ZDiv(10)/⟨8⟩ are F and F1. Then the number of equivalence classes of Aut(QD40), indexed by the pairs (r, A) where r ∈ R^2_20 and A is an orbit of U20 on ZDiv(10)/⟨8⟩, is 8.
U20 on h8i is 8.
v
Theorem 7.8. If m is a fixed positive integer and r ∈ Rm
, then the number
v−1
v−2
ZD iv(r
+r
+···+r+1)
of orbits of Um on
is equal to the number of divisors of
hr−1i
gcd(r−1,n). gcd(r v−1 +r v−2 +···+r+1,n)
.
n
Proof. The Um -orbit on Zn are indexed by the subgroups of Zn . Thus the Um -orbit
v−1 +r v−2 +···+r+1)
on ZDiv(r hr−1i
are indexed by subgroups L ≤ Zn such that
hr − 1i ≤ L ≤ ZDiv(r v−1 + r v−2 + · · · + r + 1).
It is well known that the subgroup lattice of Zn is isomorphic to the divisor lattice
of n. The subgroup hr − 1i corresponds to the divisor gcd(r − 1, n) and the subgroup
n
,
ZDiv(r v−1 + r v−2 + · · · + r + 1) correspond to the divisor gcd(rv−1 +rv−2
+···+r+1,n)
and the subgroups between these groups correspond to the divisor of n between
n
gcd(r − 1, n) and gcd(rv−1 +rv−2
in the divisor lattice. Finally, it is known
+···+r+1,n)
that this sub-lattice of the divisor of n is isomorphic to the divisor lattice of
gcd(r − 1, n)
gcd(r − 1, n). gcd(r v−1 + r v−2 + · · · + r + 1, n)
=
.
n
n
gcd(r v−1 +r v−2 +···+r+1,n)
8. Fixed Ssts and Symmetric Spaces of Automorphisms
In this section if θ is an automorphism the fixed-point set Hθ and the symmetric
space Qθ will be discussed, where Hθ = {x ∈ G | θ(x) = x} and Qθ = {g ∈ G | g =
ON THE STRUCTURE OF INVOLUTIONS AND SYMMETRIC SPACES OF QUASI DIHEDRAL GROUP
7
xθ(x)−1 for some x ∈ G}. When θ is understood to be fixed the subscript from our
notation has been dropped.
The following theorem characterizes the sets H and Q in the case of quasi-dihedral
groups.
Theorem 8.1. If θ = rx + s is an automorphism of G = QD2m of finite order, then
′
′
H = {aj | j(r + 2s − 1) ≡ 0 (mod n)} ∪ {aj b | j (r + 2s − 1) ≡ −s (mod n)} and
′
Q = {aj | j ∈ hr + 2s − 1i ∪ [l(1 − r) − s] (mod n)} where, j and l are even numbers.
Proof. Since, θ is an automorphism and θ(aj bt ) = arj+[(t−1)n−(t−2)]s bt . First we will
find set the H. Let us consider aj ∈ G, then by definition of H, we have θ(aj ) = aj ,
θ(aj ) = a[r+(2−n)s]j = aj , ⇒ n | j(r + 2s − 1).
′
′
′
′
′
Now consider aj b ∈ G, then θ(aj b) = aj b so, aj b = a[r+(2−n)s]j as b, thus n |
′
′
′
(r+2s−1)j +s. Hence, H = {aj | j(r+2s−1) ≡ 0 (mod n)}∪{aj b | j (r+2s−1) ≡
′
−s (mod n)}, where j is even number.
Now we will find the set Q. L aj ∈ G be an arbitrary element and let ai be a fixed
element of G, where j 6= i, then by definition of Q, aj = ai θ(ai )−1 = ai θ(a−i ) =
ai .a−[r+(2−n)s]i = ai[1−r−(2−n)s] ⇒ j ≡ i[1 − (r + 2s)] (mod n) ⇒ j ∈ hr + 2s − 1i.
Now consider al b ∈ G, where l is even number, then aj = (al b)θ(al b)−1 =
(al b)θ(al b) = (al b)(arl+s b) = al (barl+s )b = al a(rl+s)(n−1) bb = al+(rl+s)(n−1) ⇒ j =
l + (rl + s)(n − 1) ⇒ j = l[1 + r(n − 1)] + s(n − 1) ⇒ j ≡ l(1 − r) − s (mod n).
So, Q = {aj | j ∈ hr + 2s − 1i ∪ [l(1 − r) − s] (mod n)}, where l is an even
numbers.
Example 8.2. Let us consider G = QD64 and θ = 3x + 12 ∈ Aut(QD64). Then H = {a^j | 10j ≡ 0 (mod 16)} ∪ {a^{j′} b | 10j′ ≡ 4 (mod 16)} = {1, a^8, a^2 b, a^10 b} and Q = {a^j | j ∈ ⟨10⟩ ∪ (14l + 4) (mod 16)} = {1, a^2, a^4, . . . , a^14}.
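The following Python sketch (ours) evaluates the congruences of Theorem 8.1 for this example and reproduces the sets listed above; elements are represented only by their exponents modulo n.

```python
# Congruence descriptions of H and Q from Theorem 8.1 for Example 8.2
# (QD64: n = 16, theta = 3x + 12). j indexes a^j; reflections are a^j b.
n = 16
r, s = 3, 12
c = (r + 2 * s - 1) % n                       # = 10

H_rot = [j for j in range(n) if (c * j) % n == 0]
H_ref = [j for j in range(0, n, 2) if (c * j) % n == (-s) % n]
print("H:", H_rot, "plus reflections a^j b for j in", H_ref)
# H: [0, 8] plus reflections for j in [2, 10]  ->  {1, a^8, a^2 b, a^10 b}

subgroup = {(c * k) % n for k in range(n)}    # <r + 2s - 1> in Z_n
extra = {(l * (1 - r) - s) % n for l in range(0, n, 2)}
Q = sorted(subgroup | extra)
print("Q:", Q)                                # [0, 2, 4, ..., 14]
```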
Using the descriptions of H and Q , we obtain further results as well.
Corollary 8.3. If θ = rx + s is an automorphism of QD2m of finite order, where
m = 4k, n = 2k and k is even, then H will be cyclic or non-cyclic according to the
following conditions
(1) Let gcd(r + 2s − 1, n) = d1 . If s = 2w + 1 is odd and d1 ∤ w then H will be
cyclic that is H ∼
= ZDiv(r + 2s − 1) and | H |= d1
(2) Let gcd(r + 2s − 1, k) = d2 . If s = 2w is even and d2 | w then we have two
cases
a): If w is odd, then H will be cyclic | H |= d1 and H ∼
= ZDiv(r +2s−1).
b): If w is even
i): If d2 ∤ w then H will be cyclic | H |= d1 and H ∼
= ZDiv(r +
2s − 1).
ii): If d2 | w then H will be non-cyclic. We have two cases
8
ZAHID RAZA, IMRAN AND BIJAN DAVVAZ
k
I-): if 4 ∤ k then | H |= 23 d1 H = {ha 2 i, a2l b, a2q b | l 6=
q for some l, q ∈ {1, 2, ...k}}. It is not a subgroup of G.
II-): if 4 | k then | H |= 2d1 and H ∼
= ZDiv(r + 2s − 1) × Z2 .
Example 8.4. Let G = QD48 , then gcd(r, m) = 1 ⇒ r ∈ U24 = {1, 5, 7, 11, 13, 17, 19, 23}
and s ∈ Z12 = {0, 1, 2, . . . , 11}.
(1) Consider θ1 = 7x + 11 ∈ Aut(QD48 ) then r = 7, s = 2(5) + 1 = 11
⇒ r + 2s − 1 = 4 and d1 = gcd(28, 12) = 4 ⇒ 4 ∤ 5 then H1 = {1, a3, a6 , a9 }
is cyclic. and ZDiv4 = {0, 3, 6, 9} ⇒ H1 ∼
= ZDiv4 and O(H1) = d1 = 4
(2) a): Take θ2 = 11x + 10 ∈ Aut(QD48 ) then r = 11, s = 2(5) = 10
⇒ r + 2s − 1 = 6 and d1 = gcd(6, 12) = 6, d2 = gcd(6, 6) = 6 then
H2 = {1, a2 , a4 , . . . , a10 } is cyclic and ZDiv6 = {0, 2, 4, 6, 8, 10} ⇒ H2 ∼
=
ZDiv6 and O(H2) = d1 = 6
b):
i): For θ3 = 11x + 4 ∈ Aut(QD48 ) then r = 11, s = 2(2) = 4
⇒ r + 2s − 1 = 6 and d1 = gcd(6, 12) = 6, d2 = gcd(6, 6) = 6
⇒ 6 ∤ −2 then H3 = {1, a2 , a4 , . . . , a10 } is cyclic and ZDiv6 =
{0, 2, 4, 6, 8, 10} ⇒ H3 ∼
= ZDiv6 and O(H3 ) = d1 = 6
ii): I-: For θ4 = 5x + 8 ∈ Aut(QD48 ) then r = 5, s = 2(4) = 8
⇒ r + 2s − 1 = 8 and d1 = gcd(8, 12) = 4, d2 = gcd(8, 6) = 2
⇒ 2 | −4 then H4 = {1, a3 , a6 , a9 , a2 b, a8 b} is non-cyclic and
O(H4 ) = 32 .d1 = 6
II-: Take θ5 = 7x + 4 ∈ Aut(QD48 ) then r = 7, s = 2(2) = 4
⇒ r + 2s − 1 = 2 and d1 = gcd(2, 12) = 2, d2 = gcd(2, 6) =
2 ⇒ 2 | −2 then H5 = {1, a6 , a4 b, a10 b} is non-cyclic and
O(H5 ) = 2.d1 = 4
Corollary 8.5. If m = 4k, n = 2k and k is odd, then H will be cyclic or non-cyclic
according to the following conditions
(1) Let gcd(r + 2s − 1, n) = d1 . If s = 2w + 1 is odd then H will be cyclic
| H |= d1 and H ∼
= ZDiv(r + 2s − 1).
(2) Let gcd(r + 2s − 1, n) = d1 . If s = 2w is even then
a): If d1 = n then H will be cyclic | H |= d1 = n and H ∼
= ZDiv(r +
2s − 1).
b): If d1 = 2 then H will be non-cyclic | H |= 23 d1 = 23 (2) = 3 and
H = {1, ak , a2l b} for some l ∈ {1, 2, ..., k} is not a subgroup of G.
Example 8.6. Let G = QD40 where m = 20, n = 10, k = 5. Since gcd(r, m) = 1
⇒ r ∈ U20 = {1, 3, 7, 9, 11, 13, 17, 19} and s ∈ Z10 = {0, 1, 2, . . . , 9}.
(1) Consider θ1 = 3x + 5 ∈ Aut(QD40 ) then r = 3, s = 5 ⇒ r + 2s − 1 = 2
and gcd(2, 10) = 2 then H1 = {1, a5 } is cyclic and ZDiv2 = {0, 5} ⇒ H1 ∼
=
ZDiv2 and | H1 |= 2
ON THE STRUCTURE OF INVOLUTIONS AND SYMMETRIC SPACES OF QUASI DIHEDRAL GROUP
9
(2)
a): For θ2 = 7x + 2 ∈ Aut(QD40 ) then r = 7, s = 2 ⇒ r + 2s − 1 = 0
and gcd(0, 10) = 10 then H2 = {1, a2 , a3 , . . . , a9 } is cyclic and ZDiv0 =
{0, 1, 2, . . . , 9} ⇒ H2 ∼
= ZDiv0 and | H2 |= 10
b): Take θ3 = 7x+4 ∈ Aut(QD40 ) then r = 7, s = 4 ⇒ r+2s−1 = 4 and
gcd(4, 10) = 2 then H3 = {1, a5 , a4 b} is non-cyclic and | H3 |= 32 .2 = 3
Corollary 8.7. If θ = rx + s ∈ Autv (QD2m ) be of finite order, where m = 4k, n =
2k and k is even number, then Q will be cyclic or non-cyclic according to the
following conditions
(1) If s = 2w + 1 is odd, then Q will be non-cyclic.
(2) Let gcd(r + 2s − 1, n) = d1 . If s = 2w is even, then
′
a): If d1 | (1 − r)j − s and
i): if r + 2s − 1 6≡ k (mod n) then Q will be cyclic.
ii): If r + 2s − 1 ≡ k (mod n) we have two cases
• if 1 − r ≡ k (mod n) then Q will be cyclic.
• If 1 − r 6≡ k (mod n) then Q will be non-cyclic.
′
b): If d1 ∤ (1 − r)j − s and
i): if −s 6≡ k (mod n) then Q will be non-cyclic.
ii): If −s ≡ k (mod n) then Q will be cyclic.
Example 8.8. Let G = QD48 where m = 24, n = 12, k = 6. Since gcd(r, m) = 1
⇒ r ∈ U24 = {1, 5, 7, 11, 13, 17, 19, 23} and s ∈ Z12 = {0, 1, 2, . . . , 11}.
(1) Consider θ1 = 7x + 11 ∈ Aut(QD48 ) then r = 7, s = 11 ⇒ r + 2s − 1 = 4
then Q1 = {1, a, a4 , a8 } is non-cyclic.
(2) a):
i): For θ2 = 5x + 8 ∈ Aut(QD48 ) then r = 5, s = 8 ⇒ r + 2s − 1 =
′
′
8 6≡ k and d1 = gcd(8, 12) = 4, (1 − r)j − s = −4j − 8 at
′
j = 2 yeilds 4 | −16 then Q2 = {1, a4 , a8 } is cyclic.
ii): Take θ3 = 7x + 6 ∈ Aut(QD48 ) then r = 7, s = 6 ⇒ r + 2s − 1 =
′
′
6 ≡ k and d1 = gcd(6, 12) = 6, (1 − r)j − s = −6j − 6 at
′
j = 2 yeilds 6 | −18 and 1 − r = −6 ≡ 6 ≡ k then Q3 = {1, a6 }
is cyclic. And for θ4 = 11x + 4 ∈ Aut(QD48 ) then r = 11, s = 4
′
⇒ r + 2s − 1 = 6 ≡ k and d1 = gcd(6, 12) = 6, (1 − r)j − s =
′
′
−10j − 4 at j = 2 yeilds 6 | −24 and 1 − r = −10 ≡ 2 6≡ k then
Q4 = {1, a4 , a6 , a8 } is non-cyclic.
b):
i): Consider θ5 = x + 10 ∈ Aut(QD48 ) then r = 1, s = 10 ⇒
′
′
r + 2s − 1 = 8 and d1 = gcd(8, 12) = 4, (1 − r)j − s = 0j − 10 ≡
2 ⇒ 4 ∤ 2 and −s = −10 ≡ 2 6≡ k then Q5 = {1, a2 , a4 , a8 } is
non-cyclic.
ii): For θ6 = 5x+ 6 ∈ Aut(QD48 ) then r = 5, s = 6 ⇒ r + 2s −1 = 4
′
′
′
and d1 = gcd(4, 12) = 4, (1 − r)j − s = −4j − 6 for any j ,
10
ZAHID RAZA, IMRAN AND BIJAN DAVVAZ
′
d1 ∤ −4j −6 and −s = −6 ≡ 6 ≡ k then Q6 = {1, a2 , a4 , . . . , a10 }
is cyclic.
Corollary 8.9. If m = 4k, n = 2k and k is odd number, then Q will be cyclic or
non-cyclic according to the following conditions
(1) If s = 2w is even then Q will be cyclic.
(2) If s = 2w + 1 is odd
a): Q will be non-cyclic if either r + 2s − 1 ≡ 0 (mod n) or 1 − r ≡
0 (mod n).
b): Q will be cyclic
i): if 1 − r ≡ 0 and s ≡ k (mod n).
ii): if r + 2s − 1 6≡ 0 and 1 − r 6≡ 0 (mod n).
Example 8.10. Let G = QD40 where m = 20, n = 10, k = 5. Since gcd(r, m) = 1
⇒ r ∈ U20 = {1, 3, 7, 9, 11, 13, 17, 19} and s ∈ Z10 = {0, 1, 2, . . . , 9}.
(1) For θ1 = 7x + 6 ∈ Aut(QD40 ) then r = 7, s = 6 ⇒ r + 2s − 1 = 8 then
Q1 = {1, a2 , a4 , a6 , a8 } is cyclic.
(2) a): Take θ2 = x + 7 ∈ Aut(QD40 ) then r = 1, s = 7 ⇒ r + 2s − 1 =
4 and 1 − r = 0 then Q2 = {1, a2 , a3 , a4 , a6 , a8 } is non-cyclic.
b):
i): Consider θ3 = 11x + 5 ∈ Aut(QD40 ) then r = 11, s = 5
⇒ r + 2s − 1 = 0, 1 − r = 0, s = 5 ≡ k then Q3 = {1, a5 } is
cyclic.
ii): For θ4 = 7x + 5 ∈ Aut(QD40 ) then r = 7, s = 5 ⇒ r + 2s − 1 =
6, 1 − r = −6 6≡ 0 then Q4 = {1, a, a2 , . . . , a9 } is cyclic.
Corollary 8.11. Let θ = rx + s ∈ Autv (QD2m ) be of finite order, where m =
4k, n = 2k and k is even number. Then Q is isomorphic to a cyclic group as
follows:
(1) If | Q |= 2 then Q = hak : an = 1i and Q ∼
= hki.
k
(2) If | Q |= 2 then
a): Let gcd(r + 2s − 1, n) = d1 . If n ∤ r + 2s − 1 then Q = had1 : an = 1i
and Q ∼
= hd1 i.
b): If n | r + 2s − 1 and gcd(r + 2s − 1, n) = n then Q = ha4 : an = 1i
and Q ∼
= h4i.
(3) If | Q |= k and gcd(r + 2s − 1, n) = d1 then
′
a): if (r + 2s − 1) ∩ (1 − r)j − s = ∅ then Q = had1 /2 : an = 1i
and Q ∼
= h d21 i.
′
b): if (r+2s−1)∩(1−r)j −s 6= ∅ then Q = had1 : an = 1i and Q ∼
= hd1 i.
Corollary 8.12. Let θ = rx + s ∈ Autv (QD2m ) be of finite order, if k is odd
number. Then the conditions for Q to be cyclic and isomorphism relationship are:
(1) If s is even, then
ON THE STRUCTURE OF INVOLUTIONS AND SYMMETRIC SPACES OF QUASI DIHEDRAL GROUP
11
a): if r = 1, s = 0 | Q |= 1 then Q = {1} and Q ∼
= h0i.
2
n
∼
b): if | Q |= k then Q = ha : a = 1i = h2i.
(2) If s is odd, then
a): Let n | 1 − r, n | s − k. If | Q |= 2 then Q = hak : an = 1i ∼
= hki.
b): if r + 2s − 1 6≡ 0 and 1 − r 6≡ 0 (mod n) and | Q |= n then Q =
ha : an = 1i ∼
= h1i.
Corollary 8.13. Let θ = rx + s ∈ Aut(QD2m) be a fixed automorphism. Then for any H and Q, HQ ≠ QD2m.
Example 8.13.1. Consider the involution θ1 = 17x + 8 of QD48. Then H1 = {1, a^3, a^6, a^9, a^2 b, a^8 b} and Q1 = {1, a^4, a^8} ⇒ H1Q1 ≠ QD48. Now, consider the automorphism θ2 = 5x + 3 of order 4. Then H2 = {1, a^6 b} and Q2 = {1, a^2, a^4, . . . , a^10, a, a^5, a^9} ⇒ H2Q2 ≠ QD48.
Proposition 8.14. Let θ = rx + s ∈ Autv (QD2m ) and for any al ∈ Q.
′
i): If l is odd, then H\Q = {{aj } | j ∈ hr + 2s − 1i ∪ [(1 − r)j − s] (mod n)}
′
ii): If l is even then H\Q = {{aj , a−j } | j ∈ hr+2s−1i∪[(1−r)j −s] (mod n)}
′
where, j is any even number. In either case G\Q = {Q}, that is there is only a
single G-orbit on Q.
Proof. Since, H is fixed by θ thus the action of H on Q is simply by conjugation.
Notice that Q ⊆ hai ≤ QD2m so, the action of H on general element al of Q. Let
ai ∈ H then, ai .al .a−i = ai+l−i = al . So, hai fixes Q point wise.
′
′
′
′
′
′
′
Suppose ai b ∈ H, then (ai b).al .(ai b)−1 = ai (bal )b−1 a−i = ai (al(n−1) b)b−1 a−i =
′
′
′
l
l
ai +l(n−1)−i = al(n−1) = aln .a−l = (a2n ) 2 .a−l = (am ) 2 .a−l = a−l so, aj b takes
′
al to a−l ⇒ {aj , a−j } is an orbit ⇔ aj b ∈ H which is true ⇔ l is even.
Now, for every element of Q is in G-orbit of 1 ∈ Q. Since, every element of Q are
of the form al ∈ Q. and G has two type of elements ai and ai b. Now let us supposed
that q1 = al = ai .θ(ai )−1 for some ai ∈ G, thus q1 = ai .θ(a−i ) = ai .a−ri+[2−n]y =
ai−ri+[2−n]y = a−i(r−1)+(2−n)y = av(r−1)+(2−n)y , where v = −i.
Also, al = (ai b).θ(ai b)−1 for some ai b ∈ G ⇒ al = (ai b).θ(ai b) = (ai b).(ari+y b) =
i
a (bari+y )b = ai .a(ri+y)(n−1) b.b = ai+(ri+y)(n−1) b2 Let us denote this element q2 =
al = a−v+(y−vr)(n−1) , where v = −i.
For a−v ∈ G, we have a−v .1.θ(a−v )−1 = a−v .θ(av ) = a−v .arv+(2−n)y = av(r−1)+(2−n)y =
q1 . Now for a−v b ∈ G, we have (a−v b).1.θ(a−v b)−1 = (a−v b).θ(a−v b) = (a−v b).a−rv+y b =
a−v (ba−rv+y )b = a−v .a(−rv+y)(n−1) b.b = a−v+(y−rv)(n−1) = q2 .
Therefore, for any q ∈ Q there exist g ∈ G such that g.1.θ(g)−1 = q. Hence
G\Q = {Q}, there is a single G-orbit on Q.
Example 8.14.1. Revisiting θ1 , θ2 of Example 8.13.1 applying Proposition 8.14 to
obtain that for θ1 . Since all l are even, therefore H1 \Q1 = {{1}, {a4 , a−4 }, {a8, a−8 }},
12
ZAHID RAZA, IMRAN AND BIJAN DAVVAZ
for θ2 there exists even and odd powers of elements of Q. For all even powers
H2 \Q2 = {{1}, {a2 , a−2 }, . . . , {a10 , a−10 }, {a, a−1 }, {a5 , a−5 }, {a9 , a−9 }} and odd
powers H2 \Q2 = {{1}, {a2 }, . . . , {a10 }, {a}, {a5 }, {a9 }}.
9. Involutions in Aut(QD2m )
If θ = rx + s is an automorphism such that θ2 =id, then Proposition 7.1 gives
2
r ∈ Rm
and s ∈ ZDiv(r + 1).
Proposition 9.1. Suppose that m = 4pt11 pt22 , where the pi are distinct odd primes.
2
Then | Rm
|= 8
Proof. Since, m = 4pt11 pt22 ⇒ n = 2pt11 pt22 . Suppose that r ∈ Zn with r 2 ≡ 1 (mod n)
∵ gcd(r, m) = 1 ⇒ gcd(r, 4pt11 pt22 ) = 1. But m is even therefore, r is odd. Assume
that r = 2q + 1 for some 0 ≤ q < (n − 1). Now, r 2 = (2q + 1)2 = 4q 2 + 4q + 1
⇒ 4q 2 +4q ≡ 0 (mod n) ⇒ 4q(q+1) = l.(2pt11 pt22 ) ∵ gcd(4, 2pt11 pt22 ) = 2 ⇒ 2q(q+1) =
l.(pt11 pt22 ) (mod pt11 pt22 ). Either q or q + 1 is odd.
l
.pt11 pt22 ⇒ q = h.pt11 pt22 where 1 ≤ h ≤ 4. Now,
(1) If q is even, then q = 2(q+1)
r = 2q + 1 ⇒ r = 2h.pt11 pt22 + 1 ⇒ r ≡ 1(mod n)
(2) If q + 1 is even, then q + 1 = 2ql .pt11 .pt22 ⇒ q + 1 = h.pt11 pt22 ⇒ q = h.pt11 pt22 − 1
where 1 ≤ h ≤ 4. Now, r = 2q + 1 ⇒ r = 2h.pt11 pt22 − 1 ⇒ r ≡ −1(mod n)
⇒ r ≡ ±1(mod n),
t
Proposition 9.2. Suppose that m = 2α pt11 pt22 . . . pqq where the pi are distinct odd
primes. Then
2q+1 α = 2
2
2q+2 α = 3
| Rm
|=
q+3
2
α ≥ 4.
Proof. Similar to the Propositions 10.1.
Example 9.3. Let G = QD192 , then U96 = {1, 5, 7, 11, 13, 17, 19, 23, 25, 29, 31, 35, 37,
2
41, 43, 47, 49, 53, 55, 59, 61, 65, 67, 71, 73, 77, 79, 83, 85, 89, 91, 95} and R96
= {1, 7, 17,
2
4
23, 25, 31, 41, 47, 49, 55, 65, 71, 73, 79, 89, 95} thus, | R96 |= 2 = 16
Corollary 9.4.
|Aut2 (QD2m )| =
X
gcd(r + 1, n).
r∈R2m
2
Theorem 9.5. Let r ∈ Rm
, then the followings hold:
(1) hr − 1i ≤ ZDiv(r + 1)
iv(r+1) |≤ 2
(2) Nr =| ZDhr−1i
ON THE STRUCTURE OF INVOLUTIONS AND SYMMETRIC SPACES OF QUASI DIHEDRAL GROUP
13
(3) Nr = 2 if and only if m = 2α l ; α ≥ 2, and l is odd, r ≡ ±1 (mod 2α−1 ).
2
(1) Given that r ∈ Rm
, then (r − 1).(r + 1) = r 2 − 1 ≡ 0 ⇒ (r − 1) ≤
ZDiv(r + 1) ⇒ hr − 1i ≤ ZDiv(r + 1).
(2) Suppose
Proof.
|
(9.1)
ZDiv(r + 1)
|= j
hr − 1i
n
It is well known that | hr − 1i |= gcd(r−1,n)
and | ZDiv(r + 1) |= gcd(r + 1, n)
put values in 9.1
gcd(r + 1, n)
=j
n
gcd(r−1,n)
(9.2)
⇒ gcd(r + 1, n). gcd(r − 1, n) = n.j
t
Now, suppose that m = 2α pt11 pt22 . . . pqq where α ≥ 2 ⇒ α − 1 ≥ 1
t
⇒ n = 2α−1 pt11 pt22 . . . pqq where pi are distinct odd primes and ti ≥ 1. Then
gcd(r + 1, n) = 2α1 pa11 pa22 . . . paq q
with α1 ≤ (α − 1) and 0 ≤ ai ≤ ti ∀ i and
gcd(r − 1, n) = 2α2 pb11 pb22 . . . pbqq
with α2 ≤ (α − 1) and 0 ≤ bi ≤ ti . Now, for all i ∈ {1, 2, . . . q} either ai = 0
or bi = 0. Indeed, if ai > 0 and bi > 0 then pi divides both (r + 1) and
(r − 1) which is impossible. ∵ (pi > 2)
Similarly, either α1 ≤ 1 or α2 ≤ 1. Otherwise 2min{α1 ,α2 } divides (r + 1)
and (r − 1) which is impossible.
Since, from 9.2
gcd(r + 1, n). gcd(r − 1, n) = n.j
(2α1 pa11 pa22 . . . paq q ).(2α2 pb11 pb22 . . . pbqq ) = j.2α−1 pt11 pt22 . . . ptqq
(2α1 +α2 pa11 +b1 pa22 +b2 . . . paq q +bq ) = j.2α−1 pt11 pt22 . . . ptqq
Now, since ∀ i either ai = 0 or bi = 0 ⇒ ai + bi = ti
t
2
(9.3)
α1 +α2
= j.2
α−1
pt11 pt22 . . . pqq
a +bq
pa11 +b1 pa22 +b2 . . . pq q
2α1 +α2 = j.2α−1 .
As α1 ≤ (α − 1) and α2 ≤ (α − 1) and since either α1 ≤ 1 or α2 ≤ 1,
⇒ (α − 1) ≤ (α1 + α2 ) ≤ α
equation 9.3 ⇒ j = 2α1 +α2 −α+1 ⇒ 1 ≤ j ≤ 2.
14
ZAHID RAZA, IMRAN AND BIJAN DAVVAZ
(3) By Theorem 7.8, Nr is the number of divisors of
gcd(r − 1, n). gcd(r + 1, n)
n
and Nr is the number of divisor of j computed above. Therefore,
Nr = 1 if j = 1 and Nr = 2 if j = 2.
In either case,
ZDiv(r + 1)
|
|≤ 2.
hr − 1i
Finally according to the proof of (2), j = 2 if and only if
α1 = α − 1, α2 = 1 or α1 = 1, α2 = α − 1.
If α1 = α − 1, α2 = 1 then 2α−1 divides (r + 1).
If α1 = 1, α2 = α − 1 then 2α−1 divides (r − 1).
⇒ r ≡ ±1 (mod 2α−1 ).
Example 9.6. Let G = QD48 where m = 23 .3, α = 3, then
2
U24 = {1, 5, 7, 11, 13, 17, 19, 23} and R24
= {1, 5, 7, 11, 13, 17, 19, 23}.
2
(1) Let r = 5 ∈ R24 then hr − 1i = h4i = {0, 4, 8} = B1 and
ZDiv(r + 1) = ZDiv6 = {0, 2, 4, 6, 8, 10} = B2 clearly, h4i ≤ ZDiv6.
iv(6) = {0,2,4,6,8,10} = B2 = x + B ∀ x ∈ B
(2) For r = 5, ZDh4i
1
2
{0,4,8}
B1
0 + B1 = {0, 4, 8} = B1 , 2 + B1 = {2, 6, 10} = B3
4 + B1 = {4, 8, 0} = B1 , 6 + B1 = {6, 10, 2} = B3
8 + B1 = {8, 0, 4} = B1 , 10 + B1 = {10, 2, 6} = B3
= {B1 , B2 } ⇒ Nr = N5 = 2.
U24 action on ZDiv6
h4i
3
(3) Since, m = 2 .3 ⇒ l = 3 > 1 is odd and r = 5 ≡ 1 (mod 22 ).
Thus N5 = 2 as calculated in (2).
t
Corollary 9.7. Suppose that m = 2α pt11 pt22 . . . pqq , where the pi are distinct odd
primes and α ≥ 2. Then the number of equivalence classes C4k of Aut2 (QD8k ) is
given by
q+1
2
α=2
C4k =
q+2
2
α ≥ 3.
2
Example 9.8. Let G = QD384 . Thus R192
= {1, 17, 31, 47, 49, 65, 79, 95, 97, 113, 127,
2
|= 24 . It is easy to check that the values r =
143, 145, 161, 175, 191} ⇒| R192
1, 65, 97, 161 satisfy r ≡ 1 (mod 25 ) and the values r = 31, 95, 127, 191 satisfy
r ≡ −1 (mod 25 ). So, Cm =| {1, 31, 65, 95, 97, 127, 161, 191} |= 23 .
Now for θ an involution, the set of twisted involutions is given in the followings
porosities:
ON THE STRUCTURE OF INVOLUTIONS AND SYMMETRIC SPACES OF QUASI DIHEDRAL GROUP
15
Proposition 9.9. If θ2 =id, then the set of twisted involutions is given as
R = {ai | (r + 1)i ≡ −2s (mod n)} ∪ {ai b | (r − 1)i ≡ −s (mod n)}
Proof. Since θ is an involution, r ∈ R^2_m and s ∈ ZDiv(r + 1). To find the set R, let a^i ∈ G; then θ(a^i) = (a^i)^{−1} ⇒ a^{ri+(2−n)s} = a^{−i} ⇒ ri + (2 − n)s = −i ⇒ (r + 1)i + (2 − n)s = 0 ⇒ (r + 1)i ≡ −2s (mod n).
Now for a^i b ∈ G, we have θ(a^i b) = (a^i b)^{−1} ⇒ a^{ri+s} b = a^i b ⇒ ri + s = i ⇒ (r − 1)i ≡ −s (mod n). Hence R = {a^i | (r + 1)i ≡ −2s (mod n)} ∪ {a^i b | (r − 1)i ≡ −s (mod n)}.
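As an illustration (not part of the paper), the following sketch evaluates these congruences for the involution θ1 = 17x + 8 of QD48 from Example 8.13.1:

```python
# Twisted involutions from Proposition 9.9 for theta_1 = 17x + 8 of QD48 (n = 12).
n = 12
r, s = 17, 8

R_rot = [i for i in range(n) if ((r + 1) * i) % n == (-2 * s) % n]
R_ref = [i for i in range(n) if ((r - 1) * i) % n == (-s) % n]
print("a^i in R for i in", R_rot)        # rotations with (r+1)i = -2s (mod n): none here
print("a^i b in R for i in", R_ref)      # reflections with (r-1)i = -s (mod n): [1, 4, 7, 10]
```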
Corollary 9.10. If θ2 =id, m = 4k, n = 2k and k is even number, then Q is a
subgroup of hai and the natural isomorphism ψ : hai −→ Zn gives using Theorem
8.1 and Corollary 8.7 and 8.11
(1) If | Q |= 2, then ψ(Q) = hki.
(2) If | Q |= k2 then
hd1 i If n ∤ r + 2s − 1 and d1 = gcd(r + 2s − 1, n)
ψ(Q) =
h4i If n | r + 2s − 1 and n = gcd(r + 2s − 1, n).
(3) If | Q |= k and gcd(r + 2s − 1, n) = d1 then
ψ(Q) =
h d21 i If {(r + 2s − 1) ∩ [l(1 − r) − s]} = ∅
hd1 i If {(r + 2s − 1) ∩ [l(1 − r) − s]} =
6 ∅,
where l is an even number.
Corollary 9.11. If θ2 =id, m = 4k, n = 2k and k is even number, then Q is a
subgroup of hai and the natural isomorphism ψ : hai −→ Zn gives using Theorem
8.1 and Corollary 8.9 and 8.12
(1) If s is even, then
h0i If r = 1, s = 0, | Q |= 1
ψ(Q) =
h2i If | Q |= k.
(2) If s is odd, then
hki | Q |= 2 If n | 1 − r and n | s − k
ψ(Q) =
h1i | Q |= n If n ∤ r + 2s − 1 and n ∤ 1 − r.
Corollary 9.12. In any case R ≠ Q.
References
[1] A. Bishop, C. Cyr, J. Hutchens, C. May, N. Schwartz, and B. Turner (2013). ”On involutions
and generalized symmetric spaces of dicyclic groups”. arXiv:1310.0121v1
[2] Anders Bjorner and Francesco Brenti (2005). "Combinatorics of Coxeter groups". Graduate Texts in Mathematics, vol. 231, Springer, New York.
16
ZAHID RAZA, IMRAN AND BIJAN DAVVAZ
[3] Nicolas Bourbaki (2002). ”Lie groups and Lie algebras”. Chapter 4-6, Elements of Mathematics
(Berlin), Springer-Verlag , Berlin , Translated from the (1968) French original by Andrew
Pressley.
[4] K. K. A. Cunningham, T. J. Edgar, A. G. Helminck, B. F. Jones, OH, R. Schwell , J.F.
Vasquez. (2012). ”On the Structure of Involution and Symmetric Spaces of Dihedral Groups”.
arXiv:1205.3207v1.
[5] A. G. Helminck (1994). Symmetric k- varities. Proc. Sympos. Pure Math. 56, no. 1, 233-279.
[6] A. G. Helminck (1988). "Algebraic groups with a commuting pair of involutions and semi-simple symmetric spaces". Adv. in Math. 71, 21-91.
[7] A. G. Helminck and S.P. Wang (1993). ”On rationality properties of involutions of reductive
groups”. Adv. in Math. 99, 26-96.
[8] Gary L. Walls (1986). ”Automorphism groups”. Amer. Math. Monthly 93, no. 6, 459-462.
Department of Mathematics, College of Sciences,University of Sharjah, UAE.
E-mail address: [email protected]
Department of Mathematics, National University of Computer and Emerging
Sciences, Lahore, Pakistan.
E-mail address: imran [email protected]
Department of Mathematics, Yazd University, Yazd, Iran
E-mail address: [email protected]
| 4 |
arXiv:1702.05933v2 [] 22 Jan 2018
Qualitative robustness for bootstrap approximations
Katharina Strohriegl,University of Bayreuth
[email protected]
January 23, 2018
Abstract
An important property of statistical estimators is qualitative robustness, that is, small changes in the distribution of the data only result in small changes in the distribution of the estimator. Moreover, in practice, the distribution of the data is commonly
unknown, therefore bootstrap approximations can be used to approximate the distribution of the estimator. Hence qualitative robustness of the statistical estimator under the
bootstrap approximation is a desirable property. Currently most theoretical investigations on qualitative robustness assume independent and identically distributed pairs of
random variables. However, in practice this assumption is not fulfilled. Therefore, we
examine the qualitative robustness of bootstrap approximations for non-i.i.d. random
variables, for example α-mixing and weakly dependent processes. In the i.i.d. case qualitative robustness is ensured via the continuity of the statistical operator, representing
the estimator; see Hampel (1971) and Cuevas and Romo (1993). We show that qualitative robustness of the bootstrap approximation is still ensured under the assumption
that the statistical operator is continuous and under an additional assumption on the
stochastic process. In particular, we require a convergence condition of the empirical
measure of the underlying process, the so called Varadarajan property.
Keywords: stochastic processes, qualitative robustness, bootstrap, α-mixing, weakly dependent AMS: 60G20, 62G08, 62G09, 62G35
1 Introduction
The overwhelming part of theoretical publications in statistical machine learning was done
under the assumption that the data is generated by independent and identically distributed
(i.i.d.) random variables. However, this assumption is not fulfilled in many practical applications so that non-i.i.d. cases increasingly attract attention in machine learning. An
important property of an estimator is robustness. It is well known that many classical estimators are not robust, which means that small changes in the distribution of the data generating process may highly affect the results, see for example Huber (1981), Hampel (1968),
Jurečková and Picek (2006) or Maronna et al. (2006) for some books on robust statistics.
Qualitative robustness is a continuity property of the estimator and means roughly speaking:
small changes in the distribution of the data only lead to small changes in the distribution
(i.e. the performance) of the estimator. In this way the following kinds of "small errors"
are covered: small errors in all data points (rounding errors) and large errors in only a
small fraction of the data points (gross errors, outliers). Qualitative robustness of estimators has been defined originally in Hampel (1968) and Hampel (1971) in the i.i.d. case and
has been generalized to estimators for stochastic processes in various ways, for example, in
Papantoni-Kazakos and Gray (1979), Bustos (1980), which will be the one used here, Cox
(1981), Boente et al. (1987), Zähle (2015), and Zähle (2016), for a more local consideration
of qualitative robustness, see for example Krätschmer et al. (2017).
Often the finite sample distribution of the estimator or of the stochastic process of interest is
unknown, hence an approximation of the distribution is needed. Commonly, the bootstrap
is used to receive an approximation of the unknown finite sample distribution by resampling
from the given sample.
The classical bootstrap, also called the empirical bootstrap, has been introduced by Efron
(1979) for i.i.d. random variables. This concept is based on drawing a bootstrap sample (Z1∗, . . . , Zm∗) of size m ∈ N with replacement out of the original sample (Z1, . . . , Zn), n ∈ N, and approximating the theoretical distribution Pn of (Z1, . . . , Zn) using the bootstrap sample. For the empirical bootstrap the approximation of the distribution via the bootstrap is given by the empirical distribution of the bootstrap sample (Z1∗, . . . , Zm∗), hence Pn∗ = ⊗_{i=1}^{n} (1/m) Σ_{i=1}^{m} δ_{Zi∗}, where δ_{Zi} denotes the Dirac measure. The bootstrap sample itself has distribution ⊗_{i=1}^{m} (1/n) Σ_{i=1}^{n} δ_{Zi}.
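As a minimal illustration (not part of the paper), the following Python sketch implements Efron's empirical bootstrap for a synthetic i.i.d. sample, using the sample mean as a stand-in estimator:

```python
import numpy as np

# Efron's empirical bootstrap: draw bootstrap samples of size m with replacement
# and use them to approximate the distribution of an estimator (here the mean).
rng = np.random.default_rng(0)
z = rng.normal(size=100)                 # observed sample (synthetic)

B, m = 1000, len(z)
boot_estimates = np.empty(B)
for b in range(B):
    z_star = rng.choice(z, size=m, replace=True)   # bootstrap sample from P_n
    boot_estimates[b] = z_star.mean()              # estimator on the bootstrap sample

print(boot_estimates.std())              # bootstrap approximation of the estimator's spread
```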
For an introduction to the bootstrap see for example Efron and Tibshirani (1993) and
van der Vaart (1998, Chapter 3.6). Besides the empirical bootstrap, many other bootstrap methods have been developed in order to find good approximations also for non-i.i.d. observations; see for example Singh (1981), Lahiri (2003), and the references therein.
In Section 2.2 the moving block bootstrap introduced by Künsch (1989) and Liu and Singh
(1992) is used to approximate the distribution of an α-mixing stochastic process.
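The following Python sketch (ours; the AR(1) data and the block length are illustrative choices, not from the paper) shows the basic moving block bootstrap resampling step:

```python
import numpy as np

# Moving block bootstrap (Künsch 1989; Liu and Singh 1992): resample overlapping
# blocks of consecutive observations so that short-range dependence of, e.g.,
# an alpha-mixing process is preserved. The block length is a tuning choice.
def moving_block_bootstrap(z, block_length, rng):
    n = len(z)
    blocks = np.array([z[i:i + block_length] for i in range(n - block_length + 1)])
    k = int(np.ceil(n / block_length))
    chosen = rng.integers(0, len(blocks), size=k)      # blocks drawn with replacement
    return np.concatenate(blocks[chosen])[:n]          # concatenate and trim to length n

rng = np.random.default_rng(1)
# Synthetic AR(1) series as a stand-in for a weakly dependent process.
z = np.zeros(200)
for t in range(1, 200):
    z[t] = 0.5 * z[t - 1] + rng.normal()

z_star = moving_block_bootstrap(z, block_length=10, rng=rng)
print(z_star.shape)                                     # (200,)
```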
It is, also in the non-i.i.d. case, still desirable that the estimator is qualitatively robust
even for the bootstrap approximation. That is, the distribution of the estimator under the
bootstrap approximation LPn∗ (Sn ), n ∈ N, of the assumed, ideal distribution Pn should still
be close to the distribution of the estimator under the bootstrap approximation LQ∗n (Sn ),
n ∈ N, of the real contaminated distribution Qn . Remember that this is a random object as
Pn∗ respectively Q∗n are random. For notational convenience all bootstrap values are noted
as usual with an asterisk.
To show qualitative robustness often generalizations of Hampel’s theorem are used, as it
is often hard to show qualitative robustness directly. For the i.i.d. case Hampel’s Theorem
ensures qualitative robustness of a sequence of estimators, if these estimators are continuous
and can be represented by a statistical operator which is continuous in the distribution of the
2
data generating stochastic process. Accordingly we try to find results similar to Hampel’s
theorem for the case of bootstrap approximations for non-i.i.d. cases.
Generalizations of Hampel’s theorem to non-i.i.d. cases can be found in Zähle (2015) and
Zähle (2016). For a slightly different generalization of qualitative robustness, Hampel’s
theorem has been formulated for strongly stationary and ergodic processes in Cox (1981) and
Boente et al. (1982). In Strohriegl and Hable (2016) a generalization of Hampel’s Theorem
to a broad class of non-i.i.d. stochastic processes is given. Cuevas and Romo (1993) describes
a concept of qualitative robustness of bootstrap approximations for the i.i.d. case and for
real valued estimators. Also a generalization of Hampel’s theorem to this case is given. In
Christmann et al. (2013, 2011) qualitative robustness of Efron’s bootstrap approximation
is shown for the i.i.d. case for a class of regularized kernel based learning methods, i. e. not
necessarily real valued estimators. Moreover Beutner and Zähle (2016) describes consistency
of the bootstrap for plug in estimators.
The next chapter contains a definition of qualitative robustness of the bootstrap approximation of an estimator and the main results. In Chapter 2.1 Theorem 2.2 shows qualitative
robustness of the bootstrap approximation of an estimator for independent but not necessarily identically distributed random variables, Chapter 2.2 contains Theorem 2.6 and 2.7
which generalize the result in Christmann et al. (2013) to α-mixing sequences with values
in Rd . All proofs are deferred to the appendix.
2
Qualitative robustness for bootstrap estimators
Throughout this paper, let (Z, dZ ) be a Polish space with some metric dZ and Borelσ-algebra B. Denote by M(Z N ) the set of all probability measures on (Z N , B ⊗N ). Let
(Z N , B ⊗N , M(Z N )) be the underlying statistical model. If nothing else is stated, we always
use Borel-σ-algebras for topological spaces. Let (Zi )i∈N be the coordinate process on Z N ,
that is Zi : Z N → Z, (zj )j∈N 7→ zi , i ∈ N. Then the process has law PN under PN ∈ M(Z N ).
Moreover let Pn := (Z1 , . . . , Zn ) ◦ PN be the n-th order marginal distribution of PN for every
n ∈ N and PN ∈ M(Z N ). We are concerned with a sequence of estimators (Sn )n∈N on the
stochastic process (Zi )i∈N . The estimator may take its values in any Polish space H with
some metric dH ; that is, Sn : Z n → H for every n ∈ N.
Our work applies to estimators which can be represented by a statistical operator S :
M(Z) → H, that is,
S Pwn = Sn (wn ) = Sn (z1 , . . . , zn )
∀ wn = (z1 , . . . , zn ) ∈ Z n , ∀ n ∈ N,
(1)
P
where Pwn denotes the empirical measure defined by Pwn (B) := n1 ni=1 IB (zi ), B ∈
B, for the observations wn = (z1 , ..., zn ) ∈ Z n . Examples of such estimators are Mestimators, R-estimators, see Huber (1981, Theorem 2.6), or Support Vector Machines,
see Hable and Christmann (2011).
3
Based on the generalization of Hampel’s concept of Π-robustness from Bustos (1980), we
define qualitative robustness for bootstrap approximations for non-i.i.d sequences of random
variables. The stronger concept of Π-robustness is needed here, as we do not assume to have
i.i.d. random variables, which are used in Cuevas and Romo (1993).
Therefore the definition of qualitative robustness stated below is stronger than the definition
in Cuevas and Romo (1993), i. e. if we use this definition for the i.i.d. case the assumption
dBL (Pn , Qn ) = dBL (⊗ni=1 P, ⊗ni=1 Q) < δ implies dBL (P, Q) < δ, where dBL denotes the
bounded Lipschitz metric. This can be seen similar to the proof of Lemma 3.1 in Section
2.1.
Now, let PN∗ be the approximation of PN with respect to the bootstrap. Define the bootstrap sample (Z1∗ , . . . , Zn∗ ) as the first n coordinate projections Zi∗ : Z N → Z, where
the law of the stochastic process (Zi∗ )i∈N has to be chosen according to the bootstrap
procedure. For the empirical bootstrap, for example, the bootstrap sample is chosen
via drawing with replacement from the given observations
z1 , . . . , zℓ , ℓ ∈ N. Hence the
1 Pℓ
distribution of the bootstrap sample
is ⊗n∈N ℓ i=1 δzi , with finite sample distributions
1 Pℓ
1 Pℓ
∗
∗
n
⊗j=1 ℓ i=1 δzi = (Z1 , . . . , Zn ) ⊗n∈N ℓ i=1 δzi .
Contrarily to the classical case of qualitative robustness the distribution of the estimator
unP
der Pn∗ , LPn∗ (Sn ) is a random probability measure, as the distribution Pn∗ = ⊗ni=1 1ℓ ℓi=1 δZi∗ ,
Zi∗ : Z N → Z, is random. Hence the mapping zN 7→ LPn∗ (Sn ), zN ∈ Z N , is itself a
random variable with values in M(H), i. e. on the space of probability measures on H,
equipped with the weak topology on M(H). The measurability of this mapping is ensured
by Beutner and Zähle (2016, Lemma D1).
Contrarily to the original definitions of qualitative robustness in Bustos (1980) the bounded
Lipschitz metric dBL is used instead of the Prohorov metric π for the definition of qualitative
robustness of the bootstrap approximation below. This is equivalent to Cuevas and Romo
(1993). Let X be a separable metric space, then the bounded Lipschitz metric on the space
of probability measures M(X ) on X is defined by:
Z
Z
dBL (P, Q) := sup
f dP − f dQ ; f ∈ BL(X ), kf kBL ≤ 1
(y)|
where k·kBL := |·|1 +k·k∞ denotes the bounded Lipschitz norm with |f |1 = supx6=y |f (x)−f
d(x,y)
and k · k∞ the supremum norm kf k∞ := supx |f (x)| and the space of bounded Lipschitz
functions is defined as BL := {f : X → R | f Lipshitz and kf kBL < ∞}. This is due to technical reasons only. Both metrics metricize the weak topology on the space of all probability
measures M(X ), for Polish spaces X , see, for example, Huber (1981, Chapter 2, Corollary
4.3) or Dudley (1989, Theorem 11.3.3), and therefore can be replaced while adapting δ on
the left hand-side of implication (2). If X is a Polish space, so is M(X ) with respect to the
weak topology, see Huber (1981, Chapter 2, Theorem 3.9). Hence the bounded Lipschitz
metric on the right-hand side of implication (2) operates on a space of probability measures
on the Polish space M(X ). Therefore the Prohorov metric and the bounded Lipschitz metric can again be replaced while adapting ε in (2). Similar to Cuevas and Romo (1993) the
4
proof of the theorems below rely on the fact that the set of bounded Lipschitz functions
BL is a uniform Glivenko-Cantelli class, which implies uniform convergence of the bounded
Lipschitz metric of the empirical measure to a limiting distribution, see Dudley et al. (1991).
Therefore the definition is given with respect to the bounded Lipschitz metric.
Definition 2.1 (Qualitative robustness for bootstrap approximations)
Let PN ∈ M(Z N ) and let PN∗ ∈ M(Z N ) be the bootstrap approximation of PN . Let P ⊂
M(Z N ) with PN ∈ P. Let Sn : Z n → H, n ∈ N, be a sequence of estimators. Then the
sequence of bootstrap approximations (LPn∗ (Sn ))n∈N is called qualitatively robust at PN with
respect to P if, for every ε > 0, there is δ > 0 such that there is n0 ∈ N such that for every
n ≥ n0 and for every QN ∈ P,
dBL (Pn , Qn ) < δ ⇒ dBL (L(LPn∗ (Sn )), L(LQ∗n (Sn ))) < ε.
(2)
Here L(LPn∗ (Sn )) (respectively L(LQ∗n (Sn ))) denotes the distribution of the bootstrap approximation of the estimator Sn under Pn∗ (respectively Q∗n ).
This definition of qualitative robustness with respect to the subset P indicates that we do
not show (2) for arbitrary probability measures QN ∈ M(Z N ). All of our results require
the contaminated process to at least have the same structure as the ideal process. This is
due to the use of the bootstrap procedure. The empirical bootstrap, which is used below,
only works well for a few processes, see for example Lahiri (2003), hence the assumptions
on the contaminated process are necessary. To our best knowledge there are no results
concerning qualitative robustness of the bootstrap approximation for general stochastic
processes without any assumptions on the second process and it is probably very hard to
show this for every QN ∈ M(Z N ), respectively P = M(Z N ). Another difference to the
classical definition of qualitative robustness in Bustos (1980) is the restriction to n ≥ n0 .
As the results for the bootstrap are asymptotic results, we can not achieve the equicontinuity
for every n ∈ N, but only asymptotically.
As the estimators can be represented by a statistical operator which depends on the empirical
measure it is crucial to concern stochastic processes which at last provide convergence of their
empirical measure. Therefore, Strohriegl and Hable (2016) proposed to choose Varadarajan
process. Let (Ω, A, µ) be a probability space. Let (Zi )i∈N , Zi : Ω → Z, i ∈ N, be a stochastic
process and Wn := (Z1 , . . . , Zn ). Then the stochastic process (Zi )i∈N is called a (strong)
Varadarajan process if there exists a probability measure P ∈ M(Z) such that
π(PWn , P ) −−−−→ 0 almost surely.
n→∞
The stochastic process (Zi )i∈N is called weak Varadarajan process if
π(PWn , P ) −−−−→ 0 in probability.
n→∞
Examples for Varadarajan processes are certain Markov Chains, some mixing processes,
ergodic process and processes which satisfy a law of large numbers for events in the sense
of Steinwart et al. (2009, Definition 2.1), see Strohriegl and Hable (2016) for details.
5
2.1
Qualitative robustness for independent not identically distributed
processes
In this section we relax the i.i.d. assumption in view of the identical distribution. We assume the random variables Zi , i ∈ N, to be independent, but not necessarily identically
distributed.
The result below generalizes Christmann et al. (2013, Theorem 3) and Christmann et al.
(2011), as the assumptions on the stochastic process are weaker as well as those on the
statistical operator. Compared to Theorem 3 in Cuevas and Romo (1993), which shows
qualitative robustness of the sequence of bootstrap estimators with values in R, we have
to strengthen the assumptions on the sample space, but do not need the estimator to
be uniformly continuous. But keep in mind, that the assumption dBL (Pn , Qn ) < δ implies dBL (P, Q) < δ, which is used for the i.i.d. case, in Christmann et al. (2013) and
Cuevas and Romo (1993).
Theorem 2.2 Let the sequence of estimators (Sn )n∈N be represented by a statistical operator S : M(Z) → H via (1) for a Polish space H and let (Z, dZ ) be a totally bounded metric
space.
Let PN = ⊗i∈N P i , P i ∈ M(Z) be an infinite product measure such that the coordinate process (Zi )i∈N , Zi : Z N → zi ,i ∈ N, is a strong Varadarajan process with limiting distribution
P . Moreover define P := QN ∈ M(Z N ); QN = ⊗i∈N Qi , Qi ∈ M(Z) . Let S : M(Z) →
H be continuous at P with respect to dBL and let the estimators Sn : Z n → H, n ∈ N, be
continuous.
Then the sequence of bootstrap approximations (LPn∗ (Sn ))n∈N , is qualitatively robust at PN
with respect to P.
Remark 2.3 The required properties on the statistical operator S and on the sequence of
estimators (Sn )n∈N in Theorem 2.2 ensure the qualitative robustness of (Sn )n∈N , as long as
the assumptions on the underlying stochastic processes are fulfilled.
The proof shows that the bootstrap approximation of every sequence of estimators (Sn )n∈N
which is qualitatively robust in the sense of the definitions in Bustos (1980) and Strohriegl and Hable
(2016, Definition 1) is qualitatively robust in the sense of Theorem 2.2.
Hence Hampel’s theorem for the i.i.d. case can be generalized to bootstrap approximations
and to the case of not necessarily identically distributed random variables if qualitative
robustness is based on the definition of Π-robustness.
Unfortunately, the assumption on the space (Z, dZ ) to be totally bounded seems to be
necessary. In the proof of Theorem 2.2 we use a result of Dudley et al. (1991) to show
uniformity on the space of probability measures M(Z). This result needs the bounded
Lipschitz functions to be a uniform Glivenko-Cantelli class, which is equivalent to (Z, dZ )
being totally bounded, see Dudley et al. (1991, Proposition 12). In order to weaken the
6
assumption on (Z, dZ ), probably another way to show uniformity on the space of probability
measures M(Z) has to be found.
A short look on the metrics used on Z n is advisable. We consider Z n as the n-fold product
space of the Polish space (Z, dZ ). The product space Z n is again a Polish space (in the
product topology) and it is tempting to use a p-product metric dn,p on Z n , that is,
dZ (z1 , z1′ ), . . . , dZ (zn , zn′ ) p
(3)
dn,p (z1 , . . . , zn ), (z1′ , . . . , zn′ ) =
where k · kp is a pn -norm on Rn for 1 ≤ p ≤ ∞. For example, dn,2 is the Euclidean metric
on Rn and dn,∞ (z1 , . . . , zn ), (z1′ , . . . , zn′ ) = maxi d(zi , zi′ ); all these metrics are strongly
equivalent. However, these common metrics do not cover the intuitive meaning of qualitative
robustness as the distance between two points in Z n (i.e., two data sets) is small only if
all coordinates are close together (small rounding errors). So points where only a small
fraction of the coordinates are far-off (gross errors) are excluded. Using these metrics, the
qualitative robustness of the sample mean at every PN ∈ M(Z N ) can be shown, see e.g.
Strohriegl and Hable (2016, Proposition 1). But the sample mean is a highly non-robust
estimator, as gross errors have great impact on the estimate. Following Boente et al. (1987),
we use the metric dn on Z n :
dn (z1 , . . . , zn ), (z1′ , . . . , zn′ ) = inf ε > 0 : ♯{i : d(zi , zi′ ) ≥ ε}/n ≤ ε .
(4)
This metric on Z n covers both kinds of "small errors". Though dn is not strongly equivalent to dn,p in general, it is topologically equivalent to the p-product metrics dn,p , see
Strohriegl and Hable (2016, Lemma 1). Hence, Z n is metrizable also with metric dn . Moreover the continuity of Sn on Z n is with respect to the product topology on Z n which can,
due to the topological equivalence of these two metrics, be seen with respect to the common
metrics dn,p .
The next part gives two examples of stochastic processes of independent, but not necessarily
identically distributed random variables, which are Varadarajan processes. In particular
these stochastic processes even satisfy a strong law of large numbers for events (SLLNE)
in the sense of Steinwart et al. (2009) and therefore are, due to Strohriegl and Hable (2016,
Theorem 2), strong Varadarajan processes. The first example is rather simple and describes
a sequence of univariate normal distributions.
Example 1 Let (ai )i∈N ⊂ R be a sequence with limi→∞ ai = a ∈ R and let |ai | ≤ c, for
some constant c > 0 for all i ∈ N. Let (Zi )i∈N , Zi : Ω → R, be a stochastic process where
Zi , i ∈ N, are independent and Zi ∼ N (ai , 1), i ∈ N. Then the process (Zi )i∈N is a strong
Varadarajan process.
The second example are stochastic processes where the distributions of the random variables
Zi , i ∈ N, are lying in a so-called shrinking ε-neighbourhood of a probability measure P .
Example 2 Let (Z, B) be a measurable space and let (Zi )i∈N be a stochastic process with
independent random variables Zi : Ω → Z, Zi ∼ P i , where
P i = (1 − εi )P + εP̃ i
7
for a sequence εi → 0, i → ∞, εi > 0 and P̃ i , P ∈ M(Z), i ∈ N. Then the process (Zi )i∈N
is a strong Varadarajan process.
The next corollary shows, that Support Vector Machines are qualitatively robust. For a
detailed introduction to Support Vector Machines see e.g., Schölkopf and Smola (2002) and
Steinwart and Christmann (2008). Let Dn := (z1 , z2 , . . . , zn ) = ((x1 , y1 ), (x2 , y2 ), . . . , (xn , yn ))
be a given dataset.
Corollary 2.4 Let Z = X × Y, Y ⊂ R closed, be a totally bounded, metric space and let
(Zi )i∈N be a stochastic process where the random variables Zi , i ∈ N, are independent and
Zi ∼ P i := (1 − εi )P + εi P̃ i , P, P̃ i ∈ M(Z). Moreover let (λn )n∈N be a sequence of positive
real valued numbers with λn → λ0 , n → ∞, for some λ0 > 0. Let H be a reproducing
kernel Hilbert space with continuous and bounded kernel k and let Sλn : (X × Y)n → H be
the SVM estimator, which maps Dn to fL∗ ,Dn ,λn for a continuous and convex loss function
L : X × Y × Y → [0, ∞[. It is assumed that L(x, y, y) = 0 for every (x, y) ∈ X × Y and that
L is additionally Lipschitz continuous in the last argument.
Then we have for every ε > 0 there is δ > 0 such that there is n0 ∈ N such that for all
n ≥ n0 and for every process (Z̃i )i∈N , where Z̃i are independent and have distribution Qi ,
i ∈ N:
dBL (Pn , Qn ) < δ ⇒ dBL (L(LPn∗ (Sn )), L(LQ∗n (Sn ))) < ε.
That is, the sequence of bootstrap approximations is qualitatively robust if the second
(contaminated) process (Z̃i )i∈N is still of the same kind, i.e. still independent, as the original
uncontaminated process (Zi )i∈N .
2.2
Qualitative robustness for the moving block bootstrap of α-mixing
processes
Dropping the independence assumption we now focus on real valued mixing processes, in
particular on strongly stationary α-mixing or strong mixing stochastic processes. The mixing
notion is an often used and well-accepted dependence notion which quantifies the degree
of dependence of a stochastic process. There exist several types of mixing coefficients, but
all of them are based on differences between probabilities µ(A1 ∩ A2 ) − µ(A1 )µ(A2 ). There
is a large literature on this dependence structure. For a detailed overview on mixing, see
Bradley (2005), Bradley (2007a,b,c), and Doukhan (1994) and the references therein. The
α-mixing structure has been introduced in Rosenblatt (1956). Also examples of relations
between dependence structures and mixing coefficients can be found in the references above.
Let Ω be a set equipped with two σ-algebras A1 and A2 and a probability measure µ. Then
the α-mixing coefficient is defined by
α(A1 , A2 , µ) := sup{|µ(A1 ∩ A2 ) − µ(A2 )µ(A2 )| | A1 ∈ A1 , A2 ∈ A2 }.
By definition the coefficients equal zero, if the σ-algebras are independent.
8
Moreover mixing can be defined for stochastic processes. We follow Steinwart et al. (2009,
Definition 3.1):
Definition 2.5 Let (Zi )i∈N be a stochastic process, Zi : Ω → Z, i ∈ N, and let σ(Zi ) be the
σ-algebra generated by Zi , i ∈ N. Then the α-bi- and the α-mixing coefficients are defined
by
α((Z)i∈N , µ, i, j) = α(σ(Zi ), σ(Zj ), µ)
α((Z)i∈N , µ, n) = sup α(σ(Zi ), σ(Zi+n ), µ).
i≥1
A stochastic process (Zi )i∈N is called α- mixing with respect to µ if
lim α((Z)i∈N , µ, n) = 0.
n→∞
It is called weakly α-bi-mixing with respect to µ if
n
n
1 XX
α((Z)i∈N , µ, i, j) = 0.
n→∞ n2
lim
i=1 j=1
Instead of Efron’s empirical bootstrap another bootstrap approach is used in order to represent the dependence structure of an α-mixing process. Künsch (1989) and Liu and Singh
(1992) introduced the moving block bootstrap (MBB). Often resampling of single observations can not preserve the dependence structure of the process, therefore they decided to
take blocks of length b of observations instead. The dependence structure of the process is
preserved, within these blocks. The block length b increases with the number of observations n for asymptotic considerations. A slight modification of the original moving block
bootstrap, see for example Politis and Romano (1990) and Shao and Yu (1993), is used in
the next two theorems in order to avoid edge effects.
The proofs are based on central limit theorems for empirical processes. There are several
results concerning the moving block bootstrap of the empirical process in case of mixing processes, see for example Bühlmann (1994), Naik-Nimbalkar and Rajarshi (1994), and
Peligrad (1998, Theorem 2.2) for α-mixing sequences and Radulović (1996) and Bühlmann
(1995) for β-mixing sequences. To our best knowledge there are so far no results concerning
qualitative robustness for bootstrap approximations of estimators for α-mixing stochastic
processes. Therefore, Theorem 2.6 shows qualitative robustness for a stochastic process with
values in R. The proof is based on Peligrad (1998, Theorem 2.2), which provides a central
limit theorem under assumptions on the process, which are weaker than those in Bühlmann
(1994) and Naik-Nimbalkar and Rajarshi (1994). In the case of Rd -valued, d > 1, stochastic
processes, stronger assumptions on the stochastic process are needed, as the central limit
theorem in Bühlmann (1994) requires stronger assumptions, see Theorem 2.7.
Let Z1 , . . . , Zn , n ∈ N, be the first n projections of a real valued stochastic process (Zi )i∈N
and let b ∈ N, b < n, be the block length. Then, for fixed n ∈ N, the sample can be
9
divided into blocks Bi,b := (Zi , . . . , Zi+b−1 ). If i > n − b + 1, we define Zn+j = Zj , for the
missing elements of the blocks. To get the MBB bootstrap sample Wn∗ = (Z1∗ , . . . , Zn∗ ), ℓ
numbers I1 , . . . , Iℓ from the set {1, . . . , n} are randomly chosen with replacement. Without
loss of generality it is assumed that n = ℓb, if n is not a multiple of b we simply cut
the last block, which is usually done in literature. Then the sample consists of the blocks
∗ =
∗
= ZI2 , . . . , Zℓb
BI1 ,b , BI2 ,b , . . . , BIℓ ,b , that is Z1∗ = ZI1 , Z2∗ = ZI1 +1 , . . . , Zb∗ =I1 +b−1 , Zb+1
ZIℓ +b−1 .
As we are interested in estimators Sn , n ∈ N, which can be represented by a statistical
operator S : M(Z) → H via S(Pwn ) = Sn (z1 , . . . , zn ), P
for a Polish space H, see (1), the
1
empirical measure of the bootstrap sample PWn∗ = n ni=1 δZi∗ should approximate the
P
empirical measure of the original sample PWn = n1 ni=1 δZi . Contrarily to qualitative
robustness in the case of independent and not necessarily identically distributed random
variables (Theorem 2.2), the assumptions on the statistical operator S are strengthened for
the case of α-mixing sequences. In particular the statistical operator S is assumed to be
uniformly continuous for all P ∈ (M(Z), dBL ). For the first theorem we assume the random
variables Zi , i ∈ N, to be real valued and bounded. Without loss of generality we assume
0 ≤ Zi ≤ 1, otherwise a transformation leads to this assumption. For the bootstrap for
the true as well as for the contaminated process, we assume the block length b(n) and the
number of blocks ℓ(n) to be sequences of integers satisfying
nh ∈ O(b(n)), b(n) ∈ O(n1/3−a ), for some 0 < h <
1
1
− a, 0 < a < ,
3
3
b(n) = b(2q ) for 2q ≤ n < 2q+1 , q ∈ N, b(n) → ∞, n → ∞ and b(n) · ℓ(n) = n, n ∈ N.
Theorem 2.6 Let PN ∈ M(RN ) be a probability measure on (RN , B ⊗N ) such that the coordinate process (Zi )i∈N , Zi : RN → R is bounded, strongly stationary, and α-mixing with
X
m>n
α(σ(Z1 , . . . , Zi ), σ(Zi+m , . . .), PN ) = O(n−γ ), i ∈ N, for some γ > 0.
(5)
Let P ⊂ M(RN ) be the set of probability measures such that the coordinate process fulfils
the properties above for the same γ > 0. Let H be a Polish space, with some metric dH ,
let (Sn )n∈N be a sequence of estimators which can be represented by a statistical operator
S : M(R) → H via (1). Moreover let Sn be continuous and let S be additionally uniformly
continuous with respect to dBL . Then the sequence of estimators (Sn )n∈N is qualitatively
robust at PN with respect to P.
The assumptions on the stochastic process are on the one hand, together with the assumptions on the block length, used to ensure the validity of the bootstrap approximation and
on the other hand, together with the assumptions on the statistical operator, respectively
the sequence of estimators, to ensure the qualitative robustness.
10
The next theorem generalizes this result to stochastic processes with values in [0, 1]d , d > 1,
instead of [0, 1] ⊂ R. Therefore, for example, the bootstrap version of the SVM estimator
is qualitatively robust under weak conditions. The proof of the next theorem follows the
same lines as the proof of the theorem above, but another central limit theorem, which is
shown in Bühlmann (1994), is used. Therefore the assumptions on the mixing property of
the stochastic process are stronger and the random variables Zi , i ∈ N, are assumed to
have continuous marginal distributions. Again the bootstrap sample results of a moving
block bootstrap where ℓ(n) blocks of length b(n) are chosen, again assuming ℓ(n) · b(n) = n.
Moreover, let b(n) be a sequences of integers satisfying
1
b(n) = O(n 2 −a ) for some a > 0.
Theorem 2.7 Assume Z = [0, 1]d , d > 1. Let PN be a probability measure such that the
coordinate process (Zi )i∈N , Zi : Z N → Z is strongly stationary and α-mixing with
∞
X
1
(m + 1)8d+7 (α(σ(Z1 , . . . , Zi ), σ(Zi+m , . . .), PN )) 2 < ∞, i ∈ N.
(6)
m=0
Assume that Zi has continuous marginal distributions for all i ∈ N. Define the set of
probability measures P ⊂ M(Z) such that the coordinate process is strongly stationary and
α-mixing as in (6).
Let H be a Polish space, wit some metric dH , (Sn )n∈N be a sequence of estimators such that
Sn : Z n → H is continuous and assume that Sn can be represented by a statistical operator
S : M(Z) → H via (1) which is additionally uniformly continuous with respect to dBL .
Then the sequence of estimators (Sn )n∈N is qualitatively robust at PN with respect to P.
Although the assumptions on the statistical operator S, compared to Theorem 2.2, were
strengthened in order to generalize the qualitative robustness to α-mixing sequences in
Theorem 2.6 and 2.7, M-estimators are still an example for qualitative robust estimators
if the sample space (Z, dZ ), Z ⊂ R is compact. The compactness of (Z, dZ ) implies the
compactness of the space (M(Z), dBL ), see Parthasarathy (1967, Theorem 6.4). As the
statistical operator S is continuous, the compactness of M(Z) implies the uniform continuity
of S. Another example of M-estimators which are uniformly continuous even if the input
space is not compact is given in Cuevas and Romo (1993, Theorem 4).
Acknowledgements: This research was partially supported by the DFG Grant 291/2-1
"Support Vector Machines bei stochastischer Unabhängigkeit". Moreover I would like to
thank Andreas Christmann for helpful discussions on this topic.
3
Proofs
This section contains the proofs of the main theorems and corollaries.
11
3.1
Proofs of Section 2.1
Before proving Theorem 2.1, we state a rather technical lemma, connecting the product
measure
⊗ni=1 P i ∈ M(Z n ) of independent random variables to their mixture measure
1 Pn
i
i=1 P ∈ M(Z). Let (Z, dZ ) be a Polish space.
n
Lemma 3.1 Let Pn , Qn ∈ M(Z n ) such that Pn = ⊗ni=1 P i and Qn = ⊗ni=1 Qi , P i , Qi ∈
M(Z), i ∈ N. Then for all δ > 0:
!
n
n
1X i 1X i
dBL (Pn , Qn ) ≤ δ ⇒ dBL
P ,
Q ≤ δ.
n
n
i=1
i=1
Proof: Let BL1 be the set of bounded Lipschitz functions with kf kBL ≤ 1.By assumption
we have dBL (Pn , Qn ) ≤ δ. Moreover for a function f : Z → R:
Z
Z
Z
i
f (zi ) dP i (zi ) d ⊗j6=i P j (zj ) .
(7)
f (zi ) dP (zi ) =
Z n−1
Z
Z
Then,
"
" n
# Z
#
n
1X i
1X i
f (zi ) d
f (zi ) d
sup
P (zi ) −
Q (zi )
n
n
Z
f ∈BL1 (Z) Z
i=1
i=1
Z
n Z
1X
i
i
=
sup
f (zi ) dQ (zi )
f (zi ) dP (zi ) −
f ∈BL1 (Z) n i=1
Z
Z
Z
n Z
1X
(7)
= sup
f (zi ) dP i (zi ) d ⊗j6=i P j (zj )
f ∈BL1 (Z) n i=1
Z n−1 Z
Z
Z
i
j
f (zi ) dQ (zi ) d ⊗j6=i Q (zj )
−
Z
=
sup
f ∈BL1 (Z)
n
Z n−1 Z
n Z
X
1
n
i=1
1X
≤
sup
n
f ∈BL1 (Z)
i=1
Z
Zn
Zn
f (zi ) d
f (zi ) d
⊗nj=1 P j (zj )
⊗nj=1 P j (zj )
−
−
Z
Zn
Z
Zn
f (zi ) d
⊗nj=1 Qj (zj )
f (zi ) d ⊗nj=1 Qj (zj ) .
Now every function f ∈ BL1 (Z) can be identified as a function f˜ : Z n → Z, (z1 , . . . , zn ) 7→
f˜(z1 , . . . , zn ) := f (zi ). This function is also Lipschitz continuous on Z n :
|f˜(z1 , . . . , zn )−f˜(z1′ , . . . , zn′ )| = |f (zi ) − f (zi′ )|
≤ |f |1 d(zi , zi′ ) ≤ |f |1 (dZ (z1 , z1′ ) + . . . + dZ (zi , zi′ ) + . . . + dZ (zn , zn′ )),
12
where dZ (z1 , z1′ ) + . . . + dZ (zi , zi′ ) + . . . + dZ (zn , zn′ ) induces the product topology on Z n .
That is f˜ ∈ BL1 (Z n ). Note that this is also true for every p-product metric dn,p in Z n ,
1 ≤ p ≤ ∞, as they are strongly equivalent. Hence,
!
Z
Z
n
n
n
1X
1X i 1X i
g dQn
g dPn −
P ,
Q ≤
sup
dBL
n
n
n
Zn
g∈BL1 (Z n ) Z n
i=1
i=1
i=1
n
1X
≤
dBL (Pn , Qn ) ≤ δ,
n
i=1
which yields the assertion.
Proof of Theorem 2.2: To prove Theorem 2.2 we first use the triangle inequality to
split the bounded Lipschitz distance between the distribution of the estimator Sn , n ∈ N,
into two parts regarding the distribution of the estimator under the joint distribution Pn of
(Z1 , . . . , Zn ):
dBL (LPn∗ (Sn ), LQ∗n (Sn )) ≤ dBL (LPn∗ (Sn ), LPn (Sn )) + dBL (LPn (Sn ), LQ∗n (Sn )) .
{z
} |
{z
}
|
I
II
Then the representation of the estimator Sn by the statistical operator S and the continuity of this operator in P together with the Varadarajan property and the independence
assumption on the stochastic process yield the assertion.
First we regard part I: Define the distribution PN ∈ M(Z N ) and let PN∗ be the bootstrap
approximation of PN . Define, for n ∈ N, the random variables
Wn : Z N → Z n , Wn = (Z1 , . . . , Zn ), zN 7→ Wn (zN ) = wn = (z1 , . . . , zn ), and
Wn′ : Z N → Z n , Wn′ = (Z1′ , . . . , Zn′ ), zN 7→ wn′ ,
such that Wn (PN ) = Pn and Wn′ (PN∗ ) = Pn∗ .
Denote the bootstrap sample by Wn∗ := (Z1∗ , . . . , Zn∗ ), Wn∗ : Z N → Z n , zN 7→ wn∗ .
As Efron’s empirical bootstrap is used, the bootstrap sample, which is chosen
Pvia resampling
with replacement out of Z1 , . . . , Zℓ , ℓ ∈ N, has distribution Zi∗ ∼ PWℓ = 1ℓ ℓj=1 δZj , i ∈ N,
approximation of Pℓ , ℓ ∈ N,
respectively Wn∗ := (Z1∗ , . . . , Zn∗ ) ∼ ⊗ni=1 PWℓ . The bootstrap P
is the empirical measure of the bootstrap sample Pℓ∗ = ⊗ℓi=1 n1 nj=1 δZj∗ .
∗ , and W′ by K ∈ M(Z N × Z N × Z N ).
Further denote the joint distribution of WN , WN
N
N
Then, KN has marginal distributions KN (B1 × Z N × Z N ) = PN (B1 ) for all B1 ∈ B ⊗N ,
KN (Z N × B2 × Z N ) = ⊗i∈N PWn (B2 ) for all B2 ∈ B ⊗N , and KN (Z N × Z N × B3 ) = PN∗ (B3 )
for all B3 ∈ B ⊗N .
Then,
LPn (Sn ) = Sn (Pn ) = Sn ◦ Wn (PN ) and LPn∗ (Sn ) = Sn (Pn∗ ) = Sn ◦ Wn′ (PN∗ )
13
and therefore
dBL (LPn∗ (Sn ), LPn (Sn )) = dBL (L(Sn ◦ Wn′ ), L(Sn ◦ Wn )).
By assumption the coordinate process (Zi )i∈N consists of independent random variables,
hence we have Pn = ⊗ni=1 P i , for P i = Zi (PN ), i ∈ N.
Moreover (Z, dZ ) is assumed to be a totally bounded metric space. Then, due to Dudley et al.
(1991, Proposition 12), the set BL1 (Z, dZ ) is a uniform Glivenko-Cantelli class. That is, if
Zi ∼ P i.i.d. i ∈ N, we have for all η > 0:
N
lim sup PN
zN ∈ Z | sup dBL (PWm (zN ) , P ) > η
= 0.
n→∞ P ∈M(Z)
m≥n
∗ ), m ∈ N, which is found by resampling
Applying this to the bootstrap sample (Z1∗ , . . . , Zm
with replacement out of the original sample (Z1 , . . . , Zn ), we have, for all wn ∈ Z n ,
lim
sup ⊗i∈N Pwn
zN ∈ Z N | sup dBL (PWm
∗ (z ) , Pwn ) > η
= 0.
N
n→∞ P
m≥n
wn ∈M(Z)
Let ε > 0 be arbitrary but fixed. Then, for every δ0 > 0 there is n1 ∈
n ≥ n1 and all Pwn ∈ M(Z):
δ0
n
∗
n
∗
⊗i=1 Pwn
wn ∈ Z | dBL (Pwn , Pwn ) ≤
≥1−
4
N such that for all
ε
.
8
(8)
And, using the same argumentation
for the sequence of random variables Zi′ , i ∈ N, which
1 Pn
are i.i.d. and have distribution n i=1 δZi∗ = PWn∗ :
∗
N
lim
sup PN
zN ∈ Z | sup dBL (PWm
′ (z ) , Pw∗ ) > η
= 0.
N
n
n→∞ P
∗ ∈M(Z)
wn
m≥n
Respectively, for every δ0 > 0 there is n2 ∈ N such that for all n ≥ n2 and all Pwn∗ ∈ M(Z):
δ0
ε
∗
′
n
Pn
wn ∈ Z | dBL (Pwn′ , Pwn∗ ) ≤
(9)
≥1− .
2
8
As the process (Zi )i∈N is a strong Varadarajan process by assumption, there exists a probability measure P ∈ M(Z) such that
dBL (PWn , P ) −→ 0 almost surely with respect to PN , n → ∞.
That is, for every δ0 > 0 there is n3 ∈ N such that for all n ≥ n3 :
ε
δ0
n
≥1− .
Pn
wn ∈ Z | dBL (Pwn , P ) ≤
2
4
14
(10)
The continuity of the statistical operator S : M(Z) → H in P ∈ M(Z) yields: for every
ε > 0 there exists δ0 > 0 such that for all Q ∈ M(Z):
dBL (P, Q) ≤ δ0
⇒
ε
dH (S(P ), S(Q)) ≤ .
4
(11)
As the Prohorov metric πdH is bounded by the Ky Fan metric, see Dudley (1989, Theorem
11.3.5) we conclude:
πdH (LPn∗ (Sn ), LPn (Sn )) = πdH (Sn ◦ Wn′ , Sn ◦ Wn )
≤ inf ε̃ > 0 | KN dH (Sn ◦ Wn′ , Sn ◦ Wn ) > ε̃ ≤ ε̃
= inf ε̃ > 0 | (Wn , Wn∗ , Wn′ )(KN ) (wn , wn∗ , wn′ ) ∈ Z n × Z n × Z n |
dH (Sn (wn′ ), Sn (wn )) > ε̃, wn∗ ∈ Z n ≤ ε̃ .
Due to the definition of the statistical operator S, this is equivalent to
inf ε̃ > 0 | (Wn , Wn∗ , Wn′ )(KN ) (wn , wn∗ , wn′ ) ∈ Z n × Z n × Z n |
dH (S(Pwn′ ), S(Pwn )) > ε̃, wn∗ ∈ Z n
(12)
≤ ε̃ .
The triangle inequality
dH (S(Pwn′ ), S(Pwn )) ≤ dH (S(Pwn′ ), S(P )) + dH (S(P ), S(Pwn )),
and the continuity of the statistical operator S, see (11), then yield, for all ε > 0,
o
n
ε
(Wn , Wn∗ , Wn′ )(KN ) (wn , wn∗ , wn′ ) ∈ Z n × Z n × Z n | dH (S(Pwn′ ), S(Pwn )) > , wn∗ ∈ Z n
2
n
ε
∗
′
∗
′
n
n
n
≤ (Wn , Wn , Wn )(KN ) (wn , wn , wn ) ∈ Z × Z × Z | dH (S(Pwn′ ), S(P )) >
4
o
ε
or dH (S(P ), S(Pwn )) > , wn∗ ∈ Z n
4
(11)
≤ (Wn , Wn∗ , Wn′ )(KN ) (wn , wn∗ , wn′ ) ∈ Z n × Z n × Z n | dBL (Pwn′ , P ) > δ0
or dBL (P, Pwn ) > δ0 , wn∗ ∈ Z n }) .
Using the triangle inequality,
and
dBL (Pwn′ , P ) ≤ dBL (Pwn′ , Pwn∗ ) + dBL (Pwn∗ , P )
dBL (Pwn∗ , P ) ≤ dBL (Pwn∗ , Pwn ) + dBL (Pwn , P ),
15
(13)
(14)
gives for all n ≥ max{n1 , n2 , n3 }:
(Wn , Wn∗ , Wn′ )(KN ) (wn , wn∗ , wn′ ) ∈ Z n × Z n × Z n | dBL (Pwn′ , P ) > δ0
or dBL (P, Pwn ) > δ0 , wn∗ ∈ Z n })
δ0
(wn , wn∗ , wn′ ) ∈ Z n × Z n × Z n | dBL (Pwn′ , Pwn∗ ) >
2
δ0
or dBL (Pwn∗ , P ) >
or dBL (P, Pwn ) > δ0
2
(14)
δ0
∗
′
≤ (Wn , Wn , Wn )(KN )
(wn , wn∗ , wn′ ) ∈ Z n × Z n × Z n | dBL (Pwn′ , Pwn∗ ) >
2
δ0
δ0
or dBL (P, Pwn ) >
or dBL (Pwn∗ , Pwn ) >
4
4
δ0
δ0
∗
′
n
n
≤ Pn
wn ∈ Z | dBL (Pwn′ , Pwn∗ ) >
wn ∈ Z | dBL (Pwn , P ) >
+ Pn
2
4
δ0
+ ⊗ni=1 Pwn
wn∗ ∈ Z n | dBL (Pwn∗ , Pwn ) >
4
(8),(9),(10) ε
ε ε
ε
<
+ +
=
.
8 4 8
2
(13)
≤ (Wn , Wn∗ , Wn′ )(KN )
Hence, for all ε > 0 there are n1 , n2 , n3 ∈ N such that vor all n ≥ max{n1 , n2 , n3 }, the
infimum in (12) is bounded by 2ε . Therefore
πdH (LPn∗ (Sn ), LPn (Sn )) <
ε
.
2
The equivalence between the Prohorov metric and the bounded Lipschitz metric for Polish
spaces, see Huber (1981, Chapter 2, Corollary 4.3), yields the existence of n0,1 ∈ N such
that for all n ≥ n0,1 :
ε
(15)
dBL (LPn∗ (Sn ), LPn (Sn )) < .
2
To prove the convergence of the term in part II, consider the distribution QN ∈ M(Z N ) and
let Q∗N be the bootstrap approximation of QN . Define, for n ∈ N, the random variables
W̃n : Z N → Z n , W̃n = (Z̃1 , . . . , Z̃n ), zN 7→ w̃n with distribution W̃n (QN ) = Qn ,
W̃n′ : Z N → Z n , W̃n′ = (Z̃1′ , . . . , Z̃n′ ), zN 7→ w̃n′ , with distribution W̃n′ (Q∗N ) = Q∗n , and
the bootstrap sample
W̃n∗ : Z N → Z n , W̃n∗ = (Z̃1∗ , . . . , Z̃n∗ ), zN 7→ w̃n∗ , with distribution
P
ℓ
⊗ni=1 QW̃ℓ = ⊗ni=1 1ℓ i=1 δZ̃i .
Moreover let K̃N ∈ M(Z N × Z N × Z N × Z N ) denote the joint distribution of WN , W̃N ,
∗ , and W̃′ . Then, K̃ ∈ M(Z N × Z N × Z N × Z N ) has marginal distributions P , Q ,
W̃N
N
N
N
N
⊗i∈N QW̃n , and Q∗N .
16
First, similar to the argumentation for part I, Efron’s bootstrap and Dudley et al. (1991,
Proposition 12) give for w̃n ∈ Z n :
N
⊗n∈N Qw̃n
lim
sup
zN ∈ Z | sup dBL (QW̃∗ (zN ) , Qw̃n ) > η
= 0.
n→∞ Q
m
m≥n
w̃n ∈M(Z)
Hence, for arbitrary, but fixed ε > 0, for every δ0 > 0 there is n4 ∈ N such that for all
n ≥ n4 and all Qw̃n ∈ M(Z):
δ0
ε
∗
n
n
w̃n ∈ Z | dBL (Qw̃n∗ , Qw̃n ) ≤
⊗i=1 Qw̃n
(16)
≥1− .
6
10
Further,
lim
n→∞ Q
sup
∗ ∈M(Z)
w̃n
Q∗N
zN ∈ Z N | sup dBL (QW̃′
m≥n
m
∗
(zN ) , Qw̃n ) > η
= 0.
Respectively,
for every δ0 > 0 there is n5 ∈ N such that for all n ≥ n5 and all Qw̃n∗ =
1 Pn
∗
i=1 δz̃i ∈ M(Z):
n
Q∗n
δ0
ε
′
n
w̃n ∈ Z | dBL (Qw̃n′ , Qw̃n∗ ) ≤
≥1− .
6
10
(17)
Moreover, as the random variables Zi , Zi ∼ P i , i ∈ N,
Pare independent, the bounded
Lipschitz distance between the empirical measure and n1 ni=1 P i can be bounded, due to
Dudley et al. (1991, Theorem 7). As totally bounded spaces are particularly separable,
see Denkowski et al. (2003, below Corollary 1.4.28), Dudley et al. (1991, Proposition 12)
provides that BL1 (Z, dZ ) is a uniform Glivenko-Cantelli class. The proof of this proposition
does not depend on the distributions of the random variables Zi , i ∈ N, and is therefore also
valid for independent and not necessarily identically distributed random variables. Hence
Dudley et al. (1991, Theorem 7) yields for all η > 0:
!
)!
(
n
X
1
= 0,
zN ∈ Z N | sup dBL PWm (zN ) ,
PN
lim
sup
Pi > η
n→∞ (P i )
n
N
m≥n
i∈N ∈(M(Z))
i=1
as long as the assumptions of Proposition 12 in Dudley et al. (1991) apply. As BL1 (Z, dZ )
is bounded, we have F0 = BL1 (Z, dZ ), see Dudley et al. (1991, page 499, before Proposition
10), hence it is sufficient to show that BL1 (Z, dZ ) is image admissible Suslin. By assumption (Z, dZ ) is totally bounded, hence BL1 (Z, dZ ) is separable with respect to k · k∞ , see
Strohriegl and Hable (2016, Lemma 3). As f ∈ BL1 (Z, dZ ) implies kf k∞ ≤ 1, the space
BL1 (Z, dZ ) is a bounded subset of (Cb (Z, dZ ), k · k∞ ), which is due to Dudley (1989, Theorem 2.4.9) a complete space. Now, BL1 (Z, dZ ) is a closed subset of (Cb (Z, dZ ), k · k∞ ) with
17
respect to k · k∞ . Hence BL1 (Z, dZ ) is complete, due to Denkowski et al. (2003, Proposition
1.4.17). Therefore BL1 (Z, dZ ) is separable and complete with respect to k · k∞ and particularly a Suslin space, see Dudley (2014, p.229). As Lipschitz continuous functions are also
equicontinuous, Dudley (2014, Theorem 5.28 (c)) gives that BL1 (Z, dZ ) is image admissible
Suslin.
Hence, Dudley et al. (1991, Theorem 7) yields
!
n
1X i
−→ 0 almost surely with respect to PN , n → ∞,
P
dBL PWn ,
sup
n
(P i )i∈N ∈(M(Z))N
i=1
and
n
dBL
sup
(Qi )i∈N ∈(M(Z))N
1X i
QW̃n ,
Q
n
i=1
!
−→ 0 almost surely with respect to QN , n → ∞.
That is, there is n6 ∈ N such that for all n ≥ n6
(
wn ∈ Z n | dBL
Pn
and Qn
(
w̃n ∈ Z n | dBL
n
1X i
P wn ,
P
n
1
Qw̃n ,
n
i=1
n
X
i=1
Qi
!
δ0
≤
6
)!
≥1−
ε
,
10
(18)
!
δ0
≤
6
)!
≥1−
ε
.
10
(19)
Moreover, due to Lemma 3.1, we have
δ0
dBL (Pn , Qn ) ≤
6
n
⇒
dBL
n
1X i 1X i
P ,
Q
n
n
i=1
i=1
!
≤
δ0
.
6
(20)
Then the strong Varadarajan property of (Zi )i∈N yields that there is n7 ∈ N such that for
all n ≥ n7 :
ε
δ0
n
(21)
Pn
≥1− .
wn ∈ Z | dBL (Pwn , P ) ≤
6
10
Similar to the argumentation for part I we conclude, using again the boundedness of the
Prohorov metric πdH by the Ky Fan metric, see Dudley (1989, Theorem 11.3.5):
πdH (LPn (Sn ), LQ∗n (Sn )) = πdH (Sn ◦ Wn , Sn ◦ W̃n′ )
= inf{ε̃ > 0 | (Wn , W̃n , W̃n∗ , W̃n′ )(K̃N ) (wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
dH (Sn (wn ), Sn (w̃n′ )) > ε̃, w̃n , w̃n∗ ∈ Z n ≤ ε̃}.
Due to the definition of the statistical operator S, this is equivalent to
inf{ε̃ > 0 | (Wn , W̃n , W̃n∗ , W̃n′ )(K̃N ) (wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
dH (S(Pwn ), S(Qw̃n′ )) > ε̃, w̃n , w̃n∗ ∈ Z n ≤ ε̃}.
18
Moreover the triangle inequality yields
dH (S(Pwn ), S(Qw̃n′ )) ≤ dH (S(Pwn ), S(P )) + dH (S(P ), S(Qw̃n′ )).
Hence, for all n ≥ max{n4 , n5 , n6 , n7 }, we obtain
n
(Wn , W̃n , W̃n∗ , W̃n′ )(K̃N ) (wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
o
ε
dH (S(Pwn ), S(Qw̃n′ )) > , w̃n , w̃n∗ ∈ Z n
2
n
≤ (Wn , W̃n , W̃n∗ , W̃n′ )(K̃N )
(wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
o
ε
ε
dH (S(Pwn ), S(P )) > or dH (S(P ), S(Qw̃n′ )) > , w̃n , w̃n∗ ∈ Z n .
4
4
The continuity of the statistical operator S in P , see (11), gives
dBL (P, QW̃′ ) ≤ δ0
⇒
and dBL (P, PWn ) ≤ δ0
⇒
n
ε
,
4
ε
dH (S(P ), S(PWn )) ≤ .
4
dH (S(P ), S(QW̃′ )) ≤
n
Further, the triangle inequality yields
n
dBL (P, Qw̃n′ ) ≤ dBL (P, Pwn ) + dBL
+ dBL
1X i
P
P wn ,
n
i=1
!
n
1X i
Q , Qw̃n
n
i=1
!
+ dBL
n
n
i=1
i=1
1X i 1X i
P ,
Q
n
n
!
+ dBL (Qw̃n , Qw̃n∗ ) + dBL (Qw̃n∗ , Qw̃n′ ).
(22)
Therefore we conclude, for all n ≥ max{n4 , n5 , n6 , n7 },
(Wn , W̃n , W̃n∗ , W̃n′ )(K̃N ) (wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
o
ε
ε
dH (S(Pwn ), S(P )) > or dH (S(P ), S(Qw̃n′ )) > , w̃n , w̃n∗ ∈ Z n
4
4
(11)
≤ (Wn , W̃n , W̃n∗ , W̃n′ )(K̃N ) (wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
dBL (Pwn , P ) > δ0 or dBL (P, Qw̃n′ ) > δ0 , w̃n , w̃n∗ ∈ Z n
(22)
(wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
!
n
δ0
1X i
δ0
>
P
or dBL Pwn ,
dBL (Pwn , P ) >
6
n
6
i=1
!
!
n
n
n
δ0
1X i
1X i 1X i
δ0
or dBL
P ,
Q >
Q , Qw̃n >
or dBL
n
n
6
n
6
i=1
i=1
i=1
δ0
δ0
or dBL (Qw̃n∗ , Qw̃n′ ) >
or dBL (Qw̃n , Qw̃n∗ ) >
.
6
6
≤ (Wn , W̃n , W̃n∗ , W̃n′ )(K̃N )
19
P
P
Now, assume dBL (Pn , Qn ) ≤ δ60 , then (20) yields dBL n1 ni=1 P i , n1 ni=1 Qi ≤ δ60 , therefore this term can be omitted. Note that this is only proven for the p-product metrics on
Z n and not for the metric dn from (4). For this metric we need a different argumentation,
which is stated below the next calculation.
Hence, for all n ≥ max{n4 , n5 , n6 , n7 },
(Wn , W̃n , W̃n∗ , W̃n′ )(K̃N ) (wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
dH (S(Pwn ), S(Qw̃n′ )) > ε, w̃n , w̃n∗ ∈ Z n
(20)
(wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
!
!
n
n
δ0
δ0
1X i
1X i
δ0
dBL (Pwn , P ) >
>
or dBL Pwn ,
P
or dBL
Q , Qw̃n >
6
n
6
n
6
i=1
i=1
δ0
δ0
or dBL (Qw̃n∗ , Qw̃n′ ) >
or dBL (Qw̃n , Qw̃n∗ ) >
6
6
δ0
≤ Pn
wn ∈ Z n | dBL (Pwn , P ) >
6
!
)!
(
n
X
δ
1
0
Pi >
wn ∈ Z n | dBL Pwn ,
+ Pn
n
6
i=1
!
(
)!
n
δ0
1X i
n
+ Qn
w̃n ∈ Z | dBL
Q , Qw̃n >
n
6
i=1
δ0
∗
n
n
w̃n ∈ Z | dBL Qw̃n , Qw̃n∗ >
+ ⊗i=1 Qw̃n
6
δ
0
w̃n′ ∈ Z n | dBL Qw̃n∗ , Qw̃n′ >
+ Q∗n
6
(16),(17)(18),(19),(21) ε
ε
ε
ε
ε
ε
+
+
+
+
= .
<
10 10 10 10 10
2
≤ (Wn , W̃n , W̃n∗ , W̃n′ )(K̃N )
In order to show the above bound for the metric dn , see (4), on Z n , we use another variant
of the triangle inequality in (22):
dBL (P, Qw̃n′ ) ≤ dBL (P, Pwn ) + dBL (Pwn , Qw̃n ) + dBL (Qw̃n , Qw̃n∗ ) + dBL (Qw̃n∗ , Qw̃n′ ). (23)
δ2
Assume dBL (Pn , Qn ) ≤ 640 . Then, the strong equivalence between the Prohorov metric
and the bounded Lipschitz metric
on Polish spaces, see Huber (1981, Chapter 2, Corollary
p
4.3), yields πdn (Pn , Qn ) ≤ dBL (Pn , Qn ) ≤ δ80 . Due to Dudley (1989, Theorem 11.6.2),
probability measure µ ∈ M(Z n × Z n ) with
πdn (Pn , Qn ) ≤ δ80 implies the existence of a n
o
≤
(wn , w̃n ) ∈ Z n × Z n | dn (wn , w̃n ) > δ80
δ0
δ0
δ0
1 Pn
1 Pn
i=1 δzi , n
i=1 δz̃i ≤ 8 and
8 . By a simple calculation dn (wn , w̃n ) ≤ 8 implies πdn n
we have:
δ0
δ0
n
n
≤ .
µ
(wn , w̃n ) ∈ Z × Z | πdn (Pwn , Qw̃n ) >
8
8
marginal distributions Pn and Qn , such that µ
20
Again the equivalence between the metrics π and dBL yields:
µ
δ0
δ0
(wn , w̃n ) ∈ Z n × Z n | dBL (Pwn , Qw̃n ) >
≤ .
4
8
∗ , and W̃′ such that the distriNow we choose the joint distribution K̃N of WN , W̃N , W̃N
N
N
N
n
n
n
bution of (Wn , W̃n ) : Z × Z → Z × Z is µ ∈ M(Z × Z n ). Then we conclude:
(Wn , W̃n , W̃n∗ , W̃n′ )(K̃N ) (wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
o
ε
ε
dH (S(Pwn ), S(P )) > or dH (S(P ), S(Qw̃n′ )) > , w̃n , w̃n∗ ∈ Z n
4
4
(11),(23)
≤ (Wn , W̃n , W̃n∗ , W̃n′ )(K̃N ) (wn , w̃n , w̃n∗ , w̃n′ ) ∈ Z n × Z n × Z n × Z n |
δ0
δ0
dBL (Pwn , P ) >
or dBL (Pwn , Qw̃n ) >
4
4
δ0
δ0
or dBL (Qw̃n∗ , Qw̃n′ ) >
.
or dBL (Qw̃n , Qw̃n∗ ) >
4
4
δ0
n
≤
Pn
wn ∈ Z | dBL (Pwn , P ) >
4
δ0
+µ
(wn , w̃n ) ∈ Z n × Z n | dBL (Pwn , Qw̃n ) >
4
δ0
+ ⊗ni=1 Qw̃n
w̃n∗ ∈ Z n | dBL Qw̃n , Qw̃n∗ >
4
δ0
w̃n′ ∈ Z n | dBL Qw̃n∗ , Qw̃n′ >
+ Q∗n
.
4
Now, adapting the inequalities in (16), (17), and (21) in ε respectively n yields the boundδ2
edness of the above term by 2ε for dBL (Pn , Qn ) ≤ 640 and for all n ≥ {n4 , n5 , n7 }.
Now we can go on with the proof similar for both kinds of metrics on Z n .
The equivalence between the Prohorov metric and the bounded Lipschitz metric on Polish
spaces, see Huber (1981, Chapter 2, Corollary 4.3), yields the existence of n0,2 ∈ N such
that for all n ≥ n0,2 , dBL (Pn , Qn ) ≤
δ0
6
(respectively dBL (Pn , Qn ) ≤
ε
dBL (LPn (Sn ), LQ∗n (Sn )) < .
2
δ02
64 )
implies
(24)
Now, (15) and (24) yield for all n ≥ max{n0,1 , n0,2 }:
dBL (LPn∗ (Sn ), LQ∗n (Sn )) < ε.
(25)
Recall that LPn∗ (Sn ) =: ζn and LQ∗n (Sn ) =: ξn are random quantities with values in M(H).
Hence (25) is equivalent to
E dBL (LPn∗ (Sn ), LQ∗n (Sn )) < ε, for all n ≥ max{n0,1 , n0,2 },
21
respectively
E [dBL (ζn , ξn )] < ε, for all n ≥ max{n0,1 , n0,2 }.
Therefore, for all f ∈ BL1 (M(Z)) and for all n ≥ max{n0,1 , n0,2 }:
Z
Z
f d(L(ζn )) − f d(L(ξn )) = |Ef (ζn ) − Ef (ξn )| ≤ E |f (ζn ) − f (ξn )|
≤ E (|f |1 dBL (ζn , ξn ))
< ε,
by a variant of Strassen’s Theorem, see Huber (1981, Chapter 2, Theorem 4.2, (2)⇒(1)).
That is,
dBL (L(LPn∗ (Sn )), L(LQ∗n (Sn ))) < ε for all n ≥ max{n0,1 , n0,2 }.
Hence for every ε > 0 we find δ =
δ0
6
and n0 = max{n0,1 , n0,2 } such that for all n ≥ n0 :
dBL (Pn , Qn ) < δ
⇒
dBL (L(LPn∗ (Sn )), L(LQ∗n (Sn ))) < ε,
which yields the assertion.
Proof of Example 1:
Without any restriction we assume a = 0. Otherwise regard the process Zi − a, i ∈ N. By
assumption, the random variables Zi , i ∈ N, are independent. Hence IB ◦ Zi , i ∈ N, are
independent, see for example Hoffmann-Jørgensen (1994, Theorem 2.10.6) for all measurable
B ∈ B, as IB is a measurable function. According to Steinwart et al. (2009, Proposition
2.8), (Zi )i∈N
P satisfies the SLLNE if there is a probability measure P in M(Z) such that
limn→∞ n1 ni=1 Eµ IB ◦ Zi = P (B) for all measurable B ∈ B. Hence:
n
n
1X
1X
E µ I B ◦ Zi =
n
n
i=1
i=1
1
Z
n
1X
IB dZi (µ) =
n
i=1
Z
IB fi dλ1 ,
2
where fi (x) = √12π e− 2 (x−ai ) denotes the density of the normal distribution N (0, 1) with
respect to the Lebesgue measure λ1 . Moreover define g : R → R by
− 1 (x+c)2
, x < −c
e 2
1
√ ,
−c ≤ x ≤ c
x ∈ R.
g(x) =
2π
− 21 (x−c)2
, c<x
e
Therefore |fi | ≤ |g|, for all i ∈ N, g is integrable and due to Lebesgue’s Theorem, see for
example Hoffmann-Jørgensen (1994, Theorem 3.6):
n
1X
n→∞ n
lim
i=1
Z
IB fi dλ1 = lim
n→∞
Z
n
1X
IB fi dλ1 =
n
i=1
22
Z
n
1X
IB fi dλ1 .
n→∞ n
lim
i=1
(26)
1 2
We have fi → f0 , where f0 = √12π e− 2 x for all x ∈ R, as ai → 0 and therefore the Lemma of
Kronecker,P
see for example Hoffmann-Jørgensen (1994, Theorem 4.9, Equation 4.9.1) yields:
limn→∞ n1 ni=1 fi (x) = f0 (x) for all x ∈ X .
Now (26) yields the SLLNE:
n
1X
n→∞ n
lim
i=1
Z
IB fi dλ1 =
Z
IB f0 dλ1 = P (B), for al B ∈ B.
With Strohriegl and Hable (2016, Zheorem 2) the Varadarajan property is given.
Proof of Example 2:
Similar to the proof of Example 1, we first show the SLLNE, that is there exists a probability
measure P ∈ M(Z) such that
n
1X
n→∞ n
lim
i=1
Z
IB ◦ Zi dµ = P (B), for all measurable B ⊂ Ω.
Now let B ⊂ Ω be an arbitrary measurable set. Then:
n
n Z
n Z
1X
1X
i
IB dP = lim
IB d[(1 − εi )P + εi P̃ i ]
IB ◦ Zi dµ = lim
n→∞ n
n→∞ n
Z
Z
i=1
i=1
i=1
Z
Z
n Z
n
n
X
X
1X
1
1
IB dP + lim
IB dP̃ i .
(27)
IB dP − lim
εi
εi
= lim
n→∞ n
n→∞ n
n→∞ n
Z
Z
Z
1X
lim
n→∞ n
Z
i=1
As, 0 ≤
1
n
Pn
i=1 εi
R
i=1
IB dP ≤
1
n
n
Pn
i=1 εi
1X
lim
εi
n→∞ n
i=1
Z
i=1
and εi → 0, we have
n
1X
IB dP ≤ lim
εi −→ 0, n → ∞
n→∞ n
i=1
and similarly
n
1X
εi
lim
n→∞ n
i=1
Z
n
1X
εi −→ 0,
n→∞ n
IB dP̃ i ≤ lim
i=1
n → ∞.
Hence (27) yields
n
n
i=1
i=1
1X
1X
lim
IB ◦ Zi = lim
n→∞ n
n→∞ n
Z
IB dP = P (B)
and therefore, due to Strohriegl and Hable (2016, Theorem 2), the assertion.
23
Proof of Corollary 2.4:
Due to Example 2, the stochastic process is a Varadarajan process. Hable and Christmann
(2011, Theorem 3.2) ensures the continuity of the statistical operator S : M(Z) → H, P 7→
fL∗ ,P,λ for a fixed value λ ∈ (0, ∞). Moreover Hable and Christmann (2011, Corollary
3.4) yields the continuity of the estimator Sn : Z n → H, Dn 7→ fL∗ ,Dn ,λ for every fixed
λ ∈ (0, ∞). Hence for fixed λ > 0 the bootstrap approximation of the SVM estimator is
qualitatively robust, for the given assumptions. Moreover the proof of Theorem 2.2, equation
(25), and the equivalence between between bounded Lipschitz metric and Prokhorov distance
yield: for every ε > 0 there is δ > 0 such that there is n0 ∈ N such that for all n ≥ n0 and
if dBL (Pn , Qn ) ≤ δ:
π(LPn∗ (Sn ), LQ∗n (Sn )) < ε almost surely.
Similarly to the proof of the qualitative robustness in Strohriegl and Hable (2016, Theorem
4) we get: for every ε > 0 there is nε , such that for all n ≥ nε :
kfL∗ ,Dn ,λn − fL∗ ,Dn ,λ0 kH ≤
ε
.
3
And the same argumentation as in the proof of the qualitative robustness of the SVM
estimator for the non-i.i.d. case in Strohriegl and Hable (2016, Theorem 4) for the cases
n0 ≤ n ≤ nε and n > nε yields the assertion.
3.2
Proofs of Section 2.2
Proof of Theorem 2.6:
Proof of Theorem 2.6: Let PN∗ , Q∗N ∈ M(Z N ) be the bootstrap approximations of the
true distribution PN and the contaminated distribution QN . First, the triangle inequality
yields:
dBL (LPn∗ (Sn ), LQ∗n (Sn ))
≤ dBL (LPn∗ (Sn ), LPn (Sn )) + dBL (LPn (Sn ), LQn (Sn )) + dBL (LQn (Sn ), LQ∗n (Sn )) .
{z
} |
|
{z
} |
{z
}
I
II
III
First, we regard the term in part II. Let σ(Zi ),Pi ∈ N, be the σ-algebra generated by Zi .
Due to the assumptions on the mixing process m>n α(σ(Z1 , . . . , Zi ), σ(Zi+m , . . .), PN ) =
O(n−γ ), i ∈ N, γ > 0, the sequence (α(σ(Z1 , . . . , Zi ), σ(Zi+m , . . .), µ))m∈N is a null sequence. Moreover it is bounded by the definition of the α-mixing coefficient which, due to
24
the strong stationarity, does not depend on i. Therefore
n
n
n
n
1 XX
1 XX
α((Zi )i∈N , PN , i, j) = 2
α(σ(Zi ), σ(Zj ), PN )
n2
n
i=1 j=1
≤
2
n2
≤
2
n2
i=1 j=1
n
n X
X
i=1 j≥i
n
n X
X
α(σ(Zi ), σ(Zj ), PN )
α(σ(Z1 , . . . , Zi ), σ(Zj , . . .), PN )
i=1 j≥i
n n−i
2 XX
α(σ(Z1 , . . . , Zi ), σ(Zi+ℓ , . . .), PN )
= 2
n
i=1 ℓ=0
stationarity
≤
n
2X
α(σ(Z1 , . . . , Zi ), σ(Zi+ℓ , . . .), PN ), i ∈ N
n
ℓ=0
−→ 0, n → ∞.
Hence, the process is weakly α-bi-mixing with respect to PN , see Definition 2.5. Due to the
stationarity assumption,
the process (Zi )i∈N is additionally asymptotically mean stationary,
P
that is limn→∞ n1 ni=1 EIB ◦Zi = P (B) for all B ∈ A for a probability measure P . Therefore
the process satisfies the WLLNE, see Steinwart et al. (2009, Proposition 3.2), and therefore
is a weak Varadarajan process, see Strohriegl and Hable (2016, Theorem 2).
As the process is assumed to be a Varadarajan process and due to the assumptions on the sequence of estimators (Sn )n∈N , qualitative robustness of (Sn )n∈N is ensured by Strohriegl and Hable
(2016, Theorem 1). Together with the equivalence between the Prohorov metric and the
bounded Lipschitz metric for Polish spaces, see Huber (1981, Chapter 2, Corollary 4.3), it
follows:
For every ε > 0 there is δ > 0 such that for all n ∈ N and for all Qn ∈ M(Z n ) we have:
dBL (Pn , Qn ) < δ
⇒
ε
dBL (LPn (Sn ), LQn (Sn )) < .
3
This implies
ε
E [dBL (LPn (Sn ), LQn (Sn ))] < .
3
Hence the convergence of the term in part II is shown.
(28)
To prove the convergence of the term in part I, consider the distribution PN ∈ M(Z N )
and let PN∗ be the bootstrap approximation of PN , via the blockwise bootstrap. Define, for
n ∈ N, the random variables
Wn : Z N → Z n , Wn = (Z1 , . . . , Zn ), zN 7→ wn = (z1 , . . . , zn ), and
Wn′ : Z N → Z n , Wn′ = (Z1′ , . . . , Zn′ ), zN 7→ wn′ ,
such that Wn (PN ) = Pn and Wn′ (PN∗ ) = Pn∗ .
Moreover denote the bootstrap sample by Wn∗ : Z N → Z n , Wn∗ := (Z1∗ , . . . , Zn∗ ), zN 7→ wn∗ ,
25
and the distribution
of Wn∗ by P n . The blockwise bootstrap approximation of Pm , m ∈ N,
1 Pn
∗
m
is Pm = ⊗j=1 n i=1 δZi∗ , m ∈ N. Note that the sample Z1∗ , . . . , Zn∗ depends and on the
blocklength b(n) and on the number of blocks ℓ(n).
∗ , and W′ by K ∈ M(Z N × Z N × Z N ).
Further denote the joint distribution of WN , WN
N
N
Then, KN has marginal distributions KN (B1 × Z N × Z N ) = PN (B1 ) for all B1 ∈ B ⊗N ,
KN (Z N × B2 × Z N ) = P N (B2 ) for all B2 ∈ B ⊗N , and KN (Z N × Z N × B3 ) = PN∗ (B3 ) for all
B3 ∈ B ⊗N .
Then,
LPn (Sn ) = Sn (Pn ) = Sn ◦ Wn (PN ) and LPn∗ (Sn ) = Sn (Pn∗ ) = Sn ◦ Wn′ (PN∗ )
and therefore
dBL (LPn∗ (Sn ), LPn (Sn )) = dBL (L(Sn ◦ Wn′ ), L(Sn ◦ Wn )).
By assumption we have 0 ≤ zi ≤ 1, i ∈ N. Hence Zi (zN ) = zi ∈ [0, 1], i. e. Z = [0, 1], which
is a totally bounded metric space. Therefore the set BL1 ([0, 1]) is a uniform GlivenkoCantelli class, due to Dudley et al. (1991, Proposition 12). Similar to part I of the proof of
Theorem 2.2, the blockwise bootstrap structure and the Glivenko-Cantelli property yield:
lim
n→∞ P
sup
∗ ∈M(Z)
wn
PN∗
N
zN ∈ Z | sup dBL (PWm
′ (z ) , Pw∗ ) > η
N
n
m≥n
= 0.
Respectively, for fixed ε > 0, for every δ0 > 0 there is n1 ∈ N such that for all n ≥ n1 and
all Pwn∗ ∈ M(Z):
ε
δ0
∗
′
n
(29)
≥1− .
Pn
wn ∈ Z | dBL (Pwn′ , Pwn∗ ) ≤
2
6
P
P
Regard the process Gn (t) = √1n ni=1 I{Zi∗ ≤t} − √1n ni=1 I{Zi ≤t} , t ∈ R. Due to the assumptions on the process and on the moving block bootstrap, Theorem 2.3 in Peligrad (1998)
yields the almost sure convergence in distribution to a Brownian bridge G:
n
n
i=1
i=1
1 X
1 X
√
I{Zi∗ ≤t} − √
I{Zi ≤t} −→D G(t),
n
n
t∈R
(30)
almost surely with respect to PN , n → ∞, in the Skorohod topology on D[0, 1]. Here −→D
indicates convergence in distribution and D[0, 1] denotes the space of cadlag functions on
[0, 1], for details see for example Billingsley (1999, p. 121).
This is equivalent to
n
n
i=1
i=1
1 X
1 X
√
I{Zi∗ ≤t} − √
I{Zi ≤t} −→D G(t), almost surely with respect to PN , n → ∞,
n
n
26
for all continuity points t of G, see Billingsley (1999, (12.14), p. 124).
Multiplying by
√1
n
yields for any fixed continuity point t ∈ R :
n
n
i=1
i=1
1X
1X
1
I{Zi∗ ≤t} −
I{Zi ≤t} − √ G(t) −→D 0 almost surely with respect to PN , n → ∞.
n
n
n
As convergence in distribution to a finite constant implies convergence in probability, see
for example van der Vaart (1998, Theorem 2.7(iii)), and as √1n G(t) → 0 in probability, for
all t ∈ R:
n
n
i=1
i=1
1X
1X
I{Zi∗ ≤t} −
I{Zi ≤t} −→P 0 almost surely with respect to PN , n → ∞,
n
n
for all continuity points t of G, where −→P denotes the convergence in probability.
Hence, Dudley (1989, Theorem 11.12) yields the convergence of the corresponding probability measures:
!
n
n
1X
1X
dBL
δZi∗ ,
δZi −→P 0 almost surely with respect to PN , n → ∞.
n
n
i=1
i=1
Respectively
dBL (PWn∗ , PWn ) −→P 0 almost surely with respect to PN , n → ∞.
Define the set Bn = wn ∈ Z n | dBL (PWn∗ , Pwn ) −→P 0, n → ∞ . Hence,
o
n
=1
Pn (Bn ) = PN zN ∈ Z N | Wn (zN ) ∈ Bn
and, for all wn ∈ Bn , there is n2,wn ∈ N such that for all n ≥ n2,wn ∈ N:
δ0
ε
∗
n
< .
Pn
wn ∈ Z | dBL Pwn∗ , Pwn >
4
6
(31)
(32)
By assumption we have 0 ≤ zi ≤ 1, i ∈ N. Hence the space of probability measures
{Pwn | wn ∈ [0, 1]n } is a subset of M([0, 1]) and therefore tight, as [0,1] is a compact
space, see e. g. (Klenke, 2013, Example 13.28). Then Prohorov’s Theorem, see for example Billingsley (1999, Theorem 5.1) yields relative compactness of M([0, 1], dBL ) and in
particular the relative compactness of the set {Pwn | wn ∈ [0, 1]n }. As M([0, 1], dBL ) is
a complete space, see Dudley (1989, Theorem 11.5.5), relative compactness equals total
27
boundedness. That is, there exists a finite dense subset P̃ of {Pwn | wn ∈ [0, 1]n } such that
for all ρ > 0 and Pwn ∈ {Pwn | wn ∈ [0, 1]n } there is P̃ρ ∈ P̃ such that
(33)
dBL (P̃ρ , Pwn ) ≤ ρ.
The triangle inequality yields:
dBL Pwn∗ , Pwn ≤ dBL Pwn∗ , P̃ρ + dBL P̃ ρ , Pwn .
Define ρ = δ40 . Then (32) yields for every P̃ρ ∈ P̃ the existence of an integer n ≥ n2,P̃ ∈ N
such that, for all n ≥ n2,P̃ and all wn ∈ Bn :
Pn
δ0
wn∗ ∈ Z n | dBL Pwn∗ , Pwn >
2
δ
δ
0
0
∗
n
or dBL P̃ρ , Pwn >
wn ∈ Z | dBL Pwn∗ , P̃ρ >
≤ Pn
4
4
δ (32) ε
(33)
0
∗
n
≤ Pn
wn ∈ Z | dBL Pwn∗ , P̃ρ >
<
.
4
6
Hence, for all n ≥ n2 := maxP̃ ∈P̃ {n2,P̃ } and for all wn ∈ Bn , we have:
sup
Pwn ∈M(Z)
Pn
wn∗
n
∈ Z | dBL P
∗
wn
, P wn
δ0
>
2
ε
< .
6
(34)
Due to the uniform continuity of the operator S, for every ε > 0 there is δ0 > 0 such that
for all P, Q ∈ M(Z) :
dBL (P, Q) ≤ δ0
⇒
ε
dH (S(P ), S(Q)) ≤ .
3
(35)
Moreover, the triangle inequality yields:
dBL (Pwn′ , Pwn ) ≤ dBL (Pwn′ , Pwn∗ ) + dBL (Pwn∗ , Pwn ).
(36)
Again we use the relation between the Prohorov metric πdH and the Ky Fan metric, Dudley
(1989, Theorem 11.3.5):
πdH LPn∗ (Sn ), LPn (Sn )) = πdH (Sn ◦ Wn′ , Sn ◦ Wn )
o
o
n
n
∗
≤ ε̃
dH (Sn ◦ Wn′ , Sn ◦ Wn ) > ε̃, wN
∈ ZN
≤ inf ε̃ > 0 | KN
= inf ε̃ > 0 | (Wn , Wn∗ , Wn′ )(KN ) (wn , wn∗ , wn′ ) ∈ Z n × Z n × Z n |
dH (Sn (wn′ ), Sn (wn )) > ε̃, wn∗ ∈ Z n ≤ ε̃ .
28
Due to the definition of the statistical operator S, this is equivalent to
inf{ε̃ > 0 | (Wn , Wn∗ , Wn′ )(KN ) (wn , wn∗ , wn′ ) ∈ Z n × Z n × Z n |
dH (S(Pwn′ ), S(Pwn ) > ε̃, wn∗ ∈ Z n ≤ ε̃}.
Due to the uniform continuity of $S$, see (35), we obtain, for all $n \ge \max\{n_1, n_2\}$:
$$\begin{aligned}
&(W_n, W_n^*, W_n')(K_{\mathbb{N}})\Big(\Big\{ (w_n, w_n^*, w_n') \in Z^n \times Z^n \times Z^n \,\Big|\, d_H\big(S(P_{w_n'}), S(P_{w_n})\big) > \frac{\varepsilon}{3},\ w_n^* \in Z^n \Big\}\Big) \\
&\overset{(35)}{\le} (W_n, W_n^*, W_n')(K_{\mathbb{N}})\big(\big\{ (w_n, w_n^*, w_n') \in Z^n \times Z^n \times Z^n \,\big|\, d_{BL}(P_{w_n'}, P_{w_n}) > \delta_0,\ w_n^* \in Z^n \big\}\big) \\
&= (W_n, W_n^*, W_n')(K_{\mathbb{N}})\big(\big\{ (w_n, w_n^*, w_n') \in Z^n \times Z^n \times Z^n \,\big|\, \{w_n \notin B_n,\ d_{BL}(P_{w_n'}, P_{w_n}) > \delta_0\} \text{ or } \{w_n \in B_n,\ d_{BL}(P_{w_n'}, P_{w_n}) > \delta_0\},\ w_n^* \in Z^n \big\}\big) \\
&\le (W_n, W_n^*, W_n')(K_{\mathbb{N}})\big(\big\{ (w_n, w_n^*, w_n') \in Z^n \times Z^n \times Z^n \,\big|\, w_n \notin B_n,\ d_{BL}(P_{w_n'}, P_{w_n}) > \delta_0,\ w_n^* \in Z^n \big\}\big) \\
&\quad + (W_n, W_n^*, W_n')(K_{\mathbb{N}})\big(\big\{ (w_n, w_n^*, w_n') \in Z^n \times Z^n \times Z^n \,\big|\, w_n \in B_n,\ d_{BL}(P_{w_n'}, P_{w_n}) > \delta_0,\ w_n^* \in Z^n \big\}\big) \\
&\overset{(31)}{=} (W_n, W_n^*, W_n')(K_{\mathbb{N}})\big(\big\{ (w_n, w_n^*, w_n') \in Z^n \times Z^n \times Z^n \,\big|\, w_n \in B_n,\ d_{BL}(P_{w_n'}, P_{w_n}) > \delta_0,\ w_n^* \in Z^n \big\}\big).
\end{aligned}$$
The triangle inequality, (36), then yields for all $n \ge \max\{n_1, n_2\}$:
$$\begin{aligned}
&(W_n, W_n^*, W_n')(K_{\mathbb{N}})\big(\big\{ (w_n, w_n^*, w_n') \in Z^n \times Z^n \times Z^n \,\big|\, w_n \in B_n,\ d_{BL}(P_{w_n'}, P_{w_n}) > \delta_0,\ w_n^* \in Z^n \big\}\big) \\
&\overset{(36)}{\le} (W_n, W_n^*, W_n')(K_{\mathbb{N}})\Big(\Big\{ (w_n, w_n^*, w_n') \in Z^n \times Z^n \times Z^n \,\Big|\, \Big\{ w_n \in B_n \text{ and } d_{BL}(P_{w_n'}, P_{w_n^*}) > \frac{\delta_0}{2} \Big\} \text{ or } \Big\{ w_n \in B_n \text{ and } d_{BL}(P_{w_n^*}, P_{w_n}) > \frac{\delta_0}{2} \Big\} \Big\}\Big) \\
&\le P_n^*\Big(\Big\{ w_n' \in Z^n \,\Big|\, w_n \in B_n,\ d_{BL}(P_{w_n'}, P_{w_n^*}) > \frac{\delta_0}{2} \Big\}\Big) + P_n^*\Big(\Big\{ w_n^* \in Z^n \,\Big|\, w_n \in B_n,\ d_{BL}(P_{w_n^*}, P_{w_n}) > \frac{\delta_0}{2} \Big\}\Big) \\
&\overset{(29),(32)}{<} \frac{\varepsilon}{6} + \frac{\varepsilon}{6} = \frac{\varepsilon}{3}.
\end{aligned}$$
The equivalence between the Prohorov metric and the bounded Lipschitz metric on Polish spaces, see Huber (1981, Chapter 2, Corollary 4.3), yields the existence of $\tilde{n}_1$ such that for every $n \ge \tilde{n}_1$:
$$d_{BL}\big(\mathcal{L}_{P_n^*}(S_n), \mathcal{L}_{P_n}(S_n)\big) < \frac{\varepsilon}{3},$$
and therefore
$$\mathbb{E}\, d_{BL}\big(\mathcal{L}_{P_n^*}(S_n), \mathcal{L}_{P_n}(S_n)\big) < \frac{\varepsilon}{3}. \tag{37}$$
For the convergence of the term in part III the same argumentation as for part I can be applied, as the assumptions on $Q_{\mathbb{N}}$ and $Q_{\mathbb{N}}^*$ are the same as for $P_{\mathbb{N}}$ and $P_{\mathbb{N}}^*$. In particular, for every $\varepsilon > 0$ there is $\tilde{n}_2 \in \mathbb{N}$ such that for all $n \ge \tilde{n}_2$:
$$d_{BL}\big(\mathcal{L}_{Q_n^*}(S_n), \mathcal{L}_{Q_n}(S_n)\big) < \frac{\varepsilon}{3},$$
respectively
$$\mathbb{E}\, d_{BL}\big(\mathcal{L}_{Q_n^*}(S_n), \mathcal{L}_{Q_n}(S_n)\big) < \frac{\varepsilon}{3}. \tag{38}$$
Hence, (28), (37), and (38) yield, for all $n \ge \max\{\tilde{n}_1, \tilde{n}_2\}$:
$$\mathbb{E}\, d_{BL}\big(\mathcal{L}_{P_n^*}(S_n), \mathcal{L}_{Q_n^*}(S_n)\big) < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon.$$
As $\mathcal{L}_{P_n^*}(S_n)$ and $\mathcal{L}_{Q_n^*}(S_n)$ are random variables themselves, we have, due to Huber (1981, Chapter 2, Theorem 4.2, (2)$\Rightarrow$(1)), for all $n \ge \max\{\tilde{n}_1, \tilde{n}_2\}$:
$$d_{BL}\big(\mathcal{L}(\mathcal{L}_{P_n^*}(S_n)), \mathcal{L}(\mathcal{L}_{Q_n^*}(S_n))\big) < \varepsilon.$$
Hence, for all $\varepsilon > 0$ there is $\delta > 0$ and $n_0 = \max\{\tilde{n}_1, \tilde{n}_2\} \in \mathbb{N}$ such that, for all $n \ge n_0$:
$$d_{BL}(P_n, Q_n) < \delta \quad \Rightarrow \quad d_{BL}\big(\mathcal{L}(\mathcal{L}_{P_n^*}(S_n)), \mathcal{L}(\mathcal{L}_{Q_n^*}(S_n))\big) < \varepsilon,$$
and therefore the assertion.
Proof of Theorem 2.7: The proof follows the same lines as the proof of Theorem 2.6 and
therefore we only state the different steps. Again we start with the triangle inequality:
$$d_{BL}\big(\mathcal{L}_{P_n^*}(S_n), \mathcal{L}_{Q_n^*}(S_n)\big) \le \underbrace{d_{BL}\big(\mathcal{L}_{P_n^*}(S_n), \mathcal{L}_{P_n}(S_n)\big)}_{I} + \underbrace{d_{BL}\big(\mathcal{L}_{P_n}(S_n), \mathcal{L}_{Q_n}(S_n)\big)}_{II} + \underbrace{d_{BL}\big(\mathcal{L}_{Q_n}(S_n), \mathcal{L}_{Q_n^*}(S_n)\big)}_{III}.$$
To prove the convergence of the term in part II, we need the weak Varadarajan property of the stochastic process. Due to the definition, $\alpha(\sigma(Z_1, \ldots, Z_i), \sigma(Z_{i+\ell}, \ldots), \mu) \le 2$ for all $\ell \in \mathbb{N}$, $i \in \mathbb{N}$, and obviously:
$$\alpha(\sigma(Z_1, \ldots, Z_i), \sigma(Z_{i+\ell}, \ldots), P_{\mathbb{N}}) \le \ell + 1, \quad \ell > 0. \tag{39}$$
Hence, due to the strong stationarity of the stochastic process, we have:
$$\begin{aligned}
\frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha\big((Z_i)_{i \in \mathbb{N}}, P_{\mathbb{N}}, i, j\big)
&= \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha\big(\sigma(Z_i), \sigma(Z_j), P_{\mathbb{N}}\big) \\
&\le \frac{2}{n^2}\sum_{i=1}^{n}\sum_{j \ge i} \alpha\big(\sigma(Z_i), \sigma(Z_j), P_{\mathbb{N}}\big) \\
&\le \frac{2}{n^2}\sum_{i=1}^{n}\sum_{j \ge i} \alpha\big(\sigma(Z_1, \ldots, Z_i), \sigma(Z_j, \ldots), P_{\mathbb{N}}\big) \\
&= \frac{2}{n^2}\sum_{i=1}^{n}\sum_{\ell=0}^{n-i} \alpha\big(\sigma(Z_1, \ldots, Z_i), \sigma(Z_{i+\ell}, \ldots), P_{\mathbb{N}}\big) \\
&\overset{\text{stationarity}}{\le} \frac{2}{n}\sum_{\ell=0}^{n} \alpha\big(\sigma(Z_1, \ldots, Z_i), \sigma(Z_{i+\ell}, \ldots), P_{\mathbb{N}}\big), \quad i \in \mathbb{N} \\
&= \frac{2}{n}\sum_{\ell=0}^{n} \big(\alpha(\sigma(Z_1, \ldots, Z_i), \sigma(Z_{i+\ell}, \ldots), P_{\mathbb{N}})\big)^{\frac{1}{2}} \big(\alpha(\sigma(Z_1, \ldots, Z_i), \sigma(Z_{i+\ell}, \ldots), P_{\mathbb{N}})\big)^{\frac{1}{2}}, \quad i \in \mathbb{N} \\
&\overset{(39)}{\le} \frac{2}{n}\sum_{\ell=0}^{n} (\ell + 1)\, \big(\alpha(\sigma(Z_1, \ldots, Z_i), \sigma(Z_{i+\ell}, \ldots), P_{\mathbb{N}})\big)^{\frac{1}{2}}, \quad i \in \mathbb{N} \\
&\overset{(6)}{\longrightarrow} 0, \quad n \to \infty.
\end{aligned}$$
Now, the same argumentation as in the proof of Theorem 2.6 yields the weak Varadarajan property and therefore, for all $\varepsilon > 0$,
$$\mathbb{E}\big[d_{BL}\big(\mathcal{L}_{P_n}(S_n), \mathcal{L}_{Q_n}(S_n)\big)\big] < \frac{\varepsilon}{3}. \tag{40}$$
Regarding the term in part I, we use a central limit theorem for the blockwise bootstrapped empirical process by Bühlmann (1994, Corollary 1 and remark) to show its convergence. Again, regard the distribution $P_{\mathbb{N}} \in M(Z^{\mathbb{N}})$ and let $P_{\mathbb{N}}^*$ be the bootstrap approximation of $P_{\mathbb{N}}$ via the blockwise bootstrap. Define, for all $n \in \mathbb{N}$, the random variables
$$W_n : Z^{\mathbb{N}} \to Z^n, \quad W_n = (Z_1, \ldots, Z_n), \quad z_{\mathbb{N}} \mapsto w_n, \quad \text{and}$$
$$W_n' : Z^{\mathbb{N}} \to Z^n, \quad W_n' = (Z_1', \ldots, Z_n'), \quad z_{\mathbb{N}} \mapsto w_n',$$
such that $W_n(P_{\mathbb{N}}) = P_n$ and $W_n'(P_{\mathbb{N}}^*) = P_n^*$.
Moreover, denote the bootstrap sample by $W_n^* : Z^{\mathbb{N}} \to Z^n$, $W_n^* := (Z_1^*, \ldots, Z_n^*)$, $z_{\mathbb{N}} \mapsto w_n^*$, and the distribution of $W_n^*$ by $\overline{P}_n$. The bootstrap approximation of $P_m$ is
$$P_m^* = \bigotimes_{j=1}^{m} \frac{1}{n}\sum_{i=1}^{n} \delta_{Z_i^*} = \bigotimes_{j=1}^{m} P_{W_n^*}, \quad m \in \mathbb{N},$$
by definition of the bootstrap procedure. Note that the sample $Z_1^*, \ldots, Z_n^*$ depends both on the block length $b(n)$ and on the number of blocks $\ell(n)$.
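As an aside, the construction of such a blockwise (moving block) bootstrap sample can be made concrete with the following sketch; the function name, the choice of block length, and the AR(1)-type example series are purely illustrative and not taken from the paper.

```python
import numpy as np

def moving_block_bootstrap(z, b, rng=None):
    """Return one blockwise bootstrap sample of the same length as z.

    z : 1d array of observations z_1, ..., z_n
    b : block length b(n); the number of blocks ell(n) is ceil(n / b)
    """
    rng = np.random.default_rng(rng)
    n = len(z)
    n_blocks = int(np.ceil(n / b))
    # overlapping blocks start at positions 0, ..., n - b
    starts = rng.integers(0, n - b + 1, size=n_blocks)
    sample = np.concatenate([z[s:s + b] for s in starts])
    return sample[:n]  # truncate to the original sample size

# illustration with a weakly dependent sequence mapped into [0, 1]
rng = np.random.default_rng(1)
n = 200
e = rng.normal(size=n)
z = np.empty(n)
z[0] = e[0]
for t in range(1, n):
    z[t] = 0.5 * z[t - 1] + e[t]          # AR(1)-type dependence (illustrative)
z = (z - z.min()) / (z.max() - z.min())   # map into [0, 1] as assumed for Z
z_star = moving_block_bootstrap(z, b=10, rng=2)
```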
Further denote the joint distribution of $W_{\mathbb{N}}$, $W_{\mathbb{N}}^*$, and $W_{\mathbb{N}}'$ by $K_{\mathbb{N}} \in M(Z^{\mathbb{N}} \times Z^{\mathbb{N}} \times Z^{\mathbb{N}})$. Then $K_{\mathbb{N}}$ has marginal distributions $K_{\mathbb{N}}(B_1 \times Z^{\mathbb{N}} \times Z^{\mathbb{N}}) = P_{\mathbb{N}}(B_1)$ for all $B_1 \in \mathcal{B}^{\otimes \mathbb{N}}$, $K_{\mathbb{N}}(Z^{\mathbb{N}} \times B_2 \times Z^{\mathbb{N}}) = \overline{P}_{\mathbb{N}}(B_2)$ for all $B_2 \in \mathcal{B}^{\otimes \mathbb{N}}$, and $K_{\mathbb{N}}(Z^{\mathbb{N}} \times Z^{\mathbb{N}} \times B_3) = P_{\mathbb{N}}^*(B_3)$ for all $B_3 \in \mathcal{B}^{\otimes \mathbb{N}}$.
Then,
$$\mathcal{L}_{P_n}(S_n) = S_n(P_n) = S_n \circ W_n(P_{\mathbb{N}}) \quad \text{and} \quad \mathcal{L}_{P_n^*}(S_n) = S_n(P_n^*) = S_n \circ W_n'(P_{\mathbb{N}}^*),$$
and therefore
$$d_{BL}\big(\mathcal{L}_{P_n^*}(S_n), \mathcal{L}_{P_n}(S_n)\big) = d_{BL}\big(\mathcal{L}(S_n \circ W_n'), \mathcal{L}(S_n \circ W_n)\big).$$
As $Z = [0,1]^d$ is compact, it is in particular totally bounded. Hence the set $BL_1(Z, d_Z)$ is a uniform Glivenko-Cantelli class, due to Dudley et al. (1991, Proposition 12). Similar to part I of the proof of Theorem 2.6, the bootstrap structure and the Glivenko-Cantelli property given above yield, for arbitrary but fixed $\varepsilon > 0$: for every $\delta_0 > 0$ there is $n_0 \in \mathbb{N}$ such that, for all $n \ge n_0$ and all $P_{w_n^*} \in M(Z)$,
$$P_n^*\Big(\Big\{ w_n' \in Z^n \,\Big|\, d_{BL}(P_{w_n'}, P_{w_n^*}) \le \frac{\delta_0}{2} \Big\}\Big) \ge 1 - \frac{\varepsilon}{6}.$$
Now, regard the empirical process of $(Z_1, \ldots, Z_n)$. Set $t = (t_1, \ldots, t_d) \in \mathbb{R}^d$; moreover, $t < b$ means $t_i < b_i$ for all $i \in \{1, \ldots, d\}$. Hence we can define the empirical process and the blockwise bootstrapped empirical process by
$$\frac{1}{n}\sum_{i=1}^{n} I_{\{Z_i \le t\}} \quad \text{and} \quad \frac{1}{n}\sum_{i=1}^{n} I_{\{Z_i^* \le t\}}.$$
Regard the process $G_n(t) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} I_{\{Z_i^* \le t\}} - \frac{1}{\sqrt{n}}\sum_{i=1}^{n} I_{\{Z_i \le t\}}$, $t \in [0,1]^d$. Now, due to the assumptions on the stochastic process and on the moving block bootstrap, Bühlmann (1994, Corollary 1 and remark) yields the almost sure convergence in distribution to a Gaussian process $G$:
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n} I_{\{Z_i^* \le t\}} - \frac{1}{\sqrt{n}}\sum_{i=1}^{n} I_{\{Z_i \le t\}} \longrightarrow_{\mathcal{D}} G(t), \quad t \in [0,1]^d,$$
almost surely with respect to $P_{\mathbb{N}}$, $n \to \infty$, in the (extended) Skorohod topology on $D^d([0,1])$.
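For illustration only (this is not part of the proof), the process $G_n$ appearing in Bühlmann's central limit theorem can be evaluated on a grid as in the following sketch; the sample `z` and bootstrap sample `z_star` are assumed to come from a construction like the one sketched earlier, and the i.i.d. resampling used in the usage example is only a stand-in.

```python
import numpy as np

def empirical_cdf(sample, t_grid):
    """Empirical distribution function (1/n) * sum_i 1{Z_i <= t} on a grid."""
    sample = np.asarray(sample)
    return np.array([(sample <= t).mean() for t in t_grid])

def bootstrap_empirical_process(z, z_star, t_grid):
    """G_n(t) = sqrt(n) * (F*_n(t) - F_n(t)) evaluated on t_grid."""
    n = len(z)
    return np.sqrt(n) * (empirical_cdf(z_star, t_grid) - empirical_cdf(z, t_grid))

# example usage on [0, 1] (univariate case d = 1)
rng = np.random.default_rng(0)
z = rng.uniform(size=100)
z_star = rng.choice(z, size=100, replace=True)   # i.i.d. resampling, for illustration only
t_grid = np.linspace(0.0, 1.0, 21)
G_n = bootstrap_empirical_process(z, z_star, t_grid)
```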
The space D d ([0, 1]) is a generalization of the space of cadlag functions on [0, 1], see
Billingsley (1999, Chapter 12), and consists of functions f : [0, 1]d → R. A detailed description of this space and the extended Skorohod topology can be found in Straf (1972,
1969) and Bickel and Wichura (1971). The definition of the space D d ([0, 1]) can, for example, be found in Bickel and Wichura (1971, Chapter 3).
Straf (1972, Lemma 5.4) yields that the above convergence in the Skorohod topology is equivalent to the convergence for all continuity points $t$ of $G$. Hence,
$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n} I_{\{Z_i^* \le t\}} - \frac{1}{\sqrt{n}}\sum_{i=1}^{n} I_{\{Z_i \le t\}} \longrightarrow_{\mathcal{D}} G(t) \quad \text{almost surely with respect to } P_{\mathbb{N}},\ n \to \infty,$$
for all continuity points $t$ of $G$.
Multiplying by $\frac{1}{\sqrt{n}}$ yields, for every continuity point $t$ of $G$,
$$\frac{1}{n}\sum_{i=1}^{n} I_{\{Z_i^* \le t\}} - \frac{1}{n}\sum_{i=1}^{n} I_{\{Z_i \le t\}} - \frac{1}{\sqrt{n}}\, G(t) \longrightarrow_{\mathcal{D}} 0 \quad \text{almost surely with respect to } P_{\mathbb{N}},\ n \to \infty.$$
As convergence in distribution to a constant implies convergence in probability, see e.g. van der Vaart (1998, Theorem 2.7(iii)), and as $\frac{1}{\sqrt{n}} G(t)$ converges in probability to $0$, for all fixed continuity points $t \in [0,1]^d$ of $G$:
$$\frac{1}{n}\sum_{i=1}^{n} I_{\{Z_i^* \le t\}} - \frac{1}{n}\sum_{i=1}^{n} I_{\{Z_i \le t\}} \longrightarrow_{P} 0 \quad \text{almost surely with respect to } P_{\mathbb{N}},\ n \to \infty.$$
This yields the convergence of the corresponding probability measures, see for example Billingsley (1995, Chapter 29) for a theory on $\mathbb{R}^d$:
$$d_{BL}\Big(\frac{1}{n}\sum_{i=1}^{n}\delta_{Z_i^*},\ \frac{1}{n}\sum_{i=1}^{n}\delta_{Z_i}\Big) \longrightarrow_{P} 0 \quad \text{almost surely with respect to } P_{\mathbb{N}},\ n \to \infty,$$
respectively
$$d_{BL}\big(P_{W_n^*}, P_{W_n}\big) \longrightarrow_{P} 0 \quad \text{almost surely with respect to } P_{\mathbb{N}},\ n \to \infty.$$
As the space $[0,1]^d$ is compact, we can use an argumentation similar to the proof of Theorem 2.6. Then, for every $\varepsilon > 0$, there is $n_1 \in \mathbb{N}$ such that for all $n \ge n_1$
$$d_{BL}\big(\mathcal{L}_{P_n^*}(S_n), \mathcal{L}_{P_n}(S_n)\big) < \frac{\varepsilon}{3},$$
respectively
$$\mathbb{E}\, d_{BL}\big(\mathcal{L}_{P_n^*}(S_n), \mathcal{L}_{P_n}(S_n)\big) < \frac{\varepsilon}{3}. \tag{41}$$
The convergence of the term in part III follows analogously to part I for the distributions $Q_{\mathbb{N}}$ and $Q_{\mathbb{N}}^*$. Hence, for every $\varepsilon > 0$, there is $n_2 \in \mathbb{N}$ such that for all $n \ge n_2$
$$\mathbb{E}\, d_{BL}\big(\mathcal{L}_{Q_n^*}(S_n), \mathcal{L}_{Q_n}(S_n)\big) < \frac{\varepsilon}{3}. \tag{42}$$
The combination of (40), (41), and (42) yields for all $n \ge \max\{n_1, n_2\}$:
$$\mathbb{E}\, d_{BL}\big(\mathcal{L}_{P_n^*}(S_n), \mathcal{L}_{Q_n^*}(S_n)\big) < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon.$$
As $\mathcal{L}_{P_n^*}(S_n)$ and $\mathcal{L}_{Q_n^*}(S_n)$ are random variables themselves, we have, due to Huber (1981, Chapter 2, Theorem 4.2, (2)$\Rightarrow$(1)), for all $n \ge \max\{n_1, n_2\}$:
$$d_{BL}\big(\mathcal{L}(\mathcal{L}_{P_n^*}(S_n)), \mathcal{L}(\mathcal{L}_{Q_n^*}(S_n))\big) < \varepsilon.$$
Hence, for all $\varepsilon > 0$ there is $\delta > 0$ and $n_0 = \max\{n_1, n_2\} \in \mathbb{N}$ such that for all $n \ge n_0$:
$$d_{BL}(P_n, Q_n) < \delta \quad \Rightarrow \quad d_{BL}\big(\mathcal{L}(\mathcal{L}_{P_n^*}(S_n)), \mathcal{L}(\mathcal{L}_{Q_n^*}(S_n))\big) < \varepsilon.$$
This yields the assertion.
References
E. Beutner and H. Zähle. Functional delta-method for the bootstrap of quasi-Hadamard
differentiable functionals. Electron. J. Stat., 10, 2016.
P. J. Bickel and M. J. Wichura. Convergence criteria for multiparameter stochastic processes
and some applications. Ann. Math. Statist., 42:1656–1670, 1971.
P. Billingsley. Probability and measure. Wiley Series in Probability and Mathematical
Statistics. John Wiley & Sons, Inc., New York, third edition, 1995.
P. Billingsley. Convergence of probability measures. Wiley Series in Probability and Statistics: Probability and Statistics. John Wiley & Sons, Inc., New York, second edition,
1999.
G. Boente, R. Fraiman, and V. J. Yohai. Qualitative robustness for general stochastic
processes. Technical report, Department of Statistics, University of Washington, 1982.
G. Boente, R. Fraiman, and V. J. Yohai. Qualitative robustness for stochastic processes.
The Annals of Statistics, 15(3):1293–1312, 1987.
R. C. Bradley. Basic properties of strong mixing conditions. A survey and some open
questions. Probab. Surv., 2:107–144, 2005.
R. C. Bradley. Introduction to strong mixing conditions. Vol. 1. Kendrick Press, Heber City,
UT, 2007a.
R. C. Bradley. Introduction to strong mixing conditions. Vol. 2. Kendrick Press, Heber City,
UT, 2007b.
R. C. Bradley. Introduction to strong mixing conditions. Vol. 3. Kendrick Press, Heber City,
UT, 2007c.
P. Bühlmann. Blockwise bootstrapped empirical process for stationary sequences. Ann.
Statist., 22(2):995–1012, 1994.
P. Bühlmann. The blockwise bootstrap for general empirical processes of stationary sequences. Stochastic Process. Appl., 58(2):247–265, 1995.
O. Bustos. On qualitative robustness for general processes. unpublished manuscript, 1980.
A. Christmann, M. Salibian-Barrera, and S. Van Aelst. On the stability of bootstrap estimators. arXiv preprint arXiv:1111.1876, 2011.
A. Christmann, M. Salibián-Barrera, and S. Van Aelst. Qualitative robustness of bootstrap
approximations for kernel based methods. In Robustness and complex data structures,
pages 263–278. Springer, Heidelberg, 2013.
D. D. Cox. Metrics on stochastic processes and qualitative robustness. Technical report, Department of Statistics, University of Washington, 1981.
A. Cuevas and J. Romo. On robustness properties of bootstrap approximations. J. Statist.
Plann. Inference, 37(2):181–191, 1993.
Z. Denkowski, S. Migórski, and N. S. Papageorgiou. An introduction to nonlinear analysis:
applications. Kluwer Academic Publishers, Boston, MA, 2003.
P. Doukhan. Mixing. Springer, New York, 1994.
R. M. Dudley. Real Analysis and Probability. Chapman&Hall, New York, 1989.
R. M. Dudley. Uniform central limit theorems, volume 63 of Cambridge Studies in Advanced
Mathematics. Cambridge University Press, Cambridge, 2014.
R. M. Dudley, E. Giné, and J. Zinn. Uniform and universal Glivenko-Cantelli classes. J.
Theoret. Probab., 4(3):485–510, 1991.
B. Efron. Bootstrap methods: another look at the jackknife. Ann. Statist., 7(1):1–26, 1979.
B. Efron and R. J. Tibshirani. An introduction to the bootstrap, volume 57 of Monographs
on Statistics and Applied Probability. Chapman and Hall, New York, 1993.
R. Hable and A. Christmann. On qualitative robustness of support vector machines. Journal
of Multivariate Analysis, 102:993–1007, 2011.
F. R. Hampel. Contributions to the theory of robust estimation. PhD thesis, Univ. California,
Berkeley, 1968.
F. R. Hampel. A general qualitative definition of robustness. Annals of Mathematical
Statistics, 42:1887–1896, 1971.
J. Hoffmann-Jørgensen. Probability with a view toward statistics. Vol. I. Chapman & Hall
Probability Series. Chapman & Hall, New York, 1994.
P. J. Huber. Robust statistics. John Wiley & Sons Inc., New York, 1981.
J. Jurečková and J. Picek. Robust statistical methods with R. Chapman & Hall/CRC, Boca
Raton, FL, 2006.
A. Klenke. Probability theory: a comprehensive course. Springer Science & Business Media,
2013.
V. Krätschmer, A. Schied, and H. Zähle. Domains of weak continuity of statistical functionals with a view toward robust statistics. J. Multivariate Anal., 158:1–19, 2017.
H. R. Künsch. The jackknife and the bootstrap for general stationary observations. Ann.
Statist., 17(3):1217–1241, 1989.
S. N. Lahiri. Resampling methods for dependent data. Springer Series in Statistics. Springer,
New York, 2003.
R. Y. Liu and K. Singh. Moving blocks jackknife and bootstrap capture weak dependence.
In Exploring the limits of bootstrap (East Lansing, MI, 1990), Wiley Ser. Probab. Math.
Statist. Probab. Math. Statist., pages 225–248. Wiley, New York, 1992.
R. A. Maronna, R. D. Martin, and V. J. Yohai. Robust statistics. Wiley Series in Probability
and Statistics. John Wiley & Sons Ltd., Chichester, 2006.
U. V. Naik-Nimbalkar and M. B. Rajarshi. Validity of blockwise bootstrap for empirical
processes with stationary observations. Ann. Statist., 22(2):980–994, 1994.
P. Papantoni-Kazakos and R. M. Gray. Robustness of estimators on stationary observations.
The Annals of Probability, 7(6):989–1002, 1979.
K. R. Parthasarathy. Probability measures on metric spaces, volume 352. American Mathematical Soc., 1967.
M. Peligrad. On the blockwise bootstrap for empirical processes for stationary sequences.
Ann. Probab., 26(2):877–901, 1998.
D. N. Politis and J. P. Romano. A circular block-resampling procedure for stationary data.
In Exploring the limits of bootstrap (East Lansing, MI, 1990), Wiley Ser. Probab. Math.
Statist. Probab. Math. Statist., pages 263–270. 1990.
D. Radulović. The bootstrap for empirical processes based on stationary observations.
Stochastic Process. Appl., 65, 1996.
M. Rosenblatt. A central limit theorem and a strong mixing condition. Proc. Nat. Acad.
Sci. U. S. A., 42:43–47, 1956.
B. Schölkopf and A. J. Smola. Learning with Kernels. Massachusetts Institute of Technology,
Cambridge, 2002.
Q. M. Shao and H. Yu. Bootstrapping the sample means for stationary mixing sequences.
Stochastic Process. Appl., 48(1):175–190, 1993.
K. Singh. On the asymptotic accuracy of Efron’s bootstrap. Ann. Statist., 9(6):1187–1195,
1981.
I. Steinwart and A. Christmann. Support vector machines. Information Science and Statistics. Springer, New York, 2008.
I. Steinwart, D. Hush, and C. Scovel. Learning from dependent observations. Journal of
Multivariate Analysis, 100:175–194, 2009.
M. L. Straf. A general Skorohod space, 1969.
M. L. Straf. Weak convergence of stochastic processes with several parameters. pages
187–221, 1972.
K. Strohriegl and R. Hable. On qualitative robustness for stochastic processes. Metrika,
pages 895–917, 2016.
A. W. van der Vaart. Asymptotic statistics, volume 3 of Cambridge Series in Statistical and
Probabilistic Mathematics. Cambridge University Press, Cambridge, 1998.
H. Zähle. Qualitative robustness of statistical functionals under strong mixing. Bernoulli,
21(3):1412–1434, 2015.
H. Zähle. A definition of qualitative robustness for general point estimators, and examples.
Journal of Multivariate Analysis, 143:12–31, 2016.
Efficiency of change point tests in high dimensional settings

John A D Aston (Statistical Laboratory, DPMMS, University of Cambridge, Cambridge, CB3 9HD, UK; [email protected])
Claudia Kirch (Otto-von-Guericke University Magdeburg, Department of Mathematics, Institute of Mathematical Stochastics, Postfach 4120, 39106 Magdeburg, Germany; [email protected])

arXiv:1409.1771v2, 25 Jun 2016

June 28, 2016
Abstract
While there is considerable work on change point analysis in univariate time series, more and more data being collected comes from high dimensional multivariate settings. This paper introduces the asymptotic concept of high dimensional
efficiency which quantifies the detection power of different statistics in such situations. While being related to classic asymptotic relative efficiency, it is different
in that it provides the rate at which the change can get smaller with dimension
while still being detectable. This also allows for comparisons of different methods with different null asymptotics as is for example the case in high-dimensional
change point settings. Based on this new concept we investigate change point
detection procedures using projections and develop asymptotic theory for how
full panel (multivariate) tests compare with both oracle and random projections.
Furthermore, for each given projection we can quantify a cone such that the corresponding projection statistic yields better power behavior if the true change
direction is within this cone. The effect of misspecification of the covariance on
the power of the tests is investigated, because in many high dimensional situations estimation of the full dependency (covariance) between the multivariate
observations in the panel is often either computationally or even theoretically
infeasible. It turns out that the projection statistic is much more robust in this
respect in terms of size and somewhat more robust in terms of power. The theoretic quantification by the theory is accompanied by simulation results which
confirm the theoretic (asymptotic) findings for surprisingly small samples. This
shows in particular that the concept of high dimensional efficiency is indeed suitable to describe small sample power, and this is demonstrated in a multivariate
example of market index data.
Keywords: CUSUM; High Dimensional Efficiency; Model Misspecification;
Panel Data; Projections
AMS Subject Classification 2000: 62M10;
1 Introduction
There has recently been a renaissance in research for statistical methods for change
point problems [Horváth and Rice, 2014]. This has been driven by applications where
non-stationarities in the data can often be best described as change points in the data
generating process [Eckley et al., 2011, Frick et al., 2014, Aston and Kirch, 2012b].
However, data sets are now routinely considerably more complex than univariate time
series classically studied in change point problems [Page, 1954, Robbins et al., 2011,
Aue and Horváth, 2013, Horváth and Rice, 2014], and as such methodology for detecting and estimating change points in a wide variety of settings, such as multivariate
[Horváth et al., 1999, Ombao et al., 2005, Aue et al., 2009b, Kirch et al., 2015] functional [Berkes et al., 2009, Aue et al., 2009a, Hörmann and Kokoszka, 2010, Aston and
Kirch, 2012a] and high dimensional settings [Bai, 2010, Horváth and Hušková, 2012,
Chan et al., 2012, Cho and Fryzlewicz, 2015] have recently been proposed. In panel
data settings, these include methods based on taking maxima statistics across panels
coordinate-wise [Jirak, 2015], using sparsified binary segmentation for multiple change
point detection [Cho and Fryzlewicz, 2015], uses of double CUSUM procedures [Cho,
2015], as well as those based on structural assumptions such as sparsity [Wang and
Samworth, 2016].
Instead of looking at more and more complicated models, this paper uses a simple
mean change setting to illustrate how the power is influenced in high dimensional
settings. The results and techniques can subsequently be extended to more complex
change point settings as well as different statistical frameworks, such as two sample
tests. We make use of the following two key concepts: Firstly, we consider contiguous
changes where the size of the change tends to zero as the sample size and with it the
number of dimensions increases leading to the notion of high dimensional efficiency.
This concept is closely related to Asymptotic Relative Efficiency (ARE) (see Lehmann
[1999, Sec. 3.4] and Lopes et al. [2011] where ARE is used in a high dimensional
setting). Secondly, as a benchmark we investigate a class of tests based on projections,
where the optimal (oracle) projection test is closely related to the likelihood ratio test
under the knowledge of the direction of the change. Such tests can also be used in
order to include a priori information about the expected change direction, where we
can quantify how wrong the assumed direction can be and still yield better results than
a full multivariate statistic which uses no information about the change direction.
The aims of the paper are threefold: Firstly, we will investigate the asymptotic properties of tests based on projections as a plausible way to include prior information into
the tests. Secondly, by using high dimensional efficiency, we consider several projection tests (including oracle and random projections as benchmarks) and compare them
with the efficiency of existing tests that take the full covariance structure into account.
Finally, as in all high dimensional settings, the dependency between the components of
the series can typically neither be effectively estimated nor even uniquely determined
(for example if the sample size is less than the multivariate dimension) unless restrictions on the covariance are enforced. By considering the effect of misspecification of
the model covariance on the size as well as efficiency, we can quantify the implications
of this for different tests.
Somewhat obviously, highest efficiency can only be achieved under knowledge of the
direction of the change. However, data practitioners, in many cases, explicitly have
prior knowledge in which direction changes are likely to occur. It should be noted at
this point, that changes in mean are equivalent to changes of direction in multivariate
time series. In frequentist testing situations, practitioners’ main interest is in test
statistics which have power against a range of related alternatives while still controlling
the size. For example, an economist may check the performance of several companies
looking for changes caused by a recession. There will often be a general idea as to
which sectors of the economy will gain or lose by the recession and therefore a good
idea, at least qualitatively, as to what a change will approximately look like (downward
resp. upward shift depending on which sector a particular company is in) if there is a
change present. Similarly, in medical studies, it will often be known a-priori whether
genes are likely to be co-regulated causing changes to be in similar directions for groups
of genes in genetic time series.
Incorporating this a-priori information about how the change affects the components
by using corresponding projections can lead to a considerable power improvement if the
change is indeed in the expected direction. It is also important that, as in many cases
the a-priori knowledge is qualitative, the test has higher power than standard tests not
only for that particular direction but also for other directions close by. Additionally,
these projections lead to tests where the size is better controlled if no change is present.
Using the concept of high dimensional efficiency allows the specification of a cone
around a given projection such that the projection statistic has better power than
the multivariate/panel statistic if the true change is within this cone. In addition,
while the prior information itself might be reliable, inherent misspecification in other
parts of the model, such as the covariance structure, will have a detrimental effect on
detection, and it is of interest to quantify the effect of these as well.
The results in this paper will be benchmarked against taking the simple approach of
using a random projection in a single direction to reduce the dimension of the data.
Random projections are becoming increasingly popular in high dimensional statistics
with applications in Linear Discriminant Analysis [Durrant and Kabán, 2010] and two
sample testing [Lopes et al., 2011, Srivastava et al., 2014]. This is primarily based on
the insight from the Johnson-Lindenstrauss lemma that an optimal projection in the
sense that the distances are preserved for a given set of data is independent of the
dimension of the data [Johnson and Lindenstrauss, 1984] and thus random projections
can often be a useful way to perform a dimension reduction for the data [Baraniuk
et al., 2008]. However, in our context, we will see that a random projection will not
work as well as truly multivariate methods, let alone projections with prior knowledge,
but can only serve as a lower benchmark.
We will consider a simple setup for our analysis, although one which is inherently the
base for most other procedures, and one which can easily be extended to complex
time dependencies and change point definitions using corresponding results from the
literature [Kirch and Tadjuidje Kamgaing, 2014a, Kirch and Tajduidje Kamgaing,
2014b]. For a set of observations Xi,t , 1 6 i 6 d = dT , 1 6 t 6 T , the change point
model is defined to be
$$X_{i,t} = \mu_i + \delta_{i,T}\, g(t/T) + e_{i,t}, \qquad 1 \le i \le d = d_T,\ 1 \le t \le T, \tag{1.1}$$
where E ei,t = 0 for all i and t with 0 < σi2 = var ei,t < ∞ and g : [0, 1] → R is
a Riemann-integrable function. Here δi,T indicates the size of the change for each
component. This setup incorporates a wide variety of possible changes by the suitable
selection of the function g, as will be seen below. For simplicity, for now it is assumed
that {ei,t : t ∈ Z} are independent, i.e. we assume independence across time but
not location. If the number of dimensions d is fixed, the results readily generalise to
situations where a multivariate functional limit theorem exists as is the case for many
weak dependent time series. If d can increase to infinity with T , then generalizations
are possible if the {ei,t : 1 6 t 6 T } form a linear process in time but the errors are
independent between components (dependency between components will be discussed
in detail in the next section). Existence of moments strictly larger than two is needed
in all cases. Furthermore, the developed theory applies equally to one- and two-sample
testing and can be seen as somewhat analogous to methods for multivariate adaptive
design [Minas et al., 2014].
The change (direction) is given by ∆d = (δ1,T , . . . , δd,T )T and the type of alternative
is given by the function g in rescaled time. While g is defined in a general way, it
includes as special cases most of the usual change point alternatives, for example,
• At most one change (AMOC): $g(u) = \begin{cases} 0 & 0 \le u \le \theta \\ 1 & \theta < u \le 1 \end{cases}$
• Epidemic change: $g(u) = \begin{cases} 0 & 0 \le u \le \theta_1 \\ 1 & \theta_1 < u < \theta_2 \\ 0 & \theta_2 < u \le 1 \end{cases}$
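As a small, self-contained illustration of model (1.1) with the AMOC alternative listed above (not taken from the paper; the function name and all parameter values, including i.i.d. standard normal errors corresponding to Case C.1 with $s_j = 1$, are arbitrary choices), the following sketch simulates a $d$-dimensional panel:

```python
import numpy as np

def simulate_amoc_panel(T=100, d=200, delta=None, theta=0.5, rng=None):
    """Simulate X_{i,t} = mu_i + delta_i * g(t/T) + e_{i,t} with the AMOC function g.

    g(u) = 0 for u <= theta and 1 for u > theta; errors are i.i.d. N(0, 1)
    (illustrative choice, i.e. Case C.1 with s_j = 1).
    """
    rng = np.random.default_rng(rng)
    if delta is None:
        delta = np.zeros(d)                 # null hypothesis: no change
    mu = np.zeros(d)                        # means are irrelevant for CUSUM-type statistics
    g = (np.arange(1, T + 1) / T > theta).astype(float)   # g evaluated in rescaled time
    e = rng.normal(size=(d, T))
    return mu[:, None] + np.outer(delta, g) + e           # array of shape (d, T)

# a change of size 0.05 in every component, occurring half way through the series
X = simulate_amoc_panel(T=100, d=200, delta=0.05 * np.ones(200), theta=0.5, rng=1)
```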
The form of g will influence the choice of test statistic to detect the change point. As
in the above two examples in the typical definition of change points the function g is
modelled by a step function (which can approximate many smooth functions well). In
such situations, test statistics based on partial sums of the observations have been well
studied [Csörgő and Horváth, 1997]. It will be shown that statistics based on partial
sums are robust (in the sense of still having non-zero power) to a wide variety of g.
The model in (1.1) is defined for univariate (d = 1), multivariate (d fixed) or panel
data (d → ∞). The panel data (also known as “small n large p” or “high dimensional
low sample size”) setting is able to capture the small sample properties very well in
situations where d is comparable or even larger than T using asymptotic considerations.
In this asymptotic framework the detection ability or efficiency of various tests can be
defined by the rates at which vanishing alternatives can still be detected. However,
many of our results, particularly for the proposed projection tests, are also qualitatively
valid in the multivariate or d fixed setting.
The paper proceeds as follows. In Section 2, the concept of high dimensional efficiency
as a way of comparing the power of high dimensional tests is introduced. This is
done using projection statistics, which will also act as benchmarks. In Section 3,
the projection based statistics will be compared with the panel based change point
statistics already suggested in Horváth and Hušková [2012], both in terms of control of
size and efficiency, particular with relation to the (mis)specification of the dependence
structure. Section 4 provides a short illustrative example with respect to multivariate
market index data. Section 5 concludes with some discussion of the different statistics
proposed, while Section 6 gives the proofs of the results in the paper. In addition,
rather than a separate simulation section, simulations will be interspersed throughout
the theory. They complement the theoretic results, confirming that the conclusion are
already valid for small samples, thus verifying that the concept of high-dimensional
efficiency is indeed suitable to understand the power behavior of different test statistics.
In all cases the simulations are based on 1000 repetitions of i.i.d. normally distributed
data for each set of situations, and unless otherwise stated the number of time points
is T = 100 with the change (if present) occurring half way through the series. Except
in the simulations concerning size itself, all results are empirically size corrected to
account for the size issues for the multivariate (panel) statistic that will be seen in
Figure 2.1.
2 Change Points and Projections
2.1 High dimensional efficiency
As the main focus of this paper is to compare several test statistics with respect
to their detection power, we introduce a new asymptotic concept that allows us to
understand this detection power in a high dimensional context. In the subsequent
sections, simulations accompanying the theoretic results will show that this concept is
indeed able to give insight into the small sample detection power.
Consider a typical testing situation, where (possibly after reparametrization) we test
$$H_0: a = 0 \quad \text{against} \quad H_1: a \ne 0. \tag{2.1}$$
Typically, large enough alternatives will be detected by all reasonable statistics for a given problem. In asymptotic theory this corresponds to fixed alternatives, where $a = c \ne 0$, for which tests typically have asymptotic power one.
To understand the small sample power of different statistics such asymptotics are
therefore not suitable. Instead the asymptotics for local or contiguous alternatives
with a = aT → 0 are considered. For a panel setting we define:
Definition 2.1. Consider the testing situation (2.1) with sample size $T \to \infty$ and sample dimension $d = d_T \to \infty$. A test with statistic $T(X_1, \ldots, X_T)$ has (absolute) high dimensional efficiency $E(a_d)$ for a sequence of alternatives $a_d$ if
(i) $T(X_1, \ldots, X_T) \longrightarrow_{\mathcal{L}} L$ for some non-degenerate limit distribution $L$ under $H_0$,
(ii) $T(X_1, \ldots, X_T) \longrightarrow_{P} \infty$ if $\sqrt{T}\, E(a_d) \to \infty$,
(iii) $T(X_1, \ldots, X_T) \longrightarrow_{\mathcal{L}} L$ if $\sqrt{T}\, E(a_d) \to 0$.
Note that $E(a_d)$ is only defined up to multiplicative constants, and has to be understood as a representative of a class.
In particular this shows that the asymptotic power is one if $\sqrt{T}\, E(a_d) \to \infty$, but equal to the level if $\sqrt{T}\, E(a_d) \to 0$. Typically, for $\sqrt{T}\, E(a_d) \to \alpha \ne 0$ it holds $T(X_1, \ldots, X_T) \longrightarrow_{\mathcal{D}} L(\alpha) \ne L$, usually resulting in an asymptotic power strictly between the level and one. In the classic notion (with $d$ constant) of asymptotic relative efficiency (ARE, or Pitman efficiency) for test statistics with a standard normal limit it is the additive shift between $L(\alpha)$ and $L$ [see Lehmann, 1999, Sec. 3.4] that shows power differences for different statistics. Consequently, this shift has been used to define asymptotic efficiency. For different null asymptotics the comparison becomes much more cumbersome, as the quantiles of the different limit distributions have to be taken into account as well. In our definition above, we concentrate on the efficiency in terms of the asymptotic rates with respect to the increasing dimension (as two estimators are equivalent up to a dimension free constant). Should the rates be equivalent, classic notions of ARE then apply, although with the usual difficulties should the limit distributions be different.
In the asymptotic panel setup, on the other hand, the differences with respect to
the dimension d are now visible in the rates, with which contiguous alternatives can
disappear and still be asymptotically detectable. Therefore, we chose this rate to
define asymptotic high dimensional efficiency. Additionally, it is no longer required
that different test statistics have the same limit distribution under the null hypothesis
(which would be a problem in this paper).
2.2 Projections
We now describe how projections can be used to obtain change point statistics in
high dimensional settings, which will be used as both benchmark statistics for a truly
multivariate statistic as well as a reasonable alternative if some knowledge about the
direction of the change is present.
In model (1.1), the change $\Delta_d = (\delta_{1,T}, \ldots, \delta_{d,T})^T$ (as a direction) is always a rank one (vector) object no matter the number of components $d$. This observation suggests that knowing the direction of the change $\Delta_d$ in addition to the underlying covariance structure can significantly increase the signal-to-noise ratio. Furthermore, for $\mu$ and $\Delta_d/\|\Delta_d\|$ (but not $\|\Delta_d\|$) known with i.i.d. normal errors, one can easily verify that the corresponding likelihood ratio statistic is obtained as a projection statistic with projection vector $\Sigma^{-1}\Delta_d$, which can also be viewed as an oracle projection. Under (1.1) it holds
$$\langle X_d(t), p_d \rangle = \langle \mu, p_d \rangle + \langle \Delta_d, p_d \rangle\, g(t/T) + \langle e_t, p_d \rangle,$$
where $X_d(t) = (X_{1,t}, \ldots, X_{d,t})^T$, $\mu = (\mu_1, \ldots, \mu_d)^T$ and $e_t = (e_{1,t}, \ldots, e_{d,t})^T$. The
projection vector pd plays a crucial role in the following analysis and will be called
the search direction. This representation shows that the projected time series exhibits
the same behavior as before as long as the change is not orthogonal to the projection
vector. Furthermore, the power improves as $\langle \Delta_d, p_d \rangle$ becomes larger and as the variance of $\langle e_t, p_d \rangle$ becomes smaller. Consequently, an optimal projection in terms of power depends
on ∆d as well as Σ = var e1 . In applications, certain changes are either expected or of
particular interest e.g. an economist looking at the performance of several companies
expecting changes caused by a recession will have a good idea which companies will
profit or lose. This knowledge can be used to increase the power in directions close
to the search direction pd while decreasing it for changes that are close to orthogonal
to it. Using projections can furthermore robustify the size of the test under the null
hypothesis with respect to misspecification and estimation error.
In order to qualify this informal statement, we will consider contiguous changes for
several change point tests, where k∆d k → 0 but with such a rate that the power of the
corresponding test is strictly between the size and one, as indicated in the previous
subsection.
In order to be able to prove asymptotic results for change point statistics based on
projections even if d → ∞, we need to make the following assumptions on the underlying error structure. This is much weaker than the independence assumption as
considered by Horváth and Hušková [2012]. Furthermore, we do not need to restrict
the rate with which d grows. If we do have restrictions on the growth rate in particular
for the multivariate setting with d fixed, these assumptions can be relaxed and more
general error sequences can be allowed.
Assumption A.1. Let $\eta_{1,t}(d), \eta_{2,t}(d), \ldots$ be independent with $\mathbb{E}\,\eta_{i,t}(d) = 0$, $\operatorname{var}\eta_{i,t}(d) = 1$ and $\mathbb{E}\,|\eta_{i,t}(d)|^\nu \le C < \infty$ for some $\nu > 2$ and all $i$ and $d$. For $t = 1, \ldots, T$ we additionally assume for simplicity that $(\eta_{1,t}(d), \eta_{2,t}(d), \ldots)$ are identically distributed (leading to data which is identically distributed across time). The errors within the components are then given as linear processes of these innovations:
$$e_{l,t}(d) = \sum_{j \ge 1} a_{l,j}(d)\, \eta_{j,t}(d), \quad l = 1, \ldots, d, \qquad \sum_{j \ge 1} a_{l,j}(d)^2 < \infty,$$
or equivalently in vector notation, with $e_t(d) = (e_{1,t}(d), \ldots, e_{d,t}(d))^T$ and $a_j(d) = (a_{1,j}(d), \ldots, a_{d,j}(d))^T$,
$$e_t(d) = \sum_{j \ge 1} a_j(d)\, \eta_{j,t}(d).$$
These assumptions allow us to consider many varied dependency relationships between the components (and we will concentrate on the dependency across components
at this point, as temporal dependency adds multiple layers of notational difficulties,
but little in the way of insight as almost all results generalise simply for weakly dependent and linear processes).
The following three cases of different dependency structures are very helpful in understanding different effects that can occur and will be used as examples throughout the
paper:
Case C.1 (Independent Components). The components are independent, i.e. $a_j = (0, \ldots, s_j, \ldots, 0)^T$, the vector which is $s_j > 0$ at position $j$ and zero everywhere else, $j \le d$, and $a_j = 0$ for $j \ge d+1$. In particular, each channel has variance $\sigma_j^2 = s_j^2$.
Case C.2 (Fully Dependent Components). There is one common factor to all components, leading to completely dependent components, i.e. $a_1 = \Phi_d = (\Phi_1, \ldots, \Phi_d)^T$, $a_j = 0$ for $j \ge 2$. In this case, $\sigma_j^2 = \Phi_j^2$.
This case, while being somewhat pathological, is useful for gaining intuition into the effects of possible dependence and also helps with understanding the next case.
Case C.3 (Mixed Components). The components contain both an independent and a dependent term. Let $a_j = (0, \ldots, s_j, \ldots, 0)^T$, the vector which is $s_j > 0$ at position $j$ and zero everywhere else, and $a_{d+1} = \Phi_d = (\Phi_1, \ldots, \Phi_d)^T$, $a_j = 0$ for $j \ge d+2$. Then $\sigma_j^2 = s_j^2 + \Phi_j^2$.
This mixed case allows consideration of dependency structures between cases C.1 and C.2. It is used in the simulations with $\Phi_d = \Phi\, (1, \ldots, 1)^T$, where $\Phi = 0$ corresponds to C.1 and $\Phi \to \infty$ corresponds to C.2. We also use this particular example for the panel statistic in Section 3.2 to quantify the effect of misspecification.
Of course, many other dependency structures are possible, but these three cases give
insight into the cases of no, complete and some dependency respectively. In particular,
as the change is always rank one, taking a rank one form of dependency, as in cases
C.2 and as part of C.3, still allows somewhat general conclusions to be drawn.
2.3 Change point statistics
Standard statistics such as the CUSUM statistic are based on partial sum processes, so in order to quantify the possible power gain by the use of projections we will consider the partial sum process of the projections, i.e.
$$U_{d,T}(x) = \langle Z_T(x), p_d \rangle = \frac{1}{\sqrt{T}} \sum_{t=1}^{\lfloor Tx \rfloor} \Big( \langle X_d(t), p_d \rangle - \frac{1}{T} \sum_{j=1}^{T} \langle X_d(j), p_d \rangle \Big), \tag{2.2}$$
$$Z_{T,i}(x) = \frac{1}{T^{1/2}} \Big( \sum_{t=1}^{\lfloor Tx \rfloor} X_{i,t} - \frac{\lfloor Tx \rfloor}{T} \sum_{t=1}^{T} X_{i,t} \Big), \tag{2.3}$$
where $X_d(t) = (X_{1,t}, \ldots, X_{d,t})^T$.
Different test statistics can be defined for a range of $g$ in (1.1); however, assuming that $g \not\equiv 0$, the hypothesis of interest is
$$H_0: \Delta_d = 0 \quad \text{versus the alternative} \quad H_1: \Delta_d \ne 0.$$
Test statistics are now defined in order to give good power characteristics for a particular $g$ function. For example, the classic AMOC statistic for univariate and multivariate change point detection is based on $U_{d,T}(x)/\tau(p_d)$, with
$$\tau^2(p_d) = p_d^T \operatorname{var}\big(e_1(d)\big)\, p_d. \tag{2.4}$$
Typically, either the following max or sum type statistics are used:
$$\max_{1 \le k \le T} w(k/T)\, \frac{U_{d,T}(k/T)}{\tau(p_d)}, \qquad \frac{1}{T} \sum_{k=1}^{T} w(k/T)\, \frac{U_{d,T}(k/T)}{\tau(p_d)},$$
where $w > 0$ is continuous (which can be relaxed) and fulfills (2.10) (confer e.g. the book by Csörgő and Horváth [1997]). The choice of weight function $w(\cdot)$ can increase power for certain locations of the change points [Kirch et al., 2015].
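A minimal sketch of the projected max-type CUSUM statistic just described follows; the function name is not from the paper, the data are assumed to be arranged as in the simulation sketch after (1.1), the projection vector and $\tau(p_d)$ are assumed given, the absolute value of $U_{d,T}$ is used (a two-sided variant), and the weight is $w \equiv 1$ unless a weight function is supplied.

```python
import numpy as np

def projected_cusum(X, p, tau, w=None):
    """Max-type statistic max_k w(k/T) |U_{d,T}(k/T)| / tau(p_d) for the projected series.

    X   : array of shape (d, T) of observations
    p   : projection (search) direction p_d of length d
    tau : standard deviation of the projected errors, tau(p_d)
    w   : optional weight function on (0, 1]; w = 1 if omitted
    """
    y = p @ X                                           # projected series <X_d(t), p_d>
    T = y.shape[0]
    k = np.arange(1, T + 1)
    U = (np.cumsum(y) - k / T * y.sum()) / np.sqrt(T)   # partial sum process U_{d,T}(k/T)
    weights = np.ones(T) if w is None else w(k / T)
    values = weights * np.abs(U) / tau
    return values.max(), int(values.argmax()) + 1       # statistic and its arg max k

# example: d independent N(0, 1) components, projection onto the all-ones direction
rng = np.random.default_rng(0)
d, T = 200, 100
X = rng.normal(size=(d, T))
p = np.ones(d)
tau = np.sqrt(p @ np.eye(d) @ p)                        # tau^2(p_d) = p_d^T cov(e_1) p_d
stat, k_hat = projected_cusum(X, p, tau)
```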
For the epidemic change, typical test statistics are given by
$$\max_{1 \le k_1 < k_2 \le T} \frac{1}{\tau(p_d)} \big| U_{d,T}(k_2/T) - U_{d,T}(k_1/T) \big| \qquad \text{or} \qquad \frac{1}{T^2} \sum_{1 \le k_1 < k_2 \le T} \frac{1}{\tau(p_d)} \big| U_{d,T}(k_2/T) - U_{d,T}(k_1/T) \big|.$$
In the next section we first derive a functional central limit theorem for the process
Ud,T (x), which implies the asymptotic null behavior for the above tests. Then, we
derive the asymptotic behavior of the partial sum process under contiguous alternatives
to obtain the high dimensional efficiency for projection statistics.
2.4 Efficiency of Change point tests based on projections
In this section, we derive the efficiency of change point tests based on projections under
rather general assumptions. Furthermore, we will see that the size behavior is very
robust with respect to deviations from the assumed underlying covariance structure.
The power on the other hand turns out to be less robust but more so than statistics
taking the full multivariate information into account.
2.4.1 Null Asymptotics
As a first step towards the efficiency of projection statistics, we derive the null asymptotics. This is also of independent interest if projection statistics are applied to a given
data set in order to find appropriate critical values. In the following theorem d can
be fixed but it is also allowed that d = dT → ∞, where no restrictions on the rate of
convergence are necessary.
Theorem 2.1. Let model (1.1) hold. Let $p_d$ be a possibly random projection independent of $\{e_{i,t} : 1 \le t \le T,\ 1 \le i \le d\}$. Furthermore, let $p_d^T \operatorname{cov}(e_1(d))\, p_d \ne 0$ (almost surely), which means that the projected data is not degenerate with probability one.
a) Under Assumption A.1 and if $\{p_d\}$ is independent of $\{\eta_{i,t}(d) : i \ge 1,\ 1 \le t \le T\}$, then it holds under the null hypothesis
$$\Big\{ \frac{U_{d,T}(x)}{\tau(p_d)} : 0 \le x \le 1 \,\Big|\, p_d \Big\} \overset{D[0,1]}{\longrightarrow} \{ B(x) : 0 \le x \le 1 \} \quad \text{a.s.}, \tag{2.5}$$
where $B(\cdot)$ is a standard Brownian bridge.
b) For i.i.d. error sequences $\{e_t(d) : t = 1, \ldots, T\}$, $e_t(d) = (e_{1,t}(d), \ldots, e_{d,t}(d))^T$, with an arbitrary dependency structure across components, and if $\mathbb{E}\,|e_{1,t}(d)|^\nu \le C < \infty$ for all $t$ and $d$ as well as
$$\frac{\|p_d\|_1^2}{p_d^T \operatorname{cov}(e_t)\, p_d} = o\big(T^{1-2/\nu}\big) \quad \text{a.s.}, \tag{2.6}$$
where $\|a\|_1 = \sum_{j=1}^{d} |a_j|$, then (2.5) holds.
The assertions remain true if $\tau^2(p_d)$ is replaced by $\hat{\tau}^2_{d,T}$ such that for all $\epsilon > 0$
$$P\Big( \Big| \frac{\hat{\tau}^2_{d,T}}{\tau^2(p_d)} - 1 \Big| > \epsilon \Big) \to 0 \quad \text{a.s.} \tag{2.7}$$
Assumption (2.6) is always fulfilled for the multivariate situation with $d$ fixed or if $d$ is growing sufficiently slowly with respect to $T$, as the left hand side of (2.6) is always bounded by $\sqrt{d}$ if $p_d^T \operatorname{cov}(e)\, p_d / \|p_d\|^2$ is bounded away from zero. Otherwise, the assumption may hold for certain projections but not others. However, in this case, it is possible to put stronger assumptions on the error sequence such as in a), which are still much weaker than the usual assumption for panel data that components are independent. In these cases projection methods hold the size asymptotically, no matter what the dependency structure between components is and without having to estimate this dependency structure.
This is in contrast to the multivariate statistic which suffers from considerable size
distortions if this underlying covariance structure is estimated incorrectly. The estimation of the covariance structure is a difficult problem in higher dimensions in
particular since an estimator for the inverse is needed with additional numerical problems arising. The problem becomes even harder if time series errors are present, in
which case the long-run covariance rather than the covariance matrix needs to be estimated [Hörmann and Kokoszka, 2010, Aston and Kirch, 2012b, Kirch et al., 2015].
While the size of the projection procedure is unaffected by the underlying dependency
across components, we will see in the next section that for optimal efficiency hence
power we need not only to know the change ∆d but also the inverse of the covariance
matrix. Nevertheless the power of projection procedures turns out to be more robust
with respect to misspecification than a size-corrected panel statistic, that takes the
full multivariate information into account.
The following lemma shows that the following two different estimators for τ (pd ) under
the null hypothesis are both consistent. The second one is typically still consistent in
the presence of one mean change which usually leads to a power improvement in the
test for small samples. An analogous version can be defined for the epidemic change
situation. However, it is much harder to get an equivalent correction in the multivariate setting because the covariance matrix determines how different components are
weighted, which in turn has an effect on the location of the maximum. This problem
does not arise in the univariate situation, because the location of the maximum does
not depend on the variance estimate.
Lemma 2.2. Consider
$$\hat{\tau}^2_{1,d,T}(p_d) = \frac{1}{T} \sum_{j=1}^{T} \Big( p_d^T e_j(d) - \frac{1}{T} \sum_{i=1}^{T} p_d^T e_i(d) \Big)^2 \tag{2.8}$$
as well as
$$\hat{\tau}^2_{2,d,T}(p_d) = \frac{1}{T} \sum_{j=1}^{\hat{k}_{d,T}} \Big( p_d^T e_j(d) - \frac{1}{\hat{k}_{d,T}} \sum_{i=1}^{\hat{k}_{d,T}} p_d^T e_i(d) \Big)^2 + \frac{1}{T} \sum_{j=\hat{k}_{d,T}+1}^{T} \Big( p_d^T e_j(d) - \frac{1}{T - \hat{k}_{d,T}} \sum_{i=\hat{k}_{d,T}+1}^{T} p_d^T e_i(d) \Big)^2, \tag{2.9}$$
where
$$\hat{k}_{d,T} = \underset{t=1,\ldots,T}{\arg\max}\ U_{d,T}(t/T).$$
a) Under the assumptions of Theorem 2.1 a) both estimators (2.8) as well as (2.9) fulfill (2.7).
b) Under the assumptions of Theorem 2.1 b), estimator (2.8) fulfills (2.7) under the assumption
$$\frac{\|p_d\|_1^2}{p_d^T \operatorname{cov}(e_t)\, p_d} = o\big(T^{1-2/\min(\nu,4)}\big) \quad \text{a.s.},$$
while estimator (2.9) fulfills it under the assumption
$$\frac{\|p_d\|_1^2}{p_d^T \operatorname{cov}(e_t)\, p_d} = o\big(T^{1-2/\min(\nu,4)} (\log T)^{-1}\big) \quad \text{a.s.}$$
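For illustration, the two variance estimators can be computed from an observed projected series (rather than from the unobserved errors) roughly as follows. This is a hedged sketch only: the segment means in the second estimator are normalized segment-wise in the usual change point fashion, the split point uses the absolute CUSUM for the two-sided case, and the function names are not from the paper; the exact normalization in (2.9) may differ in detail.

```python
import numpy as np

def tau1_squared(y):
    """Estimator in the spirit of (2.8): overall variance of the projected series y."""
    return np.mean((y - y.mean()) ** 2)

def tau2_squared(y):
    """Estimator in the spirit of (2.9): variance after centering each segment around
    its own mean, splitting at the arg max of the (absolute) CUSUM process."""
    T = len(y)
    k = np.arange(1, T + 1)
    U = (np.cumsum(y) - k / T * y.sum()) / np.sqrt(T)
    k_hat = int(np.argmax(np.abs(U))) + 1                 # estimated change location
    left, right = y[:k_hat], y[k_hat:]
    ss_left = np.sum((left - left.mean()) ** 2)
    ss_right = np.sum((right - right.mean()) ** 2) if len(right) > 0 else 0.0
    return (ss_left + ss_right) / T

# toy usage: projected series with a mean shift half way through
rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(1.0, 1.0, 50)])
print(tau1_squared(y), tau2_squared(y))   # the second estimator is less inflated by the change
```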
The following corollary gives the null asymptotics for the simple CUSUM statistic for the at most one change; other statistics as given in Section 2.3 can be dealt with along the same lines.
Corollary 2.3. Let the assumptions of Theorem 2.1 be fulfilled and let $\hat{\tau}(p_d)$ fulfill (2.7) under the null hypothesis. Then for all $x \in \mathbb{R}$ it holds under the null hypothesis
$$P\Big( \max_{1 \le k \le T} w^2(k/T)\, \frac{U^2_{d,T}(k/T)}{\hat{\tau}^2(p_d)} \le x \,\Big|\, p_d \Big) \to P\Big( \max_{0 \le t \le 1} w^2(t) B^2(t) \le x \Big) \quad \text{a.s.},$$
$$P\Big( \frac{1}{T} \sum_{1 \le k \le T} w^2(k/T)\, \frac{U^2_{d,T}(k/T)}{\hat{\tau}^2(p_d)} \le x \,\Big|\, p_d \Big) \to P\Big( \int_0^1 w^2(t) B^2(t)\, dt \le x \Big) \quad \text{a.s.},$$
for any continuous weight function $w(\cdot)$ with
$$\lim_{t \to 0} t^\alpha w(t) < \infty, \quad \lim_{t \to 1} (1-t)^\alpha w(t) < \infty \quad \text{for some } 0 \le \alpha < 1/2, \qquad \sup_{\eta \le t \le 1-\eta} w(t) < \infty \quad \text{for all } 0 < \eta \le \frac{1}{2}. \tag{2.10}$$
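Critical values for these limit distributions are not available in closed form for general weights, but they can be approximated by Monte Carlo simulation of the Brownian bridge. The following sketch (unweighted case, $w \equiv 1$; grid size, replication number and function name are arbitrary illustrative choices) approximates the 95% quantile of $\max_{0 \le t \le 1} B^2(t)$:

```python
import numpy as np

def simulate_bridge_max(n_grid=1000, n_rep=5000, rng=None):
    """Simulate max_{0<=t<=1} B(t)^2 for a standard Brownian bridge B on a grid."""
    rng = np.random.default_rng(rng)
    dt = 1.0 / n_grid
    increments = rng.normal(scale=np.sqrt(dt), size=(n_rep, n_grid))
    W = np.cumsum(increments, axis=1)               # Brownian motion on the grid
    t = np.arange(1, n_grid + 1) * dt
    B = W - t * W[:, -1][:, None]                   # bridge: B(t) = W(t) - t * W(1)
    return np.max(B ** 2, axis=1)

samples = simulate_bridge_max(rng=0)
crit = np.quantile(samples, 0.95)                   # approximate 95% critical value
```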
As can be seen in Figure 2.1, regardless of whether the variance is known or estimated,
the projection methods all maintain the correct size even when there is a high degree
of dependence between the different components (the specific projection methods and
indeed the non-projection methods will be characterised in Section 2.5 below). The
full tests, where size is not controlled, will be discussed in Section 3.
2.4.2 Absolute high dimensional efficiency
We are now ready to derive the high dimensional efficiency of projection statistics.
Furthermore, we show that a related estimator for the location of the change is asymptotically consistent.
[Figure 2.1: three panels — (a) known variance, (b) variance estimated as in (2.8), (c) variance estimated as in (2.9) — showing empirical size against the degree of dependency for the Oracle, Random, Quasi-Oracle, Pre-Oracle and H&H statistics.]
Figure 2.1: Size of tests as the degree of dependency between the components increases. As can be seen, all the projection methods (Oracle, Quasi-Oracle, Pre-Oracle and Random projections defined in Section 2.5) maintain the size of the tests. Those based on using the full information as described in Section 3 have size problems as the degree of dependency increases. The simulations correspond to Case C.3 with $s_j = 1$, $\Phi_j = \phi$, $j = 1, \ldots, d$ with $d = 200$, where $\phi$ is given on the x-axis.
Theorem 2.4. Let the assumptions of Theorem 2.1, either a) or b), on the errors respectively $p_d$ be fulfilled. Furthermore, consider a weight function $w(\cdot)$ as in Corollary 2.3 fulfilling $w^2(x) \big( \int_0^x g(t)\, dt - x \int_0^1 g(t)\, dt \big)^2 \ne 0$ for some $x$. Then, both the max and sum statistic from Corollary 2.3 have the following absolute high dimensional efficiency:
$$E_1(\Delta_d, p_d) := \frac{|\langle \Delta_d, p_d \rangle|}{\tau(p_d)} = \frac{\|\Delta_d\|\, \|p_d\|\, \cos(\alpha_{\Delta_d, p_d})}{\tau(p_d)}, \tag{2.11}$$
where $\tau^2(p_d)$ is as in (2.4) and $\alpha_{u,v}$ is the (smallest) angle between $u$ and $v$. In addition, the asymptotic power increases with an increasing multiplicative constant.
In the following, E1 (∆d , pd ) is fixed to the above representative of the class, so that
different projection procedures with the same rate but with different constants can be
compared.
Remark 2.1. For random projections the high dimensional efficiency is a random variable. The convergence in Definition 2.1 is understood given the projection vector $p_d$, where we get either a.s.-convergence or $P$-stochastic convergence depending on whether $\sqrt{T}\, E_1(\Delta_d, p_d)$ converges a.s. or in a $P$-stochastic sense (in the latter case the assertion follows from the subsequence principle).
Remark 2.2. In particular, all deviations from a stationary mean are detectable with asymptotic power one for weight functions $w > 0$, as $\int_0^x g(t)\, dt - x \int_0^1 g(t)\, dt \ne 0$ for non-constant $g$. It is this $g$ function which determines which weight function gives best power.
We derive the high dimensional efficiency for a given $g$ and disappearing magnitude of the change $\|\Delta_d\|$. For an epidemic change situation with $g(x) = 1_{\{\vartheta_1 < x \le \vartheta_2\}}$ for some $0 < \vartheta_1 < \vartheta_2 < 1$, this means that the duration of the change is relatively large but the magnitude relatively small with respect to the sample size. Alternatively, one could also consider the situation where the duration gets smaller asymptotically (see e.g. [Frick et al., 2014]), resulting in a different high dimensional efficiency, which is equal for both the projection as well as the multivariate or panel statistic, as long as the same weight function and the same type of statistic (sum/max) is used. Some preliminary investigations suggest that in this case using projections based on principal component analysis similar to Aston and Kirch [2012a] can be advantageous; however, this is not true for the setting discussed in this paper.
The above result shows in particular that sufficiently large changes (as defined by the high dimensional efficiency) are detected asymptotically with power one. For such changes in the at-most-one-change situation, the following corollary shows that the corresponding change point estimator is consistent in rescaled time.
Corollary 2.5. Let the assumptions of Theorem 2.4 hold and additionally $\sqrt{T}\, E_1(\Delta_d, p_d) \to \infty$ a.s. Under the alternative of one abrupt change, i.e. $g(x) = 1_{\{x > \vartheta\}}$ for some $0 < \vartheta < 1$, the estimator
$$\hat{\vartheta}_T = \frac{\arg\max_k U^2_{d,T}(k/T)}{T}$$
is consistent for the change point in rescaled time, i.e.
$$P\big( |\hat{\vartheta}_T - \vartheta| > \epsilon \,\big|\, p_d \big) \to 0 \quad \text{a.s.}$$
An analogous statement holds if the $\arg\max$ of $w^2(k/T)\, U^2_{d,T}(k/T)$ is used instead and $w^2(x) \big( (x-\vartheta)_+ - x(1-\vartheta) \big)^2$ has a unique maximum at $\vartheta$, which is the case for many standard weight functions such as $w(t) = (t(1-t))^{-\beta}$ for some $0 \le \beta < 1/2$.
In the next section we will further investigate the high dimensional efficiency and see
that the power depends essentially on the angle between Σ1/2 pd and the ’standardized’
change Σ−1/2 ∆ if Σ is invertible. In fact, the smaller the angle the larger the power.
Some interesting insight will also come from the situation where Σ is not invertible by
considering case C.2 above.
2.5 High dimensional efficiency of oracle and random projections
In this section, we will further investigate the high dimensional efficiency of certain
particular projections that can be viewed as benchmark projections. In particular, we
will see that the efficiency depends only on the angle between the projection and the
change both properly scaled with the underlying covariance structure.
The highest efficiency is obtained by o = Σ−1 ∆d as the next theorem shows, which
will be called the oracle projection. This oracle is equivalent to a projection after first
standardizing the data on the ’new’ change Σ−1/2 ∆d . The corresponding test is related
to the likelihood ratio statistic for i.i.d. normal innovations, where both the original
mean and the direction (but not magnitude) of the change are known. As a lower (worst case) benchmark we consider a scaled random projection $r_{\Sigma,d} = \Sigma^{-1/2} r_d$, where $r_d$ is a random projection on the $d$-dimensional unit sphere. This is equivalent to a random projection onto the unit sphere after standardizing the data. Both projections depend on $\Sigma$, which is usually not known so that it needs to be estimated. The latter is rather problematic, in particular in high dimensional settings without additional parametric or sparsity assumptions (see Zou et al. [2006], Bickel and Levina [2008] and Fan et al. [2013] including related discussion). Furthermore, it is actually the inverse that needs to be estimated, which results in additional numerical problems if $d$ is large. For this reason we check the robustness of the procedure with respect to not knowing or misspecifying $\Sigma$ in the second part of this section.
terms of high dimensional efficiency between the oracle and the full panel data statistic
and another d1/4 between the panel and the random projection.
2.5.1 Correctly scaled projections
The following proposition characterizes which projection yields an optimal high dimensional efficiency associated with the highest power.
Proposition 2.6. If $\Sigma$ is invertible, then
$$E_1(\Delta_d, p_d) = \|\Sigma^{-1/2} \Delta_d\|\, \cos\big(\alpha_{\Sigma^{-1/2}\Delta_d,\, \Sigma^{1/2} p_d}\big). \tag{2.12}$$
Proposition 2.6 shows in particular that, after standardizing the data, i.e. for $\Sigma = I_d$, the power depends solely on the cosine of the angle between the oracle and the projection (see Figure 2.2).
From the representation in this proposition it follows immediately that the ’oracle’
choice for the projection to maximize the high dimensional efficiency is o = Σ−1 ∆d as
it maximizes the only term which involves the projection namely cos(αΣ−1/2 ∆d ,Σ1/2 pd ).
Therefore, we define:
Definition 2.2. The projection o = Σ−1 ∆d is called oracle if Σ−1 exists. Since the
projection procedure is invariant under multiplications with non-zero constants of the
projected vector, all non-zero multiples of the oracle have the same properties, so that
they correspond to a class of projections.
By Proposition 2.6 the oracle choice leads to a high dimensional efficiency of E1 (∆d , o) =
kΣ−1/2 ∆d k.
Another way of understanding the Oracle projection is the following: If we first
standardize the data, then for a projection on a unit (w.l.o.g.) vector the variance
of the noise is constant and the signal is given by the scalar product of Σ−1/2 ∆
and the (unit) projection vector, which is obviously maximized by a projection with
Σ−1/2 ∆/kΣ−1/2 ∆k which is equivalent to using pd = Σ−1 ∆ as a projection vector
for the original non-standardized version.
So, if we know Σ and want to maximize the efficiency respectively power close to
a particular search direction sd of our interest, we should use the scaled search
direction sΣ,d = Σ−1 sd as a projection.
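The oracle projection, the scaled search direction and the resulting efficiency $E_1$ of Theorem 2.4 and Proposition 2.6 are straightforward to compute when $\Sigma$ is known; a minimal sketch follows (all names and numerical values are illustrative assumptions, and in practice neither $\Delta_d$ nor $\Sigma$ would be known exactly).

```python
import numpy as np

def efficiency(delta, p, Sigma):
    """E_1(Delta_d, p_d) = |<Delta_d, p_d>| / sqrt(p_d^T Sigma p_d), cf. (2.11)."""
    return np.abs(delta @ p) / np.sqrt(p @ Sigma @ p)

rng = np.random.default_rng(0)
d = 50
Sigma = np.eye(d) + 0.3 * np.ones((d, d))        # illustrative covariance (Case C.3 type)
delta = 0.05 * np.ones(d)                        # true change direction (unknown in practice)
search = np.ones(d); search[: d // 2] *= 2.0     # a priori search direction (illustrative)

oracle = np.linalg.solve(Sigma, delta)           # o = Sigma^{-1} Delta_d
scaled_search = np.linalg.solve(Sigma, search)   # s_{Sigma,d} = Sigma^{-1} s_d

print(efficiency(delta, oracle, Sigma))          # equals ||Sigma^{-1/2} Delta_d||
print(efficiency(delta, scaled_search, Sigma))   # at most the oracle efficiency
```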
Because the cosine falls very slowly close to zero, the efficiency will be good if the search
direction is not too far off the true change. From this, one could get the impression
[Figure 2.2: power against the angle (in radians) between the search direction and the oracle, for the Search projection, the Random projection and the H&H statistic.]
Figure 2.2: Power of tests as the angle between the search direction and the oracle increases. As can be seen, the power of the search projection method decreases similarly to the cosine of the angle, while the random projection and Horváth-Hušková tests as introduced in Section 3 are given for comparison. (Here $\Sigma_d = I_d$, $d = 200$, and $\Delta_d = 0.05\, \mathbf{1}_d$, corresponding to Case C.1.)
that even a scaled random projection $r_{\Sigma,d} = \Sigma^{-1/2} r_d$ may not do too badly, where $r_d$ is a uniform random projection on the unit sphere. This is equivalent to using a random projection on the unit sphere after standardizing the data, which also explains the different scaling as compared to the oracle or the scaled search direction, where the change $\Delta_d$ is also transformed to $\Sigma^{-1/2}\Delta_d$ by the standardization. However, since for increasing $d$ the space covered by the far away angles is also increasing, the high dimensional efficiency of the scaled random projection is not only worse than the oracle by a factor $\sqrt{d}$ but also worse by a factor $d^{1/4}$ than a full multivariate or panel statistic, which will be investigated in detail in Section 3.
The following theorem shows the high dimensional efficiency of the scaled random projection.
Theorem 2.7. Let the alternative hold, i.e. $\|\Delta_d\| \ne 0$. Let $r_d$ be a random uniform projection on the $d$-dimensional unit sphere and $r_{\Sigma,d} = \Sigma^{-1/2} r_d$. Then for all $\epsilon > 0$ there exist constants $c, C > 0$ such that
$$P\Big( c \le E_1^2(\Delta_d, r_{\Sigma,d})\, \frac{d}{\|\Sigma^{-1/2}\Delta_d\|^2} \le C \Big) > 1 - \epsilon.$$
Such a random projection on the unit sphere can be obtained as follows: let X_1, . . . , X_d be i.i.d. N(0,1); then r_d = (X_1, . . . , X_d)^T/‖(X_1, . . . , X_d)^T‖ is uniform on the d-dimensional unit sphere [Marsaglia, 1972].
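A hedged sketch of this construction and of the scaled random projection r_{Σ,d} = Σ^{-1/2} r_d (the function names are ours, not from the paper):

```python
import numpy as np

def uniform_sphere_projection(d, rng):
    """Marsaglia (1972): normalize a standard Gaussian vector to obtain a
    uniformly distributed direction on the d-dimensional unit sphere."""
    x = rng.standard_normal(d)
    return x / np.linalg.norm(x)

def scaled_random_projection(Sigma, rng):
    """r_{Sigma,d} = Sigma^{-1/2} r_d, i.e. a uniform random direction applied
    after standardizing the data (assumes Sigma is invertible)."""
    d = Sigma.shape[0]
    r = uniform_sphere_projection(d, rng)
    vals, vecs = np.linalg.eigh(Sigma)          # Sigma^{-1/2} via eigendecomposition
    return vecs @ ((vecs.T @ r) / np.sqrt(vals))

rng = np.random.default_rng(1)
r = uniform_sphere_projection(5, rng)
print(np.linalg.norm(r))                        # 1.0 up to floating point error
```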
Comparing the high dimensional efficiency of the scaled random projection with the one obtained for the oracle projection (confer Proposition 2.6), it becomes apparent that we lose an order √d. In Section 3 we will see that the panel statistic, which takes the full multivariate information into account, has a contiguous rate just between those two: it loses d^{1/4} in comparison to the oracle but gains d^{1/4} in comparison to a scaled random projection. From these results one obtains a cone around the search direction such that the projection statistic has higher power than the panel statistic if the true change falls within this cone.
Figure 2.3 shows the results of some simulations showing that a change that can be
detected for the oracle with constant power as d increases rapidly loses power for the
[Figure 2.3: power against d (log scale) for the random projection, π/4 projection, Horváth & Hušková, and oracle projection tests.]
Figure 2.3: Power of the tests as d increases with a fixed sample size (T = 100). Here ‖∆_d‖ = const. and Σ_d = I_d, i.e. ‖Σ^{-1/2}∆_d‖ = const., corresponding to Case C.1. This gives roughly constant power for fixed angle projection tests (as ‖∆_d‖ is constant), while resulting in decreasing power for both the panel statistic test and random projections, as predicted by theory.
panel statistic as predicted by its high dimensional efficiency in Section 3 as well as
for the random projection. This and the following simulations show clearly that the
concept of high dimensional efficiency is indeed capable of explaining the small sample
power of a statistic very well.
2.5.2 The oracle in the case of non-invertibility
Let us now have a look at the situation where Σ is not invertible, hence the above oracle does not exist. To this end, let us consider Case C.2 above – other non-invertible dependent situations can essentially be viewed in a very similar fashion, but become a combination of the two scenarios below.
Case C. 2 (Fully dependent Components). In this case Σ = Φ_d Φ_d^T is a rank-1 matrix and not invertible. Consequently, the oracle as in Definition 2.2 does not exist. To
understand the situation better, we have to distinguish two scenarios:
(i) If Φd is not a multiple of ∆d we can transform the data into a noise-free sequence
that only contains the signal by projecting onto a vector that is orthogonal to Φd
(cancelling the noise term) but not to ∆d . All such projections are in principle
equivalent as they yield the same signal except for a different scaling which is not
important if there is no noise present. Consequently, all such transformations
could be called oracle projections.
(ii) On the other hand if ∆d is a multiple of Φd , then any projection cancelling the
noise will also cancel the signal. Projections that are orthogonal to Φd hence
by definition also to ∆d will lead to a constant deterministic sequence hence to
a degenerate situation. All other projections lead to the same (non-degenerate)
time series except for multiplicative constants and different means (under which
the proposed change point statistics are invariant by definition) so all of them
could be called oracles.
The following interpretation also explains the above mathematical findings: In this
situation, all components are obtained from one common factor {ηt } with different
weights according to Φ_d, i.e. they move in sync with those weights. If a change is proportional to Φ_d, it could be attributed either to the noise coming from {η_t} or to a change in mean, so it will be difficult to detect: we are essentially back in a duplicated rank-one situation, and no additional information about the change can be obtained from the multivariate observations. However, if the change is not proportional to Φ_d, then it is
immediately clear (with probability one) that a change in mean must have occurred
(as the underlying time series no longer moves in sync). This can be seen to some extent
in Figure 2.4, where the different panels in the figure mimic the different scenarios as
outlined above (with a large value of φ being close to the non-invertible situation).
2.5.3 Misscaled projections with respect to the covariance structure
The analysis in the previous section requires the knowledge or a precise estimate of the inverse (structure) of Σ. However, in many situations such an estimate may not be feasible or may be too imprecise due to one or several of the reasons below, where the problems are aggravated by the necessity for inversion:
• If d is large in comparison to T statistical estimation errors can accumulate and
identification may not even be possible [Bickel and Levina, 2008].
• The theory can be generalized to time series errors but in this case the covariance
matrix has to be replaced by the long-run covariance (which is proportional to
the spectrum at 0) and is much more difficult to estimate [Aston and Kirch,
2012b, Kirch and Tadjuidje Kamgaing, 2012].
• Standard covariance estimators will be inconsistent under alternatives as they are
contaminated by the change points. Consequently, possible changes have to be
taken into account, but even in a simple at most one change situation it is unclear
how best to generalize the standard univariate approach as in (2.9) as opposed
to (2.8) to a multivariate situation as the estimation of a joint location already
requires an initial weighting for the projection (or the multivariate statistic).
Alternatively, component-wise univariate estimation of the change points could
be done but require a careful asymptotic analysis in particular in a setting with
d → ∞.
• If d is large, additional numerical errors may arise when inverting the matrix
[Higham, 2002, Ch 14].
We will now investigate the influence of misspecification or estimation errors on the high dimensional efficiency of a misscaled oracle o_M = M^{-1}∆_d in comparison to the misscaled random projection r_{M,d} = M^{-1/2} r_d, where we only require that the assumed covariance structure M is symmetric and positive definite and Model A.1 is fulfilled.
Theorem 2.8. Let the alternative hold, i.e. ‖∆_d‖ ≠ 0. Let r_d be a random projection on the d-dimensional unit sphere and r_{M,d} = M^{-1/2} r_d be the misscaled random projection. Then, for all ε > 0 there exist constants c, C > 0 such that
P( c ≤ E_1^2(∆_d, r_{M,d}) · tr(M^{-1/2} Σ M^{-1/2}) / ‖M^{-1/2}∆_d‖^2 ≤ C ) ≥ 1 − ε,
where tr denotes the trace.
[Figure 2.4: four panels showing power against φ for the Oracle, Random, Pre-Oracle, H&H Sigma and H&H Var tests; (a) angle between ∆_d and Φ = 0 radians, (b) angle = π/8 radians, (c) angle = π/4 radians, (d) angle = π/2 radians.]
Figure 2.4: Power of tests as the angle between the change and the direction of dependency increases. As can be seen, if the change lies in the direction of dependency, then all methods struggle, which is in line with the theory of Section 2.5. However, if the change is orthogonal to the dependency structure, the projection method works regardless of whether the dependency is taken into account or not. H&H Sigma and Var as in Section 3 represent the panel tests taking into account the true or estimated variances of the components. All results are empirically size corrected to account for the size issues seen in Figure 2.1. (s_j = 1, Φ_j = φ, j = 1, . . . , d with d = 200, ‖∆_d‖ = 0.05√d, corresponding to Case C.3, with φ as given on the x-axis.)
We are now ready to prove the main result of this section stating that the high dimensional efficiency of a misscaled oracle can never be worse than the corresponding
misscaled random projection.
Theorem 2.9. Let Assumption A.1 hold and denote the misscaled oracle by o_M = M^{-1}∆_d. Then
E_1^2(∆_d, o_M) ≥ ‖M^{-1/2}∆_d‖^2 / tr(M^{-1/2} Σ M^{-1/2}),
where tr denotes the trace and equality holds iff there is only one common factor which is weighted proportionally to ∆_d.
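As a sanity check of this inequality, the following hedged sketch compares the efficiency of a misscaled oracle with the lower bound ‖M^{-1/2}∆_d‖²/tr(M^{-1/2}ΣM^{-1/2}). The factor model, the misspecified matrix M and all names are illustrative assumptions, not the paper's settings; a Cholesky factor is used in place of the symmetric square root, which leaves both sides of the bound unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_factors = 50, 10
A = rng.normal(size=(d, n_factors))                 # factor loadings a_1, ..., a_n
Sigma = A @ A.T + 0.5 * np.eye(d)                   # true covariance (kept invertible)
M = np.diag(np.diag(Sigma) * rng.uniform(0.5, 1.5, d))   # misspecified diagonal covariance
delta = rng.normal(size=d)

oM = np.linalg.solve(M, delta)                      # misscaled oracle M^{-1} delta
eff_sq = (delta @ oM) ** 2 / (oM @ Sigma @ oM)      # E_1^2(delta, oM) as in (2.11)

L_inv = np.linalg.inv(np.linalg.cholesky(M))        # plays the role of M^{-1/2} here
bound = np.linalg.norm(L_inv @ delta) ** 2 / np.trace(L_inv @ Sigma @ L_inv.T)
print(eff_sq >= bound - 1e-12)                      # True, in accordance with Theorem 2.9
```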
Because it is often assumed that components are independent and it is usually feasible to estimate the variances of each component, we consider the correspondingly
misscaled oracles, which are scaled with the identity matrix (pre-oracle) respectively
with the diagonal matrix of variances (quasi-oracle). The quasi-oracle is of particular
importance as it uses the same type of misspecification as the panel statistic discussed
in Section 3 below.
Definition 2.3.
(i) The projection po = ∆_d is called pre-oracle.
(ii) The projection qo = Λ_d^{-1}∆_d = (δ_1/σ_1^2, . . . , δ_d/σ_d^2)^T, with Λ_d = diag(σ_1^2, . . . , σ_d^2), is called quasi-oracle, if σ_j^2 > 0, j = 1, . . . , d.
As with the oracle, these projections should be seen as representatives of a class of
projections.
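In code, these projection classes differ only in the scaling applied to the assumed change direction ∆_d; a minimal sketch (the function names are ours, and the example numbers are arbitrary):

```python
import numpy as np

def pre_oracle(delta):
    """Pre-oracle: project directly onto the assumed change direction."""
    return delta

def quasi_oracle(delta, variances):
    """Quasi-oracle: Lambda_d^{-1} delta with Lambda_d = diag(sigma_1^2, ..., sigma_d^2)."""
    return delta / variances

def oracle(delta, Sigma):
    """Correctly scaled oracle Sigma^{-1} delta (requires Sigma to be invertible)."""
    return np.linalg.solve(Sigma, delta)

sigma2 = np.array([1.0, 2.0, 0.5])
delta = np.array([0.3, -0.1, 0.2])
print(pre_oracle(delta), quasi_oracle(delta, sigma2))
```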
The following proposition shows that in the important special case of uncorrelated components, the (quasi-)oracle and pre-oracle have efficiencies of the same order if the variances in all components are bounded and bounded away from zero. The latter assumption is also needed for the panel statistic below and means that all components are on similar scales. In addition, the efficiency of the quasi-oracle is, even in the misspecified situation, always better than that of an unscaled random projection.
Proposition 2.10. Assume that all variances are on the same scale, i.e. there exist c, C such that 0 < c ≤ σ_i^2 < C < ∞ for i = 1, . . . , d.
a) Let Σ = diag(σ_1^2, . . . , σ_d^2). Then
(c^2/C^2) E_1^2(∆_d, qo) ≤ E_1^2(∆_d, po) ≤ E_1^2(∆_d, qo) = ‖Σ^{-1/2}∆_d‖^2.
b) Under Assumption A.1, it holds
E_1^2(∆_d, qo) ≥ (c^2/C^2) · ‖∆_d‖^2 / tr(Σ).
Figures 2.4 and 2.5 show the results of some small sample simulations of case C.3
confirming again that the theoretical findings based on the high dimensional efficiency
are indeed able to predict the small sample power for the test statistics. Essentially,
the following assertions are confirmed:
1) The power of the pre- and quasi-oracle is always better than the power of the
misscaled random projection (the random projection assumes an identity covariance
structure).
2) The power of the (correctly scaled) oracle can become as bad as the power of the
(misscaled) random projection but only if Φd ∼ ∆d . In this case the power of
the misscaled panel statistic (i.e. where the statistic but not the critical values are
constructed under the wrong assumption of independence between components) is
equally bad.
3) While the power of the (misscaled) panel statistic becomes as bad as the power
of the (misscaled) random projection for φ → ∞ irrespective of the angle between
∆d and Φd , it can be significantly better for the pre- and quasi-oracle. In fact, we
will see in Section 3 that the high dimensional efficiency of the misspecified panel
statistic will be of the same order as a random projection for any choice Φd with
ΦTd Φd ∼ d, irrespective of the direction of any change that might be present.
We will now have a closer look at the three standard examples in order to better understand the behavior in the simulations (Case C.1 is included in the simulations for Φ = 0, while C.2 is the limiting case for Φ → ∞).
Case C. 1 (Independent components). If the components are uncorrelated, each with
variance σi2 , i.e. Σ1 = diag(σ12 , . . . , σd2 ), we get
tr(Σ_1) = Σ_{j=1}^d σ_j^2,
which is of order d if 0 < c ≤ σ_j^2 ≤ C < ∞. Proposition 2.10, Theorem 2.7 and Theorem 2.8 show that in this situation the high dimensional efficiencies of both the pre- and (quasi-)oracle are of an order √d better than those of the correctly scaled and unscaled random projections.
The second case shows that the high dimensional efficiency of misscaled oracles can indeed become as bad as that of a random projection, and it helps in the understanding of the mixed case:
Case C. 2 (Fully dependent components). As already noted in Section 2.5.2, we have to distinguish two cases:
(i) If ∆d is not a multiple of Φd , then the power depends on the angle of the
projection with Φd with maximal power for an orthogonal projection. So the
goodness of the oracles depends on their angle with the vector Φd .
(ii) If ∆d is a multiple of Φd , the pre- and quasi-oracle are not orthogonal to the
change, hence they share the same high dimensional efficiency with any scaled
random projection as all random projections are not orthogonal to Φd with probability 1.
We can now turn to the mixed case that is also used in the simulations.
Case C. 3 (Mixed case). Let a_j = (0, . . . , s_j, . . . , 0)^T be the vector which is s_j > 0 at position j and zero everywhere else, let a_{d+1} = Φ_d = (Φ_1, . . . , Φ_d)^T, and a_j = 0 for j ≥ d + 2. Then Σ_3 = diag(s_1^2, . . . , s_d^2) + Φ_d Φ_d^T and
tr(Σ_3) = Σ_{j=1}^d s_j^2 + Σ_{j=1}^d Φ_j^2.    (2.13)
The high dimensional efficiency of the pre-oracle can become as bad as for the random
projection if the change ∆d is a multiple of the common factor Φd and there is a
substantial common effect. This is similar to Case C.2 (which can be seen as a limiting
case for increasing ‖Φ_d‖). Intuitively, the problem is the following: by projecting onto the change, we want to maximize the signal, i.e. the change in the projected sequence, while minimizing the noise. In this situation, however, the common factor dominates the noise in the projection as it essentially adds up in a linear manner, while the uncorrelated components add up only in the order of √d (CLT). Now, projecting onto ∆_d = Φ_d maximizes not only the signal but also the noise, which is why we cannot gain anything (but this also holds true for competing procedures as in Section 3 below).
More precisely, in C.3 it holds τ^2(po) = Σ_{j=1}^d s_j^2 δ_j^2 + (Σ_{j=1}^d δ_j Φ_j)^2. If additionally ∆_d = kΦ_d for some k > 0, we get the following high dimensional efficiency for the pre-oracle by (2.11):
E_1(∆_d, po) = ‖∆_d‖ / ( Σ_{i=1}^d s_i^2 (δ_i/‖∆_d‖)^2 + ‖Φ_d‖^2 )^{1/2}.
The high dimensional efficiency for the unscaled random projection is given by (confer Theorem 2.8 and (2.13))
‖∆_d‖ / ( Σ_{j=1}^d s_j^2 + ‖Φ_d‖^2 )^{1/2}.
As soon as s_j, Φ_j are of the same order, i.e. 0 < c ≤ s_j, Φ_j ≤ C < ∞ for all j, the pre-oracle behaves as badly as the unscaled random projection. The same holds for the quasi-oracle under the same assumptions. Interestingly, however, in this particular situation even the oracle is of the same order as the random projection if the s_j are of the same order, i.e. 0 < c ≤ s_j < C < ∞. More precisely we get (for a proof we refer to Section 6)
E_1(∆_d, o) = ( ‖∆_d‖ / (Σ_{j=1}^d δ_j^2)^{1/2} ) · ( (Σ_{j=1}^d δ_j^2/s_j^2) / (1 + Σ_{j=1}^d Φ_j^2/s_j^2) )^{1/2}.    (2.14)
Figure 2.4 shows simulations which confirm the underlying theory in finite samples.
On the other hand, if ∆_d is orthogonal to Φ_d, then the noise from Φ_d cancels for the pre-oracle projection and we get the rate
E_1(∆_d, po) = ‖∆_d‖ / ( Σ_{i=1}^d s_i^2 (δ_i/‖∆_d‖)^2 )^{1/2},
which is of the order ‖∆_d‖ if the s_j are all of the same order. Anything between those two cases is possible and depends on the angle between ∆_d and Φ_d (again see Figures 2.4 and 2.5 for finite sample simulations).
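The following hedged numerical sketch checks the two pre-oracle efficiency expressions above in Case C.3, once for a change parallel to Φ_d and once for an orthogonal one. The parameter values and names are illustrative, not those of the simulations.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 200
s = 0.5 + np.arange(1, d + 1) / d                 # idiosyncratic scales (illustrative)
Phi = np.full(d, 0.5)                             # common-factor loadings
Sigma3 = np.diag(s ** 2) + np.outer(Phi, Phi)     # Case C.3 covariance

def eff(delta, p, Sigma):
    return abs(delta @ p) / np.sqrt(p @ Sigma @ p)

# change proportional to the common factor: pre-oracle gains nothing
delta_par = 0.1 * Phi
e_par = eff(delta_par, delta_par, Sigma3)
formula_par = np.linalg.norm(delta_par) / np.sqrt(
    np.sum(s ** 2 * (delta_par / np.linalg.norm(delta_par)) ** 2)
    + np.linalg.norm(Phi) ** 2)
print(np.isclose(e_par, formula_par))             # True: matches the display above

# change orthogonal to the common factor: the factor noise cancels
delta_orth = rng.normal(size=d)
delta_orth -= (delta_orth @ Phi) / (Phi @ Phi) * Phi
print(eff(delta_orth, delta_orth, Sigma3))        # of the order ||delta_orth||
```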
The following interpretation also explains the above mathematical findings: in situation C.3, each component has a common factor {η_t} weighted according to Φ_d plus some independent noise. If a change occurs in sync with the common factor, it will be difficult to detect: in order to get the correct size, we have to allow for the random movements of {η_t}, thus increasing the critical values in that direction. In directions
orthogonal to it, we only have to take the independent noise into account which yields
comparably smaller noise in the projection. In an economic setting, this driving factor
could for example be thought of as an economic factor behind certain companies (e.g.
ones in the same industry). If a change occurs in those companies proportional to this
driving factor it will be difficult to distinguish a different economic state of this driving
factor from a mean change that is proportional to the influence of this factor.
3 High dimensional efficiency of panel change point test
In this section, we will compare the power of the above projection tests with corresponding CUSUM tests that take the full multivariate information into account. The first statistics of this type were developed for the multivariate setting with d fixed [Horváth et al., 1999]. The multivariate change point statistic (using the full multivariate information and no additional knowledge about the change) for the at most one mean change is given as a weighted maximum or sum of the following quadratic form
V_d^M(x) = Z_T(x)^T A Z_T(x),    (3.1)
[Figure 2.5: three panels showing power against the size of the change for the Oracle, Pre-Oracle, Quasi-Oracle, Random, H&H Sigma and H&H Var tests; (a) no dependency, φ = 0, (b) φ = 0.5, (c) φ = 1.]
Figure 2.5: Power of tests as the dependency increases. The covariance structure becomes closer to degenerate across the three graphs, but in all cases the pre-oracle and quasi-oracle still outperform random projections, although they become closer as the degeneracy increases. Here different variances are used across components, namely s_i = 0.5 + i/d, Φ_i = φ_i, i = 1, . . . , d, d = 200, angle(Φ, ∆_d) = π/4, corresponding to Case C.3, and the size of the change is given on the x-axis (multiplied by √d).
where ZT (x) = (ZT,1 (x), . . . , ZT,d (x))T is defined as in (2.3). The usual choice is
A = Σ−1 , where Σ is the covariance matrix of the multivariate observations. The
weighting with Σ−1 has the advantages that it (a) leads to a pivotal limit and (b)
the statistic can detect all changes no matter what the direction. The second remains
true for any positive definite matrix A, the first also remains true for lower rank
matrices with a decorrelation property of the errors, where this latter approach is
essentially a projection (into a lower-dimensional space) as discussed in the previous
sections. For an extensive discussion of this issue for the example of changes in the
autoregressive structure of time series we refer to Kirch et al. [2015]. The choice
A = Σ−1 corresponds to the correctly scaled case, while the misscaled case corresponds
to the choice A = M −1 .
However, this multivariate setup is not very suitable for the theoretic power comparison we are interested in because the limit distribution (a sum of d squared Brownian
bridges with covariance matrix Σ1/2 AΣ1/2 ) still depends on d as well as the possible
misspecification. Therefore, a comparison needs to take the rates, the additive term and the noise level (which also depends on the misspecification of the covariance) present in the limit distribution into account. For the panel data settings on the other
hand, where d → ∞, all the information about d is contained only in the rates rather
than the limit distribution, as in the previous sections. This makes the results interpretable in terms of the high dimensional efficiency. The panel null limit distribution differs from the one obtained for the projections, but both are at least on the same scale and depend neither on d nor on the covariance structure Σ. Furthermore, the panel statistic is strongly related to the multivariate statistic, so that the same qualitative statements can be expected, which is confirmed by simulations (results not shown).
We will now introduce the statistic for detecting changes in the mean proposed by Horváth and Hušková [2012], termed the HH Panel Statistic in this paper. Unlike in
the above theory for projections, it is necessary to assume independence between components. Because the proofs are based on a central limit theorem across components,
they cannot be generalized to uncorrelated (but dependent) data. For this reason, we
cannot easily derive the asymptotic theory after standardization of the data. This is
different from the multivariate situation, where this can easily be achieved.
We are interested in a comparison of the high dimensional efficiency for correctly
specified covariance, i.e. A = Σ−1 , in addition to a comparison in the misspecified case,
A = M −1 . The latter has already been discussed by Horváth and Hušková [2012] to
some extent. To be precise, a common factor is introduced as in C.3 and the limit of the
statistic (with A = Λ−1 ) under the assumption that the components are independent
(i.e. Λ being a diagonal matrix) is considered. Because of the necessity to estimate
the unknown covariance structure for practical purposes, the same qualitative effects
as discussed here can be expected if a statistic and corresponding limit distribution
were available for the covariance matrix Σ.
3.1 Efficiency for panel change point tests for independent panels
The above multivariate statistics have been adapted to the panel data setup under the
assumption of independent components by Bai [2010] for estimation as well as Horváth
and Hušková [2012] for testing. Those statistics are obtained as weighted maxima or
sums of the following (univariate) partial sum process
V_{d,T}(x) = (1/√d) Σ_{i=1}^d ( Z_{T,i}^2(x)/σ_i^2 − ⌊Tx⌋(T − ⌊Tx⌋)/T^2 ),    (3.2)
where Z_{T,i} is as in (2.3) and σ_i^2 = var e_{i,1}.
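A minimal sketch of the partial sum process (3.2) on the grid x = k/T follows. Since (2.3) is not reproduced here, we assume Z_{T,i}(x) is the standard scaled CUSUM (1/√T)(Σ_{t≤⌊Tx⌋} X_{i,t} − (⌊Tx⌋/T) Σ_{t≤T} X_{i,t}); that form, the function names and the simulated data are our assumptions for illustration only.

```python
import numpy as np

def cusum(x_i):
    """Assumed form of Z_{T,i}(k/T), k = 1..T: scaled CUSUM of one component."""
    T = len(x_i)
    csum = np.cumsum(x_i)
    k = np.arange(1, T + 1)
    return (csum - k / T * csum[-1]) / np.sqrt(T)

def panel_statistic(X, variances):
    """V_{d,T}(k/T), k = 1..T, as in (3.2); X has shape (d, T)."""
    d, T = X.shape
    k = np.arange(1, T + 1)
    centering = k * (T - k) / T ** 2
    Z2 = np.array([cusum(row) ** 2 for row in X])      # shape (d, T)
    return (Z2 / variances[:, None] - centering).sum(axis=0) / np.sqrt(d)

rng = np.random.default_rng(4)
d, T = 100, 200
X = rng.standard_normal((d, T))
V = panel_statistic(X, np.ones(d))
print(V.max())     # to be compared with critical values from the limit in Theorem 3.1
```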
The following theorem gives a central limit theorem for this partial sum process (under
the null) from which null asymptotics of the corresponding statistics can be derived.
It was proven by Horváth and Hušková [2012, Theorem 1], under somewhat more
general assumptions allowing in particular for time series errors (in the form of linear processes). While this makes estimation of the covariances more difficult and less
precise as long-run covariances need to be estimated, it has no effect on the high dimensional efficiency. Therefore, we will concentrate on the i.i.d. (across time) situation
in this work to keep things simpler purely in terms of the calculations.
Theorem 3.1. Let Model (1.1) hold with {e_{i,t} : i, t} independent (where the important assumption is the independence across components) such that var e_{i,t} ≥ c > 0 for all i and lim sup_{d→∞} (1/d) Σ_{i=1}^d E|e_{i,t}|^ν < ∞ for some ν > 4. Furthermore, let d/T^2 → 0. Then, it holds under the null hypothesis of no change
V_{d,T}(x) →_{D[0,1]} √2 (1 − x)^2 W( x^2/(1 − x)^2 ),
where W (·) is a standard Wiener process.
The following theorem derives the high dimensional efficiency in this setting for HH Panel statistics such as max_{0≤x≤1} V_{d,T}(x), which we use in simulations with both known and estimated standard deviations, or ∫_0^1 V_{d,T}(x) dx.
Theorem 3.2. Let the assumptions of Theorem 3.1 on the errors be fulfilled, which implies in particular that Σ = diag(σ_1^2, . . . , σ_d^2). Then, the high dimensional efficiency of HH Panel statistic tests is given by
E_2(∆_d) = (1/d^{1/4}) ‖Σ^{-1/2}∆_d‖.
Equivalent assertions to Corollary 2.5 can be obtained analogously.
Comparing this high dimensional efficiency with the ones given in Theorem 2.4, Proposition 2.6 as well as Theorem 2.7, we note that the high dimensional efficiency of the above HH Panel Statistic is an order d^{1/4} worse than for the oracle but an order d^{1/4} better than for the scaled random projection (also see Figure 2.3). By Theorem 2.4 we also get an indication of how wrong our assumption on ∆_d can be while still obtaining a better efficiency than with the full multivariate information. More precisely, the theorems define a cone around the search direction such that the projection statistic has higher efficiency than the panel statistic if the true change direction is in this cone. We can see the finite sample nature of this phenomenon in Figure 2.2.
3.2 Efficiency of panel change point tests under dependence between components
We now turn again to the misspecified situation, where we use the above statistic in
a situation where components are not uncorrelated. Following Horváth and Hušková
[2012], we consider the mixed case C.3 for illustration. The next lemma derives the null limit distribution for that special case. It turns out that the limit as well as the convergence rates depend on the strength of the contamination by the common factor.
Lemma 3.3. Let Case C.3 hold with ν > 4, 0 < c ≤ s_i ≤ C < ∞ and Φ_i^2 ≤ C < ∞ for all i and some constants c, C, and consider V_{d,T}(x) defined as in (3.2), where σ_i^2 = var e_{i,1} but the rest of the dependency structure is not taken into account. The asymptotic behavior of V_{d,T}(x) then depends on the behavior of
A_d := Σ_{i=1}^d Φ_i^2/σ_i^2.
a) If A_d/√d → 0, then the dependency is negligible, i.e.
V_{d,T}(x) →_{D[0,1]} √2 (1 − x)^2 W( x^2/(1 − x)^2 ),
where W(·) is a standard Wiener process.
b) If A_d/√d → ξ, 0 < ξ < ∞, then
V_{d,T}(x) →_{D[0,1]} √2 (1 − x)^2 W( x^2/(1 − x)^2 ) + ξ (B^2(x) − x(1 − x)),
where W(·) is a standard Wiener process and B(·) is a standard Brownian bridge.
c) If A_d/√d → ∞, then
(√d/A_d) V_{d,T}(x) →_{D[0,1]} B^2(x) − x(1 − x),
where {B(x) : 0 ≤ x ≤ 1} is a standard Brownian bridge.
Because A_d in the above lemma cannot feasibly be estimated, this result cannot be
used to derive critical values for panel test statistics. Consequently, the exact shape
of the limit distribution in the above lemma is not important. However, the lemma
is necessary to derive the high dimensional efficiency of the panel statistics in this
misspecified case. Furthermore, it indicates that using the limit distribution from the
previous section to derive critical values will result in asymptotically wrong sizes if a
stronger contamination by a common factor is present. The simulations in Figure 2.1
also confirm this fact and show that the size distortion can be enormous. It does not
matter whether the variance of the components in the panel statistic takes into account
the dependency or simply uses the noise variance (Figure 2.1(a)), or whether a change
is accounted for or not in the estimation (Figure 2.1(b)-(c)). This illustrates that
the full panel statistic is very sensitive with respect to deviations from the assumed
underlying covariance structure in terms of size.
In the situations of a) and b) above, the dependency structure introduced by the common factor is still asymptotically small enough not to change the high dimensional efficiency as given in Theorem 3.2; the proof is analogous to that of Theorem 3.2. Therefore, we will now concentrate on situation c) in the theorem below, which is the case where the noise coming from the common factor does not disappear asymptotically.
Theorem 3.4. Let the assumptions of Lemma 3.3 on the errors be fulfilled and A_d/√d → ∞. Then the corresponding panel statistics have high dimensional efficiency
E_3(∆_d) = (1/√A_d) · ( ∆_d^T diag( 1/(s_1^2 + Φ_1^2), . . . , 1/(s_d^2 + Φ_d^2) ) ∆_d )^{1/2}.
The next corollary shows that the efficiency of the quasi-oracle (which is scaled with diag( 1/(s_1^2+Φ_1^2), . . . , 1/(s_d^2+Φ_d^2) ) analogously to the panel statistic) is always at least as good as the efficiency of the panel statistic. Additionally, the efficiency of the panel statistic becomes as bad as the efficiency of the corresponding (diagonally) scaled random projection if A_d/d → A > 0, which is typically the case if the dependency is non-sparse and non-negligible.
Corollary 3.5. Let the assumptions of Lemma 3.3 on the errors be fulfilled. Then the following assertions hold:
a) The high dimensional efficiency of the quasi-oracle is always at least as good as the one of the misspecified panel statistic, i.e. with Σ = diag(σ_1^2, . . . , σ_d^2) + ΦΦ^T and Λ_d = diag(σ_1^2, . . . , σ_d^2), it holds
E_1^2(∆_d, qo) ≥ ∆_d^T Λ_d^{-1} ∆_d / (1 + A_d),
where equality holds iff ∆_d ∼ Φ.
b) If A_d/d → A > 0, then the high dimensional efficiency of the panel statistic is as bad as that of a randomly scaled projection, i.e.
E_3^2(∆_d) = ( ∆_d^T Λ_d^{-1} ∆_d / A_d ) (1 + o(1)).
In particular, for A_d/d → A > 0 the efficiency of the misscaled panel statistic is always as bad as the efficiency of the random projection, whereas this only holds for the misscaled (quasi-)oracle projection if ∆_d ∼ Φ. This effect can be clearly seen in Figures 2.4 and 2.5, where in all cases H&H Sigma refers to the panel statistic using known variances and H&H Var uses estimated variances, showing again that this concept of efficiency is very well suited to understanding the small sample power behavior of the corresponding statistics.
[Figure 4.1: six panels of Fuller log returns for the FTSE, NASDAQ, DAX, NIKKEI, Hang Seng and CAC indices (left column: Jan–Nov 2015; right column: all of 2015), with panel titles 'Multivariate Binary Segmentation using Diagonal Covariance', 'Multivariate Binary Segmentation using Full Covariance' and 'Projection Binary Segmentation'.]
Figure 4.1: Estimated change point locations for the market indices from binary segmentation based on different test statistics and different spans of data.
First Column: Data from Jan-Nov 2015, Second Column: Data from all
of 2015. First Row: Multivariate statistic with full covariance estimation; Second Row: Multivariate Statistic with Diagonal Variance Estimate;
Third Row: Projection Statistic in direction [1,1,1,1,1,1]. Red vertical lines
indicate changes deemed to be significant at 5% level.
4 Data Example
As an illustrative example showing that the small sample behaviour of the statistics illustrated above also applies to real data, we examine the stability of change points detected by different methods in several world stock market indices. More specifically, the Fuller Log Squared Returns [Fuller, 1996, p 496] of the FTSE, NASDAQ, DAX, NIKKEI, Hang Seng and CAC 1 indices for the year 2015 were examined for change points. Tests based on the multivariate statistics using full covariance estimates, a multivariate statistic using only variance estimates (i.e., a diagonal covariance structure), a projection statistic in the average direction [1,1,1,1,1,1], and a projection statistic in the direction of European countries vs non-European countries [1,-1,1,-1,-1,1] (orthogonal to the average direction) were carried out. Given the considerable dependence between the different components, we would expect economies to rise and fall together, justifying the use of the former projection direction. However, we think it unlikely that there will be changes of the kind that European markets go up while non-European markets go down, and vice versa, so we take this projection as an example of a direction where no change is likely. It should be noted at this point that the multivariate statistic treats both of these alternatives as equally likely. As there are possibly multiple change points in this data, we examine stability by performing binary segmentation using the proposed tests, firstly on data from January to November 2015, and then subsequently adding in the data from December 2015.
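A hedged sketch of the kind of projection-based binary segmentation used here follows. The projected CUSUM and the recursive splitting are the standard constructions; the threshold, the variance estimator, the minimal segment length and all names are placeholders rather than the exact choices of the data example (in particular, no Fuller transform is applied).

```python
import numpy as np

def projected_cusum(Y, p):
    """Max of the absolute CUSUM of the series projected onto p, and its argmax."""
    y = Y @ p                                    # Y has shape (T, d)
    T = len(y)
    k = np.arange(1, T)
    stat = np.abs(np.cumsum(y)[:-1] - k / T * y.sum()) / (np.std(y, ddof=1) * np.sqrt(T))
    return stat.max(), int(stat.argmax()) + 1

def binary_segmentation(Y, p, threshold, lo=0, found=None):
    """Recursively split the sample at significant projected change points."""
    if found is None:
        found = []
    if Y.shape[0] < 10:                          # minimal segment length (arbitrary)
        return found
    stat, k = projected_cusum(Y, p)
    if stat > threshold:
        found.append(lo + k)
        binary_segmentation(Y[:k], p, threshold, lo, found)
        binary_segmentation(Y[k:], p, threshold, lo + k, found)
    return sorted(found)

rng = np.random.default_rng(5)
T, d = 250, 6
Y = rng.standard_normal((T, d))
Y[120:] += 0.8                                   # common upward shift in all series
print(binary_segmentation(Y, np.ones(d), threshold=1.36))  # 1.36 ~ 5% sup-Brownian-bridge value
```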
As can be seen in Figure 4.1, the multivariate test statistic is considerably less robust than the average projection based statistic, both to the length of the data and to the choice of the covariance estimate. The major cause of this instability was that the CUSUM statistic over time had two peaks, but the location of the maximal peak differed from one data span to the other when further data was added. This caused knock-on
1 We only use a small number of series to allow reliable estimates for the covariance to be used in the full multivariate statistic.
Table 4.1: Location, statistic and p-value for the changes found in the 2015 market index data. (Limit distributions: Multivariate: sum of six independent Brownian bridges; Projection: single Brownian bridge.)

Multivariate: Full Covariance
  Day              143       169
  Statistic Value  6.8541    5.1581
  p-value          0.0012    0.0173

Multivariate: Diagonal Covariance
  Day              144       169
  Statistic Value  9.9995    11.7030
  p-value          < 10^-5   < 10^-5

Projection: scaled [1 1 1 1 1 1]
  Day              12        110       121       144       169
  Statistic Value  2.1307    3.5390    2.9518    3.3173    2.0900
  p-value          0.0285    0.0017    0.0057    0.0027    0.0307
effects in the entire binary segmentation. Here, in all cases, independence in time was assumed as, once the changes were accounted for, there was little evidence of temporal dependence in the data. However, even if time series dependence is accounted for by using an estimate of the long run covariance in place of the independent covariance estimate, there is no difference in the qualitative conclusions (although the change points themselves varied considerably in all cases depending on the parameters chosen in the long run covariance estimation procedure [Politis, 2005]). In addition,
the projection estimate was robust to whether the direction was scaled by the full
covariance, the diagonal of the covariance or not scaled at all, as well as to increasing
the length of the data.
The p-values for the changes on the full year's data are given in Table 4.1. While it can be seen that the projection p-values are larger for the two common change points than in the multivariate case, the same changes are detected with all methods. However, additional changes are found with the projection method, and their p-values are well below the 5% level. This shows that having knowledge of the likely direction of change can allow further changes to be found beyond those in an unrestricted multivariate search. As expected though, using an unlikely direction does not find change points, with the hypothesis that there are no changes which affect European markets differently from non-European markets being accepted (p = 0.18).
5 Conclusions
The primary aim of this paper was to introduce a theoretic method to compare the small sample behavior of different high dimensional tests by asymptotic methods. Furthermore, projection based statistics were introduced into the analysis of change points in high dimensions and compared and contrasted with the panel based statistics that are currently available. The new concept of high dimensional efficiency allows
that are currently available. The new concept of high dimensional efficiency allows
a comparison of the magnitude of changes that can be detected asymptotically as
the number of dimensions increases. All the tests in the paper are benchmarked
against random projections. Because the space covered by far away angles increases
rapidly with the dimension, the power of these becomes very poor in higher dimensions
rendering random projections useless in practice for detecting change points.
In summary, the following two assertions were proven: First, a suitable projection will
substantially increase the power of detection but at the cost of a loss in power if the
change is at a large angle away from the projection vector. Second, projections are
more robust than the panel based statistic with respect to misspecification of the covariance structure, both in terms of size and power.
The panel statistic test [Bai, 2010, Horváth and Hušková, 2012] works well in situations where the panels are independent across dimensions and there is little to no information about the direction of the change. However, as soon as dependency is present, the size properties of these statistics become difficult to control and their high dimensional efficiencies
mimic those of random projections. Misspecification of the covariance structure can
be problematic for all tests. However, if the direction of the likely change is known, then
it is always preferable to use the corresponding projection (scaled with the assumed
covariance structure), rather than either the panel statistic or a random projection,
regardless of whether the covariance is misspecified or not.
The results in this paper raise many questions for future work. It would be of considerable interest to determine whether projections can be derived using data driven
techniques, such as sparse PCA, for example, and whether such projections would be
better than random projections. Preliminary work suggests that this may be so in
some situations but not others, and a nice procedure by Wang and Samworth [2016]
investigates a related technique. Furthermore, many multiple change point procedures use binary segmentation or related methods to find the multiple change points, so much of the work here would apply equally in suitably defined sub-intervals which are then
assumed to contain at most one change. This was the approach taken in the data
example here. In addition, all the results here have been assessed with respect to
choosing a single projection for the test which is optimal if the direction of the change
is known. However, in some situations only qualitative information is known or several
change scenarios are of interest. Then, it could be very beneficial to determine how
best to combine this information into testing procedures based on several projections,
where a standard subspace approach may not be ideal as the information about the
likely direction of changes is lost. Finally, while the framework in this paper concentrates on tests with a given size, as soon as a-priori information is considered, then it is
natural to ask whether related Bayesian approaches are of use, and indeed quantifying
not only the a-priori direction of change, but also its uncertainty, prior to conducting
the test is a natural line of further research.
6 Proofs
Proof of Theorem 2.1. We need to prove the following functional central limit theorem for the triangular array of projected random variables Y_{t,d} = Σ_{j=1}^d p_j(d) e_{j,t}(d), given the (possibly random) projection p_d = (p_1(d), . . . , p_d(d))^T:
{ (T τ^2(p_d))^{-1/2} Σ_{t=1}^{⌊Tx⌋} Y_{t,d} : 0 ≤ x ≤ 1 } | p_d  →_{D[0,1]}  {W(x) : 0 ≤ x ≤ 1}  a.s.,    (6.1)
where {W(·)} denotes a standard Wiener process.
The proof for tightness is analogous to the one given in Theorem 16.1 of Billingsley
[1968] as it only depends on the independence across time (which also holds conditionally given pd due to the independence of pd and {et (d)}). Similarly, the proof for the
convergence of the finite dimensional distributions follows the proof of Theorem 10.1
in Billingsley [1968], where we need to use the Lindeberg-Levy-version of the univariate central limit theorem for triangular arrays. More precisely, we need to prove the
Lindeberg condition given by
E( (Y_{1,d}^2/τ^2(p_d)) · 1{ |Y_{1,d}|/τ(p_d) > ε√T } | p_d ) → 0  a.s.
for any ε > 0. The following Lyapunov-type condition implies the above Lindeberg condition:
E( |Y_{1,d}/τ(p_d)|^ν | p_d ) = E( |p_d^T e_1(d)/τ(p_d)|^ν | p_d ) = o(T^{ν/2−1})  a.s.,    (6.2)
where ν > 2 as given in the theorem. Let
p̃_d = p_d / ( p_d^T cov(e_1(d)) p_d )^{1/2};
then the above Lyapunov condition is equivalent to
E( |p̃_d^T e_1(d)|^ν | p_d ) = o(T^{ν/2−1})  a.s.
In the situation of a), cov(e_1(d)) = Σ_{j≥1} a_j(d) a_j(d)^T and we get by the Rosenthal inequality (confer e.g. Lin and Bai [2010, 9.7c])
E( | Σ_{j=m}^n p̃_d^T a_j(d) η_{j,1}(d) |^ν | p_d ) ≤ O(1) Σ_{j=m}^n |p̃_d^T a_j(d)|^ν E|η_{j,1}(d)|^ν + O(1) ( Σ_{j=m}^n |p̃_d^T a_j(d)|^2 var η_{j,1}(d) )^{ν/2},
where the right-hand side is bounded for any m, n with a bound that does not depend on T or d and converges to zero for m, n → ∞, as E|η_j(d)|^ν ≤ C hence var η_j(d) ≤ 1 + C, and by definition of p̃_d it holds Σ_{j=m}^n |p̃_d^T a_j(d)|^2 ≤ p̃_d^T cov(e_1(d)) p̃_d ≤ 1, hence also |p̃_d^T a_j(d)|^ν ≤ |p̃_d^T a_j(d)|^2 and Σ_{j=m}^n |p̃_d^T a_j(d)|^ν ≤ 1.
Consequently, the infinite series exists in an L^ν sense with the following uniform (in T and d) moment bound:
E( |p̃_d^T e_1(d)|^ν | p_d ) = O(1) = o(T^{ν/2−1})  a.s.    (6.3)
To prove the Lyapunov condition under the assumptions of b) we use the Jensen inequality, which yields
E( |p̃_d^T e_1(d)|^ν | p_d ) = ‖p̃_d‖_1^ν E( ( Σ_{i=1}^d (|p̃_{i,d}|/‖p̃_d‖_1) |e_{i,1}(d)| )^ν | p_d )
≤ ‖p̃_d‖_1^ν Σ_{i=1}^d (|p̃_{i,d}|/‖p̃_d‖_1) E|e_{i,1}(d)|^ν ≤ C ( ‖p_d‖_1 / ( p_d^T cov(e_1(d)) p_d )^{1/2} )^ν = o(T^{ν/2−1})  a.s.    (6.4)
Proof of Lemma 2.2. With the notation of the proof of Theorem 2.1 both estimators (as functions of pd ) fulfill (j = 1, 2)
2
τbj,d,T
(pd )
2
= τbj,d,T
(p̃d ).
τ 2 (pd )
First by the independence across time we get by the van Bahr-Esseen inequality (confer
e.g. Lin and Bai [2010, 9.3 and 9.4]) for some constant C > 0, which may differ from
line to line,
b
X
Epd
ν/2
p̃Td ej (d)
2
−1
6 C(b − a)max(1,ν/4) Epd
2
p̃Td e1 (d) − 1
ν/2
j=a+1
ν
6 C (b − a)max(1,ν/4) max 1, Epd p̃Td e1 (d)
a.s.,
in a),
C(b − a)max(1,ν/4)
ν
6
kp
k
1
d
C(b − a)max(1,ν/4) max 1, √ T
, in b),
(6.5)
pd cov e1 (d)pd
by (6.3) resp. (6.4), where Epd denotes the conditional expectation given pd . An
application of the Markov-inequality now yields for any > 0
T
X
1
2
P
p̃Td ej (d) − 1 > pd
T j=1
(
C
T −ν/2+max(1,ν/4)
a.s.,
in a),
6 ν/2
C
−ν/2+max(1,ν/4)
ν/2−ν/ min(ν,4)
T
o(T
)
a.s., in b),
ν/2
→0
a.s.
Similar arguments yield
T
X
1
P
p̃T ej (d) > pd → 0
T j=1 d
a.s.
2
proving a) and b) for τb1,d,T
(pd ).
From (6.5) it follows by Theorem B.1 resp. B.4 in Kirch [2006]
Epd max
16k6T
6
k
X
ν/2
2
p̃Td ej (d) − 1
j=1
(4−ν)+ ν
CT max(1,ν/4) (log T ) 2(4−ν)
CT max(1,ν/4) (log T )
→0
(4−ν)+ ν
2(4−ν)
a.s.,
max 1, √
in a),
kpd k1
pT
d cov e1 (d)pd
ν
, in b),
a.s.
An application of the Markov inequality now yields for any > 0
k
X
1
2
P max
( p̃Td ej (d) − 1 > pd → 0
a.s.
16k6T T
j=1
By the independence across time it holds
T
−k
X
TX
2
2
L
p̃Td ej (d) − 1 : 1 6 k 6 T =
p̃Td ej (d) − 1 : 1 6 k 6 T ,
j=1
j=k+1
which implies
1
P max
16k6T T
T
X
2
p̃Td ej (d) − 1 > pd → 0
j=k+1
a.s.
Pk
T
Similar assertions can be obtained along the same lines for max16k6T T1
j=1 p̃d ej (d)
PT
2
T
b2,d,T
(pd ).
as well as max16k6T T1
j=k+1 p̃d ej (d) , which imply the assertion for τ
Proof of Corollary 2.3. By an application of the continuous mapping theorem
and Theorem 2.1 we get the assertions for the truncated maxima resp. the sums over
[τ T, (1−τ )T ] for any τ > 0 towards equivalently truncated limit distributions. Because
we assume independence across time (with existing second moments) the Hájek-Rényi
inequality yields for all > 0
!
k
X
T
P
p̃d et (d) > pd → 0 a.s.
max w(k/T )
16k6τ T
P
t=1
max
w(k/T )
(1−τ )T 6k6
T
X
!
p̃Td et (d)
> pd
→0
a.s.
t=k+1
as τ → 0 uniformly in T , where the notation of the proof of Theorem 2.1 has been
used. This in addition to an equivalent argument for the limit process shows that the
truncation is asymptotically negligible proving the desired results.
√
Proof of Theorem 2.4. We consider the situation where T E1 (∆d , pd ) converges
a.s. Under alternatives it holds
bT xc
T
X
X
√
Ud,T (x; e)
bT
xc
1
Ud,T (x)
=
+ sgn(∆Td pd ) T E1 (∆d , pd )
g(i/T ) −
g(j/T ) ,
τ (pd )
τ (pd )
T i=1
T 2 j=1
where Ud,T (x; e) is the corresponding functional of the error process. By Theorem 2.1
it holds
Ud,T (x; e)
D[0,1]
: 0 6 x 6 1 | pd −→ {B(x) : 0 6 x 6 1}
a.s.
τ (pd )
Furthermore, by the Riemann-integrability of g(·) it follows
sup
06x61
Z x
Z 1
bT xc
T
1 X
bT xc X
g(i/T ) −
g(j/T
)
−
g(t)
dt
−
x
g(t)
dt
→ 0.
T i=1
T 2 j=1
0
0
For any τ > 0
max
w2 (k/T )
τ 6k/T 61−τ
=T
E12 (∆d , pd )
2
Ud,T
(k/T )
τ 2 (pd )
sup
τ 6x61−τ
2
Z
x
Z
g(t) dt − x
w (x)
0
0
1
!
2
g(t) dt + oPpd (1)
a.s.,
where Ppd denotes the conditional probability given pd . Because by assumption
R
2
R1
x
supτ 6x61−τ w2 (x) 0 g(t) dt − x 0 g(t) dt > 0 for some τ > 0, so that the above
term becomes unbounded asymptotically. This gives the assertion for the max statistics, similar arguments give the assertion for the sum statistic.
Proof of Corollary 2.5. Similarly to the proof of Theorem 2.4 it follows (where the
uniformity at 0 and 1 follows by the assumptions on the rate of divergence for w(·) at
0 or 1)
sup w2 (x)
0<x<1
2
Ud,T
(x)
2
− ((x − ϑ)+ − x(1 − ϑ)) = oPpd (1)
2
2
τ (pd )T E1 (∆d , pd )
a.s.,
which implies the assertion by standard arguments on noting that
ϑbT = arg max w2 (x)
06x61
2
Ud,T
(x)
,
τ 2 (pd )T E12 (∆d , pd )
2
ϑ = arg max w2 (x) ((x − ϑ)+ − x(1 − ϑ)) .
06x61
Proof of Proposition 2.6. The assertion follows from
τ^2(p_d) = p_d^T Σ p_d = ‖Σ^{1/2} p_d‖^2,
|⟨∆_d, p_d⟩| = |(Σ^{-1/2}∆_d)^T (Σ^{1/2} p_d)| = ‖Σ^{-1/2}∆_d‖ ‖Σ^{1/2} p_d‖ |cos(α_{Σ^{-1/2}∆_d, Σ^{1/2}p_d})|.
Proof of Theorem 2.7. Let X_d = (X_1, . . . , X_d)^T be N(0, I_d); then by Marsaglia [1972] it holds r_d =_L (X_1, . . . , X_d)^T/‖(X_1, . . . , X_d)^T‖ and it follows by (2.11)
E_1^2(∆_d, Σ^{-1/2} r_d) · d/‖Σ^{-1/2}∆_d‖^2 =_L ( X_d^T Σ^{-1/2}∆_d / ‖Σ^{-1/2}∆_d‖ )^2 / ( X_d^T X_d / E(X_d^T X_d) ).
Since the numerator has a χ_1^2 distribution (not depending on d), there exist for any ε > 0 constants 0 < c_1 < C_1 < ∞ such that
sup_{d≥1} P( c_1 ≤ ( X_d^T Σ^{-1/2}∆_d / ‖Σ^{-1/2}∆_d‖ )^2 ≤ C_1 ) ≥ 1 − ε.
Furthermore, the denominator is a χ_d^2-distributed random variable divided by its expectation; consequently an application of the Markov inequality yields for any ε > 0 the existence of 0 < C_2 < ∞ such that
sup_{d≥1} P( X_d^T X_d / E(X_d^T X_d) > C_2 ) ≤ ε.
By integration by parts we get E( (X_d^T X_d)^{-1} ) ≤ 2/d for d > 3, so that another application of the Markov inequality yields that for any ε > 0 there exists c_2 > 0 such that
lim sup_{d→∞} P( X_d^T X_d / E(X_d^T X_d) ≤ c_2 ) ≤ ε,
completing the proof of the theorem by standard arguments.
Proof of Theorem 2.8. Let X d = (X1 , . . . , Xd )T be N(0,Id ), then as in the proof
of Theorem 2.7 it holds
tr(M−1/2 ΣM−1/2 ) L
=
E12 (∆, M−1/2 rd )
kM−1/2 ∆d k2
−1/2
XT
∆d
dM
kM−1/2 ∆d k
2
−1/2 ΣM−1/2 X
XT
d
dM
tr(M−1/2 ΣM−1/2 )
.
The proof of the lower bound is analogous to the proof of Theorem 2.7 by noting that
(A = M−1/2 ΣM−1/2 )
E XT AX = E
d
X
i,j=1
ai,j Xi Xj =
d
X
ai,j δi,j =
i,j=1
d
X
i=1
ai,i = tr(A).
For the proof of the upper bound, first note that by a spectral decomposition it holds
d
XT M−1/2 ΣM−1/2 X L X
=
αj Xj2 ,
tr(M−1/2 ΣM−1/2 )
j=1
for some 0 < αd 6 . . . 6 α1 ,
d
X
αj = 1.
j=1
From this we get on the one hand by the Markov inequality
1/4
d
X
c
E(|X12 |−1/4 ),
P
αj Xj2 6 c 6 P (α1 X12 6 c) 6
α
1
j=1
√
where E(|X12 |−1/4 ) = Γ(1/4)/(21/4 π) exists (as can be seen using the density for a
χ21 -distribution). On the other hand it holds for any c 6 1/2 by another application
of the Markov inequality
d
d
d
X
X
X
P
αj Xj2 6 c 6 P
αj Xj2 − 1 > 1/2 6 8
αi2 6 8α1 .
j=1
j=1
i=1
By chosing c = min(1/2, (E(|X12 |−1/4 ))−4 /8 5 ) we finally get
d
X
supP
P
αj Xj2 6 c
d
i=1
0<αd 6...6α1 ,
αi =1
j=1
sup
6
min
P
0<αd 6...6α1 , d
i=1 αi =1
8α1
!
1/4
, 8α1
6 ,
completing the proof.
Proof of Theorem 2.9. By the Cauchy–Schwarz inequality
τ^2(M^{-1}∆_d) = ∆_d^T M^{-1} ( Σ_{j≥1} a_j a_j^T ) M^{-1} ∆_d = Σ_{j≥1} (a_j^T M^{-1}∆_d)^2 ≤ Σ_{j≥1} (a_j^T M^{-1} a_j) · ∆_d^T M^{-1}∆_d = tr( M^{-1/2} Σ_{j≥1} a_j a_j^T M^{-1/2} ) · ∆_d^T M^{-1}∆_d,
which implies the assertion by (2.11).
Proof of Proposition 2.10. Assertion a) follows from
|h∆d , poi|2 =
d
2
X
δi,T
i=1
T
τ 2 ( po) = po Σ po =
σi2
!2
σi2
i=1
d
2
X
δi,T
i=1
> c2
d
2
X
δi,T
σi2
!2
2
= c2 |h∆d , qoi| ,
σi2
σi4 6 C 2 |h∆d , qoi| .
Concerning b) first note that by the Cauchy-Schwarz inequality with Λ = diag(σ12 , . . . , σd2 )
τ 2 ( qo) =
X
(∆Td Λ−1 aj )2 6 ∆Td Λ−2 ∆d
j>1
X
j>1
This implies assertion b) by (2.11) on noting that
|∆Td Λ−1 ∆d |2 >
|∆Td ∆d |2
.
C2
aTj aj 6
∆Td ∆d
tr(Σ).
c2
Proof of Equation 2.14. By Proposition 2.6 it holds for ∆_d = kΦ_d
E_1^2(∆_d, o) = ‖Σ^{-1/2}∆_d‖^2 = ∆_d^T (D + Φ_d Φ_d^T)^{-1} ∆_d,
where D = diag(s_1^2, . . . , s_d^2). Hence
∆_d^T (D + Φ_d Φ_d^T)^{-1} ∆_d = (D^{-1/2}∆_d)^T ( I_d + (D^{-1/2}Φ_d)(D^{-1/2}Φ_d)^T )^{-1} D^{-1/2}∆_d = (D^{-1/2}∆_d)^T D^{-1/2}∆_d / ( 1 + (D^{-1/2}Φ_d)^T D^{-1/2}Φ_d ),
where the last equality follows from the fact that D^{-1/2}∆_d = k D^{-1/2}Φ_d is an eigenvector of I_d + (D^{-1/2}Φ_d)(D^{-1/2}Φ_d)^T with eigenvalue 1 + (D^{-1/2}Φ_d)^T D^{-1/2}Φ_d, hence also of the inverse of that matrix with the inverse eigenvalue.
Proof of Theorem 3.2. Similarly as in the proof of Theorem 2.4 it holds
bT xc
T
X
X
√
1
bT
xc
g(j/T ) +
g(j/T ) ,
ZT,i (x) = ZT,i (x; e) + δi,T T
T j=1
T 2 j=1
where ZT,i (x; e) is the corresponding functional for the error sequence (rather than
the actual observations). From this it follows
Z x
Z 1
2
Vd,T (x) = Vd,T (x; e) + T E2 (∆d )
g(t) dt − x
g(t) dt + o(1) + RT (x),
0
0
where RT (x) is the mixed term given by
√
Z x
Z 1
d
2 T X δi,T
Z
(x;
e)
g(t)
dt
−
x
RT (x) = √
g(t)
dt
+
o(1)
T,i
d i=1 σi2
0
0
which by an application of the Hájek -Rényi inequality (across time) yields
P
d
sup |RT (x)| > c
06x61
= O (1)
1
1 1 X δi2
T
= OP (1) √ T E2 (∆d ).
2
2
2
c d i=1 σi
c d
From this the assertion follows by an application of Theorem 3.1.
Proof of Lemma 3.3. The proof follows closely the proof of (28) – (30) in Horváth
and Hušková [2012] but where we scale diagonally with the true variances. We will give
a short sketch for the sake of completeness. The key is the following decomposition
Vd,T (x)
2
bT xc
d
T
2
X
X
X
1
bT xc
bT xc (T − bT xc)
1
si
=√
ηi,t (d) −
ηi,T (d) −
2
2 T
s
+
Φ
T
T2
d i=1
i
i
t=1
t=1
bT xc
bT xc
d
T
T
X
X
X
X
X
Φi si
1
bT xc
1
bT xc
2
√
+√
ηi,t (d) −
ηi,T (d) √
ηd+1,t (d) −
ηd+1,t (d)
2 + Φ2
s
T
T t=1
T
T
d i=1 i
i
t=1
t=1
t=1
2
bT xc
T
1
1 X
bT xc X
+
ηd+1,t (d) −
ηd+1,t (d) √ Ad .
T
T t=1
d
t=1
The first term converges to the limit given in a). To see this, note that the proof of
the Lyapunov condition in Horváth and Hušková [2012] following equation (39) still
holds because s2i /(s2i + Φ2i ) is uniformly bounded from above by assumption (showing
that the numerator is bounded) while again by assumption
d
1X
s4i
> D > 0,
d i=1 (s2i + φ2i )2
showing that the denominator is bounded. Similarly, the proof of tightness in Horváth
and Hušková [2012] (equations (43) and following) remains valid. The asymptotic
variance remains the same under a) and b) because by assumption
d
3
1X
s4i
− 1 6 Ad → 0.
d i=1 (s2i + Φ2i )2
d
The middle term in the above decomposition is bounded by an application of the Hájek
-Rényi inequality
bT xc
d
T
X
X
X
Φ
s
1
bT
xc
1
i i
√
ηi,t (d) −
ηi,T (d) > D
P sup √
T
0<x<1
d i=1 s2i + Φ2i T
t=1
t=1
d
= O(1)
1X
φ2i s2i
1
= O(1) Ad ,
2
d j=1 (si + φ2i )2
d
√
which converges to 0 for a) and b) – for c) we multiply the original statistic by d/Ad ,
which means this term is multiplied with d/A2d leaving us with 1/Ad which converges to
√
PbT xc
bT xc PT
0 if Ad / d → ∞. Similarly, we can bound √1T
t=1 ηd+1,t (d) ,
t=1 ηd+1,t (d) − T
showing that the middle term is asymptotically negligible. The assertions now follow
by an application of the functional central limit theorem for
P
2
bT xc
bT xc PT
1
η
(d)
−
η
(d)
.
d+1,t
d+1,t
t=1
t=1
T
T
Proof of Theorem
3.4. The proof is analogous to the one of Theorem 3.2 on noting
√
that E32 (∆d ) = Add E22 (∆d ) and σi2 = s2i +Φ2i by using Lemma 3.3 c) above. Concerning
eT (x) note that ei,t = si ηi,t +Φi ηd+1,t , so that the remainder term
the remainder term R
can be split into two terms. The first
term can be dealt
q
with analogously to the proof
1
of Theorem 3.2 and is of order OP
Ad T E3 (∆d ) , while for the second summand
we get by an application of the Cauchy-Schwarz-inequality
v
u Pd δ2
bT xc
i
d
T
X
X
X
√ u
t i=1 σi2
1
δ i φi
bT xc
= OP ( T )
sup
η
−
η
d+1,t
d+1,t
2
T t=1
Ad
06x61 Ad i=1 σi
t=1
q
=O
T E32 (∆d ) .
Proof of Corollary 3.5.
holds
−1
∆Td Λ−1
d ΣΛd ∆d
=
By an application of the Cauchy-Schwarz inequality it
d
X
2
δi,T
i=1
6
d
2
X
δi,T
i=1
σi2
1+
d
X
Φ2
i
i=1
σi2
s2i
+
(s2i + Φ2i )2
d
X
δi,T Φi
s2 + Φ2i
i=1 i
!
= ∆Td Λ−1
d ∆d (1 + Ad ),
!2
which implies assertion a) on noting that
E12 (∆d , qo) =
2
(∆Td Λ−1
d ∆d )
.
−1
∆Td Λ−1
d ΣΛd ∆d
b) This follows immediately from Theorem 2.8 since by 0 < c 6 s2j 6 C < ∞ as well
as as Φ2i 6 C, it follows that
1
1
,
.
.
.
,
∆d .
k∆d k2 ∼ ∆Td diag 2
s1 + Φ21
s2d + Φ2d
Acknowledgements
The first author was supported by the Engineering and Physical Sciences Research
Council (UK) grants : EP/K021672/2 & EP/N031938/1. Some of this work was
done while the second author was at KIT where her position was financed by the
Stifterverband für die Deutsche Wissenschaft by funds of the Claussen-Simon-trust.
Furthermore, this work was supported by the Ministry of Science, Research and Arts,
Baden-Württemberg, Germany. Finally, the authors would like to thank the Isaac
Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality
during the programme ’Inference for Change-Point and Related Processes’, where part
of the work on this paper was undertaken.
References
J. A. D. Aston and C. Kirch. Detecting and estimating changes in dependent functional
data. Journal of Multivariate Analysis, 109:204–220, 2012a.
J. A. D. Aston and C. Kirch. Evaluating stationarity via change-point alternatives
with applications to fMRI data. Annals of Applied Statistics, 6:1906–1948, 2012b.
A. Aue and L. Horváth. Structural breaks in time series. Journal of Time Series
Analysis, 34:1–16, 2013.
A. Aue, R. Gabrys, L. Horváth, and P. Kokoszka. Estimation of a change-point in the
mean function of functional data. Journal of Multivariate Analysis, 100:2254–2269,
2009a.
A. Aue, S. Hörmann, L. Horváth, and M. Reimherr. Break detection in the covariance
structure of multivariate time series models. Annals of Statistics, 37:4046–4087,
2009b.
J. Bai. Common Breaks in Means and Variances for Panel Data. Journal of Econometrics, 157:78–92, 2010.
R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin. A Simple Proof of the Restricted Isometry Property for Random Matrices. Constructive Approximation, 28:
253–263, 2008.
I. Berkes, R. Gabrys, L. Horváth, and P. Kokoszka. Detecting changes in the mean of
functional observations. Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 71:927–946, 2009.
P. J. Bickel and E. Levina. Regularized estimation of large covariance matrices. Annals
of Statistics, 36:199–227, 2008.
P. Billingsley. Convergence of probability measures. John Wiley & Sons, 1968.
J. Chan, L. Horváth, and M. Hušková. Darling–Erdős limit results for change–point
detection in panel data. Journal of Statistical Planning and Inference, 2012.
H. Cho. Change-point detection in panel data via double cusum statistic. Electronic
Journal of Statistics, in press, 2015.
H. Cho and P. Fryzlewicz. Multiple-change-point detection for high dimensional time
series via sparsified binary segmentation. Journal of the Royal Statistical Society:
Series B (Statistical Methodology), 77(2):475–507, 2015.
M. Csörgő and L. Horváth. Limit Theorems in Change-Point Analysis. Wiley, Chichester, 1997.
R. J. Durrant and A. Kabán. Compressed Fisher linear discriminant analysis: Classification of randomly projected data. In Proceedings of the 16th ACM SIGKDD
international conference on Knowledge discovery and data mining, pages 1119–1128.
ACM, 2010.
I. Eckley, P. Fearnhead, and R. Killick. Analysis of changepoint models. In D. Barber,
A. Cemgil, and S. Chiappa, editors, Bayesian Time Series Models, pages 215–238.
Cambridge University Press, 2011.
J. Fan, Y. Liao, and M. Mincheva. Large covariance estimation by thresholding principal orthogonal complements. Journal of the Royal Statistical Society: Series B
(Statistical Methodology), 75:603–680, 2013.
K. Frick, A. Munk, and H. Sieling. Multiscale change point inference. Journal of the
Royal Statistical Society: Series B (Statistical Methodology), 76:495–580, 2014.
W. A. Fuller. Introduction to statistical time series. John Wiley & Sons, 1996.
N. J. Higham. Accuracy and stability of numerical algorithms. Siam, 2002.
S. Hörmann and P. Kokoszka. Weakly dependent functional data. Annals of Statistics,
38:1845–1884, 2010.
L. Horváth and M. Hušková. Change-point detection in panel data. Journal of Time
Series Analysis, 33:631–648, 2012.
L. Horváth and G. Rice. Extensions of some classical methods in change point analysis.
TEST, 23:219–255, 2014.
L. Horváth, P. Kokoszka, and J. Steinebach. Testing for Changes in Multivariate
Dependent Observations with an Application to Temperature Changes. Journal of
Multivariate Analysis, 68:96–119, 1999.
M. Jirak. Uniform change point tests in high dimension. Ann. Statist., 43(6):2451–
2483, 12 2015. doi: 10.1214/15-AOS1347.
W. B. Johnson and J. Lindenstrauss. Extensions of lipschitz mappings into a hilbert
space. Contemporary mathematics, 26:189–206, 1984.
C. Kirch. Resampling Methods for the Change Analysis of Dependent Data. PhD thesis, University of Cologne, Cologne, 2006. http://kups.ub.unikoeln.de/volltexte/2006/1795/.
C. Kirch and J. Tadjuidje Kamgaing. Testing for parameter stability in nonlinear
autoregressive models. Journal of Time Series Analysis, 33:365–385, 2012.
36
References
C. Kirch and J. Tadjuidje Kamgaing. Monitoring time series based on estimating functions. Technical report, Technische Universität Kaiserslautern, Fachbereich Mathematik, 2014a.
C. Kirch and J. Tajduidje Kamgaing. Detection of change points in discrete valued
time series. Handbook of discrete valued time series. In: Davis RA, Holan SA, Lund
RB, Ravishanker N, 2014b.
C. Kirch, B. Mushal, and H. Ombao. Detection of changes in multivariate time series
with applications to eeg data. Journal of the American Statistical Association, 110:
1197–1216, 2015.
E. L. Lehmann. Elements of Large Sample Theory. Springer Berlin Heidelberg, 1999.
Z. Lin and Z. Bai. Probability inequalities. Springer, 2010.
M. Lopes, L. Jacob, and M. J. Wainwright. A more powerful two-sample test in high
dimensions using random projection. In Advances in Neural Information Processing
Systems, pages 1206–1214, 2011.
G. Marsaglia. Choosing a point from the surface of a sphere. Annals of Mathematical
Statistics, 43:645–646, 1972.
G. Minas, J. A. D. Aston, and N. Stallard. Adaptive Multivariate Global Testing.
Journal of the American Statistical Association, 109:613–623, 2014.
H. Ombao, R. Von Sachs, and W. Guo. SLEX analysis of multivariate nonstationary
time series. Journal of the American Statistical Association, 100:519–531, 2005.
E. S. Page. Continuous Inspection Schemes. Biometrika, 41:100–115, 1954.
Politis, D.N. Higher-order accurate, positive semi-definite estimation of large-sample
covariance and spectral density matrices. 2005. Preprint: Department of Economics,
UCSD, Paper 2005-03R, http://repositories.cdlib.org/ucsdecon/2005-03R.
M. Robbins, C. Gallagher, R. Lund, and A. Aue. Mean shift testing in correlated data.
Journal of Time Series Analysis, 32:498–511, 2011.
R. Srivastava, P. Li, and D. Ruppert. RAPTT: An Exact Two-Sample Test in High
Dimensions Using Random Projections. ArXiv e-prints, 1405.1792, 2014.
T. Wang and R. J. Samworth. High-dimensional changepoint estimation via sparse
projection. ArXiv, 1606.06246, 2016.
H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal
of Computational and Graphical Statistics, 15:265–286, 2006.
37
| 10 |
Image-Image Domain Adaptation with Preserved Self-Similarity and
Domain-Dissimilarity for Person Re-identification
Weijian Deng† , Liang Zheng‡ , Guoliang Kang‡ , Yi Yang‡ , Qixiang Ye† , Jianbin Jiao†∗
† University of Chinese Academy of Sciences    ‡ University of Technology Sydney
arXiv:1711.07027v2 [] 10 Jan 2018
[email protected], [email protected]
Abstract
Person re-identification (re-ID) models trained on one
domain often fail to generalize well to another. In our attempt, we present a “learning via translation” framework.
In the baseline, we translate the labeled images from source
to target domain in an unsupervised manner. We then
train re-ID models with the translated images by supervised
methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation.
Our motivation is two-fold. First, for each image, the
discriminative cues contained in its ID label should be
maintained after translation. Second, given the fact that two
domains have entirely different persons, a translated image
should be dissimilar to any of the target IDs. To this end,
we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented
in the similarity preserving generative adversarial network
(SPGAN) which consists of a Siamese network and a CycleGAN. Through domain adaptation experiment, we show
that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID
accuracy on two large-scale datasets.
Figure 1: Illustration of self-similarity and domain-dissimilarity. In each triplet, left: a source-domain image, middle: a source-target translated version of the source image, right: an arbitrary target-domain image. We require that 1) a source image and its translated image should contain the same ID, i.e., self-similarity, and 2) the translated image should be of a different ID from any target image, i.e., domain-dissimilarity. Note: the source and target domains contain entirely different IDs.

∗ Corresponding Author

1. Introduction

This paper considers domain adaptation in re-ID. The re-ID task aims at searching for the images relevant to a query. In our setting, the source domain is fully annotated, while the target domain has no ID labels. In the community, domain-adaptive re-ID is gaining increasing popularity because 1) the labeling process is expensive and 2) when models trained on one dataset are directly used on another, the re-ID accuracy drops dramatically [6] due to dataset bias [41]. As a result, current fully supervised, single-domain re-ID methods may be limited in real-world scenarios, where domain-specific labels are not available.

A common strategy for this problem is unsupervised domain adaptation (UDA), but this line of methods assumes that the source and target domains contain the same set of classes. Such an assumption does not hold for person re-ID, because different re-ID datasets usually contain entirely different persons (classes). In domain adaptation, a recent trend consists in image-level domain translation [18, 4, 28]. In the baseline approach, two steps are involved. First, labeled images from the source domain are transferred to the target domain, so that the transferred image has a similar style with the target domain. Second, the style-transferred images and their associated labels are used in supervised learning in the target domain. In the literature, commonly used style transfer methods include [27, 22, 46, 53]. In this paper, we use CycleGAN [53] following the practice in [27, 18].
Figure 2: Pipeline of the “learning via translation” framework consisting of two steps. First, we translate the labeled images from a source domain to a target domain in an unsupervised manner. Second, we train re-ID models with the translated images using supervised feature learning methods. The major contribution consists in the first step, i.e., similarity preserving image-image translation.
In person re-ID, there is a distinct yet unconsidered requirement for the baseline described above: the visual content associated with the ID label of an image should be preserved after image-image translation. In our scenario, such visual content usually refers to the underlying (latent) ID information for a foreground pedestrian. To meet this requirement tailored for re-ID, we need additional constraints on the mapping function. In this paper, we propose a solution to this requirement, motivated from two aspects. First, a translated image, despite its style changes, should contain the same underlying identity as its corresponding source image. Second, in re-ID, the source and target domains contain two entirely different sets of identities. Therefore, a translated image should be different from any image in the target dataset in terms of the underlying ID.

This paper introduces the Similarity Preserving cycle-consistent Generative Adversarial Network (SPGAN), an unsupervised domain adaptation approach which generates images for effective target-domain learning. SPGAN is composed of a Siamese network (SiaNet) and a CycleGAN. Using a contrastive loss, the SiaNet pulls a translated image and its counterpart in the source close together, and pushes the translated image and any image in the target apart. In this manner, the contrastive loss satisfies the specific requirement in person re-ID. Note that the added constraint is unsupervised, i.e., the source labels are not used during domain adaptation. During training, in each mini-batch (batch size = 1), a training image is first used to update the Generator (of CycleGAN), then the Discriminator (of CycleGAN), and finally the convolutional layers in SiaNet. Through the coordination between the CycleGAN loss and the SiaNet loss, we are able to generate samples which not only possess the style of the target domain but also preserve their underlying ID information.

Using SPGAN, we are able to create a dataset on the target domain in an unsupervised manner. The dataset inherits the labels from the source domain and thus can be used in supervised learning in the target domain. The contributions of this work are summarized below:

• Minor contribution: we present a “learning via translation” baseline for domain adaptation in person re-ID.

• Major contribution: we introduce SPGAN to improve the baseline. SPGAN works by preserving the underlying ID information during image-image translation.

2. Related Work

Image-image translation. Image-image translation aims at constructing a mapping function between two domains. A representative mapping function is the conditional GAN [20], which, using paired training data, produces impressive image-to-image transition results. However, paired training data is often difficult to acquire. Unsupervised image-image translation is thus more applicable, since data collection is easier. To tackle unpaired settings, a cycle consistency loss is introduced by [22, 46, 53]. In [3], an unsupervised distance loss is proposed for one-side domain mapping. In [27], a general framework is proposed by making a shared latent space assumption. Our work aims to find a mapping function between the source domain and the target domain, and we are more concerned with similarity preserving translation.

Neural style transfer [12, 23, 43, 21, 5, 24, 19, 25] is another strategy of image-image translation, which aims at replicating the style of one image, while our work focuses on learning the mapping function between two domains, rather than two images.

Unsupervised domain adaptation. Our work relates to unsupervised domain adaptation (UDA), where no labeled target images are available during training. In this community, some methods aim to learn a mapping between source and target distributions [37, 13, 9, 38]. Correlation Alignment (CORAL) [38] proposes to match the mean and covariance of two distributions. Recent methods [18, 4, 28] use an adversarial approach to learn a transformation in the pixel space from one domain to another. Other methods seek to find a domain-invariant feature space [34, 31, 10, 30, 42, 11, 2]. Long et al. [30] and Tzeng et al. [42] use the Maximum Mean Discrepancy (MMD) [15] for this purpose. Ganin et al. [11] and Ajakan et al. [2] introduce a domain confusion loss to learn domain-invariant features. Different from the settings in this paper, most of the UDA methods assume that class labels are the same across domains, while different re-ID datasets contain entirely different person identities (classes). Therefore, the approaches mentioned above cannot be utilized directly for domain adaptation in re-ID.
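As an illustration only (not part of this paper's method), the second-order statistics matching used by CORAL, mentioned above, can be sketched in a few lines of NumPy: whiten the source features and re-color them with the target covariance. The function name, the regularizer eps, and the eigen-decomposition route are our assumptions.

```python
import numpy as np

def coral(source, target, eps=1.0):
    # source, target: (num_samples, feat_dim) feature matrices
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_power(c, power):
        # symmetric positive-definite matrix power via eigendecomposition
        vals, vecs = np.linalg.eigh(c)
        return (vecs * vals ** power) @ vecs.T

    # whiten source features, then re-color with target covariance
    return source @ mat_power(cs, -0.5) @ mat_power(ct, 0.5)
```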
Unsupervised re-ID. Hand-crafted features [32, 14, 7, 33, 26, 49] can be directly employed for unsupervised re-ID. All these methods focus on feature design, but the rich information from the distribution of samples in the dataset has not been fully exploited. Some methods are based on saliency statistics [48, 44]. In [47], K-means clustering is used for learning an unsupervised asymmetric metric. For unsupervised domain adaptation re-ID, the authors of [35] propose an asymmetric multi-task dictionary learning method to transfer the view invariant representation learned on source data to target data.

Recently, several works focus on label estimation of the unlabeled target dataset. Ye et al. [45] use graph matching for cross-camera label estimation. Fan et al. [6] propose a progressive method based on the iterations between K-means clustering and IDE [50] fine-tuning. Liu et al. [29] employ a reciprocal search process to refine the estimated labels. Our work aims to learn re-ID models that can be utilized directly on the target domain, and can potentially cooperate with label estimation methods in model initialization.

Method               Train. Set     Test Set   Accuracy
Supervised           Ttrain         Ttest      +++++
Direct Transfer      Strain         Ttest      ++
CycleGAN (basel.)    G(Strain)      Ttest      +++
SPGAN                Gsp(Strain)    Ttest      ++++

Table 1: A brief summary of different methods considered in this paper. “G” and “Gsp” denote the Generator in CycleGAN and SPGAN, respectively. Strain, Ttrain, Ttest denote the training set of the source dataset, the training set and testing set of the target dataset, respectively.

3. Proposed Method

3.1. Baseline Overview

Given an annotated dataset S from the source domain and an unlabeled dataset T from the target domain, our goal is to use the labeled source images to train a re-ID model that generalizes well to the target domain. Figure 2 presents a pipeline of the “learning via translation” framework, which consists of two steps, i.e., source-target image translation for training data creation, and supervised feature learning for re-ID.

• Source-target image translation. Using a generative function G(·) that translates the annotated dataset S from the source domain to the target domain in an unsupervised manner, we “create” a labeled training dataset G(S) on the target domain. In this paper, we use CycleGAN [53], following the practice in [27, 18].

• Feature learning. With the translated dataset G(S) that contains labels, feature learning methods are applied to train re-ID models. Specifically, we adopt the same setting as [50], in which the rank-1 accuracy and mAP on the fully-supervised Market-1501 dataset are 75.8% and 52.2%.

The focus of this paper is to improve Step 1, so that with better training samples, the overall re-ID accuracy can be improved. The experiment will validate the proposed Step 2 (Gsp(·)) on several feature learning methods. A brief summary of the different methods considered in this paper is presented in Table 1. We denote as “Direct Transfer” the method of directly using the training set S instead of G(S) for model learning. This method yields the lowest accuracy because the style difference between the source and target is not resolved (to be shown in Table 2). Using CycleGAN and SPGAN to generate a new training set, which is more style-consistent with the target, respectively yields improvement.

3.2. SPGAN: Approach Details

3.2.1 CycleGAN Revisit

CycleGAN introduces two generator-discriminator pairs, {G, DT} and {F, DS}, which map a sample from the source (target) domain to the target (source) domain and produce a sample which is indistinguishable from those in the target (source) domain, respectively. For generator G and its associated discriminator DT, the adversarial loss is

L_Tadv(G, D_T, p_x, p_y) = E_{y∼p_y}[(D_T(y) − 1)^2] + E_{x∼p_x}[(D_T(G(x)))^2],   (1)

where p_x and p_y denote the sample distributions in the source and target domain, respectively. For generator F and its associated discriminator DS, the adversarial loss is

L_Sadv(F, D_S, p_y, p_x) = E_{x∼p_x}[(D_S(x) − 1)^2] + E_{y∼p_y}[(D_S(F(y)))^2].   (2)

Considering there exist infinitely many alternative mapping functions due to the lack of paired training data, CycleGAN introduces a cycle-consistent loss, which attempts to recover the original image after a cycle of translation and reverse translation, to reduce the space of possible mapping functions. The cycle-consistent loss is

L_cyc(G, F) = E_{x∼p_x}[‖F(G(x)) − x‖_1] + E_{y∼p_y}[‖G(F(y)) − y‖_1].   (3)

Apart from the cycle-consistent loss and the adversarial loss, we use the target domain identity constraint as an auxiliary for image-image translation. The target domain identity constraint was introduced by [40] to regularize the generator to be the identity mapping on samples from the target domain, written as

L_ide(G, F, p_x, p_y) = E_{x∼p_x}[‖F(x) − x‖_1] + E_{y∼p_y}[‖G(y) − y‖_1].   (4)
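For concreteness, the four loss terms in Eqs. (1)-(4) can be expressed in a short sketch. The paper's implementation is in TensorFlow; the following is an illustrative PyTorch version under the assumption that G, Fmap, D_S, D_T are callable network modules, and is not the authors' code.

```python
import torch
import torch.nn.functional as nnf

def spgan_cyclegan_losses(G, Fmap, D_S, D_T, x, y):
    # x: batch of source-domain images, y: batch of target-domain images
    fake_y = G(x)        # source -> target translation
    fake_x = Fmap(y)     # target -> source translation

    # Least-squares adversarial losses, Eq. (1) and Eq. (2)
    loss_T_adv = nnf.mse_loss(D_T(y), torch.ones_like(D_T(y))) + \
                 nnf.mse_loss(D_T(fake_y), torch.zeros_like(D_T(fake_y)))
    loss_S_adv = nnf.mse_loss(D_S(x), torch.ones_like(D_S(x))) + \
                 nnf.mse_loss(D_S(fake_x), torch.zeros_like(D_S(fake_x)))

    # Cycle-consistency loss, Eq. (3)
    loss_cyc = nnf.l1_loss(Fmap(fake_y), x) + nnf.l1_loss(G(fake_x), y)

    # Target-domain identity loss, Eq. (4)
    loss_ide = nnf.l1_loss(Fmap(x), x) + nnf.l1_loss(G(y), y)
    return loss_T_adv, loss_S_adv, loss_cyc, loss_ide
```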
Figure 3: SPGAN consists of two components: a SiaNet (top) and a CycleGAN (bottom). CycleGAN learns mapping functions G and F between two domains, and the SiaNet learns a latent space that constrains the learning procedure of the mapping functions.

As mentioned in [53], generators G and F may change the color of output images without Lide. In experiments, we observe that the model may generate unrealistic results without Lide (Fig. 4(b)). This is undesirable for re-ID feature learning. By turning on the identity loss, the color of the input and output can be preserved (see Section 4.3).

3.2.2 SPGAN

Applied to person re-ID, similarity preserving is an essential function to generate improved samples for domain adaptation. As analyzed in Section 1, we aim to preserve the ID-related information for each translated image. We emphasize that such information should not be the background or image style, but should be underlying and latent. To fulfill this goal, we integrate a SiaNet with CycleGAN, as shown in Fig. 3. During training, CycleGAN learns a mapping function between the two domains, and the SiaNet learns a latent space that constrains the learning procedure of the mapping function.

Similarity preserving loss function. We utilize the contrastive loss [16] to train the SiaNet,

L_con(i, x1, x2) = (1 − i){max(0, m − d)}^2 + i·d^2,   (5)
where x1 and x2 are a pair of input vectors, and i represents the binary label assigned to this pair: i = 1 if x1 and x2 are a positive pair, and i = 0 if x1 and x2 are a negative pair. m is the margin that defines the separability in the embedding space. When m = 0, the loss of the negative training pair is not back-propagated in the system. When m > 0, both positive and negative sample pairs are considered. A larger m means that the loss of negative training samples has a higher weight in back propagation. d denotes the Euclidean distance between two input vectors: d(x1, x2) = ‖x1 − x2‖_2.
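The contrastive loss of Eq. (5) is straightforward to write down; the following is a minimal sketch (ours, not the released code), assuming x1 and x2 are SiaNet embedding vectors and i is the pair label described above.

```python
import torch

def contrastive_loss(x1, x2, i, m=2.0):
    # i = 1: positive pair (self-similarity), pulled together via d^2
    # i = 0: negative pair (domain-dissimilarity), pushed apart via max(0, m - d)^2
    d = torch.norm(x1 - x2, p=2, dim=-1)          # Euclidean distance
    loss = (1 - i) * torch.clamp(m - d, min=0) ** 2 + i * d ** 2
    return loss.mean()
```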
Training image pair selection. In Eq. 5, the contrastive loss uses binary labels of input image pairs. The design of the pair similarities reflects the “self-similarity” and “domain-dissimilarity” principles. Note that we select training pairs in an unsupervised manner, so that we use the contrastive loss without additional annotations.

Formally, CycleGAN has two generators, i.e., generator G which maps source-domain images to the style of the target domain, and generator F which maps target-domain images to the style of the source domain. Suppose two samples denoted as xS and xT come from the source domain and the target domain, respectively. Given G and F, we define two positive pairs: 1) xS and G(xS), 2) xT and F(xT). In either image pair, the two images contain the same person; the only difference is that they have different styles. In the learning procedure, we encourage the whole network to pull these two images close.

On the other hand, for generators G and F, we also define two types of negative training pairs: 1) G(xS) and xT, 2) F(xT) and xS. Such a design of negative training pairs is based on the prior knowledge that datasets in different re-ID domains have entirely different sets of IDs. As a result, a translated image should be of a different ID from any target image. In this manner, the network pushes two dissimilar images away. Training pairs are shown in Fig. 1. Some positive pairs are also shown in (a) and (d) of each column in Fig. 4.

Overall objective function. The final SPGAN objective can be written as

L_sp = L_Tadv + L_Sadv + λ1·L_cyc + λ2·L_ide + λ3·L_con,   (6)

where λt, t ∈ {1, 2, 3}, controls the relative importance of four objectives. The first three losses belong to the CycleGAN formulation [53], and the contrastive loss induced by the SiaNet imposes a new constraint on the system.

SPGAN training procedure. In the training phase, SPGAN is divided into three components which are learned alternately: the generators, the discriminators, and the SiaNet. When the parameters of two components are fixed, the parameters of the third component are updated. We train SPGAN until convergence or the maximum number of iterations.

3.3. Feature Learning

Feature learning is the second step of the “learning via translation” framework. Once we have the style-transferred dataset G(S) composed of the translated images and their associated labels, the feature learning step is the same as in supervised methods. Since we mainly focus on Step 1 (source-target image translation), we adopt the baseline ID-discriminative Embedding (IDE) specified in [50]. We employ ResNet-50 [17] as the base model and only modify the output dimension of the last fully-connected layer to the number of training identities. During testing, given an input image, we extract the 2,048-dim Pool5 vector for retrieval under the Euclidean distance.
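The IDE baseline described above amounts to fine-tuning an ImageNet-pretrained ResNet-50 with a classifier sized to the number of training identities. The sketch below is our PyTorch/torchvision rendering of that description (the paper does not publish this exact code); the function name is hypothetical.

```python
import torch.nn as nn
from torchvision import models

def build_ide_model(num_identities):
    # ImageNet-pretrained ResNet-50 backbone
    model = models.resnet50(pretrained=True)
    # Replace the classifier with one output per training identity
    # (e.g. 751 for Market-1501, 702 for DukeMTMC-reID)
    model.fc = nn.Linear(model.fc.in_features, num_identities)
    return model

# At test time the 2,048-dim pool5 activations (the input to `fc`) serve as the
# re-ID descriptor, compared with the Euclidean distance.
```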
Figure 4: Visual examples of image-image translation. The left four columns map Market images to the Duke style, and the right four columns map Duke images to the Market style. From top to bottom: (a) original image, (b) output of CycleGAN, (c) output of CycleGAN + Lide, and (d) output of SPGAN. Images produced by SPGAN have the target style while preserving the ID information in the source.

Figure 5: Illustration of LMP. We partition the feature map into P (P = 2) parts horizontally with one pixel overlap. We conduct global max/avg pooling on each part and concatenate the feature vectors as the final representation.

Local Max Pooling. To further improve re-ID performance on the target dataset T, we introduce a feature pooling method named local max pooling (LMP). It works on a well-trained IDE model and can reduce the impact of noisy signals incurred by the fake examples. In the original ResNet-50, global average pooling (GAP) is conducted on Conv5. In our proposal (Fig. 5), we first partition the Conv5 feature maps into P parts horizontally with one pixel overlap. Then, we conduct global max/avg pooling on each part. Finally, we concatenate the output of global max pooling (GMP) or GAP of each part as the final feature representation. The procedure is nonparametric, and can be directly used in the testing phase. In the experiment, we will compare local max pooling and local average pooling, and demonstrate the superiority of the former (LMP).

Figure 6: Sample images of (upper left:) DukeMTMC-reID dataset, (lower left:) Market-1501 dataset, (upper right:) Duke images which are translated to Market style, and (lower right:) Market images translated to Duke style. We use SPGAN for image-image translation.

4. Experiment

4.1. Datasets

We select two large-scale re-ID datasets for experiment, i.e., Market-1501 [49] and DukeMTMC-reID [36, 51]. Market-1501 is composed of 1,501 identities, 12,936 training images and 19,732 gallery images (with 2,793 distractors). It is split into 751 identities for training and 750 identities for testing. Each identity is captured by at most 6 cameras. All the bounding boxes are produced by DPM [8]. DukeMTMC-reID is a re-ID version of the DukeMTMC dataset [36]. It contains 34,183 image boxes of 1,404 identities: 702 identities are used for training and the remaining 702 for testing. There are 2,228 queries and 17,661 database images. For both datasets, we report rank-1 accuracy and mAP for re-ID evaluation [49]. Sample images of the two datasets are shown in Fig. 6.

4.2. Implementation Details

SPGAN training and testing. We use TensorFlow [1] to train SPGAN using the training images of Market-1501 and DukeMTMC-reID. Note that we do not use any ID annotation during the learning procedure. In all experiments, we empirically set λ1 = 10, λ2 = 5, λ3 = 2 in Eq. 6
Methods                       DukeMTMC-reID                                Market-1501
                              rank-1  rank-5  rank-10  rank-20  mAP        rank-1  rank-5  rank-10  rank-20  mAP
Supervised Learning           66.7    79.1    83.8     88.7     46.3       75.8    89.6    92.8     95.4     52.2
Direct Transfer               33.1    49.3    55.6     61.9     16.7       43.1    60.8    68.1     74.7     17.0
CycleGAN (basel.)             38.1    54.4    60.5     65.9     19.6       45.6    63.8    71.3     77.8     19.1
CycleGAN (basel.) + Lide      38.5    54.6    60.8     66.6     19.9       48.1    66.2    72.7     80.1     20.7
SPGAN (m = 0)                 37.7    53.1    59.5     65.6     20.0       49.2    66.9    74.0     80.0     20.5
SPGAN (m = 1)                 39.5    55.0    61.4     67.3     21.0       48.7    65.7    73.0     79.3     21.0
SPGAN (m = 2)                 41.1    56.6    63.0     69.6     22.3       51.5    70.1    76.8     82.4     22.8
SPGAN (m = 2) + LMP           46.4    62.3    68.0     73.8     26.2       57.7    75.8    82.4     87.6     26.7

Table 2: Comparison of various methods on the target domains. When tested on DukeMTMC-reID, Market-1501 is used as source, and vice versa. “Supervised learning” denotes using the full ID labels on the corresponding target dataset. “Direct Transfer” means directly applying the source-trained model on the target domain (see Section 3.1). By varying m specified in Eq. 5, the sensitivity of SPGAN to the relative importance of the positive and negative pairs is shown. When local max pooling (LMP) is applied, the number of parts is set to 6. We use IDE [50] for feature learning.
and m = 2 in Eq. 5. We use an initial learning rate of 0.0002, and the model stops training after 5 epochs. During the testing procedure, we employ the generator G for Market-1501 → DukeMTMC-reID translation and the generator F for DukeMTMC-reID → Market-1501 translation. The translated images are used for training re-ID models.
For CycleGAN, we adopt the architecture released by its
authors. For SiaNet, it contains 4 convolutional layers, 4
max pooling layers and 1 fully connected (FC) layer, configured as below. (1) Conv. 4 × 4, stride = 2, #feature maps
= 64; (2) Max pooling 2 × 2, stride = 2; (3) Conv. 4 × 4,
stride = 2, #feature maps = 128; (4) Max pooling 2 × 2,
stride = 2; (5) Conv. 4 × 4, stride = 2, feature maps = 256;
(6) Max pool 2 × 2, stride = 2; (7) Conv. 4 × 4, stride = 2,
#feature maps = 512; (8) Max pooling 2 × 2, stride = 2; (9)
FC, output dimension = 128.
Feature learning for re-ID. Following [50], we train a
classification network for re-ID embedding learning, named
ID-discriminative Embedding (IDE). Specifically, ResNet50 [17] pretrained on ImageNet is used for fine-tuning on
the translated training set. We modify the output of the
last fully-connected layer to 751 and 702 for Market-1501
and DukeMTMC-reID, respectively. We use mini-batch
stochastic gradient descent to train the CNN model on a
Tesla K80 GPU. Training parameters such as batch size,
maximum number of epochs, momentum and gamma are set to 16, 50, 0.9 and 0.1, respectively. The initial learning rate is set to 0.001, and decays to 0.0001 after 40 epochs.
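A minimal sketch of the optimization schedule quoted above (batch size 16, 50 epochs, momentum 0.9, learning rate 0.001 decayed by gamma = 0.1 after 40 epochs) is given below. PyTorch is used here only for illustration, and the model and data loader are assumed to be provided by the caller.

```python
import torch
import torch.nn.functional as nnf

def train_ide(model, train_loader, epochs=50, base_lr=0.001):
    # SGD with momentum 0.9; step decay by 0.1 after 40 epochs,
    # matching the settings quoted in the text.
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)
    for _ in range(epochs):
        for images, labels in train_loader:   # translated images G(S) + source labels
            optimizer.zero_grad()
            loss = nnf.cross_entropy(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
```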
4.3. Evaluation

Comparison between supervised learning and direct transfer. The supervised learning method and the direct transfer method are specified in Table 1. When comparing the two methods in Table 2, we can clearly observe a large performance drop when directly using a source-trained model on the target domain. For instance, the ResNet-50 model trained and tested on Market-1501 achieves 75.8% in rank-1 accuracy, but drops to 43.1% when trained on DukeMTMC-reID and tested on Market-1501. A similar drop can be observed when DukeMTMC-reID is used as the target domain, which is consistent with the experiments reported in [6]. The reason behind the performance drop is the bias of data distributions in different domains.

The effectiveness of the “learning via translation” baseline using CycleGAN. In this baseline domain adaptation approach (Section 3.1), we first translate the labeled images from the source domain to the target domain and then use the translated images to train re-ID models. As shown in Table 2, this baseline framework effectively improves the re-ID performance on the target dataset. Compared with the direct transfer method, the CycleGAN transfer baseline gains +2.5% and +2.1% improvements in rank-1 accuracy and mAP on Market-1501. When tested on DukeMTMC-reID, the performance gain is +5.0% and +2.9% in rank-1 accuracy and mAP, respectively. Through such an image-level domain adaptation method, effective domain adaptation baselines can be learned.

The impact of the target domain identity constraint. We conduct experiments to verify the influence of the identity loss on performance in Table 2. We arrive at mixed observations. On the one hand, on DukeMTMC-reID, compared with the CycleGAN baseline, CycleGAN + Lide achieves similar rank-1 accuracy and mAP. On the other hand, on Market-1501, CycleGAN + Lide gains +2.5% and +1.6% improvement in rank-1 accuracy and mAP, respectively. The reason is that Market-1501 has a larger inter-camera variance. When translating Duke images to the Market style, the translated images may be more prone to translation errors induced by the camera variances. Therefore, the identity loss is more effective when Market is the target domain. Considering that the performance never drops, we use the target domain identity constraint as an auxiliary tool for image-image translation. As shown in Fig. 4, this loss helps prevent CycleGAN from generating strangely colored results.

The effectiveness of the proposed SPGAN. On top of the CycleGAN baseline, we replace CycleGAN with SPGAN (m = 2). The effectiveness of the proposed similarity preserving constraint can be seen in Table 2. Compared with CycleGAN + Lide, on DukeMTMC-reID, the similarity preserving constraint leads to +2.6% and +2.4% improvement in rank-1 accuracy and mAP, respectively. On Market-1501, the gains are +3.4% and +2.1% in rank-1 accuracy and mAP, respectively. The working mechanism of SPGAN consists in preserving the underlying visual cues associated with the ID labels. The consistent improvement suggests that this working mechanism is critical for generating suitable samples for training in the target domain. Examples of translated images by SPGAN are shown in Fig. 6.

Comparison of different feature learning methods. In Step 2, we evaluate three feature learning methods, i.e., IDE [50] (described in Section 3.3), IDE+ [52], and SVDNet [39], as shown in Fig. 7. An interesting observation is that, while IDE+ and SVDNet are superior to IDE under the scenario of “Direct Transfer”, the three learning methods are basically on par with each other when using training samples generated by SPGAN. A possible explanation is that many of the generated samples are imperfect, which has a larger effect on those better learning schemes.

Figure 7: Domain adaptation performance with different feature learning methods, including IDE (Section 3.3), IDE+ [52], and SVDNet [39]. Three domain adaptation methods are compared, i.e., direct transfer, CycleGAN with identity loss, and the proposed SPGAN. The results are on Market-1501.

Figure 8: λ3 (Eq. 6) vs. re-ID accuracy. A larger λ3 means a larger weight of the similarity preserving constraint.

Sensitivity of SPGAN to key parameters. The margin m defined in Eq. 5 is a key parameter. If m = 0, the loss of negative pairs is not back propagated. If m gets larger, the weight of negative pairs in the loss calculation increases. We conduct experiments to verify the impact of m, and results are shown in Table 2. When turning off the contribution of negative pairs in Eq. 5 (m = 0), SPGAN only marginally improves the accuracy on Market-1501, and even compromises the system on Duke. When increasing m to 2, we obtain much superior accuracy. It indicates that the negative pairs are critical to the system.

Moreover, we evaluate the impact of λ3 in Eq. 6 on Market-1501. λ3 controls the relative importance of the proposed similarity preserving constraint. As shown in Fig. 8, the proposed constraint is proven effective when compared to λ3 = 0, but a larger λ3 does not bring more gains in accuracy. Specifically, λ3 = 2 yields the best accuracy.

Local max pooling further improves the transfer performance. We apply LMP on the Conv5 layer to mitigate the influence of noise. Note that LMP is directly adopted in the feature extraction step for testing without fine-tuning. We empirically study how the number of parts and the pooling mode affect the performance. The experiment is conducted on SPGAN. The performance of various numbers of parts (P = 1, 2, 3, 6) and different pooling modes (max or average) is provided in Table 3. When we use average pooling and P = 1, we have the original GAP used in ResNet-50. From these results, we speculate that with more parts, a finer partition leads to more discriminative descriptors and thus higher re-ID accuracy.

Moreover, we test LMP on supervised learning and domain adaptation scenarios with three feature learning methods, i.e., IDE [50], IDE+ [52], and SVDNet [39]. As shown in Fig. 9, LMP does not guarantee stable improvement on supervised learning, as observed for IDE+ and SVDNet. However, when applied in the scenario of domain adaptation with SPGAN, LMP yields improvement over IDE, IDE+, and SVDNet. The superiority of LMP probably lies in that max pooling filters out some of the noisy signals in the descriptor induced by SPGAN.
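For readers who want a concrete picture of LMP, the following is our reading of the description above as a PyTorch function (not the released code): horizontal stripes of the Conv5 map, each extended by roughly one row of overlap, pooled and concatenated. The exact stripe boundaries are an assumption based on Fig. 5.

```python
import torch

def local_max_pooling(feat, parts=6, mode="max"):
    # feat: (N, C, H, W) Conv5 activations, e.g. (N, 2048, 7, 7) for ResNet-50
    n, c, h, w = feat.shape
    bounds = torch.linspace(0, h, parts + 1).long()
    pooled = []
    for p in range(parts):
        start = max(int(bounds[p]) - 1, 0)          # ~one row of overlap
        end = min(int(bounds[p + 1]) + 1, h)
        stripe = feat[:, :, start:end, :]
        pooled.append(stripe.amax(dim=(2, 3)) if mode == "max"
                      else stripe.mean(dim=(2, 3)))
    return torch.cat(pooled, dim=1)                 # (N, parts * C) descriptor
```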
4.4. Comparison with State-of-the-art Methods

We compare the proposed method with the state-of-the-art unsupervised learning methods on Market-1501 and DukeMTMC-reID in Table 4 and Table 5, respectively.
Figure 9: Experiment of LMP (P = 6) on scenarios of supervised learning and SPGAN. Three feature learning methods are compared, i.e., IDE [50], IDE+ [52], and SVDNet [39]. The results are on Market-1501.

#parts  mode  dim     DukeMTMC-reID        Market-1501
                      rank-1   mAP         rank-1   mAP
1       Avg   2048    41.1     22.3        51.5     22.8
1       Max   2048    44.3     25.0        55.7     21.8
2       Avg   4096    42.3     23.3        54.4     25.0
2       Max   4096    45.6     25.5        57.3     26.2
3       Avg   6144    43.1     23.6        54.9     25.5
3       Max   6144    45.5     25.6        57.4     26.4
6       Avg   12288   44.1     24.4        55.9     26.0
6       Max   12288   46.4     26.2        57.7     26.7

Table 3: Performance of various pooling strategies with different numbers of parts (P) and pooling modes (maximum or average) over SPGAN. The best results are in bold.

Methods            Setting   Rank-1   Rank-5   Rank-10   mAP
Bow [49]           SQ        35.8     52.4     60.3      14.8
LOMO [26]          SQ        27.2     41.6     49.1      8.0
UMDL [35]          SQ        34.5     52.6     59.6      12.4
PUL [6]*           SQ        45.5     60.7     66.7      20.5
Direct transfer    SQ        43.1     60.8     68.1      17.0
Direct transfer    MQ        47.9     65.5     73.0      20.6
CAMEL [47]         MQ        54.5     -        -         26.3
SPGAN              SQ        51.5     70.1     76.8      22.8
SPGAN              MQ        57.0     73.9     80.3      27.1
SPGAN+LMP          SQ        57.7     75.8     82.4      26.7

Table 4: Comparison with state of the art on Market-1501. * denotes unpublished papers. “SQ” and “MQ” are the single-query and multiple-query settings, respectively.

Methods            Rank-1   Rank-5   Rank-10   mAP
Bow [49]           17.1     28.8     34.9      8.3
LOMO [26]          12.3     21.3     26.6      4.8
UMDL [35]          18.5     31.4     37.6      7.3
PUL [6]*           30.0     43.4     48.5      16.4
Direct transfer    33.1     49.3     55.6      16.7
SPGAN              41.1     56.6     63.0      22.3
SPGAN+LMP          46.4     62.3     68.0      26.2

Table 5: Comparison with state of the art on DukeMTMC-reID. * denotes unpublished papers.
Market-1501. On Market-1501, we first compare our results with two hand-crafted features, i.e., Bag-of-Words (BoW) [49] and local maximal occurrence (LOMO) [26]. These two hand-crafted features are directly applied on the test dataset without any training process, and their inferiority can be clearly observed. We also compare with existing unsupervised methods, including Clustering-based Asymmetric MEtric Learning (CAMEL) [47], Progressive Unsupervised Learning (PUL) [6], and UMDL [35]. The results of UMDL are reproduced by Fan et al. [6]. In the single-query setting, we achieve rank-1 accuracy = 51.5% and mAP = 22.8%. This outperforms the second best method [6] by +6.0% in rank-1 accuracy. In the multiple-query setting, we arrive at rank-1 accuracy = 57.0% and mAP = 27.1%, which is +2.5% higher than CAMEL [47]. The comparisons indicate the competitiveness of the proposed method on Market-1501.
DukeMTMC-reID. On DukeMTMC-reID, we compare the proposed method with BoW [49], LOMO [26], UMDL [35], and PUL [6] under the single-query setting (there is no multiple-query setting in DukeMTMC-reID). The result obtained by the proposed method is rank-1 accuracy = 41.1%, mAP = 22.3%. Compared with the second best method, i.e., PUL [6], our result is +11.1% higher in rank-1 accuracy. Therefore, the superiority of SPGAN can be concluded.
5. Conclusion
This paper focuses on domain adaptation in person re-ID. When models trained on one dataset are directly transferred to another dataset, the re-ID accuracy drops dramatically due to dataset bias. To achieve improved performance on the new dataset, we present a “learning via translation” baseline for domain adaptation, characterized by 1) unsupervised image-image translation and 2) supervised feature learning. We further propose that the underlying (latent) ID information for the foreground pedestrian should be preserved after image-image translation. To meet this requirement tailored for re-ID, we introduce the unsupervised self-similarity and domain-dissimilarity for similarity preserving image generation (SPGAN). We show that SPGAN better qualifies the generated images for domain adaptation and yields consistent improvement over CycleGAN.
References
[20] P. Isola, J. Zhu, T. Zhou, and A. A. Efros. Image-to-image
translation with conditional adversarial networks. In CVPR,
2017. 2
[21] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for
real-time style transfer and super-resolution. In ECCV, 2016.
2
[22] T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim. Learning to
discover cross-domain relations with generative adversarial
networks. In ICML, 2017. 1, 2
[23] C. Li and M. Wand. Precomputed real-time texture synthesis
with markovian generative adversarial networks. In ECCV,
2016. 2
[24] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M. Yang. Diversified texture synthesis with feed-forward networks. In
CVPR, 2017. 2
[25] Y. Li, N. Wang, J. Liu, and X. Hou. Demystifying neural
style transfer. In IJCAI, 2017. 2
[26] S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person re-identification
by local maximal occurrence representation and metric
learning. In CVPR, 2015. 2, 8
[27] M. Liu, T. Breuel, and J. Kautz. Unsupervised image-toimage translation networks. In NIPS, 2017. 1, 2, 3
[28] M. Liu and O. Tuzel. Coupled generative adversarial networks. In NIPS, 2016. 1, 2
[29] Z. Liu, D. Wang, and H. Lu. Stepwise metric promotion for
unsupervised video person re-identification. In ICCV, 2017.
3
[30] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In ICML,
2015. 2
[31] M. Long, G. Ding, J. Wang, J. Sun, Y. Guo, and P. S. Yu.
Transfer sparse coding for robust image representation. In
CVPR, 2013. 2
[32] B. Ma, Y. Su, and F. Jurie. Covariance descriptor based
on bio-inspired features for person re-identification and face
verification. Image Vision Comput., 2014. 2
[33] T. Matsukawa, T. Okabe, E. Suzuki, and Y. Sato. Hierarchical gaussian descriptor for person re-identification. In CVPR,
2016. 2
[34] S. Motiian, M. Piccirilli, D. A. Adjeroh, and G. Doretto. Unified deep supervised domain adaptation and generalization.
In ICCV, 2017. 2
[35] P. Peng, T. Xiang, Y. Wang, M. Pontil, S. Gong, T. Huang,
and Y. Tian. Unsupervised cross-dataset transfer learning for
person re-identification. In CVPR, 2016. 3, 8
[36] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi.
Performance measures and a data set for multi-target, multicamera tracking. In European Conference on Computer
Vision workshop on Benchmarking Multi-Target Tracking,
2016. 5
[37] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV, 2010. 2
[38] B. Sun, J. Feng, and K. Saenko. Return of frustratingly easy
domain adaptation. In AAAI, 2016. 2
[39] Y. Sun, L. Zheng, W. Deng, and S. Wang. SVDNet for pedestrian retrieval. In ICCV, 2017. 7, 8
[1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean,
M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur,
J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner,
P. A. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and
X. Zheng. Tensorflow: A system for large-scale machine
learning. In OSDI, 2016. 5
[2] H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, and
M. Marchand. Domain-adversarial neural networks. arXiv
preprint arXiv:1412.4446, 2014. 2
[3] S. Benaim and L. Wolf. One-sided unsupervised domain
mapping. In NIPS, 2017. 2
[4] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with
generative adversarial networks. In CVPR, 2017. 1, 2
[5] T. Q. Chen and M. Schmidt. Fast patch-based style transfer
of arbitrary style. arXiv preprint arXiv:1612.04337, 2016. 2
[6] H. Fan, L. Zheng, and Y. Yang. Unsupervised person reidentification: Clustering and fine-tuning. arXiv preprint
arXiv:1705.10444, 2017. 1, 3, 6, 8
[7] M. Farenzena, L. Bazzani, A. Perina, V. Murino, and
M. Cristani. Person re-identification by symmetry-driven accumulation of local features. In CVPR, 2010. 2
[8] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In
CVPR, 2008. 5
[9] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, 2013. 2
[10] Y. Ganin and V. S. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, 2015. 2
[11] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle,
F. Laviolette, M. Marchand, and V. S. Lempitsky. Domainadversarial training of neural networks. Journal of Machine
Learning Research, 2016. 2
[12] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer
using convolutional neural networks. In CVPR, 2016. 2
[13] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow
kernel for unsupervised domain adaptation. In CVPR, 2012.
2
[14] D. Gray and H. Tao. Viewpoint invariant pedestrian recognition with an ensemble of localized features. In ECCV, 2008.
2
[15] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and
A. J. Smola. A kernel two-sample test. Journal of Machine
Learning Research, 2012. 2
[16] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
4
[17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning
for image recognition. In CVPR, 2016. 4, 6
[18] J. Hoffman, E. Tzeng, T. Park, and J.-Y. Zhu. Cycada: Cycleconsistent adversarial domain adaptation. arXiv preprint
arXiv:1711.03213, 2017. 1, 2, 3
[19] X. Huang and S. J. Belongie. Arbitrary style transfer in realtime with adaptive instance normalization. In ICCV, 2017.
2
[40] Y. Taigman, A. Polyak, and L. Wolf. Unsupervised crossdomain image generation. ICLR, 2016. 3
[41] A. Torralba and A. A. Efros. Unbiased look at dataset bias.
In CVPR, 2011. 1
[42] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell.
Deep domain confusion: Maximizing for domain invariance.
arXiv preprint arXiv:1412.3474, 2014. 2
[43] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. S. Lempitsky.
Texture networks: Feed-forward synthesis of textures and
stylized images. In ICML, 2016. 2
[44] H. Wang, S. Gong, and T. Xiang. Unsupervised learning
of generative topic saliency for person re-identification. In
BMVC, 2014. 3
[45] M. Ye, A. J. Ma, L. Zheng, J. Li, and P. C. Yuen. Dynamic label graph matching for unsupervised video re-identification.
In ICCV, 2017. 3
[46] Z. Yi, H. Zhang, P. Tan, and M. Gong. Dualgan: Unsupervised dual learning for image-to-image translation. In ICCV,
2017. 1, 2
[47] H. Yu, A. Wu, and W. Zheng. Cross-view asymmetric metric
learning for unsupervised person re-identification. In ICCV,
2017. 3, 8
[48] R. Zhao, W. Ouyang, and X. Wang. Unsupervised salience
learning for person re-identification. In CVPR, 2013. 3
[49] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian.
Scalable person re-identification: A benchmark. In ICCV,
2015. 2, 5, 8
[50] L. Zheng, Y. Yang, and A. G. Hauptmann. Person reidentification: Past, present and future. arXiv preprint
arXiv:1610.02984, 2016. 3, 4, 6, 7, 8
[51] Z. Zheng, L. Zheng, and Y. Yang. Unlabeled samples generated by gan improve the person re-identification baseline in
vitro. In ICCV, 2017. 5
[52] Z. Zhong, L. Zheng, D. Cao, and S. Li. Re-ranking person
re-identification with k-reciprocal encoding. 2017. 7, 8
[53] J. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired imageto-image translation using cycle-consistent adversarial networks. ICCV, 2017. 1, 2, 3, 4
The Yagita invariant of symplectic groups of large rank
arXiv:1803.09561v1 [] 26 Mar 2018
Cornelia M. Busch∗
Ian J. Leary†
March 28, 2018
Abstract
Fix a prime p, and let O denote a subring of C that is either integrally closed or
contains a primitive pth root of 1. We determine the Yagita invariant at the prime p
for the symplectic group Sp(2n, O) for all n ≥ p − 1.
1 Introduction
The Yagita invariant p◦ (G) of a discrete group G is an invariant that generalizes the period
of the p-local Tate-Farrell cohomology of G, in the following sense: it is a numerical invariant
defined for any G that is equal to the period when the p-local cohomology of G is periodic.
Yagita considered finite groups [6], and Thomas extended the definition to groups of finite
vcd [5]. In [3] the definition was extended to arbitrary groups and p◦ (G) was computed for
G = GL(n, O) for O any integrally closed subring of C and for sufficiently large n (depending
on O).
In [2], one of us computed the Yagita invariant for Sp(2(p+1), Z). Computations from [3]
were used to provide an upper bound and computations with finite subgroups and with
mapping class groups were used to provide a lower bound [4]. The action of the mapping
class group of a surface upon the first homology of the surface gives a natural symplectic
representation of the mapping class group of a genus p + 1-surface inside Sp(2(p + 1), Z).
In the current paper, we compute p◦ (Sp(2n, O)) for each n ≥ p − 1 for each O for which
p◦ (GL(n, O)) was computed in [3]. By using a greater range of finite subgroups we avoid
having to consider mapping class groups.
Throughout the paper, we fix a prime p. Before stating our main result we recall the
definitions of the symplectic group Sp(2n, R) over a ring R, and of the Yagita invariant
∗ The first author acknowledges support from ETH Zürich, which facilitated this work.
† The second author would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Non-positive Curvature, Group Actions and Cohomology, when work on this paper was undertaken. This work was supported by EPSRC grant no. EP/K032208/1 and by a grant from the Leverhulme Trust.
p◦ (G), which depends on the prime p as well as on the group G. The group Sp(2n, R) is the
collection of invertible 2n × 2n matrices M over R such that
0 In
T
.
M JM = J, where J :=
−In 0
Here M T denotes the transpose of the matrix M, and as usual In denotes the n × n identity
matrix. Equivalently M ∈ Sp(2n, R) if M defines an isometry of the antisymmetric bilinear
form on R2n defined by hx, yi := xT Jy. If C is cyclic of order p, then the group cohomology
ring H ∗(C; Z) has the form
H ∗ (C; Z) ∼
= Z[x]/(px), x ∈ H 2 (C; Z).
If C is a cyclic subgroup of G of order p, define n(C) a positive integer or infinity to be the
supremum of the integers n such that the image of H ∗ (G; Z) → H ∗ (C; Z) is contained in the
subring Z[xn ]. Now define
p◦ (G) := l. c. m.{2n(C) : C ≤ G, |C| = p}.
It is easy to see that if H ≤ G then p◦ (H) divides p◦ (G) [3, Prop. 1].
In the following theorem statement and throughout the paper we let ζp be a primitive
pth root of 1 in C and we let O denote a subring of C with F ⊆ C as its field of fractions.
We assume that either ζp ∈ O or that O is integrally closed in C. We define l := |F [ζp ] : F |,
the degree of F [ζp ] as an extension of F . For t ∈ R with t ≥ 1, we define ψ(t) to be the
largest integer power of p less than or equal to t.
Theorem 1. With notation as above, for each n ≥ p − 1, the Yagita invariant p◦ (Sp(2n, O))
is equal to 2(p − 1)ψ(2n/l) for l even and equal to 2(p − 1)ψ(n/l) for l odd.
By the main result of [3], the above is equivalent to the statement that p◦ (Sp(2n, O)) =
p◦(GL(2n, O)) when l is even and p◦(Sp(2n, O)) = p◦(GL(n, O)) when l is odd. By definition
Sp(2n, O) is a subgroup of GL(2n, O) and there is an inclusion GL(n, O) → Sp(2n, O) defined
by
A \mapsto \begin{pmatrix} A & 0 \\ 0 & (A^T)^{-1} \end{pmatrix},
and so for any n, p◦ (GL(n, O)) divides p◦ (Sp(2n, O)), which in turn divides p◦ (GL(2n, O)).
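As a quick numerical sanity check (ours, not from the paper), one can verify that the block matrix diag(A, (A^T)^{-1}) satisfies M^T J M = J, i.e. lies in Sp(2n), for an arbitrary invertible A.

```python
import numpy as np

n = 3
A = np.array([[1, 2, 0], [0, 1, 5], [0, 0, 1]], dtype=float)   # any invertible matrix
M = np.block([[A, np.zeros((n, n))],
              [np.zeros((n, n)), np.linalg.inv(A.T)]])
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(M.T @ J @ M, J)      # M is symplectic
```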
Lemma 2. If M is in the symplectic group, then M is conjugate to (M −1 )T = (M T )−1 , and
det(M) = 1.
Proof. The matrix J defining the symplectic form satisfies J 4 = I, and so in particular it is
invertible. The equation M T JM = J implies the equation JMJ −1 = (M T )−1 .
The usual way to show that the determinant of M is equal to 1 is via the Pfaffian. The
Pfaffian is a function A 7→ pf(A) on the set of skew-symmetric matrices, which is polynomial
in the matrix coefficients and is a square root of the determinant, i.e., pf(A)2 = det(A) for
each skew-symmetric matrix A. Given these properties, it is easy to verify that the identity
pf(M T AM) = det(M)pf(A) holds for all matrices M and all skew-symmetric matrices A.
Since J is invertible, pf(J) 6= 0, and if M is symplectic, the equations
pf(J) = pf(M T JM) = det(M)pf(J)
imply that det(M) = 1.
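The Pfaffian identity pf(M^T A M) = det(M) pf(A) used in this proof can be illustrated numerically (this check is ours, not part of the paper) in the smallest non-trivial case of 4×4 skew-symmetric matrices, where pf(A) = a_{12}a_{34} − a_{13}a_{24} + a_{14}a_{23}.

```python
import numpy as np

def pf4(A):
    # Pfaffian of a 4x4 skew-symmetric matrix
    return A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = B - B.T                              # random skew-symmetric matrix
M = rng.normal(size=(4, 4))
assert np.isclose(pf4(A) ** 2, np.linalg.det(A))                       # pf(A)^2 = det(A)
assert np.isclose(pf4(M.T @ A @ M), np.linalg.det(M) * pf4(A))         # pf(M^T A M) = det(M) pf(A)
```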
Proposition 3. Let f(X) be a polynomial over the field F_p, all of whose roots lie in F_p^×. If there is a polynomial g and an integer n so that f(X) = g(X^n), then n has the form n = mp^q for some m dividing p − 1 and some positive integer q. If p is odd and for each i ∈ F_p^×, the multiplicity of i as a root of f is equal to that of −i, then m is even.
Proof. The only part of this that is not contained in [3, Prop. 6] is the final statement. Since (1 − iX)(1 + iX) = 1 − i^2X^2 is a polynomial in X^2, the final statement follows. For the benefit of the reader, we sketch the rest of the proof. If n = mp^q where p does not divide m, then g(X^n) = g(X^m)^{p^q}, so we may assume that q = 0. If g(Y) = 0 has roots y_i, then the roots of g(X^m) = 0 are the roots of y_i − X^m = 0. Since p does not divide m, these polynomials have no repeated roots; since their roots are assumed to lie in F_p it is now easy to show that m divides p − 1.
Corollary 4. With notation as in Theorem 1, let G be a subgroup of Sp(2n, F ). Then
the Yagita invariant p◦ (G) divides the number given for p◦ (Sp(2n, O)) in the statement of
Theorem 1.
Proof. As in [3, Cor. 7], for each C ≤ G of order p, we use the total Chern class to give an
upper bound for the number n(C) occuring in the definition of p◦ (G). If C is cyclic of order
p, then C has p distinct irreducible complex representations, each 1-dimensional. If we write
H ∗ (C; Z) = Z[x]/(px), then the total Chern classes of these representations are 1 + ix for
each i ∈ Fp , where i = 0 corresponds to the trivial representation. The total Chern class of a
direct sum of representations is the product of the total Chern classes, and so when viewed
as a polynomial in Fp [x] = H ∗ (C; Z) ⊗Fp , the total Chern class of any faithful representation
ρ : C → GL(2n, C) is a non-constant polynomial of degree at most 2n all of whose roots lie
in F_p^×. Now let F be a subfield of C with l = |F[ζ_p] : F| as in the statement. The group C
has (p−1)/l non-trivial irreducible representations over F , each of dimension l, and the total
Chern classes of these representations have the form 1 − ixl , where i ranges over the (p − 1)/l
distinct lth roots of unity in Fp . In particular, the total Chern class of any representation
ρ : C → GL(2n, F ) ≤ GL(2n, C) is a polynomial in xl whose x-degree is at most 2n. If ρ has
image contained in Sp(2n, C), then it factors as ρ = ι ◦ ρe with ρe : C → Sp(2n, C) and ι is the
inclusion of Sp(2n, C) in GL(2n, C). In this case the matrix representing a generator for C is
conjugate to the transpose of its own inverse; in particular it follows that the multiplicities of
the irreducible complex representations of C with total Chern classes 1 + ix and 1 − ix must
be equal for each i. Hence in this case, if p is odd, the total Chern class of the representation
ρ = ι ◦ ρe is a polynomial in x2 . If p = 2 (which implies that l = 1) then the total Chern
class of any representation ρ : C → GL(2n, C) has the form (1 + x)i , where i is equal to the
number of non-trivial irreducible summands. Since Sp(2n, C) ≤ SL(2n, C) it follows that
for symplectic representations i must be even, and so for p = 2, the total Chern class is a
polynomial in x2 .
In summary, let ρ̃ be a faithful representation of C in Sp(2n, F). In the case when l is odd, the total Chern class of ρ̃ is a non-constant polynomial f̃(y) = f(x) in y = x^{2l} such that f(x) has degree at most 2n, f̃(y) has degree at most n/l, and all roots of f, f̃ lie in F_p^×. In the case when l is even, the total Chern class of ρ is a non-constant polynomial f̃(y) = f(x) in y = x^l such that f(x) has degree at most 2n, f̃(y) has degree at most 2n/l, and all roots of both lie in F_p^×. By Proposition 3, it follows that each n(C) is a factor of the number given for p◦(Sp(2n, O)), and hence the claim.
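The parity step in this argument (weights occurring in pairs {i, −i} force the total Chern class to be a polynomial in x^2) can be checked computationally; the snippet below is our small illustration, not part of the paper, with p = 5 and a symplectic-style weight multiset chosen for the example.

```python
p = 5
weights = [1, -1, 2, -2]           # weights come in +/- pairs

# multiply out prod_w (1 + w*x) as a coefficient list mod p
coeffs = [1]                       # coeffs[k] is the coefficient of x^k
for w in weights:
    new = [0] * (len(coeffs) + 1)
    for k, c in enumerate(coeffs):
        new[k] = (new[k] + c) % p
        new[k + 1] = (new[k + 1] + c * w) % p
    coeffs = new

print(coeffs)                      # -> [1, 0, 0, 0, 4]: only even-degree terms survive
```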
Lemma 5. Let H ≤ G with |G : H| = m, and let ρ be a symplectic representation of H on V = O^{2n}. The induced representation Ind^G_H(ρ) is a symplectic representation of G on W := OG ⊗_{OH} V ≅ O^{2mn}.
Proof. Let e_1, . . . , e_n, f_1, . . . , f_n be the standard basis for V = O^{2n}, so that the bilinear form ⟨v, w⟩ := v^T J w on V is given by

⟨e_i, e_j⟩ = 0 = ⟨f_i, f_j⟩,   ⟨e_i, f_j⟩ = −⟨f_i, e_j⟩ = δ_{ij}.

The representation ρ is symplectic if and only if each ρ(h) preserves this bilinear form.
Let t_1, . . . , t_m be a left transversal to H in G, so that OG = ⊕_{i=1}^{m} t_i OH as right OH-modules. Define a bilinear form ⟨ , ⟩_W on W by

\Big\langle \sum_{i=1}^{m} t_i \otimes v^i, \; \sum_{i=1}^{m} t_i \otimes w^i \Big\rangle_W := \sum_{i=1}^{m} \langle v^i, w^i \rangle.
To see that this bilinear form is preserved by the OG-action on W , fix g ∈ G and define
a permutation π of {1, . . . , m} and elements h1 , . . . , hm ∈ H by the equations gti = tπ(i) hi .
Now for each i, j with 1 ≤ i, j ≤ m
⟨Ind(ρ(g)) t_i ⊗ v, Ind(ρ(g)) t_j ⊗ w⟩_W = ⟨t_{π(i)} ⊗ ρ(h_i)v, t_{π(j)} ⊗ ρ(h_j)w⟩_W
                                        = δ_{π(i)π(j)} ⟨ρ(h_i)v, ρ(h_j)w⟩
                                        = δ_{ij} ⟨ρ(h_i)v, ρ(h_i)w⟩
                                        = δ_{ij} ⟨v, w⟩
                                        = ⟨t_i ⊗ v, t_j ⊗ w⟩_W.
To see that ⟨ , ⟩_W is symplectic, define basis elements E^1, . . . , E^{mn}, F^1, . . . , F^{mn} for W by the equations

E^{n(i−1)+j} := t_i ⊗ e_j   and   F^{n(i−1)+j} := t_i ⊗ f_j,   for 1 ≤ i ≤ m, 1 ≤ j ≤ n.

It is easily checked that for 1 ≤ i, j ≤ mn

⟨E^i, E^j⟩_W = 0 = ⟨F^i, F^j⟩_W,   ⟨E^i, F^j⟩_W = −⟨F^i, E^j⟩_W = δ_{ij},

and so with respect to this basis for W, the bilinear form ⟨ , ⟩_W is the standard symplectic form.
Proposition 6. With notation as in Theorem 1, the Yagita invariant p◦ (Sp(2n, O)) is divisible by the number given in the statement of Theorem 1.
Proof. To give lower bounds for p◦ (Sp(2n, O)) we use finite subgroups. Firstly, consider
the semidirect product H = Cp ⋊Cp−1 , where Cp−1 acts faithfully on Cp ; equivalently this
is the group of affine transformations of the line over Fp . It is well known that the image
of H ∗ (G; Z) inside H ∗ (Cp ; Z) ∼
= Z[x]/(px) is the subring generated by xp−1 . It follows
that 2(p − 1) divides p◦ (G) for any G containing H as a subgroup. The group H has a
faithful permutation action on p points, and hence a faithful representation in GL(p − 1, Z),
where Zp−1 is identified with the kernel of the H-equivariant map Z{1, . . . , p} → Z. Since
GL(p − 1, Z) embeds in Sp(2(p − 1), Z) we deduce that H embeds in Sp(2n, O) for each O
and for each n ≥ p − 1.
To give a lower bound for the p-part of p◦(Sp(2n, O)) we use the extraspecial p-groups.
For p odd, let E(p, 1) be the non-abelian p-group of order p^3 and exponent p, and let E(2, 1)
be the dihedral group of order 8. (Equivalently in each case E(p, 1) is the Sylow p-subgroup
of GL(3, Fp).) For m ≥ 2, let E(p, m) denote the central product of m copies of E(p, 1),
so that E(p, m) is one of the two extraspecial groups of order p^{2m+1}. Yagita showed that
p◦(E(p, m)) = 2p^m for each m and p [6]. The centre and commutator subgroup of E(p, m)
are equal and have order p, and the abelianization of E(p, m) is isomorphic to C_p^{2m}. The
irreducible complex representations of E(p, m) are well understood: there are p^{2m} distinct
1-dimensional irreducibles, each of which restricts to the centre as the trivial representation,
and there are p − 1 faithful representations of dimension p^m, each of which restricts to the
centre as the sum of p^m copies of a single (non-trivial) irreducible representation of Cp. The
group G = E(p, m) contains a subgroup H isomorphic to C_p^{m+1}, and each of its faithful
p^m-dimensional representations can be obtained by inducing up a 1-dimensional representation
H → Cp → GL(1, C).
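Indeed, the dimension count is consistent with this: since H has order p^{m+1}, inducing a 1-dimensional representation χ of H gives
dim Ind_H^G(χ) = |G : H| · dim χ = (p^{2m+1} / p^{m+1}) · 1 = p^m.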
According to Bürgisser, Cp embeds in Sp(2l, O) (resp. in Sp(l, O) when l is even) provided
that O is integrally closed in C [1]. Here as usual, l := |F[ζp] : F| and F is the field of
fractions of O. If instead ζp ∈ O, then l = 1 and clearly Cp embeds in GL(1, O) and
hence also in Sp(2, O) = Sp(2l, O). Taking this embedding of Cp and composing it with
any homomorphism H → Cp we get a symplectic representation ρ of H on O^{2l} for any l
(resp. on O^l for l even). For a suitable homomorphism we know that Ind_H^G(ρ) is a faithful
representation of G on O^{2lp^m} (resp. on O^{lp^m} for l even) and by Lemma 5 we see that Ind_H^G(ρ)
is symplectic. Hence we see that E(p, m) embeds as a subgroup of Sp(2lp^m, O) for any l
and as a subgroup of Sp(lp^m, O) in the case when l is even. Since p◦(E(p, m)) = 2p^m, this
shows that 2p^m divides p◦(Sp(2lp^m, O)) always and that 2p^m divides p◦(Sp(lp^m, O)) in the
case when l is even.
Corollary 4 and Proposition 6 together complete the proof of Theorem 1.
We finish by pointing out that we have not computed p◦ (Sp(2n, O)) for general O when
n < p−1; to do this one would have to know which metacyclic groups Cp ⋊Ck with k coprime
to p admit low-dimensional symplectic representations.
References
[1] B. Bürgisser, Elements of finite order in symplectic groups. Arch. Math. (Basel) 39
(1982), no. 6, 501–509.
[2] C. M. Busch, The Yagita invariant of some symplectic groups. Mediterr. J. Math. 10
(2013), no. 1, 137–146.
[3] H. H. Glover, I. J. Leary, and C. B. Thomas, The Yagita Invariant of general linear
groups. Algebraic topology: new trends in localization and periodicity (Sant Feliu de
Guı́xols, 1994), Progr. Math. 136 (1996), 185–192.
[4] H. H. Glover, G. Mislin, and Y. Xia, On the Yagita invariant of mapping class groups.
Topology 33 (1994), no. 3, 557–574.
[5] C. B. Thomas, Free actions by p-groups on products of spheres and Yagita’s invariant
po(G). Transformation groups (Osaka, 1987), Lecture Notes in Math. 1375 (1989),
326–338.
[6] N. Yagita, On the dimension of spheres whose product admits a free action by a nonabelian group. Quart. J. Math. Oxford Ser. (2) 36 (1985), no. 141, 117–127.
Authors’ addresses:
[email protected]
Department of Mathematics
ETH Zürich
Rämistrasse 101
8092 Zürich
Switzerland
[email protected]
School of Mathematical Sciences,
University of Southampton,
Southampton,
SO17 1BJ
United Kingdom
| 4 |
arXiv:1506.08867v1 [cs.MS] 26 Jun 2015
Java Implementation of a Parameter-less
Evolutionary Portfolio
José C. Pereira
CENSE and DEEI-FCT, Universidade do Algarve
Campus de Gambelas, 8005-139 Faro, Portugal
[email protected]

Fernando G. Lobo
CENSE and DEEI-FCT, Universidade do Algarve
Campus de Gambelas, 8005-139 Faro, Portugal
[email protected]
Abstract
The Java implementation of a portfolio of parameter-less evolutionary algorithms is presented. The Parameter-less Evolutionary Portfolio implements a heuristic that performs adaptive selection of parameter-less evolutionary algorithms in accordance with performance criteria that are measured during running time. At present time, the portfolio includes three
parameter-less evolutionary algorithms: Parameter-less Univariate Marginal Distribution Algorithm, Parameter-less Extended Compact Genetic Algorithm, and Parameter-less Hierarchical Bayesian Optimization Algorithm. Initial experiments showed that the parameter-less
portfolio can solve various classes of problems without the need for any prior parameter setting
technique and with an increase in computational effort that can be considered acceptable.
1
Introduction
The Parameter-less Evolutionary Portfolio (P-EP) implements a heuristic that performs adaptive
selection of parameter-less evolutionary algorithms (P-EAs) in accordance with performance criteria that are measured during running time. This heuristic is inspired by the parameter-less genetic
algorithm (P-GA) (Harik and Lobo, 1999) and was first proposed by Lobo and Lima (2010). We direct the interested reader to these papers for a more general and detailed description of the heuristic
itself.
The main goal of this technical report is to present a Java implementation of the Parameter-less Evolutionary Portfolio (P-EPJava), whose source code is available for free download at
https://github.com/JoseCPereira/2015ParameterlessEvolutionaryPortfolioJava.
The remainder of this paper is organized as follows. In Section 2 we briefly describe the
main concept of the Parameter-less Evolutionary Portfolio. In Section 3 we discuss the P-EPJava
implementation itself and provide detailed instructions on how to use it and how to implement new
problems with it.
2
The Parameter-less Evolutionary Portfolio
The Parameter-less Evolutionary Portfolio always includes P-EAs that can be informally, but quantifiably, ordered by their increasing “complexity”. The more complex P-EAs should be capable of
tackling more difficult problems, usually at the expense of an increased cost in model building. Following this order, the P-EP alternates between the algorithms in a continuous loop, giving
the same amount of CPU time to all P-EAs in each iteration. Starting with a chosen initial time, T0 ,
the allowed CPU time is updated at each loop iteration to at least match the maximum time spent
in one generation by any of the P-EAs. At the same time, because they are able to advance further
in the search due to faster model building, simpler P-EAs are eliminated from the loop as soon as
their current best average fitness is lower than the best average fitness of a more complex P-EA.
The use of parameter-less algorithms allows P-EP to work as a black-box algorithm, without
the need for any prior parameter settings. Additionally, P-EP is designed to run forever, physical
constraints aside, because in practice the quality of the optimal solution is often unknown for many
problems, making it impossible to distinguish, for instance, when an algorithm has reached the
optimum result from when it simply got “stuck” in some plateau of the search space. Therefore,
P-EP leaves to the user the decision when to stop the computation, based on the quality of the
solutions already found and on the time and resources that she or he is willing to spend.
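To make the control flow concrete, the following is a minimal sketch of such a portfolio loop, assuming fitness is being maximized and that the algorithms are ordered by increasing complexity; the interface and method names are illustrative only and do not correspond to the actual P-EPJava classes.

import java.util.ArrayList;
import java.util.List;

// Sketch only: the portfolio loop, not the actual P-EPJava code.
interface EvolutionaryAlgorithm {
    // Runs generations for at most 'millis' of CPU time and returns the time
    // actually spent on the longest single generation.
    long stepFor(long millis);
    double bestAverageFitness();
}

class PortfolioSketch {
    // 'algorithms' is assumed ordered by increasing complexity; T0 is the initial time slice.
    static void run(List<EvolutionaryAlgorithm> algorithms, long initialTimeMillis) {
        long allowedTime = initialTimeMillis;
        List<EvolutionaryAlgorithm> active = new ArrayList<>(algorithms);
        while (true) {                                  // in practice the user interrupts the run
            long maxGenerationTime = 0;
            for (EvolutionaryAlgorithm ea : active) {
                long spent = ea.stepFor(allowedTime);   // same CPU budget for every P-EA
                maxGenerationTime = Math.max(maxGenerationTime, spent);
            }
            // Grow the time slice to at least the longest generation observed so far.
            allowedTime = Math.max(allowedTime, maxGenerationTime);
            // Drop a simpler algorithm once any more complex one has a better best average fitness.
            for (int i = 0; i < active.size() - 1; i++) {
                double best = active.get(i).bestAverageFitness();
                for (int j = i + 1; j < active.size(); j++) {
                    if (best < active.get(j).bestAverageFitness()) {
                        active.remove(i);
                        i--;
                        break;
                    }
                }
            }
        }
    }
}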
3
A Java Implementation of the Parameter-less Evolutionary
Portfolio
At present time, the Java implementation of the Parameter-less Evolutionary Portfolio (PEPJava) includes three P-EAs: Parameter-less Univariate Marginal Distribution Algorithm
(P-UMDA)1 , Parameter-less Extended Compact Genetic Algorithm (P-ECGA) (Lobo, 2000),
and Parameter-less Hierarchical Bayesian Optimization Algorithm (P-HBOA) (Pelikan et al.,
2007). As presented in detail in another arXiv report from the same authors, these three
P-EAs (plus the P-GA) are also implemented in Java as independent algorithms. The
source and binary files of those Java implementations are available for free download at
https://github.com/JoseCPereira/2015ParameterlessEvolutionaryAlgorithmsJava.
All algorithms integrated in the evolutionary portfolio must be parameter-less in the sense that
they use the population sizing method employed by the Parameter-less Genetic Algorithm (Harik
and Lobo, 1999). In addition, each P-EA must satisfy the following two constraints in order to be
integrated in the P-EPJava:
1. The algorithm represents possible solutions (individuals) as strings of zeros and ones.
2. All individuals have the same string size.
1 See, for example, Mühlenbein and Paaß (1996) for a description of the original, non-parameter-less version of the UMDA.
3.1
How to use the P-EPJava
The P-EPJava is a Java application developed with the Eclipse2 IDE. The available code is already
compiled and can be executed using the command line.
Run the P-EPJava from a command line
1. Unzip the source file 2015ParameterlessPortfolio.zip to any directory.
2. Open your favourite terminal and execute the command
cd [yourDirectory]/2015ParameterlessEvolutionaryPortfolio/bin
where [yourDirectory] is the name of the directory chosen in step 1.
3. Execute the command
java com/z_PORTFOLIO/PORTFOLIO ./PortParameters.txt
The argument “PortParameters.txt” is in fact the name of the file containing all the options
concerning the portfolio settings and can be changed at will.
After each execution of a single or multiple runs, the P-EPJava produces one output file –
PORTFOLIO_*_*.txt – that records how each run progressed in terms of time allowed to each
algorithm, which algorithms were deactivated and when, number of fitness calls performed by
each algorithm, best individual and average fitnesses, the evolution of the population sizes, among
other relevant information. All this information is also displayed on the screen during execution
time.
At present time, the P-EPJava version made available with this paper already includes the
following set of test problems:
ZERO Problems:
 0 → ZeroMax
 1 → Zero Quadratic
 2 → Zero 3-Deceptive
 3 → Zero 3-Deceptive Bipolar
 4 → Zero 3-Deceptive Overlapping
 5 → Zero Concatenated Trap-k
 6 → Zero Uniform 6-Blocks

ONE Problems:
 10 → OneMax
 11 → Quadratic
 12 → 3-Deceptive
 13 → 3-Deceptive Bipolar
 14 → 3-Deceptive Overlapping
 15 → Concatenated Trap-k
 16 → Uniform 6-Blocks

Hierarchical Problems:
 21 → Hierarchical Trap One
 22 → Hierarchical Trap Two
The Zero problems always have the string with all zeros as their best individual. The One
problems are the same as the Zero problems but their best individual is now the string with all
ones. A description of these problems can be found, for instance, in Pelikan et al. (2000). The
Hierarchical problems are thoroughly described in Pelikan (2005).
2 Version: Kepler Service Release 2
It is also possible to define a noisy version for any of the previous problems. This is done by
adding a non-zero Gaussian noise term to the fitness function.
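For illustration, such a noisy fitness wrapper could look roughly as follows; the field name sigmaK mirrors the option name mentioned below, but the class itself is only a sketch and not the code in Problem.java.

import java.util.Random;

// Sketch only: a noisy wrapper around a deterministic fitness value.
class NoisyFitnessSketch {
    private final Random random = new Random();
    private final double sigmaK;          // standard deviation of the Gaussian noise

    NoisyFitnessSketch(double sigmaK) { this.sigmaK = sigmaK; }

    double noisyFitness(double deterministicFitness) {
        // nextGaussian() samples from a normal distribution with mean 0 and deviation 1.
        return deterministicFitness + sigmaK * random.nextGaussian();
    }
}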
The source code that implements all the problems mentioned in this section can be found in the
file src/com/z_PORTFOLIO/Problem.java.
As mentioned previously, all options concerning the evolutionary portfolio are in the file
PortParameters.txt. In particular, it is in this file that are made the choices for the problem to
be solved.
To choose a particular problem the user must set the value of the following three options:
Line 81: problemType
Line 90: stringSize
Line 107: sigmaK
(defines the noise component)
All other options are set to default values and their role in the behaviour or the portfolio
is explained with detail in the file’s comments. This is also true for the parameters specific to
the parameter-less strategy and to each of the implemented parameter-less algorithms which are
defined in four separate files:
PARAMETER-LESS: ParParameters.txt
UMDA: UMDAParameters.txt
ECGA: ECGAParameters.txt
HBOA: HBOAParameters.txt
Note that the default settings defined in these four files were chosen to ensure a robust behavior
of the corresponding algorithms, in accordance with current theory. Therefore, the user is advised
to proceed with caution when performing any changes in those settings. In fact, the whole idea behind the portfolio and the parameter-less strategy is to eliminate the need for such fine-tuning when
solving a particular problem. After choosing a problem to be solved and a particular algorithm to
solve it, the user has only to press the start button and wait until the P-EPJava finds a solution with
good enough quality.
3.2
How to implement a new problem with P-EPJava
The P-EPJava uses the design pattern strategy (Gamma et al., 1995) to decouple the implementation of a particular problem from the remaining portfolio structure (see Figure 1). As a consequence, to plug in a new problem to the framework it is only necessary to define one class that
implements the interface IProblem and change some input options to include the new choice. The
interface IProblem can be found in the file src/com/z_PORTFOLIO/Problem.java.
In the following let us consider that we want to solve a new problem called NewProblem with
one of the parameter-less algorithms. To plug in this problem it is necessary to:
4
Figure 1: The P-EPJava uses the design pattern strategy (Gamma et al., 1995) to allow an easy
implementation of new problems to be solved by the framework.
1. Define a class called NewProblem in the file src/com/z_PORTFOLIO/Problem.java. The
signature of the class is that of an implementation of the IProblem interface (see the sketch after this list).
2. Code the body of the function computeFitness(Individual) according to the nature of problem
newProblem. The class Individual provides all the necessary functionalities to operate with
the string of zeros and ones that represents an individual (e.g., getAllele(int)). This class can
be found in the file src/com/z_PORTFOLIO/Individual.java.
3. To define the new problem option, add the corresponding case line (see the sketch after this list)
to the switch command in line 174 of the file src/com/z_PORTFOLIO/PortParameter.java.
The case number – 99 – is a mere identifier of the new problem option. The user is free to
choose other value for this purpose. The rest of the line is to be written verbatim.
4. Validate the new problem – option value 99 – by adding the case problemType == 99 to the
conditional in line 111 of the same PortParameter.java file.
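A minimal, self-contained sketch of steps 1–3 follows. The stubbed IProblem and Individual types, the OneMax-like fitness, the getSize() helper and the exact form of the switch case are assumptions made for illustration; the authoritative versions are in the source files named above.

// Sketch only: stub versions of the framework types, so the example is self-contained.
// The real IProblem and Individual live under src/com/z_PORTFOLIO/ and may differ in detail.
interface IProblem {
    double computeFitness(Individual individual);
}

class Individual {
    private final int[] alleles;                 // the string of zeros and ones
    Individual(int[] alleles) { this.alleles = alleles; }
    int getAllele(int i) { return alleles[i]; }  // mentioned in the text
    int getSize() { return alleles.length; }     // assumed helper
}

// Steps 1 and 2: the new problem class and its fitness function (here, OneMax-like).
class NewProblem implements IProblem {
    public double computeFitness(Individual individual) {
        double fitness = 0;
        for (int i = 0; i < individual.getSize(); i++) {
            fitness += individual.getAllele(i);
        }
        return fitness;
    }
}

// Step 3: the line added to the switch in PortParameter.java would look roughly like
//   case 99: problem = new NewProblem(); break;
// with the exact form dictated by the neighbouring cases in that file.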
Although not strictly necessary, it is also advisable to keep updated the problem menu in the
file PortParameters.txt.
Acknowledgements
The current version of the P-EPJava is one of the by-products of the activities performed by the
PhD fellow José C. Pereira within the doctoral program entitled Search and Optimization with
Adaptive Selection of Evolutionary Algorithms. The work program is supported by the Portuguese
Foundation for Science and Technology with the doctoral scholarship SFRH/BD/78382/2011 and
with the research project PTDC/EEI-AUT/2844/2012.
References
Eiben, A. E. and Smith, J. E. (2003). Introduction to Evolutionary Computing. Springer-Verlag.
Gamma, E., Helm, R., Johnson, R., and Vlissides, J. (1995). Design patterns: elements of reusable
object-oriented software. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning.
Addison-Wesley, Reading, MA.
Goldberg, D. E., Deb, K., and Thierens, D. (1993). Toward a better understanding of mixing in
genetic algorithms. Journal of the Society of Instrument and Control Engineers, 32(1):10–16.
Harik, G. R. (1999). Linkage learning via probabilistic modeling in the ECGA. IlliGAL Report
No. 99010, Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign,
Urbana, IL.
Harik, G. R. and Lobo, F. G. (1999). A parameter-less genetic algorithm. In Banzhaf, W. et al.,
editors, Proceedings of the Genetic and Evolutionary Computation Conference GECCO-99,
pages 258–265, San Francisco, CA. Morgan Kaufmann.
Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press,
Ann Arbor, MI.
Lobo, F. G. (2000). The parameter-less genetic algorithm: Rational and automated parameter
selection for simplified genetic algorithm operation. PhD thesis, Universidade Nova de Lisboa,
Portugal. Also IlliGAL Report No. 2000030.
Lobo, F. G. and Lima, C. F. (2007). Adaptive population sizing schemes in genetic algorithms. In
Parameter Setting in Evolutionary Algorithms, pages 185–204. Springer.
Lobo, F. G. and Lima, C. F. (2010). Towards automated selection of estimation of distribution
algorithms. In Pelikan, M. and Branke, J., editors, ACM Genetic and Evolutionary Computation
Conference (GECCO-2010), Companion Material, pages 1945–1952. ACM.
Mühlenbein, H. and Paaß, G. (1996). From recombination of genes to the estimation of distributions I. Binary parameters. In Voigt, H.-M. et al., editors, Parallel Problem Solving from Nature
– PPSN IV, pages 178–187, Berlin. Kluwer Academic Publishers.
Pelikan, M. (2005). Hierarchical Bayesian Optimization Algorithm: Toward a New Generation of
Evolutionary Algorithms. Springer.
Pelikan, M. and Goldberg, D. (2006). Hierarchical bayesian optimization algorithm. In Pelikan,
M., Sastry, K., and Cantú-Paz, E., editors, Scalable Optimization via Probabilistic Modeling,
volume 33 of Studies in Computational Intelligence, pages 63–90. Springer Berlin Heidelberg.
Pelikan, M., Goldberg, D. E., and Cantú-Paz, E. (2000). Linkage problem, distribution estimation,
and Bayesian networks. Evolutionary Computation, 8(3):311–341.
Pelikan, M., Hartmann, A., and Lin, T.-K. (2007). Parameter-less hierarchical bayesian optimization algorithm. In Parameter Setting in Evolutionary Algorithms, volume 54, pages 225–239.
Springer Berlin Heidelberg, Berlin.
Pelikan, M. and Lobo, F. G. (1999). Parameter-less genetic algorithm: A worst-case time and space
complexity analysis. IlliGAL Report No. 99014, University of Illinois at Urbana-Champaign,
Illinois Genetic Algorithms Laboratory, Urbana, IL.
Spinosa, E. and Pozo, A. (2002). Controlling the population size in genetic programming. In
Proceedings of the 16th Brazilian Symposium on Artificial Intelligence: Advances in Artificial
Intelligence, SBIA ’02, pages 345–354, London, UK, UK. Springer-Verlag.
Thierens, D. and Goldberg, D. E. (1993). Mixing in genetic algorithms. In Forrest, S., editor,
Proceedings of the Fifth International Conference on Genetic Algorithms, pages 38–45, San
Mateo, CA. Morgan Kaufmann.
| 9 |
Sound Mixed-Precision Optimization with Rewriting
Eva Darulova, MPI-SWS, [email protected]
Einar Horn, UIUC, [email protected]
Saksham Sharma, IIT Kanpur, [email protected]
arXiv:1707.02118v1 [] 7 Jul 2017
ABSTRACT
Finite-precision arithmetic computations face an inherent tradeoff
between accuracy and efficiency. The points in this tradeoff space
are determined, among other factors, by different data types but
also evaluation orders. To put it simply, the shorter a precision’s
bit-length, the larger the roundoff error will be, but the faster the
program will run. Similarly, the fewer arithmetic operations the
program performs, the faster it will run; however, the effect on the
roundoff error is less clear-cut. Manually optimizing the efficiency
of finite-precision programs while ensuring that results remain
accurate enough is challenging. The unintuitive and discrete nature
of finite-precision makes estimation of roundoff errors difficult;
furthermore the space of possible data types and evaluation orders
is prohibitively large.
We present the first fully automated and sound technique and
tool for optimizing the performance of floating-point and fixed-point arithmetic kernels. Our technique combines rewriting and
mixed-precision tuning. Rewriting searches through different evaluation orders to find one which minimizes the roundoff error at no
additional runtime cost. Mixed-precision tuning assigns different
finite precisions to different variables and operations and thus provides finer-grained control than uniform precision. We show that
when these two techniques are designed and applied together, they
can provide higher performance improvements than each alone.
KEYWORDS
mixed-precision tuning, floating-point arithmetic, fixed-point arithmetic, static analysis, optimization
ACM Reference format:
Eva Darulova, Einar Horn, and Saksham Sharma. 2017. Sound Mixed-Precision
Optimization with Rewriting. In Proceedings of ACM Conference, Washington,
DC, USA, July 2017 (Conference’17), 11 pages.
DOI: 10.1145/nnnnnnn.nnnnnnn
1
INTRODUCTION
Finite-precision computations, such as those found in embedded
and scientific computing applications, face an inherent tradeoff
between accuracy and efficiency due to unavoidable roundoff errors whose magnitude depends on several aspects. One of these
is the data type chosen: in general, the larger the data type (e.g.
in terms of bits), the smaller the roundoff errors will be. However,
increasing the bit-length typically leads to decreases in performance. Additionally, finite-precision arithmetic is not associative
or distributive. Thus, an attempt to reduce the running time of
a computation by reducing the number of arithmetic operations
Conference’17, Washington, DC, USA
2017. 978-x-xxxx-xxxx-x/YY/MM. . . $15.00
DOI: 10.1145/nnnnnnn.nnnnnnn
(e.g. a ∗ b + a ∗ c → a ∗ (b + c)) may lead to a higher and possibly unacceptable roundoff error. Due to the unintuitive nature of
finite-precision and the subtle interactions between accuracy and
efficiency, manual optimization is challenging and automated tool
support is needed.
Mixed-precision Tuning. In order to save valuable resources like
time, memory or energy, we would like to choose the smallest data
type which still provides sufficient accuracy. Not all applications
require high precision, but how much precision an application
needs depends highly on context: on the computations performed,
the magnitude of inputs, and the expectations of the environment,
so that no one-size-fits-all solution exists. Today, the common way
to program is to pick a seemingly safe, but often overprovisioned,
data type — for instance, uniform double floating-point precision.
Mixed-precision, where different operations are performed in
potentially different precisions, increases the number of points
on the accuracy-efficiency tradeoff space and thus increases the
possibility for more resource savings. With uniform precision, if
one precision is just barely not enough, we are forced to upgrade
all operations to the next higher precision. This can increase the
running time of the program substantially. Therefore, it would be
highly desirable to upgrade only part of the operations; just enough
to meet the accuracy target, while increasing the execution time by
the minimum.
One of the challenges in choosing a finite precision – uniform
or mixed – is ensuring that the roundoff errors remain below an
application-dependent acceptable bound. Recent work has provided
automated techniques and tools which help the programmer choose
between different uniform precisions by computing sound worst-case numerical error bounds [12, 19, 28, 43].
However, selecting a suitable mixed precision is significantly
more difficult than choosing a uniform precision. The number of
different type assignments to variables and operations is too large
to explore exhaustively. Furthermore, neither roundoff errors nor
the performance of mixed-precision programs follow intuitive rules,
making manual tuning very challenging. For instance, changing
one particular operation to lower precision may produce a smaller
roundoff error than changing two (other) operations. Furthermore,
mixed-precision introduces cast operations, which may increase
the running time, even though the accuracy decreases.
In high-performance computing (HPC), mixed-precision tuning [25, 40] approaches use dynamic techniques to estimate roundoff errors, and thus do not provide accuracy guarantees. This makes
them unsuitable, for instance, for many safety-critical embedded
applications. The FPTuner tool [9] is able to soundly tune mixed-precision for straight-line programs, but it requires user guidance
for choosing which mixed-precision variants are more efficient and
is thus not fully automated. Furthermore, its tuning time can be
prohibitively large.
Rewriting. Another possibility to improve the efficiency of finiteprecision arithmetic is to reduce the number of operations that
need to be carried out. This can be achieved without changing the
real-valued semantics of the program by rewriting the computation
using laws like distributivity and associativity. Unfortunately, these
laws do not hold for finite-precision computations: changing the
order of a computation changes the magnitude of roundoff errors
committed, but in unintuitive ways. Previous work has focused
on automated techniques for finding a rewriting (i.e. re-ordering
of computations) such that roundoff errors are minimized [14, 33].
However, optimizing for accuracy may increase the number of
arithmetic operations and thus the execution time.
Combining Mixed-Precision Tuning and Rewriting. We propose
to combine mixed-precision tuning with rewriting in a fully automated technique for performance optimization of finite-precision
arithmetic computations. Our approach is sound in that generated
programs are guaranteed to satisfy user-specified roundoff error
bounds, while our performance improvements are best effort (due
to the complexity and limited predictability of today’s hardware).
Our rewriting procedure takes into account both accuracy and the
number of arithmetic operations. It can reduce the running time of
programs directly, but more importantly, by improving the accuracy,
it allows for more aggressive mixed-precision tuning. To the best
of our knowledge, this is the first combination of mixed-precision
tuning and rewriting for performance optimization.
We combine a search algorithm which was successfully applied
in HPC mixed-precision tuning [40] with a sound static roundoff
error analysis [12] and a static performance cost function to obtain
a mixed-precision tuning technique which is both sound and fully
automated as well as efficient. We furthermore modify a rewriting optimization algorithm based on genetic programming [14] to
consider both accuracy and performance.
While most of the building blocks of our approach have been
presented before, their effective combination requires careful adaptation and coordination, as our many less-successful experiments
have shown. Just as a manual optimization is challenging due to the
subtle interactions of finite-precision accuracy and performance,
so is the design of an automated technique.
We focus on arithmetic kernels, and do not consider conditionals
and loops. Our technique (as well as FPTuner’s) can be extended
to conditionals by considering individual paths separately as well
as to loops by optimizing the loop body only and thus reducing
both to straight-line code. The challenge currently lies in the sound
roundoff error estimation, which is known to be hard and expensive
for conditionals and loops [13, 20], and is largely orthogonal to the
focus of this paper.
Our technique is applicable and implemented in a tool called Anton for both floating-point as well as fixed-point arithmetic. While
floating-point support is standardized, fixed-point arithmetic is
most effective in combination with specialized hardware. We focus
in this paper on the algorithmic aspects of optimizing arithmetic
kernels and leave a thorough investigation of specialized hardware
implementations for future work.
For floating-point arithmetic, we evaluate Anton on standard
benchmarks from embedded systems and scientific computing. We
observe that rewriting alone improves performance by up to 17%
and for some benchmarks even more by reducing roundoff errors
sufficiently to enable uniform double precision where the original benchmark requires (uniform) quad precision. Mixed-precision
tuning improves performance by up to 45% when compared to
the uniform precision version which would be needed to satisfy
the required accuracy. In combination with rewriting, Anton improves performance by up to 54% (93% for those cases where rewriting enables uniform double precision), and we also observe that
it improves performance for more benchmarks than when using
mixed-precision tuning or rewriting alone.
We focus on performance, although our algorithm is independent
of the optimization objective such that - with the corresponding
cost function - memory or energy optimization is equally possible.
Contributions. To summarize, in this paper we present:
• An optimization procedure based on rewriting, which takes
into account both accuracy and performance.
• A sound, fully automated and efficient mixed-precision
tuning technique.
• A carefully designed combination of rewriting and mixed-precision tuning, which provides more significant performance improvements than each of them alone.
• An implementation in a tool called Anton, which generates
optimized source programs in Scala and in C and which
supports both floating-point as well as fixed-point arithmetic. We plan to release Anton as open source.
• We show the effectiveness of our tool on a set of arithmetic
kernels from embedded systems and scientific computing
and compare it against the state-of-the-art.
2
OVERVIEW
We first provide a high-level overview and explain the key ideas
of our approach using an example. Inspired by the tool Rosa [12],
the input to our tool Anton is a program written in a real-valued
specification language. (Nothing in our technique depends on this
particular frontend though.) Each program consists of a number of
functions which are optimized separately. The following nonlinear
embedded controller [14] is one such example function:
def rigidBody1(x1: Real, x2: Real, x3: Real): Real = {
require(-15.0 <= x1 && x1 <= 15 && -15.0 <= x2 &&
x2 <= 15.0 && -15.0 <= x3 && x3 <= 15)
-x1*x2 - 2*x2*x3 - x1 - x3
} ensuring(res => res +/- 1.75e-13)
In the function’s precondition (the require clause) the user provides the ranges of all input variables, on which the magnitudes
of roundoff errors depend. The postcondition (the ensuring clause)
specifies the required accuracy of the result in terms of worst-case
absolute roundoff error. For our controller, this information may be,
e.g., determined from the specification of the system’s sensors as
well as the analysis of the controller’s stability [29]. The function
body consists of an arithmetic expression (with +, −, ∗, /, √) with
possibly local variable declarations.
As output, Anton generates a mixed-precision source-language
program, including all type casts, which is guaranteed to satisfy
the given error bound and is expected to be the most efficient one
among the possible candidates. Anton currently supports fixed-point arithmetic with bitlengths of 16 or 32 bits or IEEE754 single
(32 bit) and double (64 bit) floating-point precision as well as quad
precision (128 bit). The latter can be implemented on top of regular
double-precision floating-points [2]. Anton can be easily extended
to support other fixed- or floating-point precisions; here we have
merely chosen a representative subset.
Our approach decouples rewriting from the mixed-precision tuning. To find the optimal program, i.e. the most efficient one given
the error constraint, we would need to optimize both the evaluation
order as well as mixed-precision simultaneously: i) the evaluation order determines which mixed-precision type assignments to
variables and operations are feasible, and ii) the mixed-precision
assignment influences which evaluation order is optimal. Unfortunately, this would require an exhaustive search [14], which is
computationally infeasible. We thus choose to separate rewriting
from mixed-precision tuning and further choose (different) efficient
search techniques for each.
Step 1: Rewriting. Anton first rewrites the input expression into
one which is equivalent under a real-valued semantics, but one
which has a smaller roundoff error when implemented in finiteprecision and which does not increase the number of arithmetic
operations. Rewriting can increase the opportunities for mixedprecision tuning, because a smaller roundoff error may allow more
lower-precision operations. The second objective makes sure that
we do not accidentally increase the execution time of the program
by performing more arithmetic operations and even lets us improve
the performance of the expression directly.
Anton’s rewriting uses a genetic algorithm to search the vast
space of possible evaluation orders efficiently. At every iteration,
the algorithm applies real-valued identities, such as associativity
and distributivity, to explore different evaluation orders. The search
is guided by a fitness function which bounds the roundoff errors for
a candidate expression - the smaller the error, the better. This error
computation is done wrt. uniform precision, as the mixed-precision
type assignment will only be determined later. While the precision
can affect the result of rewriting, we empirically show that the
effect is small (section 4).
This approach is heavily inspired by the algorithm presented
in Darulova et al. [14] which optimized for accuracy only. We have
made important adaptations, however, to make it work in practice
for optimizing for performance as well as to work well with mixed-precision tuning.
For our running example, the rewriting phase produces the following expression, which improves accuracy by 30.39% and does
not change the number of operations:
(-(x1 * x2) - (x1 + x3)) - ((2.0 * x2) * x3)
To give an intuition why this seemingly small change makes such a
difference, note that the magnitude of roundoff errors depends on
the possible ranges of intermediate variables. Even small changes
in the evaluation order can have large effects on these ranges and
consequently also on the roundoff errors.
Step 2: Code Transformation. To facilitate mixed-precision tuning,
Anton performs two code transformations: constants are assigned
to fresh variables and the remaining code is converted into three-address form. By this, every constant and arithmetic operation
corresponds to exactly one variable, whose precision will be tuned
during phase 4. If not all arithmetic operations should be tuned, i.e.
a more coarse grained mixed-precision is desired, then this step
can be skipped.
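For illustration, the rewritten rigidBody1 expression from Step 1 would, after this transformation, look roughly as follows (a sketch with arbitrary temporary names, not Anton's actual output); every constant and every intermediate result then has its own variable whose precision can be tuned individually.

// Sketch only: the rewritten rigidBody1 body in three-address form, with the
// constant pulled into its own fresh variable.
class ThreeAddressSketch {
    static double rigidBody1(double x1, double x2, double x3) {
        double c1 = 2.0;          // constant gets its own (tunable) variable
        double t1 = x1 * x2;
        double t2 = -t1;
        double t3 = x1 + x3;
        double t4 = t2 - t3;
        double t5 = c1 * x2;
        double t6 = t5 * x3;
        return t4 - t6;           // equals (-(x1*x2) - (x1+x3)) - ((2.0*x2)*x3)
    }
}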
Step 3: Range Analysis. The evaluation order now being fixed,
Anton computes the real-valued ranges of all intermediate subexpressions and caches the results. Ranges are needed for bounding
roundoff errors during the subsequent mixed-precision tuning, but
because the real-valued ranges are not affected by different precisions, Anton computes them only once for efficiency.
Step 4: Mixed-precision Tuning. To effectively search the space
of possible mixed-precision type assignments, we choose a variation of the delta-debugging algorithm used by Precimonious [40],
which prunes the search space in an effective way. It starts with all
variables in the highest available precision and attempts to lower
variables in a systematic way until it finds that no further lowering
is possible while still satisfying the given error bound. We have
also tried to apply a genetic algorithm for mixed-precision tuning,
but observed that it was quite clearly not a good fit (we omit the
experimental results for space reasons).
Unlike Precimonious, which evaluates the accuracy and performance of different mixed-precisions dynamically, Anton uses a
static sound error analysis as well as a static (but heuristic) performance cost function to guide the search. The performance cost
function assigns (potentially different) abstract costs to each arithmetic as well as cast operation. Using static error and cost functions
reduces the tuning time significantly, and further allows tuning to
be run on different hardware than the final generated code.
For our running example, Anton determines that uniform double
floating-point precision is not sufficient and generates a tuned
program which runs 43% faster than the quad uniform precision
version, which is the next available uniform precision in the absence
of mixed-precision:
def rigidBody1(x1: Quad, x2: Quad, x3: Double): Double = {
  (−d(x1 ∗q x2) −d (x1 +q x3)) −d ((x2 ∗q 2.0f) ∗d x3)
}
For readability, we have inlined the expression and use the letters ‘d’
and ‘q’ to mean that the operation is performed in double and quad
precision respectively. The entire optimization including rewriting
takes about 4 seconds. Had we used only the mixed-precision tuning
without rewriting, the program would still run 28% faster than quad
precision.
Step 5: Code Generation. Once mixed-precision tuning finds a
suitable type configuration, Anton generates the corresponding
finite-precision program (in Scala or C), inserting all necessary
casts, and in the case of fixed-point arithmetic all necessary bit-shift operations.
Our entire approach is parametric in the finite-precision used,
and thus works equally for fixed-point arithmetic. Furthermore, it
is geared towards optimizing the performance of programs under
the (hard) constraint that the given error bound is guaranteed to
be satisfied. Other optimization criteria like memory and energy
are also conceivable, and would only require changing the cost
function.
3
BACKGROUND
We first review necessary background about finite-precision arithmetic and sound roundoff error estimation, which is an important
building block for both rewriting and mixed-precision tuning.
3.1
Floating-point Arithmetic
We assume standard IEEE754 single and double precision floatingpoint arithmetic, in rounding-to-nearest mode and the following
standard abstraction of IEEE754 arithmetic operations:
x ◦fl y = (x ◦ y)(1 + δ), |δ| ≤ ϵm, where ◦ ∈ {+, −, ∗, /} and ◦fl
denotes the respective floating-point version, and ϵm bounds the
maximum relative error (which is 2^−24, 2^−53 and 2^−113 for single,
double and quad precision respectively). Unary minus and square
root follow similarly. We further consider NaNs (not-a-number
special values), infinities and ranges containing only denormal
floating-point numbers to be errors and Anton’s error computation
technique detects these automatically. We note that under these
assumptions the abstraction is indeed sound.
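As a small worked example of how this abstraction composes, consider (x + y) ∗ z evaluated in floating-point:
(x +fl y) ∗fl z = (x + y)(1 + δ1) · z · (1 + δ2) = (x + y) z (1 + δ1 + δ2 + δ1δ2),
so the committed roundoff error is bounded by |(x + y) z| · (2ϵm + ϵm^2); up to the second-order term, each operation contributes at most ϵm times the magnitude of the exact result.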
3.2
Fixed-point Arithmetic
Floating-point arithmetic requires dedicated support, either in hardware or in software, and depending on the application, this support
may be too costly. An alternative is fixed-point arithmetic which
can be implemented with integers only, but which in return requires that the radix point alignments are precomputed at compile
time. While no standard exists, fixed-point values are usually represented as (signed) bit vectors with an integer and a fractional part,
separated by an implicit radix point. At runtime, the alignments
are then performed by bit-shift operations. These shift operations
can also be handled by special language extensions for fixed-point
arithmetic [21]. For more details please see [1], whose fixed-point
semantics we follow. We use truncation as the rounding mode for
arithmetic operations. The absolute roundoff error at each operation is determined by the fixed-point format, i.e. the (implicit)
number of fractional bits available, which in turn can be computed
from the range of possible values at that operation.
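The following sketch illustrates this relationship under the simplifying assumptions of a signed two's-complement format with a fixed total bit length and truncation; it is an illustration, not Anton's implementation.

// Sketch only: deriving a fixed-point format and its quantization step from a range.
class FixedPointFormatSketch {
    // Conservative number of bits for the integer part (including the sign bit)
    // needed to represent any value in [lo, hi].
    static int integerBits(double lo, double hi) {
        double maxAbs = Math.max(Math.abs(lo), Math.abs(hi));
        return (int) Math.ceil(Math.log(maxAbs + 1) / Math.log(2)) + 1; // +1 for the sign
    }

    // The remaining bits hold the fraction; the truncation error of one operation
    // is then bounded by one unit in the last place, i.e. 2^(-f).
    static double quantizationError(double lo, double hi, int totalBits) {
        int f = totalBits - integerBits(lo, hi);
        return Math.pow(2.0, -f);
    }
}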
3.3
Sound Roundoff Error Estimation
We build upon Rosa’s static error computation [12], which we
review here. Keeping with Rosa’s notation, we denote by f and
x a mathematical real-valued arithmetic expression and variable,
respectively, and by f˜ and x̃ their floating-point counterparts. The
worst-case absolute error that the error computation approximates
is maxx ∈[a,b] | f (x) − f˜ (x̃)| where [a, b] is the range for x given in
the precondition. The input x may not be representable in finiteprecision arithmetic, and thus we consider an initial roundoff error:
|x − x̃ | = |x | ∗ δ, δ ≤ ϵm which follows from subsection 3.1. This
definition extends to multi-variate f component-wise.
At a high level, error bounds are computed by a data-flow analysis over the abstract syntax tree, which computes for each intermediate arithmetic expression (1) a bound on the real-valued range,
(2) using this range, the propagated errors from subexpressions
and the newly committed worst-case roundoff error. For a more
detailed description, please see [12].
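A schematic version of one such step, here for a multiplication node and written in Java rather than Anton's Scala sources, might look as follows; Interval stands for an assumed range type (a minimal version is sketched at the end of this section).

// Sketch only: the error computed for a node z = x * y in double precision,
// given the ranges and error bounds already computed for its children.
class ErrorStepSketch {
    static final double EPS_DOUBLE = Math.pow(2, -53);   // unit roundoff of double precision

    static double multiplicationError(Interval xRange, double xErr,
                                      Interval yRange, double yErr,
                                      Interval resultRange) {
        // (2a) errors of the subexpressions propagated through the exact multiplication
        double propagated = xRange.maxAbs() * yErr + yRange.maxAbs() * xErr + xErr * yErr;
        // (2b) new roundoff committed by this operation, proportional to the
        //      magnitude of the finite-precision result
        double newRoundoff = (resultRange.maxAbs() + propagated) * EPS_DOUBLE;
        return propagated + newRoundoff;
    }
}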
For our rewriting procedure, since intermediate ranges change
with different evaluation orders, we compute both the ranges and
the errors at the same time. For mixed-precision tuning, where
real-valued ranges remain constant, we separate the computations
and only compute ranges once.
Anton currently does not support additional input errors, e.g.
from noisy sensors, but note that an extension is straight-forward.
In fact, a separate treatment of roundoff errors and propagated
input errors may be desirable [13].
We compute absolute errors. An automated and general estimation of relative errors (| f (x) − f˜ (x̃)|/| f (x)|), though it may be
more desirable, presents a significant challenge today. To the best
of our knowledge, state-of-the-art static analyses only compute
relative errors from an absolute error bound, which is then not
more informative. Furthermore, relative error is only defined if the
range of the expression in question (i.e. the range of f (x)) does not
include zero, which unfortunately happens very often in practice.
Range Estimation. Clearly, accurately estimating ranges is the
main component in the error bound computation, and is known
to be challenging, especially for nonlinear arithmetic. This challenge was addressed in previous work on finite-precision verification [12, 19, 43]. Interval arithmetic (IA) [31] is an efficient choice
for range estimation, but one which often introduces large overapproximations as it does not consider correlations between variables. Affine arithmetic [16] tracks linear correlations, and is thus
sometimes better (though not always) in practice. The overapproximations due to nonlinear arithmetic (∗, /, √) can be mitigated by
refining ranges computed by IA with a complete (though expensive) nonlinear arithmetic decision procedure inside the Z3 [17]
SMT solver [12]. Anton’s computation builds on this work and is
parametric in the range arithmetic and currently supports interval
and affine arithmetic as well as the combination with SMT.
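For concreteness, a naive version of such an interval type could look as follows; Anton's actual implementation works with rationals and additionally supports affine arithmetic and the SMT-based refinement, so this is only an illustration.

// Sketch only: naive interval arithmetic over doubles (no outward rounding,
// no correlation tracking), sufficient to illustrate the range computation.
class Interval {
    final double lo, hi;
    Interval(double lo, double hi) { this.lo = lo; this.hi = hi; }

    Interval add(Interval other) {
        return new Interval(lo + other.lo, hi + other.hi);
    }

    Interval multiply(Interval other) {
        // The product range is spanned by the four endpoint products.
        double a = lo * other.lo, b = lo * other.hi, c = hi * other.lo, d = hi * other.hi;
        return new Interval(Math.min(Math.min(a, b), Math.min(c, d)),
                            Math.max(Math.max(a, b), Math.max(c, d)));
    }

    double maxAbs() {
        return Math.max(Math.abs(lo), Math.abs(hi));
    }
}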
4
REWRITING OPTIMIZATION
The goal of Anton’s rewriting optimization is to find an order of
computation which is equivalent to the original expression under a
real-valued semantics, but which exhibits a smaller roundoff error
in finite-precision - while not increasing the execution time. We
first review previous work that we build upon and then describe
the concrete adaptation in Anton.
4.1
Genetic Search for Rewriting
An exhaustive search of all possible rewritings or evaluation orders
is computationally infeasible. Even for only linear arithmetic, the
problem of finding an optimal order is NP-hard [14] and does not
allow a divide-and-conquer or gradient-based method such that a
heuristic and incomplete search becomes necessary.
Genetic programming [35] is an evolutionary heuristic search
algorithm which iteratively evolves (i.e. improves) a population of
candidate expressions, guided by a fitness function. The search is
initialized with copies of the initial expression. At every iteration,
expressions are selected from the current population based on their
fitness, and then mutated to form the next population. For
rewriting, the fitness is the worst-case roundoff error - the smaller
the better. The selected expressions are randomly mutated using
mathematical real-valued identities, e.g. a + (b + c) → (a + b) + c.
In this fashion, the algorithm explores different rewritings. The key
idea is that the likelihood of an expression being selected depends
on its fitness - fitter expressions are more likely to be selected - and
thus the search converges with each iteration towards expressions
with smaller roundoff errors. Furthermore, even less-fit expressions
have a non-zero probability of being selected, thus helping to avoid
local minima.
The output of the procedure is the expression with the least
roundoff error seen during the run of the search. Darulova et al. [14]
used a static analysis as described in subsection 3.3 as the fitness
function, with smaller roundoff error being better. Their approach
was implemented for fixed-point arithmetic only, and the optimization goal was to reduce roundoff errors.
4.2
Rewriting in Anton
We instantiate the algorithm described above with a population of
30 expressions, 30 iterations and tournament selection [35] for selecting which expressions to mutate. These are successful settings
identified in [14]. We do not use the crossover operation, because it
had only limited effects. We further extend the rather limited set of
mutation rules in [14] with the more complete one used by the (unsound) rewriting optimization tool called Herbie [33] (see section 7).
These rules are still based on mathematical identities. For the static
error function, as described in subsection 3.3, we choose interval
arithmetic for computing ranges and affine arithmetic for tracking
errors, which provide a good accuracy-performance tradeoff.
The algorithm described in subsection 4.1 reduces roundoff errors, but may - and as we experimentally observed often does - increase the number of operations and thus the execution time.
This may negate any advantage reduced roundoff errors bring for
mixed-precision tuning. Furthermore, it is not clear at this point
with respect to which precision to perform the rewriting as a mixedprecision type assignment is not available.
4.2.1 Optimizing for Performance. We modify the search algorithm to return the expression which does not increase the number of arithmetic operations beyond the initial count, and which
has the smallest worst-case roundoff error. We do not use a more
sophisticated cost function, as for this the actual final type assignment would be needed (which only becomes available after
mixed-precision tuning). We have also implemented a variation of
the search which minimizes the number of arithmetic expressions,
while not increasing the roundoff error beyond the error of the initial expression. However, we found empirically in our experiments
that this produces worse overall results in combination with mixed-precision tuning, i.e. reducing the roundoff was more beneficial
than reducing the operation count. For space reasons, we omit this
experimental comparison.
4.2.2 Optimizing with Uniform Precision. The static error analysis, which we use as the fitness function during search has to be
performed wrt. to some mixed or uniform precision, and different
choices may result in the algorithm returning syntactically different rewritten expressions. As the final (ideal) mixed-precision type
assignment is not available when Anton performs rewriting, it has
to choose some precision without knowing the final assignment.
The main aspect which determines which evaluation order is
better over another are the ranges of intermediate variables - the
larger the ranges, the more already accumulated roundoff errors
will be magnified. These intermediate ranges differ only little between different precisions, because the roundoff errors are small in
comparison to the real-valued ranges. Thus, we do not expect that
different precision affect the result of rewriting very much.
We performed the following experiment to validate this intuition.
We ran rewriting repeatedly on the same expression, but with the
error analysis wrt. uniform single, double and quad floating-point
precision as well as up to 50 random mixed-precision type assignments. We picked each mixed-precision assignment as the baseline in turn. We evaluated the roundoff errors of the expressions
returned by the uniform precision rewritings under this mixedprecision assignment. If rewriting in uniform precision produces an
expression which has the same or a similar error as the expressions
returned with rewriting wrt. mixed-precision, then we consider the
uniform precision to be a good proxy. We counted how often each
of the three uniform precisions was such a good proxy, where we
chose the threshold to be that the errors should be within 10%. For
space reasons, we only summarize the results. Single and double
floating-point precision were a good proxy in roughly 80% of the
cases, whereas quad precision in 75%. When the mixed-precision
assignments were in fixed-point precision, single, double and quad
uniform precision all achieve 69% accuracy. Performing the rewriting wrt. fixed-point arithmetic is not more beneficial either. Finally,
rewriting in uniform precision never increased the errors (when
evaluated in the mixed-precision baseline). We thus (randomly)
choose to perform the rewriting with respect to double floating-point precision.
5
SOUND MIXED-PRECISION TUNING
After rewriting, Anton pre-processes expressions as described in
step 2 in section 2 and computes the now-constant ranges of intermediate expressions. Since the range computation needs to be
performed only once, we choose the more expensive but also more
accurate range analysis using a combination of interval arithmetic
and SMT [12]. We again first review previous work that we build
upon before explaining Anton’s technique in detail.
5.1
Delta-Debugging for Precision Tuning
Delta-debugging has been originally conceived in the context of
software testing for identifying the smallest failing testcase [45].
Rubio-González et al. [40] have adapted this algorithm for mixed-precision tuning in the tool Precimonious. It takes as input:
• a list of variables to be tuned (τ ) as well as a list of all other
variables with their constant precision assignments (ϕ)
• an error function which bounds the roundoff error of a
given precision assignment
• a cost function approximating the expected performance
• an error bound emax to be satisfied.
The output is a precision assignment for variables in τ . A partial
sketch of the algorithm is depicted in Figure 1, where the boxes
represent sets of variables in τ.
Figure 1: Sketch of the delta-debugging algorithm
Consider the case where variables can be in single (32 bit) and double (64 bit) floating-point precision.
The algorithm starts by assigning all variables in τ to the highest
precision, i.e. to double precision. It uses the error function to check
whether the roundoff error is below emax . If it is not, then an
optimization is futile, because even the largest precision cannot
satisfy the error bound.
If the check succeeds, the algorithm tries to lower all variables
in τ by assigning them to single precision. Again, it computes the
maximum roundoff error. If it is below emax , the search stops as
single precision is sufficient. If the error check does not succeed,
the algorithm splits τ into two equally sized lists τ1 and τ2 and
recurses on each separately. When recursing on τ1 , the new list of
variables considered for lowering becomes τ′ = τ1 and the list of
constant variables becomes ϕ′ = ϕ + τ2. The case for recursing on
τ2 is symmetric. When a type assignment is found which satisfies
the error bound emax , the recursion stops. Since several valid type
assignments can be found, a cost function is used to select the one
with lowest cost (i.e. best performance.)
The algorithm is generalized to several precisions by first running it with the highest two precisions. In the second iteration, the
variables which have remained in the highest precision become
constant and move to ϕ. The optimization is then performed on the
new τ considering the second and third highest precision.
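The following sketch captures the core of this procedure for two precisions; it is deliberately simplified (no caching of analysis results, no generalization to more than two precisions) and all names are illustrative.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: delta-debugging style precision lowering for two precisions.
class PrecisionTuningSketch {
    enum Prec { LOW, HIGH }

    interface Analysis {
        double errorBound(Map<String, Prec> assignment);  // sound static error analysis
        double cost(Map<String, Prec> assignment);        // static performance cost
    }

    static Map<String, Prec> tune(List<String> vars, Analysis analysis, double errorMax) {
        Map<String, Prec> allHigh = allAt(vars, Prec.HIGH);
        if (analysis.errorBound(allHigh) > errorMax) return null;  // even HIGH is not accurate enough
        List<Map<String, Prec>> valid = new ArrayList<>();
        lower(vars, allHigh, analysis, errorMax, valid);
        Map<String, Prec> best = allHigh;
        for (Map<String, Prec> cand : valid)
            if (analysis.cost(cand) < analysis.cost(best)) best = cand;
        return best;
    }

    // Try to lower all variables in 'toLower'; if the error bound is violated, split and recurse.
    static void lower(List<String> toLower, Map<String, Prec> base, Analysis analysis,
                      double errorMax, List<Map<String, Prec>> valid) {
        Map<String, Prec> candidate = new HashMap<>(base);
        for (String v : toLower) candidate.put(v, Prec.LOW);
        if (analysis.errorBound(candidate) <= errorMax) {
            valid.add(candidate);                 // lowering this whole set works: stop here
            return;
        }
        if (toLower.size() <= 1) return;          // cannot split any further
        int mid = toLower.size() / 2;
        lower(toLower.subList(0, mid), base, analysis, errorMax, valid);
        lower(toLower.subList(mid, toLower.size()), base, analysis, errorMax, valid);
    }

    static Map<String, Prec> allAt(List<String> vars, Prec p) {
        Map<String, Prec> m = new HashMap<>();
        for (String v : vars) m.put(v, p);
        return m;
    }
}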
5.2
Mixed-Precision Tuning in Anton
We have instantiated this algorithm in Anton and describe now our
adaptations which were important to obtain sound mixed-precision
assignments as well as good results.
5.2.1 Static Error Analysis. Precimonious estimates roundoff
errors by dynamically evaluating the program on several random
inputs. This approach is not sound, and also in general inefficient,
as a large number of program executions is needed for a reasonably
confident error bound. Anton uses a sound static roundoff error
analysis, which is an extension of Rosa’s uniform-precision error
analysis from subsection 3.3 to support mixed-precision. This extension uses affine arithmetic for the error computation and considers
roundoff errors incurred due to down-casting operations.
Precimonious can handle any program, including programs with
loops. Our static error function limits which kinds of programs Anton can handle to those for which the error bounds can be verified,
but for those it provides accuracy guarantees. We note, however,
that our approach can potentially be extended to loops with techniques from [13], by considering the loop body only.
5.2.2 Tuning All Variables. Unlike Precimonious, which optimizes only the precisions of declared variables, Anton optimizes the
precisions of all variables and intermediate expressions by transforming the program prior to mixed-precision tuning into three-address form (this transformation can also be skipped, if desired).
The precision of an arithmetic operation is determined by the
precisions of its operands as well as the variable that the result
is assigned to. In general, we follow a standard semantics, where
the operation is performed in the highest operand precision, with
one exception. For example, for the precision assignment {x →
single, y → single, z → double}, and expression val z = x + y,
we choose the interpretation z = x.toDouble + y.toDouble instead
of z = (x + y).toDouble, so that the operation is performed in the
higher precision, thus loosing less accuracy. Our experiments (whose
results are not shown for space reasons) have confirmed that this
indeed provides better overall results.
Delta-debugging operates on a list of variables that it optimizes.
We have observed in our experiments that it is helpful when the
variables are sorted by order of appearance in the program. Our
hypothesis is that delta-debugging is more likely to assign ‘neighboring’ variables the same type, which in general is likely to reduce
type casts and thus cost.
We found that often constants are representable in the lowest
precision, e.g. when they are integers. It is thus tempting to keep
those constants in the lowest precision. However, we found that,
probably due to cast operations, this optimization was not a universal improvement, so that Anton optimizes constants just like other
variables.
5.2.3 Static Cost Function. Precimonious uses dynamic evaluation to estimate the expected running time. We note that this
approach is quite inefficient, but also not entirely reliable, as running times can vary substantially between runs (our benchmarking
tool takes several seconds per benchmark until steady-state). It
furthermore restricts the tuning to the specific platform that the
tuning is run on. FPTuner, on the other hand, optimizes for the
number of lower-precision operations (more is better) and provides
a way for the user to manually restrict the overall number of cast
operations, and provides the possibility to constrain certain variables to have the same precision (‘ganging’). Knowing up front how
many cast operations are needed is quite challenging.
We instead propose a static cost function to obtain an overall
technique which is efficient as well as fully automated. Note that
this function needs to be able to distinguish only which of two
mixed-precision assignments is the more efficient one, and does
not need to predict the actual running times. We are aiming for a
practical solution and are aware that more specialized approaches
are likely to provide better prediction accuracy. As we focus in this
paper on the algorithmic aspects, we leave this for future work.
We have implemented and experimentally evaluated several cost
function candidates for floating-point arithmetic, which all require
only a few milliseconds to run:
• Simple cost assigns a cost of 1, 2 and 4 to single, double and
quad precision arithmetic and cast operations respectively.
• Benchmarked cost assigns abstract costs to each operation based on benchmarked average running times, i.e.
we benchmark each operation in isolation with random
inputs. This cost function is platform specific and different
arithmetic operations have different costs (e.g. addition is
generally cheaper than division).
• Operation count cost counts the number of operations performed in each precision, recorded as a tuple and ordered
lexicographically, i.e. more higher-precision operations
lead to a higher cost. This cost function is inspired by FPTuner and does not consider cast operations.
• Absolute errors cost uses the static roundoff error, with
smaller values representing a higher cost. A smaller roundoff usually implies larger data types, which should correlate
with a higher execution time.
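As an illustration of the simple cost function, the following Scala sketch computes it over a small, hypothetical expression tree; the AST and precision types are stand-ins for exposition and are not Anton's internal data structures:

sealed trait Precision
case object PSingle extends Precision
case object PDouble extends Precision
case object PQuad   extends Precision

// Illustrative expression tree: every arithmetic node carries the precision
// it is executed in, and casts between precisions are explicit nodes.
sealed trait Expr
case class Leaf(prec: Precision)                        extends Expr
case class BinOp(lhs: Expr, rhs: Expr, prec: Precision) extends Expr
case class Cast(inner: Expr, to: Precision)             extends Expr

object SimpleCost {
  private def unit(p: Precision): Double = p match {
    case PSingle => 1.0
    case PDouble => 2.0
    case PQuad   => 4.0
  }

  // Each arithmetic operation and each cast is charged 1, 2 or 4 according
  // to the precision it involves; leaves (variables, constants) are free.
  def cost(e: Expr): Double = e match {
    case Leaf(_)         => 0.0
    case BinOp(l, r, p)  => unit(p) + cost(l) + cost(r)
    case Cast(inner, to) => unit(to) + cost(inner)
  }
}

For the example assignment {x -> single, y -> single, z -> double} above, cost(BinOp(Cast(Leaf(PSingle), PDouble), Cast(Leaf(PSingle), PDouble), PDouble)) evaluates to 2 + 2 + 2 = 6.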
We evaluate our cost functions experimentally on complete examples (see section 6). For each example function, we first generate 42 random precision assignments and their corresponding programs in Scala. We calculate the cost of each with all cost functions and also benchmark the actual running time. We use Scalameter [36] for benchmarking (see section 6). We are interested in distinguishing pairs of mixed-precision assignments; thus, for each benchmark program, we create pairs of all the randomly generated precision assignments. Then we count how often each static cost function correctly distinguishes which of the two assignments is faster, where the 'ground truth' is given by the benchmarked running times. The following table summarizes the results of our cost function evaluation. The rows '32 - 128' and '32 - 64' give the proportion of correctly distinguished pairs of type assignments, with the random types selected from single, double and quad precision, and from single and double precision, respectively.
precisions    bench     simple    opCount   errors
32 - 128      0.7692    0.8204    0.8106    0.5871
32 - 64       0.6416    0.5889    0.5477    0.5462
Given these results, we choose a two-pronged approach for floating-point arithmetic: whenever quad precision may appear (e.g. during the first round of delta-debugging), we use the simple cost function. Once no more quad precision appears (e.g. during the second round of delta-debugging for benchmarks which do not require quad precision), we use the benchmarked one. For optimizing fixed-point arithmetic we use the simple cost function.
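The counting of correctly distinguished pairs used in the evaluation above can be sketched as follows (plain Scala with hypothetical names, shown only to make the procedure precise):

object CostFunctionAgreement {
  // costs(i) and runtimes(i) refer to the i-th randomly generated
  // precision assignment of one benchmark program
  def agreement(costs: Vector[Double], runtimes: Vector[Double]): Double = {
    val pairs = for {
      i <- costs.indices
      j <- costs.indices if i < j
    } yield (i, j)
    // a pair counts as correctly distinguished if the assignment predicted
    // to be cheaper by the cost function is also the one that runs faster
    val correct = pairs.count { case (i, j) =>
      (costs(i) < costs(j)) == (runtimes(i) < runtimes(j))
    }
    correct.toDouble / pairs.size
  }
}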
6 IMPLEMENTATION AND EVALUATION
We have implemented Anton in the Scala programming language.
Internal arithmetic operations are implemented with a rational data
type to avoid roundoff errors and ensure soundness. Apart from the
(optional) use of the Z3 SMT solver [17] for more accurate range
computations, Anton does not have any external dependencies.
In this work, we focus on arithmetic kernels, and do not consider conditionals and loops. Our technique (as well as FPTuner’s)
can be extended to conditionals by considering individual paths
separately as well as to loops by optimizing the loop body and thus
reducing it to straight-line code. The challenge currently lies in
the sound roundoff error estimation, which is known to be hard
and expensive [13, 20], and is largely orthogonal to the focus of
this paper. Our error computation method can also be extended
to transcendental functions [11] and we plan to implement this
extension in the future.
We are not aware of any tool which combines rewriting and mixed-precision tuning and which supports both floating-point as well as fixed-point arithmetic. We experimentally compare Anton with FPTuner [9], which is the only other tool for sound (floating-point) mixed-precision tuning. FPTuner reduces mixed-precision tuning to an optimization problem. The optimization objective in FPTuner is performance, but it is up to the user to provide optimization constraints, e.g. in the form of a maximum number of cast operations. FPTuner also supports ganging of operators, where the user can specify constraints that limit specific operations to have the same precision, which can be useful for vectorization. Adding these kinds of constraints to our approach is straightforward. However, this manual optimization requires user expertise, and we have chosen to focus on automated techniques and the combination of mixed-precision and rewriting. As section 5 shows, already determining the cost function without ganging is challenging.
As FPTuner only supports floating-point arithmetic, and fixed-point arithmetic is most useful in combination with specialized hardware, which is beyond the scope of this paper, we perform the experimental evaluation here for floating-point arithmetic only.
We do not compare against Precimonious directly, since Anton
uses the same search algorithm internally. We would thus be merely
comparing sound and unsound error analyses, which in our view
is not meaningful.
Benchmarks. We have experimentally evaluated our approach
and tool on a number of standard finite-precision verification benchmarks [12, 14, 28]. The benchmarks rigidBody, invertedPendulum
and traincar are embedded controllers; bsplines, sine, and sqrt are
examples of functions also used in the embedded domain (e.g. sine approximates a transcendental function whose library implementation is often not available, or too expensive). The benchmarks doppler, turbine, himmilbeau and kepler are from the scientific computing domain. Table 1 lists the number of arithmetic operations and variables
for each benchmark. An asterisk (∗) marks nonlinear benchmarks.
To evaluate scalability, we also include four ‘unrolled’ benchmarks
(marked by ‘2x’ and ‘3x’), where we double (or triple) the arithmetic
operation count, as well as the number of input variables.
Which mixed-precision assignment is possible crucially depends
on the maximum allowed error. None of the standard benchmarks
come with suitable bounds since the focus until now has been on
uniform precision. We follow FPTuner in defining suitable error
bounds for our benchmarks. For each original example program,
we first compute roundoff errors for uniform single and double
precision. Slightly rounded up, these are the error bounds for the benchmarks denoted by F and D, respectively. From these we generate error bounds which are 0.5, 0.1 and 0.01 times these values, denoted by F 0.5, F 0.1, etc. That is, we create benchmarks whose error bounds are half, an order of magnitude and two orders of magnitude
smaller than in uniform precision. This corresponds to a scenario
where uniform precision is just barely not enough and we would
like to avoid the next higher uniform precision.
Experimental Setup. We have performed all experiments on a Linux desktop computer with an Intel Xeon 3.30GHz and 32GB RAM.
For benchmarking, we use Anton’s programs generated in Scala
(version 2.11.6) and translate FPTuner’s output into Scala. This
translation is done automatically by a tool we wrote, and does not
benchmark          ops - vars    FPTuner        Anton-mixed    Anton-full
bspline2*          10 - 1        4m 56s         34s            50s
doppler*           8 - 3         12m 48s        1m 8s          5m 4s
himmilbeau*        15 - 2        9m 7s          44s            1m 21s
invPendulum        7 - 4         3m 47s         32s            45s
kepler0*           15 - 6        19m 17s        43s            1m 2s
kepler1*           24 - 4        1h 26m 3s      2m 17s         2m 9s
kepler2*           36 - 6        1h 52m 38s     3m 36s         4m 22s
rigidBody1*        7 - 3         4m 45s         28s            36s
rigidBody2*        14 - 3        8m 0s          43s            1m 3s
sine*              18 - 1        9m 10s         1m 9s          3m 36s
sqroot*            14 - 1        4m 33s         40s            1m 11s
traincar           28 - 14       17m 17s        1m 13s         2m 11s
turbine1*          14 - 3        5m 15s         1m 22s         3m 56s
turbine2*          10 - 3        4m 41s         58s            2m 52s
turbine3*          14 - 3        4m 23s         1m 21s         3m 44s
kepler2 (2x)       73 - 12       15h 36m 59s    7m 57s         9m 40s
rigidbody2 (3x)    44 - 9        58m 55s        5m 33s         3m 44s
sine (3x)          56 - 3        22m 40s        8m 20s         13m 57s
traincar (2x)      57 - 28       33m 19s        4m 32s         5m 48s

Table 1: Optimization times of Anton and FPTuner
affect the results, as FPTuner is platform independent. We use the
Scalameter tool [36] (version 0.7) for benchmarking, which first
warms up the JVM and detects steady-state execution after the Just-In-Time compiler has run, and then benchmarks the function as it
is run effectively in native compiled code. We use the @strictfp annotation to ensure that the floating-point operations are performed
exactly as specified in the program (otherwise error bounds cannot
be guaranteed). We intentionally choose this setup to benchmark
the mixed-precision assignments produced by Anton and FPTuner
and not compiler optimization effects, which are out of scope of
this paper.
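To illustrate the benchmarking setup, a benchmarked kernel has the following shape in Scala; the @strictfp annotation (scala.annotation.strictfp) pins down the IEEE semantics so that the computed error bounds remain valid for the compiled code. The kernel body below is only an illustration in the style of the doppler benchmark, not the exact program generated by Anton:

import scala.annotation.strictfp

object BenchmarkedKernel {
  // illustrative mixed-precision variant: t1 is kept in single precision,
  // the remaining operations are performed in double precision
  @strictfp
  def doppler(u: Double, v: Double, t: Float): Double = {
    val t1: Float = 331.4f + 0.6f * t
    (-t1.toDouble * v) / ((t1.toDouble + u) * (t1.toDouble + u))
  }
}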
Optimization Time. Table 1 compares the execution times of
Anton and FPTuner themselves. For Anton, we report the times for
mixed-precision tuning only without rewriting, which corresponds
to the functionality that FPTuner provides, as well as the time for
full optimization, i.e. rewriting and mixed-precision tuning.
For each tool, we measured the running time 5 times for each
benchmark variant separately with the bash time command, recording the average real time. In the table, we show the aggregated
time for all the variants of a benchmark, e.g. the total time for the
F , F 0.5 , F 0.1 , ... variants of one benchmark together. These times
are end-to-end, i.e. they include all phases as well as occasional
timeouts by the backend solvers (Gelpia [3] for FPTuner and Z3
for Anton). Anton is faster than FPTuner for all benchmarks, even
with rewriting included, and often by large factors. We suspect that
this is due to the fact that FPTuner is solving a global optimization
problem, which is known to be hard.
Performance Improvements. To evaluate the effectiveness of Anton we have performed end-to-end performance experiments. For
this, we benchmark each generated optimized program five times
with Scalameter and use the average running time. Each run of a
program consists of 100 000 executions on random inputs. Then, for
each mixed-precision variant we compare its running time against
the corresponding uniform-precision program and report the relative improvement (i.e. mixed-precision running time/ uniform
running time). Corresponding here means the smallest uniform
precision which would satisfy the error bound, e.g. for the F 0.5
benchmark, the smallest uniform precision satisfying this bound is
double precision.
Table 2 shows the relative running time improvements for Anton
with only mixed-precision tuning, with only rewriting and with
both rewriting and mixed-precision tuning enabled as well as for
FPTuner. We show the average speedups over all variants of a
benchmark (row). Bold values mark performance improvements
above 5% and underlined values those with over 5% slowdowns.
Overall, we can see that the biggest speedups occur for the F 0.5
and D 0.5 benchmarks, as expected. The very low values (e.g. 0.07
in Table 2b for train4-st8-D 0.5) result from rewriting reducing the roundoff error sufficiently for the lower uniform precision to be enough (the baseline comparison running time is, however, that of the higher uniform
precision). In the case of FPTuner, the very low and very high
values are caused by different characteristics of the error analyses.
As a consequence, FPTuner is able to compute smaller roundoff
errors than Anton for some benchmarks (and assigning uniform
double precision instead of mixed), while for others it computes
bigger roundoff errors, where it cannot show that double precision
is sufficient, whereas Anton can.
Comparing FPTuner with Anton’s mixed-precision tuning only
(Table 2a), we observe that Anton is more conservative, both with
performance improvements but also with slowdowns, increasing
the execution time only rarely. Considering Anton’s significantly
better optimization time, Anton provides an interesting tradeoff
between mixed-precision tuning and efficiency.
Comparing the speedups obtained by mixed-precision tuning and rewriting, we note that both techniques are successful in improving the performance, though the effect of rewriting is modest.
When the two techniques are combined, we obtain the biggest performance improvements, which are furthermore more than just the
sum of both. Note that rewriting could also be combined with (i.e.
run before) FPTuner’s mixed-precision tuning.
7 RELATED WORK
Rewriting. An alternative approach to rewriting was presented by Damouche et al. [10]; it relies on a greedy search and an abstract domain which represents possible expression rewritings, together with a static error analysis similar to ours. The tool Herbie [33] performs a greedy hill-climbing search guided by a dynamic error evaluation function, and as such cannot provide sound
error bounds. It is geared more towards correcting catastrophic
cancellations, by employing an ‘error localization’ function which
pin-points an operation which commits a particularly large roundoff error and then targets the rewriting rules at that part of the
expression. It would be interesting in the future to compare these
different search techniques for rewriting.
Table 2: Relative performance improvements for Anton and FPTuner. Rows are the benchmarks; columns are the error-bound variants F, F 0.5, F 0.1, F 0.01, D, D 0.5, D 0.1, D 0.01 and Q, together with their average (avrg); each entry is the running time of the optimized program relative to that of the corresponding uniform-precision program. Subtables: (a) Anton - mixed-precision tuning only, (b) Anton - rewriting only, (c) Anton - full optimization, (d) FPTuner.

Mixed-precision Tuning in HPC. Mixed-precision is especially important in HPC applications, because floating-point arithmetic is widely used. Lam et al. introduced a binary instrumentation tool together with a breadth-first search algorithm to help programmers search for suitable mixed-precision programs [26]. This work was
later extended to perform a sensitivity analysis [25] based on a
more fine-grained approach. The Precimonious project [39, 40],
whose delta-debugging algorithm we adapt, targets HPC kernels
and library functions and performs automated mixed-precision
tuning. These projects have in common that the roundoff error
verification is performed dynamically on a limited number of inputs
and thus does not provide guarantees. In contrast, our technique
produces sound results, but is targeted at smaller programs and
kernels which can be verified statically.
Autotuning. Another way to improve the performance of (numerical) computations is autotuning, which performs low-level
transformations of the program in order to find one which empirically executes most efficiently. Traditionally, the approaches have
been semantics preserving [37, 44], but recently also non-semantics
preserving ones have been proposed in the space of approximate
computing [42]. These techniques represent another avenue for
improving performance, but do not optimize mixed-precision.
Bitlength Optimization in Embedded Systems. In the space of
embedded systems, much of the attention so far has focused on
fixed-point arithmetic and the optimization of bitlengths, which can
be viewed as selecting data types. A variety of static and dynamic
approaches have been applied. For instance, Gaffar et al. consider both fixed-point and floating-point programs and use automatic differentiation for a sensitivity analysis [18]. Mallik et al. present an
optimal bit-width allocation for two variables and a greedy heuristic for more variables, and rely on dynamic error evaluation [30].
Unlike our approach, these two techniques cannot provide sound
error bounds. Sound techniques have also been applied for both
the range and the error analysis for bitwidth optimization, for instance in [24, 27, 32, 34] and Lee et al. provide a nice overview of
static and dynamic techniques [27]. For optimization, Lee et al. have
used simulated annealing as the search technique [27]. A detailed
comparison of delta-debugging and e.g. simulated annealing would be very interesting in the future. We note that our technique is general in that it is applicable to both floating-point as well as fixed-point arithmetic, and the first to combine bitwidth optimization for performance with rewriting.
Finite-precision Verification. There has been considerable interest
in static and sound numerical error estimation for finite-precision
programs with several tools having been developed: Rosa [12], Fluctuat [19], FPTaylor [43] (which FPTuner is based on) and
Real2Float [28]. The accuracies of these tools are mostly comparable [13], so that any of the underlying techniques could be used
in our approach for the static error function. More broadly related
are abstract interpretation-based static analyses, which are sound
w.r.t. floating-point arithmetic [4, 8, 23]. These techniques can prove the absence of runtime errors, such as division-by-zero, but cannot quantify roundoff errors. Floating-point arithmetic has also been formalized in theorem provers such as Coq [6, 15] and HOL Light [22], and entire numerical programs have been proven correct and accurate within these [5, 38]. Most of these verification efforts are to a large part manual, and do not perform mixed-precision tuning. FPTaylor uses HOL Light, and Real2Float uses Coq, to generate certificates of correctness of the error bounds they compute. We believe that this facility could be extended to the mixed-precision case; this, however, would come after the tuning step, and hence these efforts are largely orthogonal.
Floating-point arithmetic has also been formalized in an SMT-LIB [41] theory, and SMT solvers exist which include floating-point decision procedures [7, 17]. These are, however, not suitable for roundoff error quantification, as a combination with the theory of reals would be necessary, which does not exist today.
8 CONCLUSION
We have presented a fully automated technique which combines
rewriting and sound mixed-precision tuning for improving the
performance of arithmetic kernels. While each of the two parts is
successful by itself, we have empirically demonstrated that their
careful combination is more than just the sum of the parts. Furthermore, our mixed-precision tuning algorithm presents an interesting tradeoff, compared to the state of the art, between the efficiency of the tuning algorithm and the performance improvements it generates.
REFERENCES
[1] A. Anta, R. Majumdar, I. Saha, and P. Tabuada. 2010. Automatic Verification of
Control System Implementations. In EMSOFT.
[2] D. H. Bailey, Y. Hida, X. S. Li, and B. Thompson. 2015. C++/Fortran-90 double-double and quad-double package. Technical Report. http://crd-legacy.lbl.gov/
~dhbailey/mpdist/
[3] M. S. Baranowski and I. Briggs. 2016. Global Extrema Locator Parallelization for
Interval Arithmetic (Gelpia). https://github.com/soarlab/gelpia. (2016).
[4] B. Blanchet, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux,
and X. Rival. 2003. A Static Analyzer for Large Safety-Critical Software. In PLDI.
[5] S. Boldo, F. Clément, J.-C. Filliâtre, M. Mayero, G. Melquiond, and P. Weis. 2013.
Wave Equation Numerical Resolution: A Comprehensive Mechanized Proof of a
C Program. Journal of Automated Reasoning 50, 4 (2013), 423–456.
[6] S. Boldo and G. Melquiond. 2011. Flocq: A Unified Library for Proving Floating-Point Algorithms in Coq. In ARITH.
[7] M. Brain, V. D’Silva, A. Griggio, L. Haller, and D. Kroening. 2013. Deciding
floating-point logic with abstract conflict driven clause learning. Formal Methods
in System Design 45, 2 (Dec. 2013), 213–245.
[8] L. Chen, A. Miné, and P. Cousot. 2008. A Sound Floating-Point Polyhedra Abstract
Domain. In APLAS.
[9] W.-F. Chiang, G. Gopalakrishnan, Z. Rakamaric, I. Briggs, M. S. Baranowski, and
A. Solovyev. 2017. Rigorous Floating-point Mixed Precision Tuning. In POPL.
[10] N. Damouche, M. Martel, and A. Chapoutot. 2015. Intra-procedural Optimization
of the Numerical Accuracy of Programs. In FMICS.
[11] E. Darulova and V. Kuncak. 2011. Trustworthy Numerical Computation in Scala.
In OOPSLA.
[12] E. Darulova and V. Kuncak. 2014. Sound Compilation of Reals. In POPL.
[13] E. Darulova and V. Kuncak. 2017. Towards a Compiler for Reals. ACM TOPLAS
39, 2 (2017).
[14] E. Darulova, V. Kuncak, R. Majumdar, and I. Saha. 2013. Synthesis of Fixed-point
Programs. In EMSOFT.
[15] M. Daumas, L. Rideau, and L. Théry. 2001. A Generic Library for Floating-Point
Numbers and Its Application to Exact Computing. In TPHOLs.
[16] L. H. de Figueiredo and J. Stolfi. 2004. Affine Arithmetic: Concepts and Applications. Numerical Algorithms 37, 1-4 (2004).
[17] L. De Moura and N. Bjørner. 2008. Z3: an efficient SMT solver. In TACAS.
[18] A. A. Gaffar, O. Mencer, W. Luk, and P. Y. K. Cheung. 2004. Unifying Bit-Width
Optimisation for Fixed-Point and Floating-Point Designs. FCCM (2004).
[19] E. Goubault and S. Putot. 2011. Static Analysis of Finite Precision Computations.
In VMCAI.
[20] E. Goubault and S. Putot. 2013. Robustness Analysis of Finite Precision Implementations. In APLAS.
[21] ISO/IEC. 2008. Programming languages — C — Extensions to support embedded
processors. Technical Report ISO/IEC TR 18037.
[22] C. Jacobsen, A. Solovyev, and G. Gopalakrishnan. 2015. A Parameterized Floating-Point Formalization in HOL Light. Electronic Notes in Theoretical Computer Science
317 (2015), 101–107.
[23] B. Jeannet and A. Miné. 2009. Apron: A Library of Numerical Abstract Domains
for Static Analysis. In CAV.
[24] A. B. Kinsman and N. Nicolici. 2009. Finite Precision Bit-Width Allocation using
SAT-Modulo Theory. In DATE.
[25] M. O. Lam and J. K. Hollingsworth. 2016. Fine-grained floating-point precision
analysis. Intl. Journal of High Performance Computing Applications (June 2016).
[26] M. O. Lam, J. K. Hollingsworth, B. R. de Supinski, and M. P. Legendre. 2013. Automatically Adapting Programs for Mixed-precision Floating-point Computation.
In ICS.
[27] D. U. Lee, A. A. Gaffar, R. C. C. Cheung, O. Mencer, W. Luk, and G. A. Constantinides. 2006. Accuracy-Guaranteed Bit-Width Optimization. Trans. Comp.-Aided
Des. Integ. Cir. Sys. 25, 10 (2006), 1990–2000.
[28] V. Magron, G. A. Constantinides, and A. F. Donaldson. 2015. Certified Roundoff
Error Bounds Using Semidefinite Programming. CoRR abs/1507.03331 (2015).
[29] R. Majumdar, I. Saha, and M. Zamani. 2012. Synthesis of Minimal-error Control
Software. In EMSOFT.
[30] A. Mallik, D. Sinha, P. Banerjee, and H. Zhou. 2007. Low-Power Optimization
by Smart Bit-Width Allocation in a SystemC-Based ASIC Design Environment.
IEEE Trans. on CAD of Integrated Circuits and Systems (2007).
[31] R. Moore. 1966. Interval Analysis. Prentice-Hall.
[32] W. G. Osborne, R. C. C. Cheung, J. Coutinho, W. Luk, and O. Mencer. 2007.
Automatic Accuracy-Guaranteed Bit-Width Optimization for Fixed and Floating-Point Systems. In Field Programmable Logic and Applications. 617–620.
[33] P. Panchekha, A. Sanchez-Stern, J. R. Wilcox, and Z. Tatlock. 2015. Automatically
Improving Accuracy for Floating Point Expressions. In PLDI.
[34] Y. Pang, K. Radecka, and Z. Zilic. 2011. An Efficient Hybrid Engine to Perform Range Analysis and Allocate Integer Bit-widths for Arithmetic Circuits. In
ASPDAC.
[35] R. Poli, W. B. Langdon, and N. F. McPhee. 2008. A Field Guide to Genetic Programming. Lulu Enterprises.
[36] A. Prokopec. 2012. ScalaMeter. https://scalameter.github.io/. (2012).
[37] M. Püschel, J. M. F. Moura, B. Singer, J. Xiong, J. R. Johnson, D. A. Padua,
M. M. Veloso, and R. W. Johnson. 2004. Spiral - A Generator for Platform-Adapted
Libraries of Signal Processing Algorithms. IJHPCA 18, 1 (2004), 21–45.
[38] T. Ramananandro, P. Mountcastle, B. Meister, and R. Lethin. 2016. A Unified
Coq Framework for Verifying C Programs with Floating-Point Computations. In
CPP.
[39] C. Rubio-Gonzáles, C. Nguyen, B. Mehne, K. Sen, J. Demmel, W. Kahan, C. Iancu,
W. Lavrijsen, D. H. Bailey, and D. Hough. 2016. Floating-Point Precision Tuning
Using Blame Analysis. In ICSE.
[40] C. Rubio-González, C. Nguyen, H. D. Nguyen, J. Demmel, W. Kahan, K. Sen,
D. H. Bailey, C. Iancu, and D. Hough. 2013. Precimonious: Tuning Assistant for
Floating-point Precision. In SC.
[41] P. Rümmer and T. Wahl. 2010. An SMT-LIB Theory of Binary Floating-Point
Arithmetic. In SMT.
[42] E. Schkufza, R. Sharma, and A. Aiken. 2014. Stochastic Optimization of Floating-point Programs with Tunable Precision. In PLDI.
[43] A. Solovyev, C. Jacobsen, Z. Rakamaric, and G. Gopalakrishnan. 2015. Rigorous
Estimation of Floating-Point Round-off Errors with Symbolic Taylor Expansions.
In FM.
[44] R. Vuduc, J. W. Demmel, and J. A. Bilmes. 2004. Statistical Models for Empirical
Search-Based Performance Tuning. Int. J. High Perform. Comput. Appl. 18, 1 (Feb.
2004), 65–94.
[45] A. Zeller and R. Hildebrandt. 2002. Simplifying and Isolating Failure-Inducing
Input. IEEE Trans. Software Eng. 28, 2 (2002), 183–200.
Spectral analysis for non-stationary audio
arXiv:1712.10252v1 [eess.AS] 29 Dec 2017
Adrien Meynard and Bruno Torrésani
Abstract
A new approach for the analysis of non-stationary signals is proposed, with a focus on audio applications. Following earlier contributions, non-stationarity is modeled via stationarity-breaking operators acting on Gaussian stationary random signals. The focus is here on time warping and amplitude modulation, and an approximate maximum-likelihood approach based on suitable approximations in the wavelet transform domain is developed. This paper provides a theoretical analysis of the approximations, and describes and analyses a corresponding estimation algorithm. The latter is tested and validated on synthetic as well as real audio signals.
Index Terms
Non-stationary signals, deformation, wavelet analysis, local spectrum, Doppler effect
I. INTRODUCTION
Non-stationarity is a key feature of acoustic signals, in particular audio signals. To
mention a few examples, a large part of information carried by musical and speech
signals is encoded by their non-stationary nature, as is the case for environment sounds
(think of car noises for example, where non-stationarity informs about speed variations),
and many animals (bats, dolphins,...) use non-stationary signals for localization and
communication. Beyond acoustics, amplitude and frequency modulation are of prime
importance in telecommunication.
A. Meynard and B. Torrésani are with Aix Marseille Univ, CNRS, Centrale Marseille, I2M, UMR 7373, Marseille,
France.
Part of this work was done when the two authors were at Centre de Recherches Mathématiques, UMI 3457, CNRS
and Université de Montréal, Canada.
Manuscript received ???, 2017; revised ???, 2017.
While stationarity can be given rigorous definitions, non-stationarity is a very wide
concept, as there are infinitely many ways to depart from stationarity. The theory of
random signals and processes (see [1], [2] and references therein) gives a clear meaning
to the notion of stationarity. In the context of time series analysis, Priestley [2], [3] was
one of the first to develop a systematic theory of non-stationary processes, introducing
the class of locally stationary processes and the notion of evolutionary spectrum. A
similar approach was followed in [4], which proposed a wavelet-based approach to covariance estimation for locally stationary processes (see also [5]). An alternate theory
of locally stationary time series was developed by Dahlhaus [6] (see also [7] for a
corresponding stationarity test). In a different context, frequency modulated stationary signals were considered in [8], [9], and time warping models were analyzed in [10]. In
several of these approaches, wavelet, time-frequency and similar representations happen
to play a key role for the characterization of non-stationarity.
In a deterministic setting, a popular non-stationarity model expresses the signal as
a sum of K sinusoidal components y(t) = ∑_{k=1}^K A_k(t) cos(2πφ_k(t)). This model has
been largely used in speech processing since early works by McAulay and Quatieri [11]
(see also [12] and references therein for more recent developments, and [13], [14] for
probabilistic approaches). The instantaneous frequencies φk′ of each mode give important information about the physical phenomenon. Under smoothness assumptions
on functions Ak and φk′ , techniques such as ridge/multiridge detection (see [15] and
references therein), synchrosqueezing or reassignment have been developed to extract these quantities from a single signal observation (see [16], [17] for recent accounts).
In sound processing, signals often possess a harmonic structure, which corresponds to a special case of the above model where each instantaneous frequency φ_k′ is a multiple of a fundamental frequency φ_0′: φ_k′(t) = λ_k φ_0′(t), λ_k ∈ R. If the amplitudes A_k are such that A_k(t) = α_k A_0(t), we can describe such signals as a stationary signal x(t) = ∑_{k=1}^K α_k cos(2πλ_k t + ϕ_k) modified by time warping and amplitude modulation: y(t) = A_0(t) x(φ_0(t)). A major limit of this model is that each component is purely sinusoidal
while audio signals often contain broadband information. However sounds originating
from physical phenomena can often be modeled as stationary signals which have been
deformed by a stationarity breaking operator (time warping, amplitude modulation,...).
For example sounds generated by a variable speed engine or any stationary sound
deformed by Doppler effect can be described as such. A stochastic time warping model
has been introduced in [18], [19], where wavelet-based approximation and estimation
techniques were developed. In [9], [20], an approximate maximum-likelihood approach
was proposed for the joint estimation of the time warping and power spectrum of the
underlying Gaussian stationary signal, exploiting similar approximations.
In this paper, we build on results of [9], [20] which we extend and improve in
several ways. We develop an approximate maximum likelihood method for estimating
jointly time warping and amplitude modulation from a single realization. While the
overall structure of the algorithm is similar, we formulate the problem as a continuous
parameter estimation problem, which avoids quantization effects present in [9], [20],
and allows computing a Cramér-Rao bound for assessing the precision of the estimate.
After completing the estimation, the inverse deformation can be applied to the input
signal, which yields an estimate for the power spectrum.
The plan of the paper is as follows. After giving some definitions and notations
in Section II, we detail in Section III the non-stationary signal models we consider,
and specify the assumptions made on the underlying stationary signal. We also analyze the effect of time warping and amplitude modulation in the wavelet domain,
which we exploit in designing the estimation procedure. We finally propose an alternate
estimation algorithm, and analyze the expected performances of the corresponding
estimator. Section IV is devoted to numerical results, on both synthetic signals and
real sounds. We also shortly describe in this section an extension published in [21]
involving simultaneously time warping and frequency modulation. More mathematical
developments are postponed to the Appendix.
II. NOTATIONS AND BACKGROUND
A. Random signals, stationarity
Throughout this paper, we will work in the framework of the theory of random
signals. Signals of interest will be modeled as realizations of random processes¹ t ∈ R → X_t ∈ C. In this paper, the random processes will be denoted by upper-case letters while their realizations will be denoted by lower-case letters.

¹ Signals of interest are real-valued; however, we will also use complex-valued functions since we will use complex-valued wavelet transforms later on.
The random processes will be assumed to be zero mean (E{X_t} = 0 for all t) and second order, i.e. they have a well defined covariance kernel E{X_t X̄_s}. A particularly interesting class of such stochastic processes is the class of second order (or weakly) stationary processes, for which C_X(t − s) ≜ E{X_t X̄_s} is a function of t − s only. Under these assumptions, the Wiener-Khinchin theorem states that the covariance kernel may be expressed as the inverse Fourier transform of a non-negative measure dη_X, which we will assume to be continuous with respect to the Lebesgue measure: dη_X(ν) = S_X(ν)dν, for some non-negative L¹ function S_X called the power spectrum. We then write

C_X(t) = ∫ S_X(ν) e^{2iπνt} dν .
We refer to textbooks such as [1], [2] for a more complete mathematical account of the
theory, and to [20] for an extension to distribution theory setting.
B. Elementary operators
Our approach rests on non-stationary models obtained by deformations of stationary
random signals. We will mainly use as elementary operators the amplitude modulation
Aα , translation Tτ , dilation Ds , and frequency modulation Mν defined as follows:
A_α x(t) = α x(t) ,    T_τ x(t) = x(t − τ) ,
D_s x(t) = q^{−s/2} x(q^{−s} t) ,    M_ν x(t) = e^{2iπνt} x(t) ,
where α, τ, s, ν ∈ R and q > 0 is a fixed number. The amplitude modulation commutes with the other three operators, which satisfy the commutation rules
T_τ D_s = D_s T_{q^{−s} τ} ,    T_τ M_ν = e^{−2iπντ} M_ν T_τ ,    M_ν D_s = D_s M_{ν q^s} .
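For instance, the first rule follows directly from the definitions: for any signal x,

(T_τ D_s x)(t) = (D_s x)(t − τ) = q^{−s/2} x(q^{−s}(t − τ)) = q^{−s/2} x(q^{−s}t − q^{−s}τ) = (D_s T_{q^{−s}τ} x)(t) ,

and the other two rules are checked in the same way.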
C. Wavelet transform
Our analysis relies heavily on transforms such as the continuous wavelet transform
(and discretized versions). In particular, the wavelet transform of a signal X : t ∈ R → X_t is defined as

W_X(s, τ) = ⟨X, ψ_{sτ}⟩ , with ψ_{sτ} = T_τ D_s ψ ,     (1)

where ψ is the analysis wavelet, i.e. a smooth function with fast decay away from the
origin. It may be shown that for suitable choices of ψ the wavelet transform is invertible
(see [15]), we will not use that property here. Notice that when X is a realization of
a continuous time random process, it does not need to decay at infinity. However,
for a suitably smooth and localized wavelet ψ, the wavelet transform can still be well
defined (see [15], [20] for more details). In such a situation the wavelet transform of X
is a two-dimensional random field, which we analyze in the next section. Besides, in
this paper the analysis wavelet ψ is complex valued and belongs to the set H²(R) = {ψ ∈ L²(R) : supp(ψ̂) ⊂ R₊}. In that framework, a useful property is that if X is a real zero mean Gaussian random process, then W_X is a complex zero mean circular Gaussian random field.
Classical choices of wavelets in H²(R) are the (analytic) derivative of Gaussian ψ_k (which has k vanishing moments), and the sharp wavelet ψ♯ (with infinitely many vanishing moments) introduced in [21]. These can be defined in the positive Fourier domain by

ψ̂_k(ν) = ν^k e^{−kν²/2ν₀²} ,    ψ̂♯(ν) = ε^{δ(ν,ν₀)/δ(ν₁,ν₀)} ,    ν > 0 ,     (2)

and vanish on the negative Fourier half axis. Here ν₀ is the mode of ψ̂. In the expression of ψ̂♯, ν₁ is chosen so that ψ̂♯(ν₁) = ε (a prescribed numerical tolerance at cutoff frequency ν₁), and the divergence δ is defined by δ(a, b) = (1/2)(a/b + b/a) − 1.
III. JOINT ESTIMATION OF TIME WARPING AND AMPLITUDE MODULATION
A. Model and approximations
Let us first describe the deformation model we will mainly be using in the following.
As said above, the non-stationary signals of interest are obtained as linear deformations
of stationary random signals. The deformations of interest here are amplitude modulations and time warpings. Amplitude modulations are pointwise multiplications by
smooth functions, defined as
A_a :  A_a x(t) = a(t) x(t) ,     (3)

where a ∈ C¹ is a real valued function, such that

0 < c_a ≤ a(t) ≤ C_a < ∞,  ∀t ,     (4)

for some constants c_a, C_a ∈ R*₊. Time warpings are compositions with smooth and monotonic functions,

D_γ :  D_γ x(t) = √(γ′(t)) x(γ(t)) ,     (5)

where γ ∈ C² is a strictly increasing smooth function, satisfying the control condition [20]

0 < c_γ ≤ γ′(t) ≤ C_γ < ∞,  ∀t ,     (6)

for some constants c_γ, C_γ ∈ R*₊.
Assume one is given a (unique) realization of a random signal of the form
Y = A_a D_γ X ,     (7)
where X is a stationary zero mean real random process with (unknown) power spectrum
SX . The goal is to estimate the deformation functions a and γ from this realization of
Y, exploiting the assumed stationarity of X.
Remark 1: Clearly enough, the stationarity assumption is not sufficient to yield unambiguous estimates, since affine functions γ(t) = λt + µ do not break stationarity: for
any stationary X, Dγ X is stationary too. Therefore, the warping function γ can only be
estimated up to an affine function, as analyzed in [19] and [20]. Similarly, the amplitude
function a can only be estimated up to a constant factor.
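Indeed, for γ(t) = λt + µ one has D_γ X(t) = √λ X(λt + µ), so that E{D_γ X(t) D_γ X(s)*} = λ C_X(λ(t − s)) depends on t − s only: D_γ X is again stationary (with a dilated and rescaled spectrum), and the affine part of γ cannot be recovered from stationarity considerations alone.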
A key ingredient of our approach is the smoothness of the deformation functions a
and γ, and their slow variations. This allows us to perform a local analysis using smooth
and localized test functions, on which the action of Aa and Dγ can be approximated
by their so-called tangent operators Ã_a^τ and D̃_γ^τ (see [19], [9], [20], [22]). More precisely, given a test function g located near t = τ (i.e. decaying fast enough as a function of |t − τ|), Taylor expansions near t = τ yield

A_a g(t) ≈ Ã_a^τ g(t) , with Ã_a^τ ≜ A_{a(τ)} ,     (8)
D_γ g(t) ≈ D̃_γ^τ g(t) , with D̃_γ^τ ≜ T_τ D_{−log_q(γ′(τ))} T_{−γ(τ)} .     (9)

Therefore, the wavelet transform of Y will be approximated by W_Y(s, τ) ≈ W̃_Y(s, τ) ≜ ⟨Ã_a^τ D̃_γ^τ X, T_τ D_s ψ⟩, i.e.

W̃_Y(s, τ) = a(τ) W_X(s + log_q(γ′(τ)), γ(τ)) .     (10)
Here we have used the standard commutation rules of translation and dilation operators
given in Section II-B.
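As a sanity check, for a constant amplitude a(t) = a₀ and a linear warping γ(t) = λt (so that γ′(τ) = λ and γ(τ) = λτ), the change of variable u = λt in ⟨Y, T_τ D_s ψ⟩ shows that (10) holds exactly: W_Y(s, τ) = a₀ W_X(s + log_q λ, λτ). The approximation error therefore only comes from the variations of γ′ and a around τ.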
The result below provides a quantitative assessment of the quality of the approximation. There, we denote by ‖f‖_∞ = ess sup_t |f(t)| the essential absolute supremum of a
function f .
Theorem 1: Let X be a second order zero mean stationary random process, let Y be the
non-stationary process defined in (7). Let ψ be a smooth test function, localized in such
a way that |ψ(t)| ≤ 1/(1 + |t|^β) for some β > 2. Let W_Y denote the wavelet transform of Y, W̃_Y its approximation given in (10), and let ε = W_Y − W̃_Y denote the approximation error. Assume ψ and S_X are such that

I_X^{(ρ)} ≜ √( ∫₀^∞ ξ^{2ρ} S_X(ξ) dξ ) < ∞ , where ρ = (β − 1)/(β + 2) .
Then the approximation error ε is a second order, two-dimensional complex random
field, and
E{|ε(s, τ)|²} ≤ C_a² q^{3s} ( K₁ ‖γ′′‖_∞ + K₂ q^{µs} ‖γ′′‖_∞^ρ + K₃ ‖a′‖_∞ ) ,

where

K₁ = βσ_X / (2(β − 2)√c_γ) ,   K₂ = I_X^{(ρ)} (πq/2)^ρ (C_γ²/4)^{3ρ} ,   K₃ = √C_γ βσ_X / ((β − 2) c_a) ,   µ = (β − 4)/(β + 2) ,

σ_X² being the variance of X.
The proof, which is an extension of the one given in [20], is given in appendix A.
Remark 2: The assumption on β ensures that the parameters belong to the following
intervals: 1/4 < ρ < 1 and −1/2 < µ < 1. Therefore, the variance of the approximation
error tends to zero when the scales are small (i.e. s → −∞). Besides, the error is controlled by the speed of variations of γ′ and a: the slower these variations, the smaller the error bound. This is consistent with
the approximations of the deformation operators by their tangent operators made in
equations (8) and (9).
From now on, we will assume the above approximations are valid, and work on the approximate random fields. The problem is then to estimate jointly a, γ from W̃_Y, which is a zero mean random field with covariance

E{ W̃_Y(s, τ) W̃_Y(s′, τ′)* } = C(s, s′, τ, τ′) ,     (11)

where

C(s, s′, τ, τ′) = a(τ) a(τ′) q^{(s+s′)/2} √(γ′(τ) γ′(τ′)) ∫₀^∞ S_X(ξ) ψ̂(q^s γ′(τ) ξ) ψ̂(q^{s′} γ′(τ′) ξ) e^{2iπξ(γ(τ)−γ(τ′))} dξ .     (12)
B. Estimation
1) Estimation procedure: Our goal is to estimate both deformation functions γ and a from the approximated wavelet transform W̃_y of a realization y of Y, assuming the latter is a reliable approximation of the true wavelet transform. From now on, we additionally assume that X is a Gaussian random process. Therefore, W̃_Y is a zero mean circular Gaussian random field and its probability density function is characterized by
the covariance matrix. However, equation (12) shows that besides deformation functions
the covariance also depends on the power spectrum SX of the underlying stationary
signal X, which is unknown too. Therefore, the evaluation of the maximum likelihood
estimate for a and γ requires a guess for SX . This constraint naturally brings the
estimation strategy to an alternate algorithm. In [21], an estimate for the power spectrum
was obtained at each iteration by computing a Welch periodogram on a “stationarized”
signal Aã−1 Dγ̃−1Y, ã and γ̃ being the current estimates for the deformation functions a
and γ. We use here a simpler estimate, computed directly from the wavelet coefficients.
The two steps of the estimation algorithm are detailed below.
Remark 3: The alternate likelihood maximization strategy is reminiscent of the Expectation-Maximization (EM) algorithm, the power spectrum being the nuisance parameter.
However, while it would be nice to apply directly the EM paradigm (whose convergence
is proven) to our problem, the dimensionality of the latter (and the corresponding size
of covariance matrices) forces us to make additional simplifications that depart from the
EM scheme. Therefore we turn to a simpler approach with several dimension reduction
steps.
(a) Deformation estimation. Assume that the power spectrum SX is known (in fact, only
an estimate S˜X is known). Thus, we are able to write the likelihood corresponding to
the observations of the wavelet coefficients. Then the maximum likelihood estimator is
implemented to determine the unknown functions γ and a.
The wavelet transform (1) is computed on a regular time-scale grid Λ = s × τ, δs
being the scale sampling step and Fs the time sampling frequency. The sizes of s and τ
are respectively denoted by Ms and Nτ .
Considering the covariance expression (12), we want to estimate the vector of parameters Θ ≜ (θ₁, θ₂, θ₃) = (a(τ)², log_q(γ′(τ)), γ(τ)). Let W_y = W̃_y(Λ) denote the discretized transform and let C_W(Θ) be the corresponding covariance matrix. The related log-likelihood is
L(Θ) = −(1/2) ln |det(C_W(Θ))| − (1/2) C_W(Θ)^{−1} W_y · W_y .     (13)
The matrix CW (Θ) is a matrix of size Ms Nτ × Ms Nτ , which is generally huge. For
instance, when the signal is a sound of 5 seconds sampled at frequency Fs = 44.1
kHz and the wavelet transform is computed on 8 scales, the matrix CW (Θ ) has about
3.1 trillion elements which makes it numerically intractable. In addition, due to the
redundancy of the wavelet transform, CW (Θ ) turns out to be singular, which makes
the evaluation of the likelihood impossible.
To overcome these issues, we use a block-diagonal regularization of the covariance
matrix, obtained by forcing to zero the entries corresponding to different time indices. In
other words, we disregard time correlations in the wavelet domain, which amounts to
consider the fixed-time vectors w_{y,τ_n} = W̃_y(s, τ_n) as independent circular Gaussian vectors with zero mean and covariance matrix

C(Θ_n)_{ij} = θ_{n,1} C₀(θ_{n,2})_{ij} ,  1 ≤ i, j ≤ M_s ,     (14)

where

C₀(θ_{n,2})_{ij} = q^{(s_i+s_j)/2} ∫₀^∞ S_X(q^{−θ_{n,2}} ξ) ψ̂(q^{s_i} ξ) ψ̂(q^{s_j} ξ) dξ .     (15)
In this situation, the regularized log-likelihood L_r splits into a sum of independent terms

L_r(Θ) = ∑_n L(Θ_n) ,

where Θ_n ≜ (θ_{n,1}, θ_{n,2}) = (θ₁(n), θ₂(n)) corresponds to the amplitude and warping parameters at fixed time τ_n = τ(n). Notice that in such a formalism, θ_{n,3} = γ(τ_n) does not appear any more in the covariance expression. Thus we are led to maximize independently for each n

L(Θ_n) = −(1/2) ln |det(C(Θ_n))| − (1/2) C(Θ_n)^{−1} w_{y,τ_n} · w_{y,τ_n} .     (16)
For simplicity, the estimation procedure is done by an iterative algorithm (given in
more details in part III-B2), which rests on two main steps. On the one hand, the log-likelihood is maximized with respect to θ_{n,2} using a gradient ascent method, for a fixed value of θ_{n,1}. On the other hand, for a fixed θ_{n,2}, an estimate for θ_{n,1} is directly obtained, which reads

θ̃_{n,1} = (1/M_s) C₀^{−1}(θ_{n,2}) w_{y,τ_n} · w_{y,τ_n} .     (17)
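Equation (17) follows from a direct computation: writing C(Θ_n) = θ_{n,1} C₀(θ_{n,2}) in (16) gives L(Θ_n) = −(M_s/2) ln θ_{n,1} − (1/2) ln |det(C₀(θ_{n,2}))| − (1/(2θ_{n,1})) C₀(θ_{n,2})^{−1} w_{y,τ_n} · w_{y,τ_n}, and setting the derivative with respect to θ_{n,1} to zero yields (17).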
(b) Spectrum estimation. Assume the amplitude modulation and time-warping parameters θ1 and θ2 are known (in fact, only estimates θ̃1 and θ̃2 are known). For any n we
can compute the wavelet transform
(1/θ_{n,1}^{1/2}) W̃_y(s − θ_{n,2}, τ_n) = W_x(s, γ(τ_n)) .     (18)
For a fixed scale s_m, w_{x,s_m} = W_x(s_m, γ(τ)) ∈ C^{N_τ} is a zero mean random circular Gaussian vector with time independent variance (as a realization of the wavelet transform of a stationary process). Hence, the empirical variance is an unbiased estimator of the variance. We then obtain

E{ (1/(N_τ ‖ψ‖₂²)) ‖w_{x,s_m}‖² } = (1/‖ψ‖₂²) ∫₀^∞ S_X(ξ) q^{s_m} |ψ̂(q^{s_m} ξ)|² dξ ≜ S_{X,ψ}(q^{−s_m} ω₀) ,     (19)
where ω₀ is the central frequency of |ψ̂|². S_{X,ψ} is a band-pass filtered version of S_X centered around the frequency ν_m = q^{−s_m} ω₀. Besides, the bandwidth of the filter is proportional to the frequency ν_m. This motivates the introduction of the following estimator S̃_X of S_X:

S̃_X(q^{−s_m} ω₀) = (1/(N_τ ‖ψ‖₂²)) ‖w_{x,s_m}‖² .     (20)

Finally, the estimate S̃_X is extended to all ξ ∈ [0, F_s/2] by linear interpolation.
2) Algorithm: The estimation procedure is implemented in an iterative alternate op-
timization algorithm, whose pseudo-code is given as Algorithm 1. The initialization
requires an initial guess for the power spectrum SX of X. We use for this the spectrum
estimator (20) applied to the original observations Y.
After k iterations of the algorithm, estimates Θ̃_n^{(k)} and S̃_X^{(k)} for Θ_n and S_X are available. Hence we can only evaluate the plug-in estimate C̃₀^{(k)} of C₀, obtained by replacing the power spectrum with its estimate in the covariance matrix (15). This yields an approximate expression L^{(k)} for the log-likelihood, which is used in place of L in (16) for maximum likelihood estimation. The influence of such approximations on the performances of the algorithm is discussed in section III-C.
To assess the convergence of the algorithm, the relative update of the parameters is
chosen as stopping criterion:
‖θ̃_j^{(k)} − θ̃_j^{(k−1)}‖₂² / ‖θ̃_j^{(k−1)}‖₂² < T ,  for j = 1 and 2 ,     (21)
where 0 < T < 1 is a given threshold.
Finally, after convergence of the algorithm to the estimated value Θ̃^{(k)}, log_q(γ′) and a² are estimated for all times by cubic spline interpolation. Besides, γ is given by numerical integration, assuming that γ(0) = 0.
Algorithm 1 Joint spectrum and deformations estimation
Initialization: Compute an estimate S̃_Y of the power spectrum of Y as an initial guess S̃_X^{(0)} for S_X. Initialize the estimator of the squared amplitude modulation with θ̃_{n,1}^{(0)} = 1, ∀n.
Compute the wavelet transform W_y of y.
k := 1
while criterion (21) is false and k ≤ kmax do
• For each n, subsample w_{y,τ_n} on the scales s_p, and estimate θ̃_{n,2}^{(k+1)} by maximizing the approximate log-likelihood L^{(k)}(θ̃_{n,1}^{(k)}, θ_{n,2}) in (16).
• For each n, estimate θ̃_{n,1}^{(k+1)} by maximizing the approximate log-likelihood L^{(k)}(θ_{n,1}, θ̃_{n,2}^{(k+1)}) with respect to θ_{n,1} in (16). Or, in the absence of noise, directly apply equation (17) using the regularized covariance matrix given by (22).
• Construct the estimated wavelet transform W_x of the underlying stationary signal by interpolation from W_y and θ̃^{(k)} with equation (18). Estimate the corresponding power spectrum S̃_X^{(k+1)} with (20).
• k := k + 1
end while
• Compute ã and γ̃ by interpolation from Θ̃^{(k)}.
Remark 4: In order to control the variances of the estimators, and the computational
cost, two different discretizations of the scale axis are used for θ̃1 or θ̃2 . Indeed, the
computation of the log-likelihood involves the evaluation of the inverse covariance
matrix. In [20], a sufficient condition for invertibility was given in the presence of
noise. The major consequence induced by this condition is that when δs is close to zero (i.e. the scale sampling period is small), the covariance matrix may not be numerically invertible. The scale discretization must then be sufficiently coarse to ensure
good conditioning for the matrix. While this condition can be reasonably fulfilled to
estimate θn,2 without impairing the performances of the estimator, it cannot be applied
to the estimation of θn,1 because of the influence of Ms on its Cramér-Rao bound (see
section III-C below). The choice we made is to maximize L (Θn ) for θn,2 with wy,τn
corresponding to a coarse sampling s p which is a subsampled version of the original
vector s, the scale sampling step and the size of s p being respectively pδs and ⌊ Ms /p⌋
for some p ∈ N ∗ . While L (Θn ) is maximized for θn,1 on the original fine sampling s,
a regularization of the covariance matrix has to be done to ensure its invertibility. The
regularized matrix is constructed by replacing covariance matrix C0 (θn,2 ) given by (15)
by its regularized version C0,r (θn,2 ), given by
C_{0,r}(θ_{n,2}) = (1 − r) C₀(θ_{n,2}) + r I ,     (22)
for some regularization parameter 0 ≤ r ≤ 1.
Remark 5: After convergence of the estimation algorithm, the estimated functions ã
and γ̃ allow constructing a “stationarized” signal
x̃ = D_{γ̃}^{−1} A_{ã}^{−1} y .
x̃ is an estimate of the original underlying stationary signal x. Furthermore, the Welch periodogram [23] may be computed from x̃ to obtain an estimator of S_X whose bias does not depend on frequency (unlike the estimator used within the iterative algorithm).
Remark 6: In order to accelerate the algorithm, the estimation can be done only on a subsampled time grid. The main effect of this choice on the algorithm concerns the final estimation of a and γ, which is more sensitive to the interpolation operation.
In the following section, we analyze quantities that enable the evaluation of the expected performances of the estimators, and their influence on the algorithm. The reader who is not directly interested in the statistical background may skip this section and jump directly to the numerical results in part IV.
C. Performances of the estimators and the algorithm
(a) Bias. For θ_{n,1}, the estimator is unbiased when the actual values of θ_{n,2} and S_X are known. In our case, the bias b_{n,1}^{(k)}(θ_{n,1}) = E{θ̃_{n,1}^{(k)}} − θ_{n,1} is written as

b_{n,1}^{(k)}(θ_{n,1}) = (θ_{n,1}/M_s) Trace( C̃₀^{(k)}(θ̃_{n,2}^{(k)})^{−1} C₀(θ_{n,2}) − I ) .     (23)

As expected, the better the covariance matrix estimation, the lower the bias b_{n,1}^{(k)}.
For θn,2 , as we do not have a closed-form expression for the estimator we are not
able to give an expression of the bias. Nevertheless, if we assume that the two other
true variables are known, as a maximum likelihood estimator we make sure that θ̃n,2 is
asymptotically unbiased (i.e. θ̃n,2 → θn,2 when Ms → ∞).
Regarding SX , equation (19) shows that the estimator yields a smoothed, thus biased
version of the spectrum. Besides, proposition 1 below shows that the estimated spectrum
converges to this biased version when the deformation parameters converge to their
actual values.
Proposition 1: Let ψ ∈ H 2 (R ) be an analytic wavelet such that ψ̂ is bounded and
ψ̂(u) = O_{u→∞}(u^{−η}) with η > 2. Let ϕ₁ and ϕ₂ be bounded functions defined on R₊ by ϕ₁(u) = u |ψ̂(u)|² and ϕ₂(u) = u² |ψ̂(u)|. Assume S_X is such that

J_X = ∫₀^∞ ξ^{−1} S_X(ξ) dξ < ∞.

Let S_X^{(k)} denote the estimation of the spectrum after k iterations of the algorithm. Let b_{S_X}^{(k)} denote the bias defined for all m ∈ [[1, M_s]] by

b_{S_X}^{(k)}(m) = E{ S̃_X^{(k)}(q^{−s_m} ω₀) − S_{X,ψ}(q^{−s_m} ω₀) } .

Assume there exists a constant c_{θ₁} > 0 such that θ_{n,1} > c_{θ₁}, ∀n, k. Then

‖b_{S_X}^{(k)}‖_∞ ≤ (J_X / ‖ψ‖₂²) ( K₁′ ‖θ₁ − θ̃₁^{(k)}‖_∞ + K₂′ ‖θ̃₂^{(k)} − θ₂‖_∞ ) ,     (24)

where

K₁′ = ‖ϕ₁‖_∞ / c_{θ₁} < ∞ ,   K₂′ = ln(q) ‖ϕ₁‖_∞ + 2 ‖ψ̂′‖_∞ ‖ϕ₂‖_∞ < ∞ .

The proof is given in appendix B.
Remark 7: If θ₁^{(k)} → θ₁ and θ₂^{(k)} → θ₂ as k → ∞, we have E{S̃_X^{(k)}(ν_m)} → S_{X,ψ}(ν_m), which is the expected property.
Formula (24) enables the control of the bias of the spectrum at the frequencies ν_m = q^{−s_m} ω₀ only. We can also notice that the required property J_X < ∞ forces S_X to vanish at zero frequency.
(b) Variance. The Cramér-Rao lower bound (CRLB) gives the minimum variance that
can be attained by unbiased estimators. The Slepian-Bangs formula (see [24]) directly
gives the following CRLB for component θn,i
CRLB(θ_{n,i}) = 2 ( Trace{ ( C(Θ_n)^{−1} ∂C(Θ_n)/∂θ_{n,i} )² } )^{−1} .
This bound gives information about the variance of the estimator at convergence of the
algorithm, i.e. when both SX and the other parameters are well estimated.
Applying this formula to θ_{n,1} gives

E{ (θ̃_{n,1} − E{θ̃_{n,1}})² } ≥ CRLB(θ_{n,1}) = 2 θ_{n,1}² / M_s .
This implies that the number of scales Ms of the wavelet transform must be large enough
to yield an estimator with sufficiently small variance.
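This value is obtained directly from the Slepian-Bangs formula above: since C(Θ_n) = θ_{n,1} C₀(θ_{n,2}), we have ∂C(Θ_n)/∂θ_{n,1} = C₀(θ_{n,2}), hence C(Θ_n)^{−1} ∂C(Θ_n)/∂θ_{n,1} = (1/θ_{n,1}) I, so that Trace{(C(Θ_n)^{−1} ∂C(Θ_n)/∂θ_{n,1})²} = M_s/θ_{n,1}², which gives CRLB(θ_{n,1}) = 2θ_{n,1}²/M_s.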
For θn,2 , no closed-form expression is available for the CRLB. Therefore, the evaluation
of this bound and its comparison with the variance of the estimator θ̃n,2 can only be
based on numerical results, see section IV.
(c) Robustness to noise. Assume now the observations are corrupted by a random
Gaussian white noise W with variance σ_W² (supposed to be known). The model becomes

Y = A_a D_γ X + W .     (25)
The estimator θ̃_{n,1} is not robust to noise. Indeed, if the maximum likelihood estimator of model (7) is used in the presence of such white noise, a new term b_{n,1|W}^{(k)}(θ_{n,1}) must be added to the bias expression (23), which becomes

b_{n,1|W}^{(k)}(θ_{n,1}) = (1/M_s) Trace( C̃₀^{(k)}(θ̃_{n,2}^{(k)})^{−1} C_{w_n} ) ,

where (C_{w_n})_{ij} = σ_W² q^{(s_i+s_j)/2} ∫₀^∞ ψ̂(q^{s_i} ξ) ψ̂(q^{s_j} ξ) dξ. In practice, this term can take large values, therefore noise has to be taken into account. To do so, the covariance matrix is now written as

C(Θ_n)_{ij} = q^{(s_i+s_j)/2} ∫₀^∞ ( θ_{n,1} S_X(q^{−θ_{n,2}} ξ) + σ_W² ) ψ̂(q^{s_i} ξ) ψ̂(q^{s_j} ξ) dξ ,     (26)
and the likelihood is modified accordingly. Formula (17) is no longer true and no closed-form expression can be derived any more; the maximum likelihood estimate θ̃_{n,1} must be computed by a numerical scheme (here we use a simple gradient ascent).
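As an illustration of such a scheme, the following sketch shows a fixed-step gradient ascent with a finite-difference gradient; it is a generic stand-in rather than the authors' implementation, and the step size, tolerance and iteration count are arbitrary choices:

    -- Maximise a one-dimensional objective ll by fixed-step gradient ascent.
    gradientAscent :: (Double -> Double) -> Double -> Double -> Int -> Double
    gradientAscent ll step x0 nIter = iterate update x0 !! nIter
      where
        eps      = 1e-6
        grad x   = (ll (x + eps) - ll (x - eps)) / (2 * eps)
        update x = x + step * grad x

    -- e.g. gradientAscent (\x -> negate ((x - 3) ** 2)) 0.1 0 200 ~ 3.0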
The estimator θ̃_{n,2} is very robust to noise. Indeed, equation (26) shows that the only change in the covariance matrix formula is to replace the power spectrum S_X by S_Z = S_X + σ_W²/θ_{n,2}. The additive constant term does not impair the estimator as long as it is small in comparison with the maximum values of S_X.

Moreover, the estimator S̃_X is modified: when computing (1/θ_{n,1}^{1/2}) W̃_y(s − θ_{n,2}, τ_n) on scale s_m, we compute

w_{z,s_m} = w_{x,s_m} + w_{w^*,s_m},

where w_{w^*,s_m} = (1/θ_1^{1/2}) W̃_w(s_m − θ_2, τ) is the wavelet transform of a white noise modulated in amplitude by a^{−1}. Thus a constant term σ̃_W independent of frequency is added to the new spectrum estimator S̃_Z, so that

E\{\tilde S_Z\} = S_{X,\psi} + \tilde\sigma_W^2, \qquad \text{where} \qquad \tilde\sigma_W^2 = \sigma_W^2\,\frac{1}{N_\tau}\sum_{n=1}^{N_\tau}\frac{1}{\theta_{n,1}}.
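In other words, the noise floor added to the spectrum estimate is an average of σ_W²/θ_{n,1} over the analysis times; a one-line sketch (hypothetical variable names, not the paper's code) makes the computation explicit:

    -- sigma~_W^2 = sigma_W^2 * (1/Ntau) * sum_n 1/theta_{n,1}
    sigmaTildeW2 :: Double -> [Double] -> Double
    sigmaTildeW2 sigmaW2 theta1s =
      sigmaW2 * sum (map recip theta1s) / fromIntegral (length theta1s)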
D. Extension: estimation of other deformations
To describe other non-stationary behaviors of audio signals, other operators can be investigated. For example, the combination of time warping and frequency modulation can be considered, as was done in [21]; we briefly account for this case here for the sake of completeness. Let α ∈ C² be a smooth function, and define the frequency modulation operator M_α by

M_\alpha x(t) = e^{2i\pi\alpha(t)}\,x(t). \qquad (27)

The deformation model in [21] is of the form

Y = A_a M_\alpha D_\gamma X. \qquad (28)
To perform joint estimation of amplitude modulation, frequency modulation and time warping at each time, a suitable time-scale-frequency transform V is introduced, defined as V_X(s, ν, τ) = ⟨X, ψ_{sντ}⟩, with ψ_{sντ} = T_τ M_ν D_s ψ. In that case, an approximation theorem similar to Theorem 1 can be obtained, from which the corresponding log-likelihood can be written. At fixed time τ, the estimation strategy is the same as before, but the parameter space is of higher dimension, and the extra parameter θ_3 = α′(τ) complicates the log-likelihood maximization. In particular, the choice of the discretization of the two scale and frequency variables s and ν influences the performances of the estimator, in particular the Cramér-Rao bound.
IV. NUMERICAL RESULTS
We now turn to numerical simulations and applications. A main ingredient is the
choice of the wavelet transform. Here we shall always use the sharp wavelet ψ♯ defined
in (2) and set the scale constant q to q = 2.
We will systematically compare our approach to simple estimators for amplitude modulation and time warping, commonly used in applications and defined below (a short sketch of both baselines follows the list).

• Amplitude modulation: we use as baseline estimator of a(τ_n)² the average energy θ̃_{n,1}^{(B)} defined as follows:

\tilde\theta_{n,1}^{(B)} = \frac{1}{M_s}\,\|w_{y,\tau_n}\|^2.

This amounts to replacing the estimated covariance matrix in (17) by the identity matrix. Notice that θ̃_{n,1}^{(B)} does not depend on the time warping estimator, and can be computed directly on the observation.

• Time warping: the baseline estimator θ̃_{n,2}^{(B)} is the scalogram scale center of mass defined as follows:

\tilde\theta_{n,2}^{(B)} = C_0 + \frac{1}{\|w_{y,\tau_n}\|^2}\sum_{m=1}^{M_s} s[m]\,|w_{y,\tau_n}[m]|^2.

C_0 is chosen such that θ̃_{n,2}^{(B)} is a zero mean vector.
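The following sketch makes both baselines concrete (not the authors' code; for simplicity the wavelet coefficients are taken real-valued, whereas in practice they are complex and their moduli are used):

    -- Average energy: baseline estimate of a(tau_n)^2 at one analysis time.
    baselineTheta1 :: [Double] -> Double
    baselineTheta1 w = sum (map (\x -> x * x) w) / fromIntegral (length w)

    -- Scalogram scale centre of mass: baseline estimate of the warping
    -- parameter, given the scale grid s[m], the coefficients and the offset C0.
    baselineTheta2 :: Double -> [Double] -> [Double] -> Double
    baselineTheta2 c0 scales w =
      c0 + sum (zipWith (\s x -> s * x * x) scales w) / sum (map (\x -> x * x) w)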
Numerical evaluation is performed on both synthetic signals and deformations and real
audio signals.
A. Synthetic signal
We first evaluate the performances of the algorithm on a synthetic signal. This allows
us to compare variance and bias with their theoretical values.
The simulated signal has length N_τ = 2^16 samples, sampled at F_s = 8 kHz (meaning the signal duration is t_F = (N_τ − 1)/F_s ≈ 8.2 s). The spectrum S_X is written as S_X = S_1 + S_2, where

S_l(\nu) = 1 + \cos\big(2\pi(\nu - \nu_0^{(l)})/\Delta_\nu^{(l)}\big) \quad \text{if } |\nu - \nu_0^{(l)}| < \Delta_\nu^{(l)}/2,

and vanishes elsewhere (for l ∈ {1, 2}). The amplitude modulation a is a sine wave a(t) = a_0(1 + a_1\cos(2\pi t/T_1)), where a_0 is chosen such that t_F^{-1}\int_0^{t_F} a^2(t)\,dt = 1.
Fig. 1. Joint amplitude modulation/time warping estimation on a synthetic signal. Top: amplitude modulation estimation (a_1 = 0.4 and T_1 = t_F/3). Bottom: warping estimation (T_2 = t_F/2 and T_3 = t_F/2).
TABLE I. Mean square error of the estimation methods for both deformations.

Estimation method | Amplitude modulation | Time warping
Baseline          | 2.01 × 10^{-1}       | 2.32 × 10^{-2}
Algorithm 1       | 7.01 × 10^{-2}       | 4.91 × 10^{-4}
The time warping function γ is such that \log_q(\gamma'(t)) = \Gamma + \cos(2\pi t/T_2)\,e^{-t/T_3}, where Γ is chosen such that t_F^{-1}\int_0^{t_F}\gamma'(t)\,dt = 1.
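For concreteness, the synthetic spectrum, amplitude modulation and warping profile can be written down directly. The sketch below uses the bump parameters quoted in the caption of Fig. 2 and treats a_0 and Γ as externally supplied normalisation constants; it is an illustration, not the simulation code:

    -- One raised-cosine spectral bump centred at nu0 with width dnu.
    bump :: Double -> Double -> Double -> Double
    bump nu0 dnu nu
      | abs (nu - nu0) < dnu / 2 = 1 + cos (2 * pi * (nu - nu0) / dnu)
      | otherwise                = 0

    spectrumSX :: Double -> Double
    spectrumSX nu = bump 600 200 nu + bump 1200 400 nu

    -- Amplitude modulation a(t) and warping profile log_q gamma'(t).
    amplitude :: Double -> Double -> Double -> Double -> Double
    amplitude a0 a1 t1 t = a0 * (1 + a1 * cos (2 * pi * t / t1))

    logqGammaPrime :: Double -> Double -> Double -> Double -> Double
    logqGammaPrime gam t2 t3 t = gam + cos (2 * pi * t / t2) * exp (-t / t3)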
The estimation algorithm is implemented with Ms = 106 and p = 7. Results are
shown in Fig. 1 and compared with baseline estimations. For the sake of visibility,
the baseline estimator of the amplitude modulation (which is very oscillatory) is not
displayed, but numerical assessments are provided in Table I, which gives MSEs for the
different estimations. The proposed algorithm is clearly more precise than the baseline algorithm; furthermore, its precision is well accounted for by the Cramér-Rao bound: in Fig. 1, the estimate is essentially contained within the 95 % confidence interval provided by the CRLB (assuming Gaussianity and unbiasedness).
The left hand side of Fig. 2 displays the estimated spectrum given by the algorithm with formula (20). The agreement with the actual spectrum is very good, with a slight enlargement effect due to the filtering by |ψ̂|². The right hand side of Fig. 2 gives the evolution of the stopping criterion (21) with iterations.
Fig. 2. Left: Spectrum estimation (ν_0^{(1)} = 600 Hz, ∆_ν^{(1)} = 200 Hz, ν_0^{(2)} = 1.2 kHz, ∆_ν^{(2)} = 400 Hz): actual (dash-dot blue line) and estimated (solid red line) spectra. Right: Stopping criterion evolution.
Numerical results show that time warping estimation converges faster than amplitude modulation estimation. Nevertheless, when fixing the stopping criterion to 0.1 %, only 7 iterations are necessary for the algorithm to converge.
B. Application to Doppler estimation
After studying the influence of the various parameters, let us now turn to a real life
audio example. The analyzed sound is a recording (from a fixed location) from a racing
car, moving with constant speed. The car engine sound is then deformed by the Doppler
effect, which results in time warping the sound emitted by the car. Besides, as the car is moving, the closer the car is to the microphone, the larger the amplitude of the recorded sound. Thus, our model fits this signal well.
The wavelet transforms of the original signal and the two estimations of the underlying stationary signal are shown in Fig. 3. While the estimation of time warping
only corrects the displacement of wavelet coefficients in the time-scale domain, the joint
estimation of time warping and amplitude modulation also approximately corrects nonstationary variations of the amplitudes.
The physical relevance of the estimated time warping function can be verified. Indeed,
denote by V the (constant) speed of the car and by c the sound velocity. Fixing the time
origin to the time at which the car passes in front of the observer at distance d, the time
Fig. 3. Doppler estimation. Top left: Scalogram of the original signal. Top right: Scalogram of the unwarped and unmodulated signal. Bottom left: Scalogram of the unwarped signal. Bottom right: Estimated time warping compared with the theoretical value given in (29).
warping function due to the Doppler effect can be shown to be

\gamma'(t) = \frac{c^2}{c^2 - V^2}\left(1 - \frac{V^2 t}{\sqrt{d^2(c^2 - V^2) + (cVt)^2}}\right). \qquad (29)
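Formula (29) is easy to evaluate directly; the sketch below uses the values d = 5 m and V = 54 m/s quoted just below, and assumes a speed of sound of c = 340 m/s (the latter is our assumption, not stated in the text):

    -- Theoretical Doppler warping derivative gamma'(t) of equation (29).
    dopplerGammaPrime :: Double -> Double -> Double -> Double -> Double
    dopplerGammaPrime c v d t =
      c * c / (c * c - v * v)
        * (1 - v * v * t / sqrt (d * d * (c * c - v * v) + (c * v * t) ** 2))

    -- dopplerGammaPrime 340 54 5 t decreases from about c/(c-V) ~ 1.19 for
    -- large negative t to c/(c+V) ~ 0.86 for large positive t, passing
    -- through c^2/(c^2-V^2) ~ 1.03 at t = 0.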
We plot in Fig. 3 (bottom right) the estimation γ̃′ compared with its theoretical value
where d = 5 m and V = 54 m/s. Clearly the estimate is close to the corresponding
theoretical curve obtained with these data, which are therefore realistic values.
Nevertheless, a closer look at the scalograms in Fig. 3 shows that the amplitude correction is not perfect, due to the presence of noise and to the fact that the model is still too simple: the amplitude modulation actually depends on frequency, which is not accounted for.
V. CONCLUSIONS
We have discussed in this paper extensions of methods and algorithms described earlier in [9], [22], [20], [21] for the joint estimation of the deformation operator and the power spectrum of deformed stationary signals, a problem already addressed in [19] with a different approach. The main improvements described in this paper concern the following two points:
1) the extension of the model to deformations that include amplitude modulation, and of the algorithm to their joint estimation;
2) a statistical study of the estimators and of the performances of the algorithm.
The proposed approach was validated on numerical simulations and an application to
Doppler estimation.
The results presented here show that the proposed extensions result in a significant
improvement in terms of precision, and a better theoretical control. In particular, the continuous parameter estimation procedure avoids quantization effects that were present
in [20] where the parameter space was discrete and the estimation based on exhaustive
search. Regarding the approach of [19], its domain of validity seems to be limited to
small scales (i.e. high frequency) signals, which is not the case here.
Contrary to [19] our approach is based on (approximate) maximum likelihood estimation in the Gaussian framework. Because of our choice to disregard time correlations, the
estimates obtained here generally present spurious fluctuations, which can be smoothed
out by appropriate filtering. A natural extension of our approach would be to introduce
a smoothness prior that would avoid such filtering steps when necessary.
The code and datasets used to produce the numerical results of this paper are available
at the web site
https://cv.archives-ouvertes.fr/bruno-torresani
APPENDIX A
PROOF OF THEOREM 1
To simplify notations, let B_γ denote the operator D_γ/\sqrt{\gamma'}. We split the approximation error as

\varepsilon(s,\tau) = \langle A_a D_\gamma X, \psi_{s\tau}\rangle - \langle \tilde A_a^\tau \tilde D_\gamma^\tau X, \psi_{s\tau}\rangle
 = a(\tau)\,\varepsilon^{(1)}(s,\tau) + \sqrt{\gamma'(\tau)}\,\varepsilon^{(2)}(s,\tau) + \varepsilon^{(3)}(s,\tau),

where

\varepsilon^{(1)}(s,\tau) = \big\langle\big(\sqrt{\gamma'} - \sqrt{\gamma'(\tau)}\big)B_\gamma X,\ \psi_{s\tau}\big\rangle = \big\langle X,\ B_\gamma^{-1}\big(\sqrt{\gamma'} - \sqrt{\gamma'(\tau)}\big)\psi_{s\tau}\big\rangle,

\varepsilon^{(2)}(s,\tau) = \big\langle\big(B_\gamma - \tilde B_\gamma^\tau\big)X,\ \psi_{s\tau}\big\rangle = \big\langle X,\ \big(B_\gamma^{-1} - (\tilde B_\gamma^\tau)^{-1}\big)\psi_{s\tau}\big\rangle,

\varepsilon^{(3)}(s,\tau) = \big\langle\big(A_a\tilde A_a^{-1} - 1\big)\tilde D_\gamma^\tau X,\ \psi_{s\tau}\big\rangle = \big\langle X,\ (\tilde D_\gamma^\tau)^{-1}\big(A_a\tilde A_a^{-1} - 1\big)\psi_{s\tau}\big\rangle.

Furthermore, the triangle inequality gives:

E\{|\varepsilon(s,\tau)|^2\} \le \left( C_a\sqrt{E\{|\varepsilon^{(1)}(s,\tau)|^2\}} + C_\gamma\sqrt{E\{|\varepsilon^{(2)}(s,\tau)|^2\}} + \sqrt{E\{|\varepsilon^{(3)}(s,\tau)|^2\}} \right)^{2}. \qquad (30)
Let us now determine an upper bound for each error term. To this end, they are written as follows:

E\{|\varepsilon^{(k)}(s,\tau)|^2\} = \int_0^\infty S_X(\xi)\,\big|\hat f_{s\tau}^{(k)}(\xi)\big|^2\,d\xi, \qquad k \in \{1,2,3\}.
Concerning the first error term, a Taylor expansion of \sqrt{\gamma'} around τ gives

\big|\hat f_{s\tau}^{(1)}(\xi)\big| = \left|\int_{\mathbb R}\big(\sqrt{\gamma'(t)} - \sqrt{\gamma'(\tau)}\big)\psi_{s\tau}(t)\,e^{-2i\pi\gamma(t)\xi}\,dt\right|
 \le \int_{\mathbb R}\left\|\frac{\gamma''}{2\sqrt{\gamma'}}\right\|_\infty |t-\tau|\,|\psi_{s\tau}(t)|\,dt \le q^{3s/2}\,\frac{\|\gamma''\|_\infty}{2\sqrt{c_\gamma}}\,I_\psi,

where I_\psi = \int_{\mathbb R}|t\,\psi(t)|\,dt. Furthermore, the localization assumption on ψ allows us to write

I_\psi \le 2\int_0^\infty\frac{t}{1+t^\beta}\,dt \le 2\left(\int_0^1 t\,dt + \int_1^\infty\frac{1}{t^{\beta-1}}\,dt\right) = \frac{\beta}{\beta-2}.

Finally, we can control the first error term as follows:

E\{|\varepsilon^{(1)}(s,\tau)|^2\} \le \left(q^{3s/2}\,\frac{\|\gamma''\|_\infty}{2\sqrt{c_\gamma}}\,\frac{\beta\,\sigma_X}{\beta-2}\right)^{2}.
Concerning the second error term, we have

\hat f_{s\tau}^{(2)}(\xi) = \int_{\mathbb R}\big(e^{-2i\pi\gamma(t)\xi} - e^{-2i\pi(\gamma(\tau)+(t-\tau)\gamma'(\tau))\xi}\big)\psi_{s\tau}(t)\,dt,

so that

\big|\hat f_{s\tau}^{(2)}(\xi)\big| \le \int_{\mathbb R}\big|1 - e^{-2i\pi(\gamma(\tau)+(t-\tau)\gamma'(\tau)-\gamma(t))\xi}\big|\,|\psi_{s\tau}(t)|\,dt \le \int_{\mathbb R}2\left|\sin\Big(\frac{\pi}{2}\,\xi\,(t-\tau)^2\gamma''(t^*)\Big)\right||\psi_{s\tau}(t)|\,dt,

for some t* between t and τ. Besides, we have |sin(u)| ≤ |u| and |sin(u)| ≤ 1, so that

\big|\hat f_{s\tau}^{(2)}(\xi)\big| \le 2q^{s/2}\left(\frac{\pi}{2}\,\xi\,\|\gamma''\|_\infty\,q^{2s}\int_J t^2|\psi(t)|\,dt + \int_{\mathbb R\setminus J}|\psi(t)|\,dt\right),

where J = [−T, T]. One can prove that the value of T minimizing the right-hand side of the latter equation is T = \big(\frac{\pi}{2}\,\xi\,\|\gamma''\|_\infty\,q^{2s}\big)^{-1/(\beta+2)}. Therefore, we have:

\big|\hat f_{s\tau}^{(2)}(\xi)\big| \le q^{\frac{5\beta-2}{2(\beta+2)}}\,\frac{4(\beta+2)}{3(\beta-1)}\left(\frac{\pi}{2}\,\xi\,\|\gamma''\|_\infty\right)^{\frac{\beta-1}{\beta+2}}.

Finally, we can control the second error term as follows:

E\{|\varepsilon^{(2)}(s,\tau)|^2\} \le q^{\frac{5\beta-2}{\beta+2}}\left(\frac{4(\beta+2)}{3(\beta-1)}\left(\frac{\pi}{2}\,\|\gamma''\|_\infty\right)^{\frac{\beta-1}{\beta+2}}\right)^{2} I_X^{(\rho)}.
Concerning the third error term, we have

\big|\hat f_{s\tau}^{(3)}(\xi)\big| = \left|\int_{\mathbb R}\sqrt{\gamma'(t)}\left(\frac{a(t)}{a(\tau)} - 1\right)\psi_{s\tau}(t)\,e^{-2i\pi\gamma(t)\xi}\,dt\right|
 \le C_\gamma^{1/2}\,\frac{\|a'\|_\infty}{c_a}\int_{\mathbb R}|t-\tau|\,|\psi_{s\tau}(t)|\,dt = q^{3s/2}\,C_\gamma^{1/2}\,\frac{\|a'\|_\infty}{c_a}\,I_\psi.

Finally, we can control the third error term as follows:

E\{|\varepsilon^{(3)}(s,\tau)|^2\} \le \left(q^{3s/2}\,\frac{\sqrt{C_\gamma}\,\|a'\|_\infty}{c_a}\,\frac{\beta\,\sigma_X}{\beta-2}\right)^{2}.
To conclude the proof, the three error terms in equation (30) are replaced by their upper bounds to obtain the approximation error given in the theorem.
APPENDIX B
PROOF OF PROPOSITION 1
Let w̃_{x,s_m}^{(k)} ∈ ℂ^{N_τ} denote the estimation of w_{x,s_m} after k iterations of the algorithm. Considering equation (18), we have w̃_{x,s_m}^{(k)} = \big(\tilde\theta_1^{(k)}\big)^{-1/2}\,\tilde W_y\big(s_m - \tilde\theta_2^{(k)}, \tau\big), thus

E\big\{\tilde S_X^{(k)}(q^{-s_m}\omega_0)\big\} = \frac{1}{N_\tau\|\psi\|_2^2}\,E\big\{\big\|\tilde w_{x,s_m}^{(k)}\big\|^2\big\}.

To simplify notations, let us introduce some variables. We define s_m^{(\xi)} = s_m + \log_q(\xi) and h(x) = \varphi_1(q^x) = q^x|\hat\psi(q^x)|^2 for x ∈ ℝ.
By means of the covariance expression given in equation (12) we can write

E\big\{\|\tilde w_{x,s_m}^{(k)}\|^2\big\}
 = \sum_{n=1}^{N_\tau}\frac{1}{\tilde\theta_{n,1}^{(k)}}\,E\Big\{\tilde W_y\big(s_m - \tilde\theta_{n,2}^{(k)}, \tau_n\big)\,\overline{\tilde W_y\big(s_m - \tilde\theta_{n,2}^{(k)}, \tau_n\big)}\Big\}
 = \sum_{n=1}^{N_\tau}\frac{\theta_{n,1}}{\tilde\theta_{n,1}^{(k)}}\int_0^\infty\frac{S_X(\xi)}{\xi}\,h\big(s_m^{(\xi)} + \theta_{n,2} - \tilde\theta_{n,2}^{(k)}\big)\,d\xi
 = \int_0^\infty\frac{S_X(\xi)}{\xi}\sum_{n=1}^{N_\tau}\frac{\theta_{n,1}}{\tilde\theta_{n,1}^{(k)}}\,h\big(s_m^{(\xi)} + \theta_{n,2} - \tilde\theta_{n,2}^{(k)}\big)\,d\xi.
Let us now split the bias into two terms such that b_{S_X}^{(k)}(m) = g_1(m) + g_2(m), where g_1 and g_2 are defined as

g_1(m) = \frac{N_\tau^{-1}}{\|\psi\|_2^2}\int_0^\infty\frac{S_X(\xi)}{\xi}\sum_{n=1}^{N_\tau}\left(\frac{\theta_{n,1}}{\tilde\theta_{n,1}^{(k)}} - 1\right) h\big(s_m^{(\xi)} + \theta_{n,2} - \tilde\theta_{n,2}^{(k)}\big)\,d\xi,

g_2(m) = \frac{N_\tau^{-1}}{\|\psi\|_2^2}\int_0^\infty\frac{S_X(\xi)}{\xi}\sum_{n=1}^{N_\tau}\Big( h\big(s_m^{(\xi)} + \theta_{n,2} - \tilde\theta_{n,2}^{(k)}\big) - h\big(s_m^{(\xi)}\big)\Big)\,d\xi.
Regarding the first term, we directly have

|g_1(m)| \le \frac{\|h\|_\infty}{\|\psi\|_2^2}\,\frac{1}{N_\tau}\sum_{n=1}^{N_\tau}\frac{\big|\theta_{n,1} - \tilde\theta_{n,1}^{(k)}\big|}{\tilde\theta_{n,1}^{(k)}}\int_0^\infty\frac{S_X(\xi)}{\xi}\,d\xi
 \le \frac{\|h\|_\infty}{\|\psi\|_2^2\,c_{\theta_1}}\,\big\|\theta_1 - \tilde\theta_1^{(k)}\big\|_\infty\,J_X.

Besides, we have \|h\|_\infty = \|\varphi_1\|_\infty, and the smoothness and decay assumptions on ψ̂ allow us to write \varphi_1(u) = O_{u\to\infty}(u^{1-2\eta}) \to 0 as u \to \infty. Then \varphi_1 is bounded and K_1' < \infty. This yields

|g_1(m)| \le \frac{J_X K_1'}{\|\psi\|_2^2}\,\big\|\theta_1 - \tilde\theta_1^{(k)}\big\|_\infty.
Regarding the second term, a Taylor expansion of h around s_m^{(\xi)} gives

|g_2(m)| \le \frac{N_\tau^{-1}}{\|\psi\|_2^2}\int_0^\infty\frac{S_X(\xi)}{\xi}\sum_{n=1}^{N_\tau}\Big| h\big(s_m^{(\xi)} + \theta_{n,2} - \tilde\theta_{n,2}^{(k)}\big) - h\big(s_m^{(\xi)}\big)\Big|\,d\xi
 \le \frac{\|h'\|_\infty}{\|\psi\|_2^2}\,\frac{1}{N_\tau}\sum_{n=1}^{N_\tau}\big|\theta_{n,2} - \tilde\theta_{n,2}^{(k)}\big|\int_0^\infty\frac{S_X(\xi)}{\xi}\,d\xi
 \le \frac{\|h'\|_\infty}{\|\psi\|_2^2}\,\big\|\theta_2 - \tilde\theta_2^{(k)}\big\|_\infty\,J_X.

Furthermore, for all x ∈ ℝ,

|h'(x)|\,\ln(q)^{-1} = \big|q^x\varphi_1'(q^x)\big| \le q^x\big|\hat\psi(q^x)\big|^2 + 2q^{2x}\big|\hat\psi(q^x)\big|\,\big|\hat\psi'(q^x)\big| \le \|\varphi_1\|_\infty + 2\|\hat\psi'\|_\infty\|\varphi_2\|_\infty = \ln(q)^{-1}K_2'.

Besides, the decay assumption on ψ̂ gives |\varphi_2(u)| = O_{u\to\infty}(u^{2-\eta}) \to 0 as u \to \infty because η > 2. Then \varphi_2 is bounded and K_2' < \infty. This yields

|g_2(m)| \le \frac{J_X K_2'}{\|\psi\|_2^2}\,\big\|\theta_2 - \tilde\theta_2^{(k)}\big\|_\infty.

The proof is concluded by summing up the upper bounds of |g_1| and |g_2| to obtain the upper bound of b_{S_X}^{(k)}. Notice that this bound does not depend on m.
REFERENCES
[1] L. H. Koopmans, The spectral analysis of time series, 2nd ed., ser. Probability and mathematical statistics, Z. W.
Birnbaum and E. Lukacs, Eds. Elsevier, 1995, vol. 22.
[2] M. B. Priestley, Spectral analysis and time series, ser. Probability and mathematical statistics.
Academic Press,
1982, no. v. 1-2.
[3] ——, Non-linear and non-stationary time series analysis, 1988.
[4] S. Mallat, G. Papanicolaou, and Z. Zhang, “Adaptive covariance estimation of locally stationary processes,”
Ann. Statist., vol. 26, no. 1, pp. 1–47, 02 1998. [Online]. Available: http://dx.doi.org/10.1214/aos/1030563977
[5] G. P. Nason, R. von Sachs, and G. Kroisandt, “Wavelet processes and adaptive estimation of the evolutionary
wavelet spectrum,” Journal of the Royal Statistical Society. Series B (Statistical Methodology), vol. 62, no. 2, pp.
271–292, 2000. [Online]. Available: http://www.jstor.org/stable/3088859
[6] R. Dahlhaus, “A likelihood approximation for locally stationary processes,” Ann. Statist., vol. 28, no. 6, pp.
1762–1794, 12 2000. [Online]. Available: http://dx.doi.org/10.1214/aos/1015957480
[7] R. von Sachs and M. H. Neumann, “A wavelet-based test for stationarity,” Journal of Time Series Analysis,
vol. 21, no. 5, pp. 597–613, 2000. [Online]. Available: http://dx.doi.org/10.1111/1467-9892.00200
[8] S. Wisdom, L. Atlas, and J. Pitton, “Extending coherence time for analysis of modulated random processes,” in
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2014, pp. 340–344.
[9] H. Omer and B. Torrésani, “Estimation of frequency modulations on wideband signals; applications
to audio signal analysis,” in Proceedings of the 10th International Conference on Sampling Theory and
Applications (SampTA), G. Pfander, Ed.
Eurasip Open Library, 2013, pp. 29–32. [Online]. Available:
http://hal.archives-ouvertes.fr/hal-00822186
[10] S. T. Wisdom, “Improved statistical signal processing of nonstationary random processes using time-warping,”
Master’s thesis, School of Electrical Engineering, University of Washington, USA, March 2014.
[11] R. J. McAulay and T. F. Quatieri, “Speech analysis/synthesis based on a sinusoidal representation,” IEEE
Transactions on Acoustics, Speech, Signal Processing, vol. 34, pp. 744–754, August 1986.
[12] M. Lagrange, S. Marchand, and J. B. Rault, “Enhancing the tracking of partials for the sinusoidal modeling of
polyphonic sounds,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 5, pp. 1625–1634,
July 2007.
[13] P. Depalle, G. Garcı́a, and X. Rodet, “Tracking of partials for additive sound synthesis using hidden markov
models,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-93),
vol. 1, April 1993, pp. 225–228.
[14] S. Godsill and M. Davy, “Bayesian harmonic models for musical pitch estimation and analysis,” in The IEEE
International Conference on Acoustics, Speech, and Signal Processing (ICASSP 02), 2002, pp. 1769–1772.
[15] R. Carmona, W. L. Hwang, and B. Torrésani, Practical time-frequency analysis: Gabor and Wavelet Transforms With
an Implementation in S, C. K. Chui, Ed. Academic Press, 1998.
[16] F. Auger, P. Flandrin, Y.-T. Lin, S. McLaughlin, S. Meignen, T. Oberlin, and H.-T. Wu, “Time-frequency
reassignment and synchrosqueezing: An overview,” IEEE Signal Processing Magazine, vol. 30, no. 6, pp. 32–41,
2013.
[17] H.-T. Wu, “Instantaneous frequency and wave shape function (i),” Applied and Computational Harmonic Analysis,
vol. 35, pp. 181–199, 2013.
[18] M. Clerc and S. Mallat, “The texture gradient equation for recovering shape from texture,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 536–549, 2002.
[19] ——, “Estimating deformations of stationary processes,” Ann. Statist., vol. 31, no. 6, pp. 1772–1821, 12 2003.
[Online]. Available: http://dx.doi.org/10.1214/aos/1074290327
[20] H. Omer and B. Torrésani, “Time-frequency and time-scale analysis of deformed stationary processes, with
application to non-stationary sound modeling,” Applied and Computational Harmonic Analysis, vol. 43, no. 1, pp.
1 – 22, 2017. [Online]. Available: https://hal.archives-ouvertes.fr/hal-01094835
[21] A. Meynard and B. Torrésani, “Spectral estimation for non-stationary signal classes,” in Sampling
Theory and Applications, ser. Proceedings of SampTA17, Tallinn, Estonia, Jul. 2017. [Online]. Available:
https://hal.archives-ouvertes.fr/hal-01475534
[22] H. Omer, “Modèles de déformation de processus stochastiques généralisés. application à l’estimation des non
stationnarités dans les signaux audio.” Ph.D. dissertation, Aix-Marseille Université, 2015.
[23] P. Welch, “The use of fast fourier transform for the estimation of power spectra: A method based on time
averaging over short, modified periodograms,” IEEE Transactions on Audio and Electroacoustics, vol. 15, no. 2, pp.
70–73, Jun 1967.
[24] P. Stoica and R. Moses, Introduction to Spectral Analysis. Prentice Hall, 1997.
| 10 |
Logical Methods in Computer Science
Vol. 8 (2:12) 2012, pp. 1–27
www.lmcs-online.org
Submitted Jan. 7, 2011; published Jun. 19, 2012
GENERIC FIBRATIONAL INDUCTION ∗
NEIL GHANI, PATRICIA JOHANN, AND CLÉMENT FUMEX
University of Strathclyde, Glasgow G1 1XH, UK
e-mail address: {Neil.Ghani, Patricia.Johann, Clement.Fumex}@cis.strath.ac.uk
Abstract. This paper provides an induction rule that can be used to prove properties of
data structures whose types are inductive, i.e., are carriers of initial algebras of functors.
Our results are semantic in nature and are inspired by Hermida and Jacobs’ elegant algebraic formulation of induction for polynomial data types. Our contribution is to derive,
under slightly different assumptions, a sound induction rule that is generic over all inductive types, polynomial or not. Our induction rule is generic over the kinds of properties to
be proved as well: like Hermida and Jacobs, we work in a general fibrational setting and
so can accommodate very general notions of properties on inductive types rather than just
those of a particular syntactic form. We establish the soundness of our generic induction
rule by reducing induction to iteration. We then show how our generic induction rule can
be instantiated to give induction rules for the data types of rose trees, finite hereditary
sets, and hyperfunctions. The first of these lies outside the scope of Hermida and Jacobs’
work because it is not polynomial, and as far as we are aware, no induction rules have
been known to exist for the second and third in a general fibrational framework. Our
instantiation for hyperfunctions underscores the value of working in the general fibrational
setting since this data type cannot be interpreted as a set.
1. Introduction
Iteration operators provide a uniform way to express common and naturally occurring
patterns of recursion over inductive data types. Expressing recursion via iteration operators makes code easier to read, write, and understand; facilitates code reuse; guarantees
properties of programs such as totality and termination; and supports optimising program transformations such as fold fusion and short cut fusion. Categorically, iteration
operators arise from initial algebra semantics of data types, in which data types are regarded as carriers of initial algebras of functors. Lambek’s Lemma ensures that the carrier
of the initial algebra of F is its least fixed point µF , and initiality ensures that, given
any F -algebra h : F A → A, there is a unique F -algebra homomorphism, denoted f old h,
from the initial algebra in : F (µF ) → µF to that algebra. For each functor F , the map
1998 ACM Subject Classification: F.3.2, D.3.1.
Key words and phrases: Induction, algebraic semantics of data types, fibrations, category theory.
∗ This paper is a revised and significantly expanded version of [9].
This research is partially supported by EPSRC grant EP/C0608917/1.
DOI:10.2168/LMCS-8 (2:12) 2012. © N. Ghani, P. Johann, and C. Fumex, Creative Commons.
fold : (F A → A) → µF → A is the iteration operator for the data type µF . Initial algebra
semantics thus provides a well-developed theory of iteration which is...
• ...principled, in that it is derived solely from the initial algebra semantics of data types.
This is important because it helps ensure that programs have rigorous mathematical
foundations that can be used to ascertain their meaning and correctness.
• ...expressive, and so is applicable to all inductive types — i.e., to every type which is the
carrier of an initial algebra of a functor — rather than just to syntactically defined classes
of data types such as polynomial data types.
• ...correct, and so is valid in any model — set-theoretic, domain-theoretic, realisability,
etc. — in which data types are interpreted as carriers of initial algebras.
Because induction and iteration are closely linked — induction is often used to prove properties of functions defined by iteration, and the correctness of induction rules is often established by reducing it to that of iteration — we may reasonably expect that initial algebra
semantics can be used to derive a principled, expressive, and correct theory of induction
for data types as well. In most treatments of induction, given a functor F together with a
property P to be proved about data of type µF , the premises of the induction rule for µF
constitute an F -algebra with carrier Σx : µF. P x. The conclusion of the rule is obtained by
supplying such an F -algebra as input to the iteration operator for µF . This yields a function from µF to Σx : µF. P x from which a function of type ∀x : µF. P x can be obtained. It
has not, however, been possible to characterise F -algebras with carrier Σx : µF. P x without
additional assumptions on F . Induction rules are thus typically derived under the assumption that the functors involved have a certain structure, e.g., that they are polynomial.
Moreover, taking the carriers of the algebras to be Σ-types assumes that properties are
represented as type-valued functions. So while induction rules derived as described above
are both principled and correct, their expressiveness is limited along two dimensions: with
respect to the data types for which they can be derived and the nature of the properties
they can verify.
A more expressive, yet still principled and correct, approach to induction is given by
Hermida and Jacobs [10]. They show how to lift each functor F on a base category of types
to a functor F̂ on a category of properties over those types, and take the premises of the
induction rule for the type µF to be an F̂ -algebra. Hermida and Jacobs work in a fibrational
setting and the notion of property they consider is, accordingly, very general. Indeed, they
accommodate any notion of property that can be suitably fibred over the category of types,
and so overcome one of the two limitations mentioned above. On the other hand, their
approach gives sound induction rules only for polynomial data types, so the limitation on
the class of data types treated remains in their work.
This paper shows how to remove the restriction on the class of data types treated.
Our main result is a derivation of a sound generic induction rule that can be instantiated
to every inductive type, regardless of whether it is polynomial or not. We think this is
important because it provides a counterpart for induction to the existence of an iteration
operator for every inductive type. We take Hermida and Jacobs’ approach as our point of
departure and show that, under slightly different assumptions on the fibration involved, we
can lift any functor on the base category of a fibration to a functor on the total category of
the fibration. The lifting we define forms the basis of our generic induction rule.
The derivation of a generic, sound induction rule covering all inductive types is clearly
an important theoretical result, but it also has practical consequences:
• We show in Example 2 how our generic induction rule can be instantiated to the families
fibration over Set (the fibration most often implicitly used by type theorists and those
constructing inductive proofs with theorem provers) to derive the induction rule for rose
trees that one would intuitively expect. The data type of rose trees lies outside the scope
of Hermida and Jacobs’ results because it is not polynomial. On the other hand, an
induction rule for rose trees is available in the proof assistant Coq, although it is neither
the one we intuitively expect nor expressive enough to prove properties that ought to be
amenable to inductive proof. Indeed, if we define rose trees in Coq by
Node : list rose -> rose
then Coq generates the following induction rule
rose_ind : forall P : rose -> Prop,
(forall l : list rose, P (Node l)) ->
forall r : rose, P r
But to prove a property of a rose tree Node l, we must prove that property assuming only
that l is a list of rose trees, and without recourse to any induction hypothesis. There is,
of course, a presentation of rose trees by mutual recursion as well, but this doesn’t give
the expected induction rule in Coq either. Intuitively, what we expect is an induction
rule whose premise is
forall [r_0, ..., r_n] : list rose,
P(r_0) -> ... -> P(r_n) -> P(Node [r_0, ..., r_n])
The rule we derive for rose trees is indeed the expected one, which suggests that our
derivation may enable automatic generation of more useful induction rules in Coq, rather
than requiring the user to hand code them as is currently necessary.
• We further show in Example 3 how our generic induction rule can be instantiated, again
to the families fibration over Set, to derive a rule for the data type of finite hereditary
sets. This data type is defined in terms of quotients and so lies outside most current
theories of induction.
• Finally, we show in Example 7 how our generic induction rule can be instantiated to the
subobject fibration over ωCP O⊥ to derive a rule for the data type of hyperfunctions.
Because this data type cannot be interpreted as a set, a fibration other than the families
fibration over Set is required; in this case, use of the subobject fibration allows us to
derive an induction rule for admissible subsets of hyperfunctions. The ability to treat the
data type of hyperfunctions thus underscores the importance of developing our results
in the general fibrational framework. Moreover, the functor underlying the data type of
hyperfunctions is not strictly positive [7], so the ability to treat this data type also underscores the advantage of being able to handle a very general class of functors going beyond
simply polynomial functors. As far as we know, induction rules for finite hereditary sets
and hyperfunctions have not previously existed in the general fibrational framework.
Although our theory of induction is applicable to all inductive functors — i.e., to all
functors having initial algebras, including those giving rise to nested types [15], GADTs [21],
indexed containers [1], dependent types [19], and inductive recursive types [6] — our examples show that working in the general fibrational setting is beneficial even if we restrict our
attention to strictly positive data types. We do, however, offer some preliminary thoughts
in Section 5 on the potentially delicate issue of instantiating our general theory with fibrations appropriate for deriving induction rules for specific classes of higher-order functors
of interest. It is also worth noting that the specialisations of our generic induction rule to
polynomial functors in the families fibration over Set coincide exactly with the induction
rules of Hermida and Jacobs. But the structure we require of fibrations generally is slightly
different from that required by Hermida and Jacobs, so while our theory is in essence a
generalisation of theirs, the two are, strictly speaking, incomparable. The structure we
require of our fibrations is, nevertheless, certainly present in all standard fibrational models
of type theory (see Section 4). Like Hermida and Jacobs, we prove our generic induction
rule correct by reducing induction to iteration. A more detailed discussion of when our
induction rules coincide with those of Hermida and Jacobs is given in Section 4.
We take a purely categorical approach to induction in this paper, and derive our generic
induction rule from only the initial algebra semantics of data types. As a result, our work
is inherently extensional. Although translating our constructions into intensional settings
may therefore require additional effort, we expect the guidance offered by the categorical
viewpoint to support the derivation of induction rules for functors that are not treatable at
present. Since we do not use any form of impredicativity in our constructions, and instead
use only the weaker assumption that initial algebras exist, this guidance will be widely
applicable.
The remainder of this paper is structured as follows. To make our results as accessible
as possible, we illustrate them in Section 2 with a categorical derivation of the familiar
induction rule for the natural numbers. In Section 3 we derive an induction rule for the
special case of the families fibration over Set. We also show how this rule can be instantiated
to derive the one from Section 2, and the ones for rose trees and finite hereditary sets
mentioned above. Then, in Section 4 we present our generic fibrational induction rule,
establish a number of results about it, and illustrate it with the aforementioned application
to hyperfunctions. The approach taken in this section is completely different from the
corresponding one in the conference version of the paper [9], and allows us to improve upon
and extend our previous results. Section 5 concludes, discusses possible instantiations of
our generic induction rule for higher-order functors, and offers some additional directions
for future research.
When convenient, we identify isomorphic objects of a category and write = rather
than ≃. We write 1 for the canonical singleton set and denote its single element by · . In
Sections 2 and 3 we assume that types are interpreted as objects in Set, so that 1 also
denotes the unit type in those sections. We write id for identity morphisms in a category
and Id for the identity functor on a category.
2. A Familiar Induction Rule
Consider the inductive data type Nat, which defines the natural numbers and can be specified in a programming language with Haskell-like syntax by
data Nat = Zero | Succ Nat
The observation that Nat is the least fixed point of the functor N on Set — i.e., on the
category of sets and functions — defined by N X = 1 + X can be used to define the
following iteration operator:
foldNat               :  X → (X → X) → Nat → X
foldNat z s Zero      =  z
foldNat z s (Succ n)  =  s (foldNat z s n)
The iteration operator foldNat provides a uniform means of expressing common and naturally occurring patterns of recursion over the natural numbers.
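For instance (a small executable Haskell sketch, with the categorically inspired type variable X replaced by an ordinary type parameter), both addition and conversion to machine integers are instances of foldNat:

    data Nat = Zero | Succ Nat

    foldNat :: x -> (x -> x) -> Nat -> x
    foldNat z _ Zero     = z
    foldNat z s (Succ n) = s (foldNat z s n)

    toInt :: Nat -> Int
    toInt = foldNat 0 (+ 1)

    plus :: Nat -> Nat -> Nat
    plus m = foldNat m Succ        -- add m by iterating Succ

    main :: IO ()
    main = print (toInt (plus (Succ Zero) (Succ (Succ Zero))))   -- prints 3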
Categorically, iteration operators such as foldNat arise from the initial algebra semantics
of data types, in which every data type is regarded as the carrier of the initial algebra of
a functor F . If B is a category and F is a functor on B, then an F -algebra is a morphism
h : F X → X for some object X of B. We call X the carrier of h. For any functor F , the
collection of F -algebras itself forms a category Alg F which we call the category of F -algebras.
In Alg F , an F -algebra morphism between F -algebras h : F X → X and g : F Y → Y is a
map f : X → Y such that the following diagram commutes:
F X --F f--> F Y
  |            |
  h            g
  v            v
  X  ---f--->  Y
When it exists, the initial F -algebra in : F (µF ) → µF is unique up to isomorphism and
has the least fixed point µF of F as its carrier. Initiality ensures that there is a unique
F -algebra morphism fold h : µF → X from in to any F -algebra h : F X → X. This gives
rise to the following iteration operator fold for the inductive type µF :
fold           :  (F X → X) → µF → X
fold h (in t)  =  h (F (fold h) t)
Since fold is derived from initial algebra semantics it is principled and correct. It is also
expressive, since it can be defined for every inductive type. In fact, fold is a single iteration
operator parameterised over inductive types rather than a family of iteration operators, one
for each such type, and the iteration operator foldNat above is the instantiation to Nat of
the generic iteration operator fold .
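In Haskell the generic operator can be rendered as follows (a sketch: Mu is the least fixed point of a functor, and Maybe plays the role of the functor N X = 1 + X):

    newtype Mu f = In (f (Mu f))

    fold :: Functor f => (f a -> a) -> Mu f -> a
    fold h (In t) = h (fmap (fold h) t)

    type Nat = Mu Maybe          -- 1 + X, so Mu Maybe is the natural numbers

    toInt :: Nat -> Int
    toInt = fold alg
      where alg Nothing  = 0
            alg (Just n) = n + 1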
The iteration operator foldNat can be used to derive the standard induction rule for Nat
which coincides with the standard induction rule for natural numbers, i.e., with the familiar
principle of mathematical induction. This rule says that if a property P holds for 0, and if
P holds for n + 1 whenever it holds for a natural number n, then P holds for all natural
numbers. Representing each property of natural numbers as a predicate P : Nat → Set
mapping each n : Nat to the set of proofs that P holds for n, we wish to represent this rule
at the object level as a function indNat with type
∀(P : Nat → Set). P Zero → (∀n : Nat. P n → P (Succ n)) → (∀n : Nat. P n)
Code fragments such as the above, which involve quantification over sets, properties, or
functors, are to be treated as “categorically inspired” within this paper. This is because
quantification over such higher-kinded objects cannot be interpreted in Set. In order to
give a formal interpretation to code fragments like the one above, we would need to work
in a category such as that of modest sets. While the ability to work with functors over
categories other than Set is one of the motivations for working in the general fibrational
setting of Section 4, formalising the semantics of such code fragments would obscure the
central message of this paper. Our decision to treat such fragments as categorically inspired
is justified in part by the fact that the use of category theory to suggest computational constructions has long been regarded as fruitful within the functional programming community
(see, e.g., [2, 3, 18]).
A function indNat with the above type takes as input the property P to be proved, a
proof φ that P holds for Zero, and a function ψ mapping each n : Nat and each proof that P
holds for n to a proof that P holds for Succ n, and returns a function mapping each n : Nat
to a proof that P holds for n, i.e., to an element of P n. We can write indNat in terms
of foldNat — and thus reduce induction for Nat to iteration for Nat — as follows. First
note that indNat cannot be obtained by instantiating the type X in the type of foldNat
to a type of the form P n for a specific n because indNat returns elements of the types
P n for different values n and these types are, in general, distinct from one another. We
therefore need a type containing all of the elements of P n for every n. Such a type can
informally be thought of as the union over n of P n, and is formally given by the dependent
type Σn : Nat. P n comprising pairs (n, p) where n : Nat and p : P n.
The standard approach to defining indNat is thus to apply foldNat to an N -algebra
with carrier Σn : Nat. P n. Such an algebra has components α : Σn : N at. P n and β : Σn :
N at. P n → Σn : N at. P n. Given φ : P Zero and ψ : ∀n. P n → P (Succ n), we choose
α = (Zero, φ) and β (n, p) = (Succ n, ψ n p) and note that f oldN at α β : N at → Σn :
N at. P n. We tentatively take indNat P φ ψ n to be p, where f oldN at α β n = (m, p).
But in order to know that p actually gives a proof for n itself, we must show that m = n.
Fortunately, this follows easily from the uniqueness of foldNat α β. Indeed, we have that
1 + N at
/ 1 + Σn : N at. P n
/ 1 + N at
[α,β]
in
N at
foldNat α β
in
/ Σn : N at. P n
λ(n,p). n
/ Nat
commutes and, by initiality of in, that (λ(n, p). n)◦(f oldN at α β) is the identity map. Thus
n = (λ(n, p). n)(f oldN at α β n) = (λ(n, p). n)(m, p) = m
Letting πP′ be the second projection on dependent pairs involving the predicate P, the induction rule for Nat is thus

indNat        : ∀(P : Nat → Set). P Zero → (∀n : Nat. P n → P (Succ n)) → (∀n : Nat. P n)
indNat P φ ψ  = πP′ ◦ (foldNat (Zero, φ) (λ(n, p). (Succ n, ψ n p)))
As expected, this induction rule states that, for every property P , to construct a proof that
P holds for every n : Nat, it suffices to provide a proof that P holds for Zero, and to show
that, for any n : Nat, if there is a proof that P holds for n, then there is also a proof that
P holds for Succ n.
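The role played by uniqueness in this argument can be checked concretely in a non-dependent Haskell sketch: folding with an algebra whose first component rebuilds the number returns the original number in the first projection (the proofs are replaced here by a plain trace string, since Haskell types cannot express P n):

    data Nat = Zero | Succ Nat deriving (Eq, Show)

    foldNat :: x -> (x -> x) -> Nat -> x
    foldNat z _ Zero     = z
    foldNat z s (Succ n) = s (foldNat z s n)

    -- Algebra on pairs: rebuild the number while accumulating "evidence".
    step :: (Nat, String) -> (Nat, String)
    step (n, e) = (Succ n, e ++ " -> step")

    pairedFold :: Nat -> (Nat, String)
    pairedFold = foldNat (Zero, "base") step

    main :: IO ()
    main = print (fst (pairedFold three) == three)   -- True: m = n
      where three = Succ (Succ (Succ Zero))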
The use of dependent types is fundamental to this formalization of the induction rule
for Nat, but this is only possible because properties to be proved are taken to be set-valued
functions. The remainder of this paper uses fibrations to generalise the above treatment
of induction to arbitrary inductive functors and arbitrary properties which are suitably
fibred over the category whose objects interpret types. In the general fibrational setting,
properties are given axiomatically via the fibrational structure rather than assumed to be
(set-valued) functions.
3. Induction Rules for Predicates over Set
The main result of this paper is the derivation of a sound induction rule that is generic over
all inductive types and which can be used to verify any notion of property that is fibred
over the category whose objects interpret types. In this section we assume that types are
modelled by sets, so the functors we consider are on Set and the properties we consider are
functions mapping data to sets of proofs that these properties hold for them. We make these
assumptions because it allows us to present our derivation in the simplest setting possible,
and also because type theorists often model properties in exactly this way. This makes the
present section more accessible and, since the general fibrational treatment of induction can
be seen as a direct generalisation of the treatment presented here, Section 4 should also be
more easily digestible once the derivation is understood in this special case. Although the
derivation of this section can indeed be seen as the specialisation of that of Section 4 to the
families fibration over Set, no knowledge of fibrations is required to understand it because
all constructions are given concretely rather than in their fibrational forms.
We begin by considering what we might naively expect an induction rule for an inductive
data type µF to look like. The derivation for Nat in Section 2 suggests that, in general, it
should look something like this:
ind : ∀P : µF → Set. ??? → ∀x : µF. P x
But what should the premises — denoted ??? here — of the generic induction rule ind be?
Since we want to construct, for any term x : µF , a proof term of type P x from proof terms
for x’s substructures, and since the functionality of the fold operator for µF is precisely
to compute a value for x : µF from the values for x’s substructures, it is natural to try to
equip P with an F -algebra structure that can be input to fold to yield a mapping of each
x : µF to an element of P x. But this approach quickly hits a snag. Since the codomain
of every predicate P : µF → Set is Set itself, rather than an object of Set, F cannot be
applied to P as is needed to equip P with an F -algebra structure. Moreover, an induction
rule for µF cannot be obtained by applying fold to an F -algebra with carrier P x for any
specific x. This suggests that we should try to construct an F -algebra not for P x for each
term x, but rather for P itself.
Such considerations led Hermida and Jacobs [10] to define a category of predicates P
and a lifting for each polynomial functor F on Set to a functor F̂ on P that respects the
structure of F . They then constructed F̂ -algebras with carrier P to serve as the premises of
their induction rules. The crucial part of their construction, namely the lifting of polynomial
functors, proceeds inductively and includes clauses such as

\widehat{F + G}\,P = F̂ P + Ĝ P \qquad and \qquad \widehat{F \times G}\,P = F̂ P × Ĝ P
The construction of Hermida and Jacobs is very general: they consider functors on bicartesian categories rather than just on Set, and represent properties by bicartesian fibrations
over such categories instead of using the specific notion of predicate from Definition 3.2
below. On the other hand, they define liftings for polynomial functors.
The construction we give in this section is in some sense orthogonal to Hermida and
Jacobs’: we focus exclusively on functors on Set and a particular category of predicates, and
show how to define liftings for all inductive functors on Set, including non-polynomial ones.
In this setting, the induction rule we derive properly extends Hermida and Jacobs’, thus
catering for a variety of data types that they cannot treat. In the next section we derive
analogous results in the general fibrational setting. This allows us to derive sound induction
rules for initial algebras of functors defined on categories other than Set which can be used
to prove arbitrary properties that are suitably fibred over the category interpreting types.
We begin with the definition of a predicate.
Definition 3.1. Let X be a set. A predicate on X is a function P : X → Set mapping
each x ∈ X to a set P x. We call X the domain of P .
We may speak simply of “a predicate P ” if the domain of P is understood. A predicate
P on X can be thought of as mapping each element x of X to the set of proofs that P holds
for x. We now define our category of predicates.
Definition 3.2. The category of predicates P has predicates as its objects. A morphism
from a predicate P : X → Set to a predicate P ′ : X ′ → Set is a pair (f, f ∼ ) : P → P ′ of
functions, where f : X → X ′ and f ∼ : ∀x : X. P x → P ′ (f x). Composition of predicate
morphisms is given by (g, g∼ ) ◦ (f, f ∼ ) = (g ◦ f, λxp. g ∼ (f x)(f ∼ xp)).
Diagrammatically, we have

        f
  X ----------> X′
   \           /
  P \   f∼    / P′
     v       v
       Set
As the diagram indicates, the notion of a morphism from P to P ′ does not require the sets
of proofs P x and P ′ (f x), for any x ∈ X, to be equal. Instead, it requires only the existence
of a function f ∼ which maps, for each x, each proof in P x to a proof in P ′ (f x). We denote
by U : P → Set the forgetful functor mapping each predicate P : X → Set to its domain
X and each predicate morphism (f, f ∼ ) to f .
An alternative to Definition 3.2 would take the category of predicates to be the arrow
category over Set, but the natural lifting in this setting does not indicate how to generalise
liftings to other fibrations. Indeed, if properties are modelled as functions, then every
functor can be applied to a property, and hence every functor can be its own lifting. In the
general fibrational setting, however, properties are not necessarily modelled by functions,
so a functor cannot, in general, be its own lifting. The decision not to use arrow categories
to model properties is thus dictated by our desire to lift functors in a way that indicates
how liftings can be constructed in the general fibrational setting.
We can now give a precise definition of a lifting.
Definition 3.3. Let F be a functor on Set. A lifting of F from Set to P is a functor F̂ on
P such that the following diagram commutes:
P  ---F̂--->  P
|            |
U            U
v            v
Set ---F---> Set
We can decode the definition of F̂ as follows. The object part of F̂ must map each predicate
P : X → Set to a predicate F̂ P : F X → Set, and thus can be thought of type-theoretically
as a function ∀(X : Set). (X → Set) → F X → Set. Of course, F̂ must also act on
morphisms in a functorial manner.
We can now use the definition of a lifting to derive the standard induction rule from
Section 2 for Nat as follows.
Example 1. The data type of natural numbers is µN where N is the functor on Set defined
by N X = 1 + X. A lifting N̂ of N can be defined by sending each predicate P : X → Set
b P : N X → Set given by
to the predicate N
N̂ P (inl ·)
N̂ P (inr n)
=
=
1
Pn
An N̂ -algebra with carrier P : Nat → Set can be given by in : 1 + Nat → Nat and
in∼ : ∀t : 1 + Nat. N̂ P t → P (in t). Since in (inl ·) = 0 and in (inr n) = n + 1, we see that
in∼ consists of an element h1 : P 0 and a function h2 : ∀n : Nat. P n → P (n + 1). Thus, the second component in∼ of an N̂ -algebra with carrier P : Nat → Set and first component in gives the premises of the familiar induction rule from Section 2.
The notion of predicate comprehension is a key ingredient of our lifting. It begins to
explain, abstractly, what the use of Σ-types is in the theory of induction, and is the key
construct allowing us to define liftings for non-polynomial, as well as polynomial, functors.
Definition 3.4. Let P be a predicate on X. The comprehension of P , denoted {P }, is the
type Σx : X. P x comprising pairs (x, p) where x : X and p : P x. The map taking each
predicate P to {P }, and taking each predicate morphism (f, f ∼ ) : P → P ′ to the morphism
{(f, f ∼ )} : {P } → {P ′ } defined by {(f, f ∼ )}(x, p) = (f x, f ∼ x p), defines the comprehension
functor {−} from P to Set.
We are now in a position to define liftings uniformly for all functors:
Definition 3.5. If F is a functor on Set, then the lifting F̂ is the functor on P given as
follows. For every predicate P on X, F̂ P : F X → Set is defined by F̂ P = (F πP )−1 ,
where the natural transformation π : {−} → U is given by πP (x, p) = x. For every predicate
morphism f : P → P ′ , F̂ f = (k, k ∼ ) where k = F U f , and k∼ : ∀y : F X. F̂ P y → F̂ P ′ (k y)
is given by k∼ y z = F {f }z.
In the above definition, note that the inverse image f −1 of f : X → Y is indeed a predicate
P : Y → Set. Thus if P is a predicate on X, then πP : {P } → X and F πP : F {P } → F X.
Thus F̂ P is a predicate on F X, so F̂ is a lifting of F from Set to P. The lifting F̂ captures
an “all” modality, in that it generalises Haskell’s all function on lists to arbitrary data
types. A similar modality is given in [17] for indexed containers.
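A Boolean shadow of this modality is easy to write down in Haskell (a sketch, with decidable predicates standing in for proof-valued ones): for N X = 1 + X the lifted predicate holds vacuously on the left summand, and for the list functor it is exactly the all function mentioned above.

    -- F X = 1 + X, modelled by Maybe
    allN :: (x -> Bool) -> Maybe x -> Bool
    allN _ Nothing  = True       -- corresponds to N-hat P (inl .) = 1
    allN p (Just x) = p x        -- corresponds to N-hat P (inr n) = P n

    -- F X = List X
    allList :: (x -> Bool) -> [x] -> Bool
    allList = all                -- one check per component, as in (F pi_P)^-1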
The lifting in Example 1 is the instantiation of the construction in Definition 3.5 to
the functor N X = 1 + X on Set. Indeed, if P is any predicate, then N̂ P = (N πP )−1 ,
i.e., N̂ P = (id + πP )−1 . Then, since the inverse image of the coproduct of functions is the
coproduct of their inverse images, since id −1 1 = 1, and since πP−1 n = {(n, p) | p : P n} for
all n, we have N̂ P (inl ·) = 1 and N̂ P (inr n) = P n. As we will see, a similar situation
to that for Nat holds in general: for any functor F on Set, the second component of an
F̂ -algebra whose carrier is the predicate P on the data type µF and whose first component
is in gives the premises of an induction rule that can be used to show that P holds for all
data of type µF .
The rest of this section shows that F -algebras with carrier {P } are interderivable with
F̂ -algebras with carrier P , and then uses this result to derive our induction rule.
Definition 3.6. The functor K1 : Set → P maps each set X to the predicate K1 X = λx :
X. 1 on X and each f : X → Y to the predicate morphism (f, λx : X. id).
The predicate K1 X is called the truth predicate on X. For every x : X, the set K1 Xx of
proofs that K1 X holds for x is a singleton, and thus is non-empty. We intuitively think of
a predicate P : X → Set as being true if P x is non-empty for every x : X. We therefore
consider P to be true if there exists a predicate morphism from K1 X to P whose first
component is id X . For any functor F , the lifting F̂ is truth-preserving, i.e., F̂ maps the
truth predicate on any set X to that on F X.
Lemma 3.7. For any functor F on Set and any set X, F̂ (K1 X) ≅ K1 (F X).

Proof. By Definition 3.5, F̂ (K1 X) = (F π_{K1 X})^{−1}. We have that π_{K1 X} is an isomorphism because there is only one proof of K1 X for each x : X, and thus that F π_{K1 X} is an isomorphism as well. As a result, (F π_{K1 X})^{−1} maps every y : F X to a singleton set, and therefore F̂ (K1 X) = (F π_{K1 X})^{−1} ≅ λy : F X. 1 = K1 (F X).
The fact that K1 is a left-adjoint to {−} is critical to the constructions below. This is
proved in [10]; we include its proof here for completeness and to establish notation. The
description of comprehension as a right adjoint can be traced back to Lawvere [14].
Lemma 3.8. K1 is left adjoint to {−}.
Proof. We must show that, for any predicate P and any set Y , the set P(K1 Y, P ) of
morphisms from K1 Y to P in P is in bijective correspondence with the set Set(Y, {P })
of morphisms from Y to {P } in Set. Define maps (−)† : Set(Y, {P }) → P(K1 Y, P ) and
(−)# : P(K1 Y, P ) → Set(Y, {P }) by h† = (h1 , h2 ) where hy = (v, p), h1 y = v and h2 y = p,
and (k, k∼ )# = λ(y : Y ). (ky, k∼ y). These give a natural isomorphism between Set(Y, {P })
and P(K1 Y, P ).
Naturality of (−)† ensures that (g ◦f )† = g † ◦K1 f for all f : Y ′ → Y and g : Y → {P }.
Similarly for (−)# . Moreover, id† is the counit, at P , of the adjunction between K1 and
{−}. These observations are used in the proof of Lemma 3.10. Lemmas 3.9 and 3.10 are the
key results relating F -algebras and F̂ algebras, i.e., relating iteration and induction. They
are special cases of Theorem 4.8 below, but we include their proofs to ensure continuity of
our presentation and to ensure that this section is self-contained.
We first show how to construct F̂ -algebras from F -algebras.
Lemma 3.9. There is a functor Φ : Alg F → Alg F̂ such that if k : F X → X, then
Φk : F̂ (K1 X) → K1 X.
Proof. For an F -algebra k : F X → X define Φk = K1 k, and for two F -algebras k : F X → X
and k′ : F X′ → X′ and an F -algebra morphism h : X → X′ between them, define the F̂ -algebra morphism Φh : Φk → Φk′ by Φh = K1 h. Then K1 (F X) ≅ F̂ (K1 X) by Lemma 3.7,
so that Φk is an F̂ -algebra and K1 h is an F̂ -algebra morphism. It is easy to see that Φ
preserves identities and composition.
We can also construct F -algebras from F̂ -algebras.
Lemma 3.10. The functor Φ has a right adjoint Ψ such that if j : F̂ P → P , then Ψj :
F {P } → {P }.
Proof. We construct the adjoint functor Ψ : Alg F̂ → Alg F as follows. Given an F̂ -algebra
j : F̂ P → P , we use the fact that F̂ (K1 {P }) ∼
= K1 (F {P }) by Lemma 3.7 to define
†
#
Ψj : F {P } → {P } by Ψj = (j ◦ F̂ id ) . To specify the action of Ψ on an F̂ -algebra
morphism h, define Ψh = {h}. Clearly Ψ preserves identity and composition.
Next we show Φ ⊣ Ψ, i.e., for every F -algebra k : F X → X and F̂ -algebra j : F̂ P → P
with P a predicate on X, there is a natural isomorphism between F -algebra morphisms
from k to Ψj and F̂ -algebra morphisms from Φk to j. We first observe that an F -algebra
morphism from k to Ψj is a map from X to {P }, and an F̂ -algebra morphism from Φk
to j is a map from K1 X to P . A natural isomorphism between such maps is given by the
adjunction K1 ⊣ {−} from Lemma 3.8. We must check that f : X → {P } is an F -algebra
morphism from k to Ψj iff f † : K1 X → P is an F̂ -algebra morphism from Φk to j.
To this end, assume f : X → {P } is an F -algebra morphism from k to Ψj, i.e., assume
f ◦k = Ψj ◦F f . We must prove that f † ◦ Φk = j ◦ F̂ f † . By the definition of Φ in Lemma 3.9,
this amounts to showing f † ◦ K1 k = j ◦ F̂ f † . Now, since (−)† is an isomorphism, f is an F algebra morphism iff (f ◦k)† = (Ψj◦F f )† . Naturality of (−)† ensures that (f ◦k)† = f † ◦K1 k
and that (Ψj ◦ F f )† = (Ψj)† ◦ K1 (F f ), so the previous equality holds iff
f † ◦ K1 k = (Ψj)† ◦ K1 (F f )
(3.1)
But

j ◦ F̂ f†  =  j ◦ F̂ (id† ◦ K1 f)        by naturality of (−)† and f = id ◦ f
          =  (j ◦ F̂ id†) ◦ F̂ (K1 f)    by the functoriality of F̂
          =  (Ψj)† ◦ K1 (F f)          by the definition of Ψ, the fact that (−)† and (−)# are inverses, and Lemma 3.7
          =  f† ◦ K1 k                 by Equation 3.1
Thus, f † is indeed an F̂ -algebra morphism from Φk to j.
Lemma 3.10 ensures that F -algebras with carrier {P } are interderivable with F̂ -algebras
with carrier P . For example, the N -algebra [α, β] with carrier {P } from Section 2 can be
derived from the N̂ -algebra with carrier P given in Example 1. Since we define a lifting
F̂ for any functor F , Lemma 3.10 thus shows how to construct F -algebras with carrier
Σx : µF. P x for any functor F and predicate P on µF .
Corollary 3.11. For any functor F on Set, the predicate K1 (µF ) is the carrier of the
initial F̂ -algebra.
Proof. Since Φ is a left adjoint it preserves initial objects, so applying Φ to the initial
F -algebra in : F (µF ) → µF gives the initial F̂ -algebra. By Lemma 3.9, Φ in has type
F̂ (K1 (µF )) → K1 (µF ), so the carrier of the initial F̂ -algebra is K1 (µF ).
We can now derive our generic induction rule. For every predicate P on X and every F̂ algebra (k, k∼ ) : F̂ P → P , Lemma 3.10 ensures that Ψ constructs from (k, k∼ ) an F -algebra
with carrier {P }. Applying the iteration operator to this algebra gives a map
fold (Ψ (k, k∼ )) : µF → {P }
This map decomposes into two parts: φ = πP ◦ fold (Ψ (k, k∼ )) : µF → X and ψ : ∀(t :
µF ). P (φ t). Initiality of in : F (µF ) → µF , the definition of Ψ, and the naturality of πP
ensure φ = fold k. Recalling that πP′ is the second projection on dependent pairs involving
the predicate P , this gives the following sound generic induction rule for the type X, which
reduces induction to iteration:
genind
: ∀ (F : Set → Set) (P : X → Set) ((k, k ∼ ) : (F̂ P → P )) (t : µF ).
P (fold k t)
genind F P = πP′ ◦ fold ◦ Ψ
Notice this induction rule is actually capable of dealing with predicates over arbitrary sets
and not just predicates over µF . However, when X = µF and k = in, initiality of in further
ensures that φ = fold in = id, and thus that genind specialises to the expected induction
rule for an inductive data type µF :
ind : ∀ (F : Set → Set) (P : µF → Set) ((k, k∼ ) : F̂ P → P ). (k = in) → ∀(t : µF ). P t
ind F P = πP′ ◦ fold ◦ Ψ
This rule can be instantiated to familiar rules for polynomial data types, as well as to ones
we would expect for data types such as rose trees and finite hereditary sets, both of which
lie outside the scope of Hermida and Jacobs’ method.
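To connect this with ordinary functional programming, the non-dependent computational core that both genind and ind refine is just the iteration operator fold on an initial algebra. The following minimal Haskell sketch is ours, purely for illustration (it uses the standard Mu encoding and elides the proof-carrying, dependent part that the rules add on top):

  newtype Mu f = In { out :: f (Mu f) }     -- mu F, the carrier of the initial f-algebra

  -- fold k is the unique f-algebra morphism from the initial algebra In to k
  fold :: Functor f => (f a -> a) -> Mu f -> a
  fold k = k . fmap (fold k) . out

Strengthening the result type a of fold to a predicate, as genind and ind do, is exactly what requires the lifting F̂.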
Example 2. The data type of rose trees is given in Haskell-like syntax by
data Rose = Node(List Rose)
The functor underlying Rose is F X = List X and its induction rule is
indRose : ∀ (P : Rose → Set) ((k, k∼ ) : F̂ P → P ). (k = in) → ∀(x : Rose). P x
indRose F P = πP′ ◦ fold ◦ Ψ
Calculating F̂ P = (F πP )−1 : F Rose → Set, and writing xs !! k for the k-th component of a list xs, we have that
F̂ P rs = {z : F {P } | F πP z = rs}
       = {cps : List {P } | List πP cps = rs}
       = {cps : List {P } | ∀k < length cps. πP (cps !! k) = rs !! k}
An F̂ -algebra whose underlying F -algebra is in : F Rose → Rose is thus a pair of functions
(in, k∼ ), where k∼ has type
∀rs : List Rose. {cps : List {P } | ∀k < length cps. πP (cps !! k) = rs !! k} → P (Node rs)
= ∀rs : List Rose. (∀k < length rs. P (rs !! k)) → P (Node rs)
The last equality is due to surjective pairing for dependent products and the fact that
length cps = length rs. The type of k ∼ gives the hypotheses of the induction rule for rose
trees.
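The shape of these hypotheses can also be seen in plain Haskell. The sketch below is ours, not taken from the paper: it gives rose trees and their iteration operator, of which indRose is the dependent refinement — the algebra here consumes a result for every immediate subtree, just as the rule's hypothesis demands a proof of P for every immediate subtree:

  data Rose = Node [Rose]

  foldRose :: ([a] -> a) -> Rose -> a
  foldRose k (Node rs) = k (map (foldRose k) rs)

  -- Example: the number of nodes in a rose tree, defined by iteration.
  size :: Rose -> Int
  size = foldRose (\ns -> 1 + sum ns)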
Although finite hereditary sets are defined in terms of quotients, and thus lie outside
the scope of previously known methods, they can be treated with ours.
Example 3. Hereditary sets are sets whose elements are themselves sets, and so are the
core data structures within set theory. The data type HS of finitary hereditary sets is µPf
for the finite powerset functor Pf . We can derive an induction rule for finite hereditary
sets as follows. If P : X → Set, then Pf πP : Pf (Σx : X.P x) → Pf X maps each set
{(x1 , p1 ), . . . , (xn , pn )} to the set {x1 , . . . , xn }, so that (Pf πP )−1 maps a set {x1 , . . . , xn } to
the set P x1 × . . . × P xn . A P̂f -algebra with carrier P : HS → Set and first component in
therefore has as its second component a function of type
∀({s1 , . . . , sn } : Pf (HS)). P s1 × . . . × P sn → P (in{s1 , . . . , sn })
The induction rule for finite hereditary sets is thus
indHS :: (∀({s1 , . . . , sn } : Pf (HS)). P s1 × . . . × P sn → P (in{s1 , . . . , sn }))
→ ∀(s : HS).P s
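A rough Haskell approximation of HS may also help fix intuitions. This is only our illustration of the underlying syntax of µPf : the quotient making element order and multiplicity irrelevant is not expressible in plain Haskell, and the names MkHS and rank are ours.

  -- Finite hereditary sets, approximated by finite lists of hereditary sets;
  -- values should be identified up to permutation and duplication of elements.
  newtype HS = MkHS [HS]

  -- Example: the rank of a hereditary set, by structural recursion.
  rank :: HS -> Int
  rank (MkHS xs) = if null xs then 0 else 1 + maximum (map rank xs)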
4. Generic Fibrational Induction Rules
We can treat more general notions of predicates using fibrations. We motivate the use of
fibrations by observing that i) the semantics of data types in languages involving recursion
and other effects usually involves categories other than Set; ii) in such circumstances, the
notion of a predicate can no longer be taken as a function with codomain Set; and iii)
even when working in Set there are reasonable notions of “predicate” other than that in
Section 3. (For example, a predicate on a set X could be a subobject of X). Moreover,
when, in future work, we consider induction rules for more sophisticated classes of data
types such as indexed containers, inductive families, and inductive recursive families (see
Section 5), we will not want to have to develop an individual ad hoc theory of induction for
each such class. Instead, we will want to appropriately instantiate a single generic theory of
induction. That is, we will want a uniform axiomatic approach to induction that is widely
applicable, and that abstracts over the specific choices of category, functor, and predicate
giving rise to different forms of induction for specific classes of data types.
Fibrations support precisely such an axiomatic approach. This section therefore generalises the constructions of the previous one to the general fibrational setting. The standard
model of type theory based on locally cartesian closed categories does arise as a specific
fibration — namely, the codomain fibration over Set — and this fibration is equivalent
to the families fibration over Set. But the general fibrational setting is far more flexible.
Moreover, in locally cartesian closed models of type theory, predicates and types coexist in
the same category, so that each functor can be taken to be its own lifting. In the general fibrational setting, predicates are not simply functions or morphisms, properties and types do
not coexist in the same category, and a functor cannot be taken to be its own lifting. There
is no choice but to construct a lifting from scratch. A treatment of induction based solely
on locally cartesian closed categories would not, therefore, indicate how to treat induction
in more general fibrations.
Another reason for working in the general fibrational setting is that this facilitates a
direct comparison of our work with that of Hermida and Jacobs [10]. This is important,
since their approach is the most closely related to ours. The main difference between
their approach and ours is that they use fibred products and coproducts to define provably
sound induction rules for polynomial functors, whereas we use left adjoints to reindexing
functors to define provably sound induction rules for all inductive functors. In this section
we consider situations when both approaches are possible and give mild conditions under
which our results coincide with theirs when restricted to polynomial functors.
The remainder of this section is organised as follows. In Section 4.1 we recall the
definition of a fibration, expand and motivate this definition, and fix some basic terminology
surrounding fibrations. We then give some examples of fibrations, including the families
fibration over Set, the codomain fibration, and the subobject fibration. In Section 4.2 we
recall a useful theorem from [10] that indicates when a truth-preserving lifting of a functor
to a category of predicates has an initial algebra. This is the key theorem used to prove
the soundness of our generic fibrational induction rule. In Section 4.3 we construct truth-preserving liftings for all inductive functors. We do this first in the codomain fibration, and
then, using intuitions from its presentation as the families fibration over Set, as studied
in Section 3, in a general fibrational setting. Finally, in Section 4.4 we establish a number
of properties of the liftings, and hence of the induction rules, that we have derived. In
particular, we characterise the lifting that generates our induction rules.
4.1. Fibrations in a Nutshell. In this section we recall the notion of a fibration. More
details about fibrations can be found in, e.g., [12, 20]. We begin with an auxiliary definition.
Definition 4.1. Let U : E → B be a functor.
(1) A morphism g : Q → P in E is cartesian over a morphism f : X → Y in B if U g = f ,
and for every g′ : Q′ → P in E for which U g′ = f ◦ v for some v : U Q′ → X there exists
a unique h : Q′ → Q in E such that U h = v and g ◦ h = g ′ .
(2) A morphism g : P → Q in E is opcartesian over a morphism f : X → Y in B if U g = f ,
and for every g′ : P → Q′ in E for which U g′ = v ◦ f for some v : Y → U Q′ there exists
a unique h : Q → Q′ in E such that U h = v and h ◦ g = g ′ .
It is not hard to see that the cartesian morphism fP§ over a morphism f with codomain U P
is unique up to isomorphism, and similarly for the opcartesian morphism f§P . If P is an
object of E, then we write f ∗ P for the domain of fP§ and Σf P for the codomain of f§P . We
can capture cartesian and opcartesian morphisms diagrammatically as follows.
[Diagrams: in E, any g′ : Q′ → P with U g′ = f ◦ v factors uniquely through the cartesian morphism fP§ : f ∗ P → P as g′ = fP§ ◦ h for a unique h above v; dually, any g′ : P → Q′ with U g′ = v ◦ f factors uniquely through the opcartesian morphism f§P : P → Σf P as g′ = h ◦ f§P for a unique h above v. Applying U sends both situations to the corresponding triangles over f : X → Y in B.]
Cartesian morphisms (opcartesian morphisms) are the essence of fibrations (resp., opfibrations). We introduce both fibrations and their duals now since the latter will prove useful
later in our development. Below we speak primarily of fibrations, with the understanding
that the dual observations hold for opfibrations.
Definition 4.2. Let U : E → B be a functor. Then U is a fibration if for every object P
of E, and every morphism f : X → U P in B there is a cartesian morphism fP§ : Q → P in
E above f . Similarly, U is an opfibration if for every object P of E, and every morphism
f : U P → Y in B there is an opcartesian morphism f§P : P → Q in E above f . A functor
U is a bifibration if it is simultaneously a fibration and an opfibration.
If U : E → B is a fibration, we call B the base category of U and E the total category
of U . Objects of the total category E can be thought of as properties, objects of the base
category B can be thought of as types, and U can be thought of as mapping each property
P in E to the type U P of which P is a property. One fibration U can capture many different
properties of the same type, so U is not injective on objects. We say that an object P in
E is above its image U P under U , and similarly for morphisms. For any object X of B, we
write EX for the fibre above X, i.e., for the subcategory of E consisting of objects above X
and morphisms above id. If f : X → Y is a morphism in B, then the function mapping
each object P of E to f ∗ P extends to a functor f ∗ : EY → EX . Indeed, for each morphism
k : P → P ′ in EY , f ∗ k is the morphism satisfying k ◦ fP§ = fP§ ′ ◦ f ∗ k. The universal
property of fP§ ′ ensures the existence and uniqueness of f ∗ k. We call the functor f ∗ the
reindexing functor induced by f . A similar situation holds for opfibrations, and we call
the functor Σf : EX → EY which extends the function mapping each object P of E to Σf P
the opreindexing functor.
Example 4. The functor U : P → Set defined in Section 3 is called the families fibration
over Set. Given a function f : X → Y and a predicate P : Y → Set we can define a
cartesian map fP§ whose domain f ∗ P is P ◦ f , and which comprises the pair (f, λx : X. id).
The fibre PX above a set X has predicates P : X → Set as its objects. A morphism in PX
from P : X → Set to P ′ : X → Set is a function of type ∀x : X. P x → P ′ x.
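A tiny worked instance of reindexing in the families fibration, written in LaTeX notation, may help; the particular f and P below are our choices, purely for illustration:

  % Reindexing along f = double : \mathbb{N} \to \mathbb{N}, with double\,n = 2n,
  % for the predicate P = Even : \mathbb{N} \to \mathbf{Set}:
  (f^{*}P)\,n \;=\; P\,(f\,n) \;=\; \mathit{Even}\,(2n),
  \qquad
  f^{\S}_{P} \;=\; (f,\ \lambda n.\ \mathit{id}) : f^{*}P \to P .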
Example 5. Let B be a category. The arrow category of B, denoted B → , has the morphisms,
or arrows, of B as its objects. A morphism in B → from f : X → Y to f ′ : X ′ → Y ′ is a pair
(α1 , α2 ) of morphisms in B such that the following diagram commutes:
            α1
       X ---------> X′
       |             |
     f |             | f′
       v             v
       Y ---------> Y′
            α2
i.e., such that α2 ◦ f = f ′ ◦ α1 . It is easy to see that this definition indeed gives a category.
The codomain functor cod : B → → B maps an object f : X → Y of B → to the object
Y of B and a morphism (α1 , α2 ) of B → to α2 . If B has pullbacks, then cod is a fibration,
called the codomain fibration over B. Indeed, given an object f : X → Y in the fibre above
Y and a morphism f ′ : X ′ → Y in B, the pullback of f along f ′ gives a cartesian morphism
above f ′ as required. The fibre above an object Y of B has those morphisms of B that map
into Y as its objects. A morphism in (B → )Y from f : X → Y to f ′ : X ′ → Y is a morphism
α1 : X → X ′ in B such that f = f ′ ◦ α1 .
Example 6. If B is a category, then the category of subobjects of B, denoted Sub(B), has
monomorphisms in B as its objects. A monomorphism f : X ֒→ Y is called a subobject of
Y . A morphism in Sub(B) from f : X ֒→ Y to f ′ : X ′ ֒→ Y ′ is a pair of morphisms (α1 , α2 )
in B such that α2 ◦ f = f ′ ◦ α1 .
The map U : Sub(B) → B sending a subobject f : X ֒→ Y to Y extends to a functor. If
B has pullbacks, then U is a fibration, called the subobject fibration over B; indeed, pullbacks
again give cartesian morphisms since the pullback of a monomorphism is a monomorphism.
The fibre above an object Y of B has as objects the subobjects of Y . A morphism in
Sub(B)Y from f : X ֒→ Y to f ′ : X ′ ֒→ Y is a map α1 : X → X ′ in B such that f = f ′ ◦ α1 .
If such a morphism exists then it is, of course, unique.
4.2. Lifting, Truth, and Comprehension. We now generalise the notions of lifting,
truth, and comprehension to the general fibrational setting. We prove that, in such a setting,
if an inductive functor has a truth-preserving lifting, then its lifting is also inductive. We
then see that inductiveness of the lifted functor is sufficient to guarantee the soundness
of our generic fibrational induction rule. This subsection is essentially our presentation of
pre-existing results from [10]. We include it because it forms a natural part of our narrative,
and because simply citing the material would hinder the continuity of our presentation.
Recall from Section 3 that the first step in deriving an induction rule for a datatype
interpreted in Set is to lift the functor whose fixed point the data type is to the category P of
predicates. More specifically, in Definition 3.3 we defined a lifting of a functor F : Set → Set
to be a functor F̂ : P → P such that U F̂ = F U . We can use these observations to generalise
the notion of a lifting to the fibrational setting as follows.
Definition 4.3. Let U : E → B be a fibration and F be a functor on B. A lifting of F with
respect to U is a functor F̂ : E → E such that the following diagram commutes:
            F̂
       E ---------> E
       |             |
     U |             | U
       v             v
       B ---------> B
            F
In Section 3 we saw that if P : X → Set is a predicate over X, then F̂ P is a predicate over
F X. The analogous result for the general fibrational setting observes that if F̂ is a lifting
of F and X is an object of B, then F̂ restricts to a functor from EX to EF X .
By analogy with our results from Section 3, we further expect that the premises of a
fibrational induction rule for a datatype µF interpreted in B should constitute an F̂ -algebra
on E. But in order to construct the conclusion of such a rule, we need to understand how
to axiomatically state that a predicate is true. In Section 3, a predicate P : X → Set is
considered true if there is a morphism in P from K1 X, the truth predicate on X, to P that
is over id X . Since the mapping of each set X to K1 X is the action on objects of the truth
functor K1 : Set → P (cf. Definition 3.6), we actually endeavour to model the truth functor
for the families fibration over Set axiomatically in the general fibrational setting.
Modeling the truth functor axiomatically amounts to understanding its universal property. Since the truth functor in Definition 3.6 maps each set X to the predicate λx : X. 1, for
any set X there is therefore exactly one morphism in the fibre above X from any predicate
P over X to K1 X. This gives a clear categorical description of K1 X as a terminal object
of the fibre above X and leads, by analogy, to the following definition.
Definition 4.4. Let U : E → B be a fibration. Assume further that, for every object
X of B, the fibre EX has a terminal object K1 X such that, for any f : X ′ → X in B,
f ∗ (K1 X) ≅ K1 X ′ . Then the assignment sending each object X in B to K1 X in E, and each morphism f : X ′ → X in B to the morphism fK1 X§ in E defines the (fibred) truth functor K1 : B → E.
The (fibred) truth functor is sometimes called the (fibred) terminal object functor. With
this definition, we have the following standard result:
Lemma 4.5. K1 is a (fibred) right adjoint for U .
The interested reader may wish to consult the literature on fibrations for the definition
of a fibred adjunction, but a formal definition will not be needed here. Instead, we can
simply stress that a fibred adjunction is first and foremost an adjunction, and then observe
that the counit of this adjunction is the identity, so that U K1 = Id . Moreover, K1 is full
and faithful. One simple way to guarantee that a fibration has a truth functor is to assume
that both E and B have terminal objects and that U maps a terminal object of E to a
terminal object of B. In this case, the fact that reindexing preserves fibred terminal objects
ensures that every fibre of E indeed has a terminal object.
The second fundamental property of liftings used in Section 3 is that they are truth-preserving. This property can now easily be generalised to the general fibrational setting
(cf. Definition 3.7).
Definition 4.6. Let U : E → B be a fibration with a truth functor K1 : B → E, let F be
a functor on B, and let F̂ : E → E be a lifting of F . We say that F̂ is a truth-preserving
lifting of F if, for any object X of B, we have F̂ (K1 X) ≅ K1 (F X).
The final algebraic structure we required in Section 3 was a comprehension functor
{−} : P → Set. To generalise the comprehension functor to the general fibrational setting
we simply note that its universal property is that it is right adjoint to the truth functor K1
(cf. Definition 3.8). We single out for special attention those fibrations whose truth functors
have right adjoints.
Definition 4.7. Let U : E → B be a fibration with a truth functor K1 : B → E. Then U is
a comprehension category with unit if K1 has a right adjoint.
If U : E → B is a comprehension category with unit, then we call the right adjoint to K1
the comprehension functor and denote it by {−} : E → B. With this machinery in place,
Hermida and Jacobs [10] show that if U is a comprehension category with unit and F̂ is a
truth-preserving lifting of F , then F̂ is inductive if F is and, in this case, the carrier µF̂ of
the initial F̂ -algebra is K1 (µF ). This is proved as a corollary to the following more abstract
theorem.
Theorem 4.8. Let F : B → B, G : A → A, and S : B → A be functors. A natural
transformation α : GS → SF , i.e., a natural transformation α such that
[Diagram: the square with S : B → A on the left and right vertical sides, G : A → A along the top, F : B → B along the bottom, and the 2-cell α : GS → SF filling it.]
induces a functor Φ : AlgF → AlgG given by Φ (f : F X → X) = S f ◦ αX . Moreover, if α is an isomorphism, then a right adjoint T to S induces a right adjoint Ψ : AlgG → AlgF to Φ (so that Φ ⊣ Ψ), given by Ψ(g : GX → X) = T g ◦ βX , where β : F T → T G is the image of Gǫ ◦ (α−1 )T : SF T → G under the adjunction isomorphism Hom(SX, Y ) ≅ Hom(X, T Y ), and ǫ : ST → Id is the counit of this adjunction.
We can instantiate Theorem 4.8 to generalise Lemmas 3.9 and 3.10.
Theorem 4.9. Let U : E → B be a comprehension category with unit and F : B → B be a
functor. If F has a truth-preserving lifting F̂ then there is an adjunction Φ ⊣ Ψ : Alg F →
Alg F̂ . Moreover, if f : F X → X then Φf : F̂ (K1 X) → K1 X, and if g : F̂ P → P then
Ψg : F {P } → {P }.
Proof. We instantiate Theorem 4.8, letting E be A, F̂ be G, and K1 be S. Then α is an
isomorphism since F̂ is truth-preserving, and we also have that K1 ⊣ {−}. The theorem thus
ensures that Φ maps every F -algebra f : F X → X to an F̂ -algebra Φf : F̂ (K1 X) → K1 X,
and that Ψ maps every F̂ -algebra g : F̂ P → P to an F -algebra Ψg : F {P } → {P }.
Corollary 4.10. Let U : E → B be a comprehension category with unit and F : B → B be
a functor which has a truth-preserving lifting F̂ . If F is inductive, then so is F̂ . Moreover,
µF̂ = K1 (µF ).
Proof. The hypotheses of the corollary place us in the setting of Theorem 4.9. This theorem
guarantees that Φ maps the initial F -algebra in F : F (µF ) → µF to an F̂ -algebra with
carrier K1 (µF ). But since left adjoints preserve initial objects, we must therefore have that
the initial F̂ -algebra has carrier K1 (µF ). Thus, µF̂ exists and is isomorphic to K1 (µF ).
Theorem 4.11. Let U : E → B be a comprehension category with unit and F : B → B
be an inductive functor. If F has a truth-preserving lifting F̂ , then the following generic
fibrational induction rule is sound:
genfibind : ∀ (F : B → B) (P : E). (F̂ P → P ) → (µF̂ → P )
genfibind F P = fold
An alternative presentation of genfibind is
genfibind : ∀ (F : B → B) (P : E). (F̂ P → P ) → (µF → {P })
genfibind F P = fold ◦ Ψ
We call genfibind F the generic fibrational induction rule for µF .
In summary, we have generalised the generic induction rule for predicates over Set
presented in Section 3 to give a sound generic induction rule for comprehension categories
with unit. Our only assumption is that if we start with an inductive functor F on the
base of the comprehension category, then there must be a truth-preserving lifting of that
functor to the total category of the comprehension category. In that case, we can specialise
genfibind to get a fibrational induction rule for any datatype µF that can be interpreted in
the fibration’s base category.
The generic fibrational induction rule genfibind does, however, look slightly different
from the generic induction rule for set-valued predicates. This is because, in Section 3, we
used our knowledge of the specific structure of comprehensions for set-valued predicates
to extract proofs for particular data elements from them. But in the fibrational setting,
predicates, and hence comprehensions, are left abstract. We therefore take the return type
of the general induction scheme genfibind to be a comprehension with the expectation that,
when the general theory of this section is instantiated to a particular fibration of interest, it
may be possible to use knowledge about that fibration to extract from the comprehension
constructed by genfibind further proof information relevant to the application at hand.
As we have previously mentioned, Hermida and Jacobs provide truth-preserving liftings
only for polynomial functors. In Section 4.3, we define a generic truth-preserving lifting for
any inductive functor on the base category of any fibration which, in addition to being a
comprehension category with unit, has left adjoints to all reindexing functors. This gives a
sound generic fibrational induction rule for the datatype µF for any functor F on the base
category of any such fibration.
4.3. Constructing Truth-Preserving Liftings. In light of the previous subsection, it is
natural to ask whether or not truth-preserving liftings exist. If so, are they unique? Or, if
there are many truth-preserving liftings, is there a specific truth-preserving lifting to choose
above others? Is there, perhaps, even a universal truth-preserving lifting? We can also ask
about the algebraic structure of liftings. For example, do truth-preserving liftings preserve
sums and products of functors?
Answers to some of these questions were given by Hermida and Jacobs, who provided
truth-preserving liftings for polynomial functors. To define such liftings they assume that
the total category and the base category of the fibration in question have products and
coproducts, and that the fibration preserves them. Under these conditions, liftings for
polynomial functors can be defined inductively. In this section we go beyond the results of
Hermida and Jacobs and construct truth-preserving liftings for all inductive functors. We
employ a two-stage process, first building truth-preserving liftings under the assumption
that the fibration of interest is a codomain fibration, and then using the intuitions of
Section 3 to extend this lifting to a more general class of fibrations. In Section 4.4 we
consider the questions from the previous paragraph about the algebraic structure of liftings.
4.3.1. Truth-Preserving Liftings for Codomain Fibrations. Recall from Example 5 that if B
has pullbacks, then the codomain fibration over B is the functor cod : B → → B. Given a
functor F : B → B, it is trivial to define a lifting F → : B → → B → for this fibration. We can
define the functor F → to map an object f : X → Y of B → to F f : F X → F Y , and to map
a morphism (α1 , α2 ) to the morphism (F α1 , F α2 ). That F → is a lifting is easily verified.
If we further verify that codomain fibrations are comprehension categories with unit,
and that the lifting F → is truth-preserving, then Theorem 4.11 can be applied to them. For
the former, we first observe that the functor K1 : B → B → mapping an object X to id and
a morphism f : X → Y to (f, f ) is a truth functor for this fibration. (In fact, we can take
any isomorphism into X as K1 X; we will use this observation below.) If we let B → (U, V )
denote the set of morphisms from an object U to an object V in B → , then the fact that K1
is right adjoint to cod can be established via the natural isomorphism
B → (f : X → Y, K1 Z) = {(α1 : X → Z, α2 : Y → Z) | α1 = α2 ◦ f } ≅ B(Y, Z) = B(cod f, Z)
We next show that the functor dom : B → → B mapping an object f : X → Y of B → to
X and a morphism (α1 , α2 ) to α1 is a comprehension functor for the codomain fibration.
That dom is right adjoint to K1 is established via the natural isomorphism
B → (K1 Z, f : X → Y ) = {(α1 : Z → X, α2 : Z → Y ) | α2 = f ◦ α1 } ≅ B(Z, X) = B(Z, dom f )
Finally, we have that F → is truth-preserving because
F → (K1 Z) = F → id = F id = id = K1 (F Z)
A lifting is implicitly given in [16] for functors on a category with display maps. Such
a category is a subfibration of the codomain fibration over that category, and the lifting
given there is essentially the lifting for the codomain fibration restricted to the subfibration
in question.
4.3.2. Truth-Preserving Liftings for the Families Fibration over Set. In Section 3 we defined, for every functor F : Set → Set, a lifting F̂ which maps the predicate P to (F πP )−1 .
Looking closely, we realise this lifting decomposes into three parts. Given a predicate P ,
we first consider the projection function πP : {P } → U P . Next, we apply the functor F
to πP to obtain F πP : F {P } → F U P . Finally, we take the inverse image of F πP to get a
predicate over F U P as required.
Note that π is the functor from P to Set→ which maps a predicate P to the projection
function πP : {P } → U P (and maps a predicate morphism (f, f ∼ ) from a predicate P : X →
Set to P ′ : X ′ → Set to the morphism ({(f, f ∼ )}, f ) from πP to πP ′ ; cf. Definition 3.4). If
I : Set→ → P is the functor sending a function f : X → Y to its “inverse” predicate f −1
(and a morphism (α1 , α2 ) to the predicate morphism (α2 , ∀y : Y. λx : f −1 y. α1 x)), then
each of the three steps of defining F̂ is functorial and the relationships indicated by the
following diagram hold:
[Diagram: the adjunction I ⊣ π between P and Set→ , with U : P → Set on the left, cod : Set→ → Set on the right, and cod ◦ π = U .]
Note that the adjunction I ⊣ π is an equivalence. This observation is not, however,
necessary for our subsequent development; in particular, it is not needed for Theorem 4.14.
The above presentation of the lifting F̂ of a functor F for the families fibration over Set
uses the lifting of F for the codomain fibration over Set. Indeed, writing F → for the lifting
of F for the codomain fibration over Set, we have that F̂ = IF → π. Moreover, since π and
I are truth-preserving (see the proof of Lemma 3.7), and since we have already seen that
liftings for codomain fibrations are truth-preserving, we have that F̂ is truth-preserving
because each of its three constituent functors is. Finally, since we showed in Section 3 that
the families fibration over Set is a comprehension category with unit, Theorem 4.11 can be
applied to it.
Excitingly, as we shall see in the next subsection, the above presentation of the lifting
of a functor for the families fibration over Set generalises to many other fibrations!
4.3.3. Truth-Preserving Liftings for Other Fibrations. We now turn our attention to the
task of constructing truth-preserving liftings for fibrations other than codomain fibrations
and the families fibration over Set. By contrast with the approach outlined in the conference
paper [9] on which this paper is based, the one we take here uses a factorisation, like that
of the previous subsection, through a codomain fibration. More specifically, let U : E → B
be a comprehension category with unit. We first define functors I and π, and construct an
adjunction I ⊣ π between E and B → such that the relationships indicated by the following
diagram hold:
[Diagram: the adjunction I ⊣ π between E and B → , with U : E → B on the left, cod : B → → B on the right, and cod ◦ π = U .]
We then use the adjunction indicated in the diagram to construct a truth-preserving lifting for U from that for the codomain fibration over B.
To define the functor π : E → B → we generalise the definition of π : P → Set→
from Sections 3 and 4.3.2. This requires us to work with the axiomatic characterisation in
Definition 4.7 of the comprehension functor {−} : E → B as the right adjoint to the truth
functor K1 : B → E. The counit of the adjunction K1 ⊣ {−} is a natural transformation
ǫ : K1 {−} → Id. Applying U to ǫ gives the natural transformation U ǫ : U K1 {−} → U , but
since U K1 = Id, in fact we have that U ǫ : {−} → U . We can therefore define π to be U ǫ.
Then π is indeed a functor from E to B → , its action on an object P is πP , and its action
on a morphism (f, f ∼ ) is ({(f, f ∼ )}, f ).
We next turn to the definition of the left adjoint I to π. To see how to generalise the
inverse image construction to more general fibrations we first recall from Example 4 that, if
f : X → Y is a function and P : Y → Set, then f ∗ P = P ◦ f . We can extend this mapping
to a reindexing functor f ∗ : EY → EX by defining f ∗ (id , h∼ ) = (id , h∼ ◦ f ). If we define the
action of Σf : EX → EY on objects by
Σf P = λy. ⊎{x | f x=y} P x
where ⊎ denotes the disjoint union operator on sets, and its action on morphisms by taking
Σf (id , α∼ ) to be (id , ∀(y : Y ). λ(x : X, p : f x = y, t : P x). (x, p, α∼ x t)), then Σf is left
adjoint to f ∗ . Moreover, if we compute
Σf (K1 X) = λy. ⊎{x | f x=y} K1 X x
and recall that, for any x : X, the set K1 X x is a singleton, then Σf (K1 X) is clearly
equivalent to the inverse image of f .
The above discussion suggests that, in order to generalise the inverse image construction
to a more general fibration U : E → B, we should require each reindexing functor f ∗ to have
the opreindexing functor Σf as its left adjoint. As in [10], no Beck-Chevalley condition is
required on these adjoints. The following result, which appears as Proposition 2.3 of [11],
thus allows us to isolate the exact class of fibrations for which we will have sound generic
induction rules.
Theorem 4.12. A fibration U : E → B is a bifibration iff for every morphism f in B the
reindexing functor f ∗ has left adjoint Σf .
Definition 4.13. A Lawvere category is a bifibration which is also a comprehension category
with unit.
We construct the left adjoint I : B → → E of π for any Lawvere category U : E → B as
follows. If f : X → Y is an object of B → , i.e., a morphism of B, then we define I f to be the
object Σf (K1 X) of E. To define the action of I on morphisms, let (α1 , α2 ) be a morphism
in B → from f : X → Y to f ′ : X ′ → Y ′ in B → . Then (α1 , α2 ) is a pair of morphisms in B
such that the following diagram commutes:
            α1
       X ---------> X′
       |             |
     f |             | f′
       v             v
       Y ---------> Y′
            α2
We must construct a morphism from Σf (K1 X) to Σf ′ (K1 X ′ ) in E. To do this, notice that f§′K1 X ′ ◦ K1 α1 : K1 X → Σf ′ (K1 X ′ ) is above f ′ ◦ α1 , and that it is also above α2 ◦ f since f ′ ◦ α1 = α2 ◦ f . We can then consider the morphism f§′K1 X ′ ◦ K1 α1 and use the
universal property of the opcartesian morphism f§K1 X to deduce the existence of a morphism
h : Σf (K1 X) → Σf ′ (K1 X ′ ) above α2 . It is not difficult, using the uniqueness of the
morphism h, to prove that setting this h to be the image of the morphism (α1 , α2 ) makes
I a functor. In fact, since cod ◦ π = U , Result (i) on page 190 of [11] guarantees that,
for any Lawvere category U : E → B the functor I : B → → E exists and is left adjoint to
π : E → B→ .
We can now construct a truth-preserving lifting for any Lawvere category U : E → B
and functor F on B.
Theorem 4.14. Let U : E → B be a Lawvere category and, for any functor F on B, define
the functor F̂ on E by
F̂ : E → E
F̂ = IF → π
Then F̂ is a truth-preserving lifting of F .
Proof. It is trivial to check that F̂ is indeed a lifting. To prove that it is truth-preserving, we
need to prove that F̂ (K1 X) ≅ K1 (F X) for any functor F on B and object X of B. We do
this by showing that each of π, F → , and I preserves fibred terminal objects, i.e., preserves
the terminal objects of each fibre of the total category which is its domain. Then since
K1 X is a terminal object in the fibre EX , we will have that F̂ (K1 X) = I(F → (π(K1 X))) is
a terminal object in EF X , i.e., that F̂ (K1 X) ≅ K1 (F X) as desired.
We first show that π preserves fibred terminal objects. We must show that, for any
object X of B, πK1 X is a terminal object of the fibre of B → over X, i.e., is an isomorphism
with codomain X. We prove this by observing that, if η : Id → {−}K1 is the unit of
the adjunction K1 ⊣ {−}, then πK1 X is an isomorphism with inverse ηX . Indeed, if ǫ
is the counit of the same adjunction, then the facts that U K1 = Id and that K1 is full
and faithful ensure that K1 ηX is an isomorphism with inverse ǫK1 X . Thus, ǫK1 X is an
isomorphism with inverse K1 ηX , and so πK1 X = U ǫK1 X is an isomorphism with inverse
U K1 ηX , i.e., with inverse ηX . Since K1 X is a terminal object in EX and πK1 X is a terminal
object in the fibre of B → over X, we have that π preserves fibred terminal objects.
It is not hard to see that F → preserves fibred terminal objects: applying the functor F
to an isomorphism with codomain X — i.e., to a terminal object in the fibre of B → over
X — gives an isomorphism with codomain F X — i.e., a terminal object in the fibre of B →
over F X.
Finally, if f : X → Y is an isomorphism in B, then Σf is not only left adjoint to f ∗ ,
but also right adjoint to it. Since right adjoints preserve terminal objects, and since K1 X
is a terminal object of EX , we have that If = Σf (K1 X) is a terminal object of EY . Thus I
preserves fibred terminal objects.
We stress that, to define our lifting, the codomain functor over the base B of the Lawvere
category need not be a fibration. In particular, B need not have pullbacks; indeed, all that
is needed to construct our generic truth-preserving lifting F̂ for a functor F on B is the
existence of the functors I and π (and F → , which always exists). We nevertheless present
the lifting F̂ as the composition of π, F → , and I because this presentation shows it can be
factored through F → . This helps motivate our definition of F̂ , thereby revealing parallels
between it and F → that would otherwise not be apparent. At the same time it trades the
direct, brute-force presentation of F̂ from [9] for an elegant modularly structured one which
makes good use, in a different setting, of general results about comprehension categories
due to Jacobs [11].
We now have the promised sound generic fibrational induction rule for every inductive
functor F on the base of a Lawvere category. To demonstrate the flexibility of this rule, we
now derive an induction rule for a data type and properties on it that cannot be modelled
in Set. Being able to derive induction rules for fixed points of functors in categories other
than Set is a key motivation for working in a general fibrational setting.
Example 7. The fixed point Hyp = µF of the functor F X = (X → Int) → Int is the data
type of hyperfunctions. Since F has no fixed point in Set, we interpret it in the category
ωCP O⊥ of ω-cpos with ⊥ and strict continuous monotone functions. In this setting, a
property of an object X of ωCP O⊥ is an admissible sub-ωCP O⊥ P of X. Admissibility
means that the bottom element of X is in P and P is closed under least upper bounds of
ω-chains in X. This structure forms a Lawvere category [11, 12]; in particular, it is routine
to verify the existence of its opreindexing functor. In particular, Σf P is constructed for a
continuous map f : X → Y and an admisible predicate P ⊆ X, as the intersection of all
admissible Q ⊆ Y with P ⊆ f −1 (Q). The truth functor maps X to X, and comprehension
maps a sub-ωCP O⊥ P of X to P . The lifting F̂ maps a sub-ωCP O⊥ P of X to the least
admissible predicate on F X containing the image of F P . Finally, the derived induction
rule states that if P is an admissible sub-ωCP O⊥ of Hyp, and if F̂ (P ) ⊆ P , then P = Hyp.
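As an aside, the same fixed point can be written down directly in Haskell, whose types are interpreted in a CPO-like category; this rendering, including the names Hyp, runHyp and selfApply, is ours and is meant only to make the data type concrete:

  -- Hyp is a fixed point of F X = (X -> Int) -> Int; the recursive occurrence is
  -- in a negative position, so the fixed point exists in ωCPO⊥ but not in Set.
  newtype Hyp = Hyp { runHyp :: (Hyp -> Int) -> Int }

  -- A simple inhabitant: the hyperfunction that applies its argument to itself.
  selfApply :: Hyp
  selfApply = Hyp (\k -> k selfApply)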
4.4. An Algebra of Lifting. We have proved that in any Lawvere category U : E → B,
any functor F on B has a lifting F̂ on E which is truth-preserving, and thus has the following
associated sound generic fibrational induction rule:
genfibind : ∀ (F : B → B) (P : E). (F̂ P → P ) → (µF → {P })
genfibind F P = fold ◦ Ψ
In this final subsection of the paper, we ask what kinds of algebraic properties the lifting
operation has. Our first result concerns the lifting of constant functors.
Lemma 4.15. Let U : E → B be a Lawvere category and let X be an object of B. If FX is the constantly X-valued functor on B, then F̂X is isomorphic to the constantly K1 X-valued functor on E.
Proof. For any object P of E we have
F̂X P = (I (FX )→ π)P = I((FX )→ πP ) = ΣFX πP K1 (FX {P }) = Σid K1 X ≅ K1 X
The last isomorphism holds because id∗ ≅ Id and Σid ⊣ id∗ .
Our next result concerns the lifting of the identity functor. It requires a little additional
structure on the Lawvere category of interest.
Definition 4.16. A full Lawvere category is a Lawvere category U : E → B such that
π : E → B → is full and faithful.
Lemma 4.17. In any full Lawvere category, Îd ≅ Id.
Proof. By the discussion following Definition 4.13, I ⊣ π. Since π is full and faithful, the counit of this adjunction is an isomorphism, and so IπP ≅ P for all P in E. We therefore have that
P ≅ IπP = ΣπP K1 {P } = ΣId πP K1 (Id {P }) = (I Id→ π)P = Îd P
i.e., that Îd P ≅ P for all P in E. Because these isomorphisms are clearly natural, we therefore have that Îd ≅ Id.
We now show that the lifting of a coproduct of functors is the coproduct of the liftings.
Lemma 4.18. Let U : E → B be a Lawvere category and let F and G be functors on B.
Then \widehat{F + G} ≅ F̂ + Ĝ.
Proof. We have
\widehat{F + G} P = I((F + G)→ πP ) = I(F → πP + G→ πP ) ≅ I(F → πP ) + I(G→ πP ) = F̂ P + ĜP
The third isomorphism holds because I is a left adjoint and so preserves coproducts.
Note that the statement of Lemma 4.18 does not assert the existence of either of the
two coproducts mentioned, but rather that, whenever both do exist, they must be equal.
Note also that the lemma generalises to any colimit of functors. Unfortunately, no result
analogous to Lemma 4.18 can yet be stated for products.
Our final result considers whether or not there is anything fundamentally special about
the lifting we have constructed. It is clearly the “right” lifting in some sense because it
gives the expected induction rules. But other truth-preserving liftings might also exist and,
if this is the case, then we might hope our lifting satisfies some universal property. In
fact, under a further condition, which is also satisfied by all of the liftings of Hermida and
Jacobs, and which we therefore regard as reasonable, we can show that our lifting is the only
truth-preserving lifting. Our proof uses a line of reasoning which appears in Remark 2.13
in [10].
Lemma 4.19. Let U : E → B be a full Lawvere category and let F̃ be a truth-preserving lifting of a functor F on B. If F̃ preserves Σ-types — i.e., if F̃ (Σf P ) ≅ ΣF f (F̃ P ) — then F̃ ≅ F̂ .
Proof. We have
F̃ P ≅ F̃ (Îd P ) ≅ F̃ (ΣπP K1 {P }) ≅ ΣF πP (F̃ (K1 {P })) ≅ ΣF πP K1 (F {P }) = F̂ P
Finally, we can return to the question of the relationship between the liftings of polynomial
functors given by Hermida and Jacobs and the liftings derived by our methods. We have seen
that for constant functors, the identity functor, and coproducts of functors our constructions
agree. Moreover, since Hermida and Jacobs’ liftings all preserve Σ-types, Lemma 4.19
guarantees that in a full Lawvere category their lifting for products also coincides with
ours.
5. Conclusion and future work
We have given a sound induction rule that can be used to prove properties of data structures
of inductive types. Like Hermida and Jacobs, we give a fibrational account of induction,
but we derive, under slightly different assumptions on fibrations, a generic induction rule
that can be instantiated to any inductive type rather than just to polynomial ones. This
rule is based on initial algebra semantics of data types, and is parameterised over both
the data types and the properties involved. It is also principled, expressive, and correct.
Our derivation yields the same induction rules as Hermida and Jacobs’ when specialised to
polynomial functors in the families fibration over Set and in other fibrations, but it also
gives induction rules for non-polynomial data types such as rose trees, and for data types
such as finite hereditary sets and hyperfunctions, for which no fibrational induction rules
have previously been known to exist.
There are several directions for future work. The most immediate is to instantiate our
theory to give induction rules for more sophisticated data types, such as nested types. These
are exemplified by the data type of perfect trees given in Haskell-like syntax as follows:
data PTree a : Set where
PLeaf : a → PTree a
PNode : PTree (a, a) → PTree a
Nested types arise as least fixed points of rank-2 functors; for example, the type of perfect
trees is µH for the functor H given by HF = λX. X + F (X × X). An appropriate fibration
for induction rules for nested types thus takes B to be the category of functors on Set,
E to be the category of functors from Set to P, and U to be postcomposition with the
forgetful functor from Section 3. A lifting Ĥ of H is given by Ĥ P X (inl a) = 1 and
Ĥ P X (inr n) = P (X × X) n. Taking the premise to be an Ĥ-algebra gives the following
induction rule for perfect trees:
indPTree : ∀ (P : Set → P).
  (U P = PTree) → (∀(X : Set)(x : X). P X (PLeaf x)) →
  (∀(X : Set)(t : PTree (X × X)). P (X × X) t → P X (PNode t)) →
  ∀(X : Set)(t : PTree X). P X t
This rule can be used to show, for example, that PTree is a functor.
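To make the rank-2 structure tangible, here is a Haskell sketch of ours (not part of the paper; it requires the RankNTypes extension, and the names foldPTree, K and leaves are our own) of perfect trees together with their fold, whose polymorphic recursion mirrors the functor H:

  {-# LANGUAGE RankNTypes #-}

  data PTree a = PLeaf a | PNode (PTree (a, a))

  foldPTree :: (forall x. x -> r x) -> (forall x. r (x, x) -> r x) -> PTree a -> r a
  foldPTree leaf node (PLeaf a) = leaf a
  foldPTree leaf node (PNode t) = node (foldPTree leaf node t)

  -- Example: counting leaves, using a constant result functor.
  newtype K a = K { unK :: Int }

  leaves :: PTree a -> Int
  leaves = unK . foldPTree (\_ -> K 1) (\(K n) -> K (2 * n))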
Extending the above instantiation for the codomain fibration to so-called “truly nested
types” [15] and fibrations is current work. We expect to be able to instantiate our theory for
truly nested types, GADTs, indexed containers, dependent types, and inductive recursive
types, but initial investigations show care is needed. We must ascertain which fibrations can
model predicates on such types, since the codomain fibration may not give useful induction
rules, as well as how to translate the rules to which these fibrations give rise to an intensional
setting.
Matthes [15] gives induction rules for nested types (including truly nested ones) in an
intensional type theory. He handles only rank-2 functors that underlie nested types (while
we handle any functor of any rank with an initial algebra), but his insights may help guide
choices of fibrations for truly nested types. These may in turn inform choices for GADTs,
indexed containers, and dependent types.
Induction rules can automatically be generated in many type theories. Within the
Calculus of Constructions [4] an induction rule for a data type can be generated solely
from the inductive structure of that type. Such generation is also a key idea in the Coq
proof assistant [5]. As far as we know, generation can currently be done only for syntactic
classes of functors rather than for all inductive functors with initial algebras. In some
type theories induction schemes are added as axioms rather than generated. For example,
attempts to generate induction schemes based on Church encodings in the Calculus of
Constructions proved unsuccessful and so initiality was added to the system, thus giving the
Calculus of Inductive Constructions. Whereas Matthes’ work is based on concepts such as
impredicativity and induction recursion rather than initial algebras, ours reduces induction
to initiality, and may therefore help lay the groundwork for extending implementations of
induction to more sophisticated data types.
Acknowledgement
We thank Robert Atkey, Pierre-Evariste Dagand, Peter Hancock, and Conor McBride for
many fruitful discussions.
References
[1] T. Altenkirch and P. Morris. Indexed Containers. Proceedings, Logic in Computer Science, pp. 277–285,
2009.
[2] R. S. Bird and O. De Moor. Algebra of Programming. Prentice Hall, 1997.
[3] R. Bird and L. Meertens. Nested Datatypes. Proceedings, Mathematics of Program Construction, pp.
52–67, 1998.
[4] T. Coquand and G. Huet. The Calculus of Constructions. Information and Computation 76 (2-3), pp.
95–120, 1988.
[5] The Coq Proof Assistant. Available at coq.inria.fr
[6] P. Dybjer. A General Formulation of Simultaneous Inductive-Recursive Definitions in Type Theory.
Journal of Symbolic Logic 65 (2), pp. 525–549, 2000
[7] N. Ghani, M. Abbott, and T. Altenkirch. Containers - Constructing Strictly Positive Types. Theoretical
Computer Science 341 (1), pp. 3–27, 2005.
[8] N. Ghani and P. Johann. Foundations for Structured Programming with GADTs. Proceedings, Principles
of Programming Languages, pp. 297–308, 2008.
[9] N. Ghani and P. Johann and C. Fumex. Fibrational Induction Rules for Initial Algebras. Proceedings,
Computer Science Logic, pp. 336–350, 2010.
[10] C. Hermida and B. Jacobs. Structural Induction and Coinduction in a Fibrational Setting. Information
and Computation 145 (2), pp. 107–152, 1998.
[11] B. Jacobs. Comprehension Categories and the Semantics of Type Dependency. Theoretical Computer
Science 107, pp. 169–207, 1993.
[12] B. Jacobs. Categorical Logic and Type Theory. North Holland, 1999.
[13] P. Johann and N. Ghani. Initial Algebra Semantics is Enough! Proceedings, Typed Lambda Calculus
and Applications, pp. 207–222, 2007.
[14] F. W. Lawvere. Equality in Hyperdoctrines and Comprehension Schema as an Adjoint Functor. Applications of Categorical Algebra, pp. 1–14, 1970.
[15] R. Matthes. An Induction Principle for Nested Datatypes in Intensional Type Theory. Journal of Functional Programming 19 (3&4), pp. 439–468, 2009.
[16] N. P. Mendler. Predicative type universes and primitive recursion. Proceedings, Logic in Computer
Science, pp. 173–184, 1991.
[17] P. Morris. Constructing Universes for Generic Programming. Dissertation, University of Nottingham,
2007.
[18] E. Moggi. Notions of Computation and Monads. Information and Computation 93 (1), pp. 55–92,
1991.
[19] B. Nordström, K. Petersson, and J. Smith. Programming in Martin-Löf's Type Theory. Oxford University Press, 1990.
[20] D. Pavlovič. Predicates and Fibrations. Dissertation, University of Utrecht, 1990.
[21] T. Sheard. Languages of the Future. SIGPLAN Notices 39 (10), pp. 116–119, 2004.
This work is licensed under the Creative Commons Attribution-NoDerivs License. To view
a copy of this license, visit http://creativecommons.org/licenses/by-nd/2.0/ or send a
letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or
Eisenacher Strasse 2, 10777 Berlin, Germany
| 6 |
Performance analysis of local ensemble Kalman filter
arXiv:1705.10598v2 [math.PR] 21 Mar 2018
Xin T. Tong∗
March 22, 2018
Abstract
Ensemble Kalman filter (EnKF) is an important data assimilation method for high
dimensional geophysical systems. Efficient implementation of EnKF in practice often
involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for
linear systems, and shows that the filter error can be dominated by the ensemble covariance, as long as 1) the sample size exceeds the logarithm of the state dimension times a constant that depends only on the local radius; 2) the forecast covariance matrix admits
a stable localized structure. In particular, this indicates that with small system and
observation noises, the filter error will be accurate in long time even if the initialization
is not. The analysis also reveals an intrinsic inconsistency caused by the localization
technique, and a stable localized structure is necessary to control this inconsistency.
While this structure is usually taken for granted for the operation of LEnKF, it can also
be rigorously proved for linear systems with sparse local observations and weak local
interactions. These theoretical results are also validated by numerical implementation
of LEnKF on a simple stochastic turbulence in two dynamical regimes.
1 Introduction
Data assimilation is a sequential procedure, in which observations of a dynamical system
are incorporated to improve the forecasts of that system. In many of its most important
geoscience and engineering applications, the main challenge comes from the high dimensionality of the system. For contemporary atmospheric models, the dimension can reach
d ∼ 10^8 , and the classical particle filter is no longer feasible [1, 2]. The ensemble Kalman
filter (EnKF) was invented by meteorologists [3, 4, 5] to resolve this issue. By sampling the
forecast uncertainty with a small ensemble, and then employing Kalman filter procedures
to the empirical distribution, EnKF can often capture the major uncertainty and produce
accurate predictions. The simplicity and efficiency of EnKF have made it a popular choice
for weather forecasting and oil reservoir management [6, 7].
One fundamental technique employed by EnKF is localization [8, 4, 9, 10, 11]. In most
geophysical applications, each component [X]i of the state variable X holds information of
one spatial location. There is a natural distance d(i, j) between two components. In most
∗ National University of Singapore, [email protected]
physical systems, the covariance between [X]i and [X]j is formed by information propagation
in space, intuitively its strength decays with the distance d(i, j). In particular, when d(i, j)
exceeds a threshold L, the covariance is approximately zero. This is a special sparse and
localized structure that can be exploited in the EnKF operation. In particular, the forecast
covariance can be artificially enforced as zero if d(i, j) > L. In other words, there is no need
to sample these covariance terms, and indeed sampling from them leads to higher errors [4].
Such modification significantly reduces the sampling difficulty and the associated sample
size. This is crucial for EnKF operation, since often only a few hundred samples can be
generated in practice. Various versions of localized EnKF (LEnKF) are derived based on
this principle, and there is ample numerical evidence showing their performance is robust
against the growth of dimension [4, 9, 10, 11, 12, 13, 14, 15, 16, 17]. Moreover, there is a
growing interest in applying the same technique to the classical particle filters [18, 19, 20].
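Written out, the simplest form of the localization just described is an entrywise (Schur) taper of the sampled forecast covariance with a hard cutoff at radius L. The display below is our schematic illustration of this idea (and our notation D_L), not a formula quoted from later sections; smoother tapers such as Gaspari–Cohn are also common in practice:

  [\widehat{C} \circ D_L]_{i,j} \;=\; [\widehat{C}]_{i,j}\,[D_L]_{i,j},
  \qquad
  [D_L]_{i,j} \;=\; \mathbf{1}_{\{d(i,j) \le L\}} ,

so that entries between components farther apart than the radius L are set to zero before the Kalman update.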
While there is a consensus on the importance of the localization technique for EnKF,
currently there is no rigorous explanation of its success. This paper contributes to this issue
by showing that in the long run, the LEnKF can reach its estimated performance for linear
systems, if the ensemble size K exceeds DL log d, and the ensemble covariance matrix admits
a stable localized structure of radius L. The constant DL above depends on the radius L
but not on d.
Showing the necessary sampling size has only logarithmic dependence on d is our major
interest. In the simpler scenario of sampling a static covariance matrix, [21] shows that the
necessary sample size scales with DL log d. Generalizing this result to the setting of EnKF
is highly nontrivial, since the target covariance matrix evolves constantly in time, and the
sampling error at one time step has a nonlinear impact on future iterations. By analyzing the
filter forecast error evolution, and comparing it with the filter covariance evolution, we show
the filter error covariance can be dominated by the ensemble covariance with high probability.
In other words, the LEnKF can reach its estimated performance.
One important corollary is that if the system and observation noise are of scale √ǫ, then the error covariance scales as ǫ, which indicates that LEnKF can be accurate regardless of the initial condition. Such
property is often termed as accuracy for practical filters or observers [22, 23, 24].
Interestingly, our analysis also captures an intrinsic inconsistency caused by the localization technique. Generally speaking, the localization technique can be applied to the ensemble
covariance matrix, but not the ensemble. However, the Kalman update is applied to the ensemble, but not to the localized ensemble covariance matrix. As these two operations do not
commute, an inconsistency emerges, which we will call the localization inconsistency. This
phenomenon has been mentioned in [9, 25]. Moreover, [15] numerically examines its role
with serial observation processing, and shows that it may lead to significant filter error. In
correspondence to these findings, one crucial step in our analysis is showing that the localization inconsistency is controllable, if the forecast covariance matrix indeed has a localized
structure.
While most applications of LEnKF assume the underlying covariance matrices are localized, rigorous justification of this assumption is sorely missing in the literature. A recent
work [26] considers applying a projection to the continuous time Kalman-Bucy filter, and
shows that if the projection is a small perturbation on the covariance matrix, its impact on
the filter process is also small. It is shown through an example that if the filter system can
be decoupled into independent local parts, a projection similar to the LEnKF localization
procedure can be made. Unfortunately, in most practical problems, all spatial dimensions
are coupled with local interactions, and it is very difficult to show that the localization
procedure is a small perturbation.
This paper partially investigates the theoretical gaps mentioned above. We show that
for linear systems with weak local interactions and sparse local observations, the localized
structure is stable for the LEnKF ensemble covariance. Weak local interaction is an intuitive
requirement, else fast information propagation will form strong covariances between far away
locations. Sparse local observation, on the other hand, is assumed to simplify the assimilation
formulas.
In rough words, our main results consist of the following statements.
1. To sample a localized covariance matrix correctly, the necessary sample size scales with
DL log d (Theorem 2.1). This reveals the sampling advantage gained by applying the
localization procedure.
2. While localization improves the sampling, it creates an inconsistency in the assimilation
steps. For the LEnKF ensemble covariance to capture the filter error covariance with
DL log d samples, the localization inconsistency needs to be small (Theorem 2.4).
3. One way to guarantee a small localization inconsistency is to have a stable localized
structure in the forecast ensemble covariance matrix (Proposition 2.3).
4. The LEnKF forecast covariance has a stable localized structure, if the underlying
linear system has weak interactions and sparse local observations (Theorem 2.5). So
by points 2 and 3, we know that LEnKF has good forecast skills, since its ensemble
covariance captures the true filter error covariance.
5. The results above scale linearly with the variance of the noises. So when applying
LEnKF to a linear system with small system and observation noises, its long time
performance is accurate (Theorem 2.7).
Section 2 will provide the setup of our problem, and present the precise statements of the
main results. The implication of these results on the issue of localized radius is discussed in
Section 2.6.
Section 3 verifies the theoretical results by implementing LEnKF on a stochastically
forced dissipative advection equation [6]. One stable and one unstable dynamical regime
are tested. In both of them, LEnKF has shown robust forecast skill with only K = 10
ensemble members, while the dimension varies between 10 and 1000. Moreover the localized
covariance structure and the accuracy with small noises can also be verified for LEnKF in
both regimes.
Section 4 investigates the covariance sampling problem of LEnKF, and proves Theorem
2.1. Section 5 analyzes the localization inconsistency and filter error evolution. It contains
the proofs of Theorem 2.4 and Proposition 2.3. Section 6 studies the localized structure
of linear systems with weak local interactions and sparse observations, and shows that the
small noise scaling can be applied to our results. Section 7 concludes this paper and discusses
some interesting extensions.
2 Main Results
2.1 Problem Setup
Since its invention, the ensemble Kalman filter (EnKF) has been modified constantly for two
decades, and its formulation has become rather sophisticated today. In this subsection we
briefly review some of the key modifications, in particular the localization techniques.
The following notation will be used throughout the paper. For two vectors $a$ and $b$, $\|a\|$ denotes the $l_2$ norm of $a$, and $a \otimes b$ denotes the matrix $ab^T$. A square bracket with subscripts indicates a component or entry of an object, so $[a]_i$ is the $i$-th component of the vector $a$. In particular, we use $e_i$ to denote the $i$-th standard basis vector, i.e. $[e_i]_j = 1_{i=j}$.
Given a matrix $A$, $[A]_{i,j}$ is its $(i,j)$-th entry. The $l_2$ operator norm is denoted by $\|A\| = \inf\{c : \|Av\| \le c\|v\|,\ \forall v\}$. The $l_\infty$ operator norm is denoted by $\|A\|_1 = \max_i \sum_j |[A]_{i,j}|$. The maximum absolute entry is denoted by $\|A\|_\infty = \max_{i,j} |[A]_{i,j}|$. We also use $I_m$ to denote the $m \times m$ identity matrix. Given two matrices $A$ and $D$, their Schur (Hadamard) product is defined by the entry-wise product
$$[A \circ D]_{i,j} = [A]_{i,j}[D]_{i,j}.$$
For two real symmetric matrices $A$ and $B$, $A \preceq B$ indicates that $B - A$ is positive semidefinite.
Ensemble Kalman Filter
In this paper, we consider a linear system in $\mathbb{R}^d$ with partial observations,
$$X_{n+1} = A_n X_n + b_n + \xi_{n+1}, \qquad \xi_{n+1} \sim N(0, \Sigma_n),$$
$$Y_{n+1} = H X_{n+1} + \zeta_{n+1}, \qquad \zeta_{n+1} \sim N(0, \sigma_o^2 I_q). \tag{2.1}$$
Throughout our discussion, we assume the matrices $A_n$, $\Sigma_n$ are bounded:
$$\|A_n\| \le M_A, \qquad m_\Sigma I_d \preceq \Sigma_n \preceq M_\Sigma I_d.$$
The time-inhomogeneous generality can be used to model intermittent dynamical systems [6, 27]. We assume that the observations are made at $q < d$ distinct locations $\{o_1, o_2, \dots, o_q\} \subset \{1, \dots, d\}$. This can be modelled by letting
$$[H]_{k,j} = 1_{j = o_k}, \qquad 1 \le k \le q,\ 1 \le j \le d. \tag{2.2}$$
Note that the operator norm $\|H\| = 1$.
It is well known that the optimal estimate of Xn given historical observations Y1 , . . . , Yn
is provided by the Kalman filter [28], assuming X0 is Gaussian distributed. Unfortunately,
direct implementation of the Kalman filter involves a stepwise computational complexity of $O(d^2 q)$. When the state dimension $d$ is high, the Kalman filter is not computationally feasible.
The ensemble Kalman filter (EnKF) was invented by meteorologists [5] to reduce the computational complexity. $K$ samples of (2.1) are updated using the Kalman filter rules, and their ensemble mean and covariance are employed to estimate the signal $X_n$. Specifically, suppose the posterior ensemble for $X_n$ is denoted by $\{X^{(k)}_n\}_{k=1,\dots,K}$. The forecast ensemble for $X_{n+1}$ is first generated by propagating the linear system in (2.1):
$$\widehat{X}^{(k)}_{n+1} = A_n X^{(k)}_n + b_n + \xi^{(k)}_{n+1}, \qquad \xi^{(k)}_{n+1} \sim N(0, \Sigma_n).$$
The EnKF then estimates $X_{n+1}$ with a prior distribution $N(\widehat{\overline{X}}_{n+1}, \widehat{C}_{n+1})$, where the mean and covariance are obtained from the forecast ensemble:
$$\widehat{\overline{X}}_{n+1} = \frac{1}{K}\sum_{k=1}^K \widehat{X}^{(k)}_{n+1}, \qquad \Delta\widehat{X}^{(k)}_{n+1} := \widehat{X}^{(k)}_{n+1} - \widehat{\overline{X}}_{n+1}, \qquad \widehat{C}_{n+1} = \frac{1}{K}\sum_{k=1}^K \Delta\widehat{X}^{(k)}_{n+1} \otimes \Delta\widehat{X}^{(k)}_{n+1}.$$
Applying Bayes' formula to the prior distribution and the linear observation $Y_{n+1}$, a target Gaussian posterior distribution for $X_{n+1}$ can be obtained. There are several ways to update the forecast ensemble so that its statistics approximate the target ones. Here we consider the standard EnKF of [5, 6] with artificial perturbations:
$$X^{(k)}_{n+1} = (I - \widetilde{K}_{n+1} H)\widehat{X}^{(k)}_{n+1} + \widetilde{K}_{n+1} Y_{n+1} - \widetilde{K}_{n+1}\zeta^{(k)}_{n+1}. \tag{2.3}$$
The Kalman gain matrix is given by $\widetilde{K}_{n+1} = \widehat{C}_{n+1} H^T(\sigma_o^2 I_q + H\widehat{C}_{n+1} H^T)^{-1}$. The $\zeta^{(k)}_{n+1}$ are independent noises sampled from $N(0, \sigma_o^2 I_q)$.
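To make the update (2.3) concrete, here is a minimal Python/NumPy sketch of one EnKF forecast-analysis cycle for the linear model (2.1). The variable names (`enkf_step`, `ens`, `A`, `Sigma`) are illustrative choices of ours, not notation from the paper, and efficiency is ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_step(ens, y, A, b, Sigma, H, sigma_o):
    """One forecast + perturbed-observation analysis step of the standard EnKF.

    ens: (K, d) posterior ensemble at time n; y: (q,) observation at time n+1.
    Returns the (K, d) posterior ensemble at time n+1.
    """
    K, d = ens.shape
    q = H.shape[0]
    # Forecast step: propagate every member through the linear dynamics.
    xi = rng.multivariate_normal(np.zeros(d), Sigma, size=K)
    fc = ens @ A.T + b + xi
    # Forecast mean and covariance from the ensemble.
    fc_mean = fc.mean(axis=0)
    dX = fc - fc_mean
    C_hat = dX.T @ dX / K
    # Kalman gain built from the ensemble covariance.
    S = sigma_o**2 * np.eye(q) + H @ C_hat @ H.T
    K_gain = C_hat @ H.T @ np.linalg.solve(S, np.eye(q))
    # Analysis step with artificially perturbed observations, as in (2.3).
    zeta = sigma_o * rng.standard_normal((K, q))
    innov = y - zeta - fc @ H.T
    return fc + innov @ K_gain.T

# Tiny usage example with d = 4, q = 2, K = 20.
d, q, Kens = 4, 2, 20
A = 0.9 * np.eye(d); b = np.zeros(d); Sigma = 0.1 * np.eye(d)
H = np.zeros((q, d)); H[0, 0] = H[1, 2] = 1.0
ens = rng.standard_normal((Kens, d))
ens = enkf_step(ens, rng.standard_normal(q), A, b, Sigma, H, sigma_o=1.0)
```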
The computational complexity of EnKF is roughly $O(K^2 d)$, assuming $A_n$ and $\Sigma_n$ are sparse [29]. In practice, the ensemble size $K$ is often less than a few hundred, so the operational speed is significantly improved. On the other hand, with the sample size $K$ much smaller than the state space dimension $d$, the sample covariance $\widehat{C}_{n+1}$ often produces spurious correlations [30, 5]. Spurious correlations may seriously reduce the filter accuracy, since the Kalman filter operation hinges heavily on the correctness of the covariance estimate. Localization techniques are often employed to resolve such problems.
Localization techniques
In most geophysical applications, each dimension index $i \in \{1,\dots,d\}$ corresponds to a spatial location. For simplicity, we assume different indices correspond to different spatial locations. Let $d(i,j)$ be the spatial distance between the locations that $i$ and $j$ specify; then $d$ is also a distance on the index set $\{1,\dots,d\}$. In other words,
• $d(i,j) = 0$ if and only if $i = j$;
• $d(i,j) = d(j,i)$;
• $d(i,j) + d(j,k) \ge d(i,k)$.
For a simple example, one can identify index $i$ with the integer $i$; then $d(i,j) = |i - j|$ clearly defines a distance.
For most geophysical problems that can be modeled by a (stochastic) partial differential equation, the covariance between two locations is caused by the propagation of information through local interactions. Information is often also dissipated during its propagation, so its impact becomes less significant when it reaches far-away locations. This leads to a localized covariance structure. In other words, there is a decreasing function $\phi : [0,\infty) \to [0,1]$ with $\phi(0) = 1$ such that
$$[C_n]_{i,j} \propto \phi(d(i,j)).$$
In geophysical applications, a localization radius $l$ is often defined, so that $\phi(x) = 0$ for $x > l$. Consequently, it is natural to model the localization matrix as
$$[D_l]_{i,j} = \phi(d(i,j)). \tag{2.4}$$
In particular, the widely used Gaspari-Cohn matrix [31] is of this form with
$$\phi(x) = \Big(1 + \frac{x}{c_l}\Big)\exp\Big(-\frac{x}{c_l}\Big)1_{x\le l}, \tag{2.5}$$
where the radius is often picked as $l = \sqrt{10/3}\,c_l$ or $2c_l$ [32]. Another simple localization matrix corresponds to the cutoff or Heaviside function $\phi(x) = 1_{x\le l}$, and we denote it by $D_l^{cut}$. In other words,
$$[D_l^{cut}]_{i,j} = 1_{d(i,j)\le l}. \tag{2.6}$$
As a remark, while (2.5) is more useful in practice, (2.6) is much simpler for theoretical analysis and interpretation. Most of our analysis results below apply only to (2.6), except Theorem 2.1. It would be very interesting to generalize the analysis framework here to localization functions like (2.5).
The notion of localization radius is closely related to the bandwidth of a matrix [33]. For a matrix $A$, we define its bandwidth as
$$l := \inf\{x \ge 0 : [A]_{i,j} = 0 \text{ if } d(i,j) > x\}. \tag{2.7}$$
The bandwidth roughly captures how fast different components interact with each other. If $A$ has bandwidth $l$, each component interacts with at most $B_l$ components under multiplication by $A$, where the volume constant $B_l$ is defined by
$$B_l = \max_i \#\{j : d(i,j) \le l\}. \tag{2.8}$$
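As a quick illustration of (2.4)-(2.8), the following sketch builds the cutoff localization matrix $D_l^{cut}$, the bandwidth and the volume constant $B_l$ for a periodic distance of the kind used later in Section 3. The function names and the periodic choice of $d(i,j)$ are illustrative assumptions of ours, not definitions from the paper.

```python
import numpy as np

def periodic_distance(d):
    """Matrix of distances d(i, j) = min(|i - j|, d - |i - j|) on a periodic index set."""
    idx = np.arange(d)
    diff = np.abs(idx[:, None] - idx[None, :])
    return np.minimum(diff, d - diff)

def cutoff_localization(d, l):
    """Cutoff localization matrix (2.6): entry 1 if d(i, j) <= l, else 0."""
    return (periodic_distance(d) <= l).astype(float)

def bandwidth(A, dist):
    """Bandwidth (2.7): the largest distance d(i, j) at which A has a nonzero entry."""
    nz = np.abs(A) > 0
    return dist[nz].max() if nz.any() else 0

def volume_constant(dist, l):
    """Volume constant (2.8): B_l = max_i #{j : d(i, j) <= l}."""
    return int((dist <= l).sum(axis=1).max())

# Example: on a periodic grid of size 10 a circulant tridiagonal matrix has
# bandwidth 1, and B_1 = 3 (each index plus its two neighbours).
dist = periodic_distance(10)
A = np.diag(np.ones(10)) + np.diag(0.1 * np.ones(9), 1) + np.diag(0.1 * np.ones(9), -1)
A[0, 9] = A[9, 0] = 0.1
print(bandwidth(A, dist), volume_constant(dist, 1))   # -> 1 3
```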
A localized covariance structure is extremely useful for EnKF. It indicates that only covariances between nearby indices are worth sampling. By ignoring the far-apart covariances, the necessary sample size can be significantly reduced. To apply this idea, the localization technique modifies the Kalman gain matrix in (2.3), and ensures that the assimilation updates coming from far-away observations are insignificant. There are two main types of localization methods in the literature, domain localization and covariance localization [14]. This paper discusses only the former, although a similar analysis should in principle apply to the latter as well.
With domain localization, the $i$-th component is updated using only observations at indices within distance $l$, which are the elements of $I_i = \{j : d(i,j) \le l\}$. Let $P_{I_i}$ be the projection matrix of an $\mathbb{R}^d$ vector onto its components in $I_i$; note that it is diagonal, hence symmetric. Then $\widehat{C}^i_{n+1} := P_{I_i}\widehat{C}_{n+1}P_{I_i}$ contains the local covariance relevant to the $i$-th component. The corresponding Kalman gain is
$$K^i_{n+1} = \widehat{C}^i_{n+1} H^T(\sigma_o^2 I_q + H\widehat{C}^i_{n+1}H^T)^{-1}, \tag{2.9}$$
and the $i$-th component is updated using the $i$-th row of (2.9), namely $e_ie_i^TK^i_{n+1}$. Again, $e_i$ is the $i$-th standard basis vector of $\mathbb{R}^d$. The final Kalman gain matrix patches all the rows together:
$$\widehat{K}_{n+1} = \sum_{i=1}^d e_ie_i^TK^i_{n+1}. \tag{2.10}$$
Since each $K^i_{n+1}$ has nonzero entries only at indices in $I_i \times I_i$, $\widehat{K}_{n+1}H$ is of bandwidth $l$ as well; the proof of Proposition 2.3 below verifies this. Therefore, each component is updated using observations at distance at most $l$ from it.
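The following sketch assembles the domain-localized gain (2.10) row by row as described above. It is a naive loop over components, intended only to make the construction explicit; the names (`localized_gain`, `dist`) are ours, not the paper's.

```python
import numpy as np

def localized_gain(C_hat, H, sigma_o, dist, l):
    """Domain-localized Kalman gain (2.9)-(2.10).

    C_hat: (d, d) forecast ensemble covariance; H: (q, d) observation matrix of form (2.2);
    dist: (d, d) matrix of spatial distances d(i, j). Returns the (d, q) patched gain.
    """
    d = C_hat.shape[0]
    q = H.shape[0]
    K_hat = np.zeros((d, q))
    for i in range(d):
        P = np.diag((dist[i] <= l).astype(float))    # projection onto the domain I_i
        Ci = P @ C_hat @ P                           # local covariance P_{I_i} C_hat P_{I_i}
        S = sigma_o**2 * np.eye(q) + H @ Ci @ H.T
        Ki = Ci @ H.T @ np.linalg.inv(S)             # local gain (2.9)
        K_hat[i] = Ki[i]                             # keep only the i-th row, as in (2.10)
    return K_hat

# Example: d = 20 periodic grid, observations every 5 sites, radius l = 2.
d, l = 20, 2
idx = np.arange(d); diff = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(diff, d - diff)
H = np.zeros((4, d)); H[np.arange(4), np.arange(0, d, 5)] = 1.0
C_hat = np.exp(-dist)                                # a localized covariance for illustration
K_hat = localized_gain(C_hat, H, 1.0, dist, l)
```

One can check numerically on such examples that the resulting product of the gain with $H$ has bandwidth $l$, as claimed above.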
Localized EnKF with covariance inflation
Besides spurious correlations, a small sample size also jeopardizes the EnKF operation, as the forecast covariance is often underestimated [34, 35, 23]. In order to resolve this issue, the covariance needs to be inflated by a fixed ratio $r > 1$. [23] has shown that these modifications are pivotal to EnKF performance. We also incorporate this idea in our LEnKF.
In summary, the localized EnKF (LEnKF) updates a posterior ensemble $\{X^{(k)}_n,\ k = 1,\dots,K\}$, with mean $\overline{X}_n = \frac1K\sum_{k=1}^K X^{(k)}_n$ and spread $\Delta X^{(k)}_n = X^{(k)}_n - \overline{X}_n$, through the following steps, with $\widehat{K}_{n+1}$ given by (2.9) and (2.10):
$$\widehat{\overline{X}}_{n+1} = A_n\overline{X}_n + b_n, \qquad \Delta\widehat{X}^{(k)}_{n+1} = \sqrt{r}\,\big(A_n\Delta X^{(k)}_n + \xi^{(k)}_{n+1}\big), \qquad \xi^{(k)}_{n+1}\sim N(0,\Sigma_n),$$
$$\widehat{C}_{n+1} = \frac1K\sum_{k=1}^K \Delta\widehat{X}^{(k)}_{n+1}\otimes\Delta\widehat{X}^{(k)}_{n+1},$$
$$\overline{X}_{n+1} = (I - \widehat{K}_{n+1}H)\widehat{\overline{X}}_{n+1} + \widehat{K}_{n+1}Y_{n+1}, \qquad \Delta X^{(k)}_{n+1} = (I - \widehat{K}_{n+1}H)\Delta\widehat{X}^{(k)}_{n+1} + \widehat{K}_{n+1}\zeta^{(k)}_{n+1}, \qquad \zeta^{(k)}_{n+1}\sim N(0,\sigma_o^2I_q). \tag{2.11}$$
The posterior covariance matrix can be obtained from the spread:
$$C_{n+1} = \frac1K\sum_{k=1}^K \Delta X^{(k)}_{n+1}\otimes\Delta X^{(k)}_{n+1}.$$
Note that here we update the mean and the ensemble spread (the $\Delta$ terms) separately. This is different from the standard EnKF, since the average noise terms $\frac1K\sum_k\xi^{(k)}_{n+1}$ and $\frac1K\sum_k\zeta^{(k)}_{n+1}$ are ignored for simplicity. Also, the sum of the ensemble spreads, $\sum_k\Delta X^{(k)}_n$, may not be zero. On the other hand, these differences are small by the law of large numbers. The proofs can also be generalized to admit these terms, but the discussion would be notationally complicated.
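For completeness, here is a minimal sketch of one LEnKF cycle (2.11), updating the mean and the spreads separately and applying the multiplicative inflation $\sqrt{r}$ to the forecast spread. The domain-localized gain of the form (2.9)-(2.10) is inlined in a compact version; all names and parameter choices are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def lenkf_step(x_mean, dX, y, A, b, Sigma, H, sigma_o, dist, l, r):
    """One LEnKF cycle (2.11). x_mean: (d,) mean, dX: (K, d) spreads, y: (q,) observation."""
    K, d = dX.shape
    q = H.shape[0]
    # Forecast: mean and inflated spreads.
    xf = A @ x_mean + b
    xi = rng.multivariate_normal(np.zeros(d), Sigma, size=K)
    dXf = np.sqrt(r) * (dX @ A.T + xi)
    C_hat = dXf.T @ dXf / K
    # Domain-localized gain, assembled row by row as in (2.9)-(2.10).
    K_hat = np.zeros((d, q))
    for i in range(d):
        P = np.diag((dist[i] <= l).astype(float))
        Ci = P @ C_hat @ P
        Ki = Ci @ H.T @ np.linalg.inv(sigma_o**2 * np.eye(q) + H @ Ci @ H.T)
        K_hat[i] = Ki[i]
    # Analysis: the mean uses the observation, the spreads use artificial perturbations.
    x_mean_new = xf + K_hat @ (y - H @ xf)
    zeta = sigma_o * rng.standard_normal((K, q))
    dX_new = dXf - dXf @ H.T @ K_hat.T + zeta @ K_hat.T
    return x_mean_new, dX_new, C_hat
```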
One classical property of the Kalman filter is that the filter covariances and the Kalman gain matrices are predetermined, with no dependence on the realization of system (2.1). This is inherited by the LEnKF (2.11): the covariances and Kalman gains depend only on the sample noise realizations $\xi^{(k)}_n$, $\zeta^{(k)}_n$, but not on $(X_n, Y_n)$.
To illustrate, consider the filtration generated by the sample noise realization,
$$\mathcal{F}^S_n = \sigma\{\Delta\widehat{X}^{(k)}_0,\ \xi^{(k)}_t,\ \zeta^{(k)}_{t-1},\ t = 1,\dots,n,\ k = 1,\dots,K\}. \tag{2.12}$$
Using induction, it is easy to verify that the ensemble spread, the ensemble covariance and the Kalman gain are all $\mathcal{F}^S_n$-adapted:
$$\Delta\widehat{X}^{(k)}_n,\ \Delta X^{(k)}_{n-1},\ \widehat{C}_n,\ C_{n-1},\ \widehat{K}_n \in \mathcal{F}^S_n.$$
The corresponding conditional expectation is denoted by $\mathbb{E}_{\mathcal{F}^S_n}$. We will use $\mathcal{F}^S_\infty = \bigvee_n\mathcal{F}^S_n$ to denote the $\sigma$-field generated by all ensemble spread information.
The other source of randomness in the EnKF comes from the realization of system (2.1). We can average out this part of the randomness by conditioning on $\mathcal{F}^S_\infty$, which we denote by $\mathbb{E}_S$. This is useful when comparing the filter error and the sample covariance. The natural filtration generated by all random outcomes at time $n$ is
$$\mathcal{F}_n = \sigma\{X_0,\ \widehat{X}^{(k)}_0,\ \xi_t,\ \zeta_{t-1},\ \xi^{(k)}_t,\ \zeta^{(k)}_{t-1},\ t = 1,\dots,n,\ k = 1,\dots,K\}.$$
We denote the conditional expectation with respect to $\mathcal{F}_n$ by $\mathbb{E}_n$.
2.2 Sampling errors of localized forecast covariance
Since the EnKF relies on the ensemble forecast covariance matrix to assimilate new observations, its performance depends on the accuracy of the sampling procedure. The sampling procedure updates the forecast matrix from time $n$ to $n+1$.
Given the forecast ensemble covariance $\widehat{C}_n$, based on the Kalman update rule, the inflated target forecast covariance at $n+1$ is given by $rR_n(\widehat{C}_n)$, with the posterior Riccati map
$$R_n(\widehat{C}_n) := A_n(I - \widehat{K}_nH)\widehat{C}_n(I - \widehat{K}_nH)^TA_n^T + \sigma_o^2A_n\widehat{K}_n\widehat{K}_n^TA_n^T + \Sigma_n. \tag{2.13}$$
The real ensemble forecast covariance $\widehat{C}_{n+1} = \frac1K\sum_k\Delta\widehat{X}^{(k)}_{n+1}\otimes\Delta\widehat{X}^{(k)}_{n+1}$ is generated by the ensemble spread
$$\Delta\widehat{X}^{(k)}_{n+1} = \sqrt{r}A_n(I - \widehat{K}_nH)\Delta\widehat{X}^{(k)}_n + \sqrt{r}A_n\widehat{K}_n\zeta^{(k)}_n + \sqrt{r}\xi^{(k)}_{n+1}. \tag{2.14}$$
It is straightforward to verify that the average of $\widehat{C}_{n+1}$ over $\zeta^{(k)}_n$ and $\xi^{(k)}_{n+1}$ matches $rR_n(\widehat{C}_n)$, that is, $\mathbb{E}_n\widehat{C}_{n+1} = rR_n(\widehat{C}_n)$.
In order to control the sampling error $\|\widehat{C}_{n+1} - rR_n(\widehat{C}_n)\|$, it is necessary to have a sufficiently large $K$. Unfortunately, the size of $K$ would need to grow linearly with $d$ [21]. As a simple example, let $\widehat{C}_n = \widehat{K}_n = 0$, $\Sigma_n = I_d$, $r = 1$; then $\Delta\widehat{X}^{(k)}_{n+1} = \xi^{(k)}_{n+1}$ are i.i.d. samples from $N(0, I_d)$, and the target matrix is $I_d$. Yet $\|\widehat{C}_{n+1}\| \approx (1 + \sqrt{d/K})^2$ with high probability by the Bai-Yin law [36]. In practical settings, $K \ll d$, so the sample covariance is unlikely to be accurate.
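This dimension dependence is easy to reproduce numerically: for i.i.d. $N(0, I_d)$ samples the spectral error of the raw sample covariance grows roughly like $\sqrt{d/K}$, while the individual entries stay accurate. The following is only an illustrative Monte Carlo check of ours, not part of the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 50                                   # fixed ensemble size
for d in (10, 100, 1000):
    z = rng.standard_normal((K, d))
    C = z.T @ z / K                      # sample covariance of N(0, I_d) samples
    spec_err = np.linalg.norm(C - np.eye(d), 2)      # spectral (l2 operator) error
    diag_err = np.max(np.abs(np.diag(C) - 1.0))      # worst diagonal entry error
    print(d, round(spec_err, 2), round(diag_err, 2))
# The spectral error grows with d at fixed K, while the entry-wise error
# grows only very slowly; this is the gap that localization exploits.
```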
As discussed in Section 2.1, the main idea of localization is that we assume the target covariance $R_n(\widehat{C}_n)$ is localized, so it suffices to consider $R_n(\widehat{C}_n)\circ D_L$, which can be sampled by $\widehat{C}_{n+1}\circ D_L$. Here $D_L$ can be any matrix of the form (2.4), where its radius $L$ does not need to match the $l$ used in (2.9). In fact, we will mostly use $D_L = D_L^{cut}$ (2.6) with $L \ge 4l$ in our discussion. One important advantage gained by localization is that, in order for the covariance sampling to be accurate, that is, for $\|(\widehat{C}_{n+1} - R_n(\widehat{C}_n))\circ D_L\|$ to be small, the necessary sample size scales only with $D_L\log d$, instead of $d$, where $D_L$ is some constant that depends only on $L$. This phenomenon was discovered in statistics [21], assuming the samples are generated from one fixed distribution. But in the EnKF, the conditional mean of each sample is different, i.e. $\mathbb{E}_n\Delta\widehat{X}^{(k)}_{n+1} = \sqrt{r}A_n(I - \widehat{K}_nH)\Delta\widehat{X}^{(k)}_n$. A generalization of [21] is our first result:
Theorem 2.1. For any fixed group of $a_k \in \mathbb{R}^d$, $k = 1,\dots,K$, and $K$ i.i.d. samples $z_k \sim N(0, \Sigma_z)$, consider the sample covariances
$$Z = \frac1K\sum_{k=1}^K (a_k + z_k)\otimes(a_k + z_k), \qquad \Sigma_a = \frac1K\sum_{k=1}^K a_k\otimes a_k.$$
Let
$$\sigma_{a,z} = \max_{i,j}\big\{[\Sigma_z]_{i,i},\ [\Sigma_a]_{i,i}^{1/2}[\Sigma_z]_{j,j}^{1/2}\big\}.$$
Then $Z$ concentrates around its mean in the following two ways, where $c$ is an absolute constant:

a) Schur product with a symmetric matrix $D_L$: for any $t \ge 0$,
$$\mathbb{P}\big(\|(Z - \mathbb{E}Z)\circ D_L\| \ge \|D_L\|_1\,\sigma_{a,z}\,t\big) \le 8\exp\big(2\log d - cK\min\{t, t^2\}\big).$$
Recall that $\|D_L\|_1 := \max_i\sum_{j=1}^d |[D_L]_{i,j}|$, which is often independent of $d$.

b) Entry-wise: consider $\|Z - \mathbb{E}Z\|_\infty = \max_{i,j}|[Z - \mathbb{E}Z]_{i,j}|$; then for any $t \ge 0$,
$$\mathbb{P}\big(\|Z - \mathbb{E}Z\|_\infty \ge \sigma_{a,z}\,t\big) \le 8\exp\big(2\log d - cK\min\{t, t^2\}\big).$$

In the application to LEnKF, we will let
$$a_k = \sqrt{r}A_n(I - \widehat{K}_nH)\Delta\widehat{X}^{(k)}_n, \qquad z_k = \sqrt{r}A_n\widehat{K}_n\zeta^{(k)}_n + \sqrt{r}\xi^{(k)}_{n+1},$$
and Theorem 2.1 shows that $\widehat{C}_{n+1}\circ D_L$ concentrates around $rR_n(\widehat{C}_n)\circ D_L$. The exact statement is given below in Corollary 5.4. The result in [21] is equivalent to the special case where $a_k \equiv 0$. Fortunately, the generalization is not difficult and is given in Section 4.
2.3 Localization inconsistency with localized covariance
While the localization technique makes the covariance sampling much easier, it also introduces additional errors. The fundamental reason is that the localization techniques are applied to the covariance matrices, but cannot be applied to the ensemble members themselves. On the other hand, the analysis update is applied to the ensemble but not to the covariance. This leads to a matrix inconsistency [9, 25, 15].
To illustrate, we look at the forecast filter error at time $n$, $\hat{e}_n = \widehat{\overline{X}}_n - X_n$. At this moment, the sample noise realization of $\mathcal{F}^S_n$ is available, so it is natural to consider the conditional covariance of the forecast filter error:
$$\mathbb{E}_{\mathcal{F}^S_n}\,\hat{e}_n\otimes\hat{e}_n = \mathbb{E}_S\,\hat{e}_n\otimes\hat{e}_n.$$
The identity holds because the sample noises after time $n$ are independent of $\hat{e}_n \in \mathcal{F}_n$. Suppose this covariance is captured by the localized ensemble covariance, in other words $\mathbb{E}_S\,\hat{e}_n\otimes\hat{e}_n = \widehat{C}_n\circ D_L$. Based on the LEnKF formulation (2.11), the filter errors after the next assimilation step and forecast step are:
$$e_n = \overline{X}_n - X_n = \widehat{\overline{X}}_n - \widehat{K}_n(H\widehat{\overline{X}}_n - HX_n - \zeta_n) - X_n = (I - \widehat{K}_nH)\hat{e}_n + \widehat{K}_n\zeta_n,$$
$$\hat{e}_{n+1} = \widehat{\overline{X}}_{n+1} - X_{n+1} = A_n(\overline{X}_n - X_n) - \xi_{n+1} = A_n(I - \widehat{K}_nH)\hat{e}_n + A_n\widehat{K}_n\zeta_n - \xi_{n+1}. \tag{2.15}$$
Since the Kalman gain $\widehat{K}_n \in \mathcal{F}^S_\infty$, and $\zeta_n$ and $\xi_{n+1}$ are independent of $\mathcal{F}^S_\infty$, the new forecast error covariance is
$$\mathbb{E}_S\,\hat{e}_{n+1}\otimes\hat{e}_{n+1} = A_n\big[(I - \widehat{K}_nH)(\mathbb{E}_S\,\hat{e}_n\otimes\hat{e}_n)(I - \widehat{K}_nH)^T + \sigma_o^2\widehat{K}_n\widehat{K}_n^T\big]A_n^T + \Sigma_n$$
$$= A_n\big[(I - \widehat{K}_nH)[\widehat{C}_n\circ D_L](I - \widehat{K}_nH)^T + \sigma_o^2\widehat{K}_n\widehat{K}_n^T\big]A_n^T + \Sigma_n =: R'_n(\widehat{C}_n). \tag{2.16}$$
On the other hand, the ensemble covariance is generated by the update in (2.14). With no inflation, $r = 1$, Theorem 2.1 indicates that $\widehat{C}_{n+1}\circ D_L$ is near its average
$$R_n(\widehat{C}_n)\circ D_L = \big[A_n[(I - \widehat{K}_nH)\widehat{C}_n(I - \widehat{K}_nH)^T + \sigma_o^2\widehat{K}_n\widehat{K}_n^T]A_n^T + \Sigma_n\big]\circ D_L. \tag{2.17}$$
Recall that the posterior Riccati map $R_n(\widehat{C}_n)$ is defined by (2.13).
The difference between (2.16) and (2.17) can be interpreted as the inconsistency caused
by commuting the localization and Kalman covariance update. In order for the ensemble
covariance to capture the error covariance, it is necessary for this difference to be small. This
is an issue not governed by the sampling scheme, but governed by the localization operation.
As discussed in the introduction, the major motivation behind localization techniques is
that the covariance is localized. We formalize this notion through the following definition.
Definition 2.2. Given a decreasing function $\Phi : \mathbb{R}^+ \to [0,1]$ with $\Phi(0) = 1$, we say the forecast covariance sequence $\widehat{C}_n$ follows an $(M_n, \Phi, L)$-localized structure if
$$|[\widehat{C}_n]_{i,j}| \le \begin{cases} M_n\Phi(d(i,j)), & d(i,j)\le L;\\ M_n\Phi(L), & d(i,j) > L.\end{cases} \tag{2.18}$$

The decay function $\Phi$ and the radius $L$ need not coincide with the $\phi$ and $l$ used in the Kalman gain localization (2.4). This flexibility is useful when we try to verify the localized structure. Intuitively, in order for localization techniques to be effective, we need $\Phi(x)$ to be near zero when $x$ is large. This holds true for most localized covariance structures, such as the Gaspari-Cohn matrix (2.5), and also for the function $\Phi(x) = \lambda_A^x$ with a certain $\lambda_A < 1$, which will appear below in Theorem 2.5 for linear systems.
One interesting phenomenon is that if the forecast covariance is already localized, then the localization inconsistency is in general small:

Proposition 2.3. Suppose $\|A_n\| \le M_A$, $A_n$ and $\Sigma_n$ are of bandwidth less than $l$, and $\widehat{C}_n$ follows an $(M_n, \Phi, L)$-localized structure. Then the localization inconsistency with $D_L = D_L^{cut}$ and $L \ge 4l$, given by
$$\Delta_{loc} = (2.16) - (2.17),$$
has nonzero entries only around the localization boundary:
$$[\Delta_{loc}]_{i,j} = 0 \quad \text{if } |d(i,j) - L| > 2l.$$
Moreover, it is bounded by
$$\|\Delta_{loc}\| \le M_nM_A^2(1 + \sigma_o^{-2}B_lM_n)^2B_l^2B_{L,l}\Phi(L - 2l). \tag{2.19}$$
Here $B_{L,l}$ is a volume constant, $B_{L,l} = \max_i\#\{j : |d(i,j) - L| \le 2l\}$, and $B_l$ is given by (2.8). Note that if $\Phi(L - 2l)$ is close to zero, the right-hand side is very small.
2.4 Main result: LEnKF performance
There are different ways to quantify the performance of the EnKF. One approach is to compare the EnKF with its large-ensemble limit, which is the Kalman filter, and estimate the convergence rate [?, 37, 38, 39]. Moreover, advanced sampling techniques, such as multilevel Monte Carlo, can be applied to the EnKF procedures to speed up the convergence [?, ?]. However, these results have not investigated the dependence of the sample size $K$ on the underlying dimension, so they are not helpful in explaining the advantages of the localization procedures. Moreover, the large-ensemble limit of the LEnKF is not necessarily optimal, since the localization techniques may violate Bayes' formula.
A more practical approach looks for qualitative EnKF properties where the necessary sample size $K$ scales with quantities much smaller than $d$ [40, 41, 42, 43], for example a low effective dimension [23]. One central issue of the EnKF is that, unlike the Kalman filter, it estimates the forecast uncertainty by the ensemble covariance, which can be faulty. Since the forecast covariance matrix plays a pivotal role in the EnKF operation, it is important to ask whether the ensemble covariance captures the real filter error covariance.
In our particular case, we are interested in finding a bound for the filter error covariance $\mathbb{E}_S\,\hat{e}_n\otimes\hat{e}_n$. We will compare it with the filter ensemble covariance $\widehat{C}_n$. Note that the conditioning $\mathbb{E}_S$ is with respect to the sample noise filtration $\mathcal{F}^S_\infty$ given in (2.12); moreover, note that $\widehat{C}_n \in \mathcal{F}^S_\infty$. Therefore the comparison is legitimate. By showing that $\mathbb{E}_S\,\hat{e}_n\otimes\hat{e}_n$ is dominated by a proper inflation of $\widehat{C}_n$ with large probability, we demonstrate that the LEnKF reaches its estimated performance. In order to achieve that, we need the localized structure to be stable as well.
Theorem 2.4. Suppose the forecast ensemble covariance follows a stable $(M_n, \Phi, L)$-localized structure, and the sample size $K$ exceeds $D_L\log d$ with a constant $D_L$ that depends on $L$; then the LEnKF (2.11) reaches its estimated performance in the long-time average. Specifically, for any $\delta > 0$, suppose the following conditions hold:

1) In the signal-observation system (2.1), $A_n$ and $\Sigma_n$ are of bandwidth $l$; moreover,
$$\|A_n\| \le M_A, \qquad m_\Sigma I_d \preceq \Sigma_n \preceq M_\Sigma I_d, \qquad M_A^2 \ge m_\Sigma.$$

2) The initial error satisfies $\mathbb{E}_S\,\hat{e}_0\otimes\hat{e}_0 \preceq r_0(\widehat{C}_0 + \rho I_d)$ for some $r_0$ and $\rho$ such that
$$0 < r_0, \qquad 0 < \rho < \Big(\tfrac12 - \tfrac{1}{2r}\Big)\min\{M_A^2/m_\Sigma,\ \sigma_o^2\}.$$
This can always be achieved by picking a larger $r_0$.

3) The forecast covariance $\widehat{C}_n$ follows an $(M_n, \Phi, L)$-localized structure as in Definition 2.2. Moreover, the localized structure is stable, so there are constants $B_0$, $D_0$ and $M_0$ such that
$$\frac1T\,\mathbb{E}\sum_{n=1}^T M_n \le \frac1T\big(B_0\mathbb{E}\|\widehat{C}_0\| + D_0\big) + M_0. \tag{2.20}$$

4) The localized structure $\Phi$ and the radius $L$ satisfy
$$L \ge 4l, \qquad \Phi(L - 2l) \le \delta^3B_{L,l}^{-1}M_A^{-2}B_l^{-6}.$$
The volume constants are given by Proposition 2.3.

5) The sample size satisfies $K > \Gamma(rB_l\delta^{-1}, d)$, with
$$\Gamma(x, d) = \max\Big\{9x^2,\ \frac{24x}{c},\ \frac{18x^2}{c}\log d\Big\}, \tag{2.21}$$
where the absolute constant $c$ is given by Theorem 2.1.

Then for any $1 < r_* < r$, the filter error covariance is dominated by the filter covariance,
$$\mathbb{E}_S\,\hat{e}_n\otimes\hat{e}_n \preceq r_*(\widehat{C}_n\circ D_L^{cut} + \rho I_d),$$
with high $1 - O(\delta)$ probability in the long-time average:
$$1 - \frac1T\sum_{n=0}^{T-1}\mathbb{P}\big(\mathbb{E}_S\,\hat{e}_n\otimes\hat{e}_n \preceq r_*(\widehat{C}_n\circ D_L^{cut} + \rho I_d)\big)$$
$$\le \frac{r_0}{T\log r_*} + \frac{\delta(B_0\|\widehat{C}_0\| + D_0)}{T\log r_*}\Big(\rho^{-1}B_l^2M_A^2 + \frac{2r^{1/3}}{(\rho\sigma_o)^{1/3}}\Big) + \frac{\delta}{\log r_*}\Big[\Big(\rho^{-1}B_l^2M_A^2 + \frac{2r^{1/3}}{(\rho\sigma_o)^{1/3}}\Big)M_0 + \rho^{-1}M_\Sigma + \frac{2r^{1/3}}{\rho^{1/3}}\sigma_o^{2/3}\Big].$$

2.5 Weak local interaction with sparse observations
By Theorem 2.4, the stability of the localized structure is a necessary condition for the LEnKF to reach its estimated performance. While in practice this condition is often assumed to be true to motivate the localization technique, and one can check it while the algorithm is running, it is interesting to find sufficient a-priori conditions on system (2.1) such that (2.20) holds. Unfortunately, rigorous investigations in this direction are sorely missing. Here we provide a stability analysis in a simple setting.
The origin of localized covariances is intuitively clear. In most physical systems, the covariance between $[X]_i$ and $[X]_j$ comes from information propagation in space. So if the propagation is weak and decays at the same time, there will be a localized covariance. For our linear models, the information propagation is carried by local interactions, described by the off-diagonal terms of $A_n$. To enforce its weakness, we assume that there is a $\lambda_A < 1$ such that
$$\max_i\Big\{\sum_{k=1}^d |[A_n]_{i,k}|\,\lambda_A^{-d(i,k)}\Big\} \le \lambda_A. \tag{2.22}$$
For the simplicity of our discussion, we also assume the system noise is diagonal, $\Sigma_n = \sigma_\xi^2 I_d$.
Note that $\lambda_A < 1$, so $\lambda_A^{-d(i,k)}$ is a large number when $i$ and $k$ are far apart. Condition (2.22) therefore constrains the long-distance interactions, measured by $|[A_n]_{i,k}|$, to be weak. In other words, (2.22) models a local interaction. If we were concerned with the unfiltered covariance of the sequence $[X]_i$, then $\lambda_A < 1$ would be sufficient to guarantee that the covariance is localized, using Proposition 6.2 below.
The main difficulty actually comes from the observation part. For simplicity, we require the observations in (2.2) to be sparse in the sense that $d(o_i, o_j) > 2l$ for any $i \ne j$. Recall that $o_i$ is the $i$-th observable component. Then for each location $i \in \{1,\dots,d\}$ there is at most one location $o(i) \in \{o_1,\dots,o_q\}$ such that $d(i, o(i)) \le l$. This will significantly simplify the analysis step and yield an explicit expression. Sparse observations are in fact quite common in practice. Moreover, it is also possible to generalize the results here to the non-sparse scenario by using sequential assimilation [15], but the conditions will be much more involved.
Under the sparse observation scenario, the following function describes how the localized structure of $\widehat{C}_n$ updates to that of $\widehat{C}_{n+1}$:
$$\psi_{\lambda_A}(M, \delta) = (r + \delta)\max\big\{\lambda_AM(1 + \sigma_o^{-2}M)^2 + \lambda_A\sigma_o^{-2}M^2,\ \lambda_A^2M + \sigma_\xi^2\big\}. \tag{2.23}$$
This function provides a way to ensure a stable localized structure:

Theorem 2.5. Given a LEnKF (2.11), suppose the following holds:
1) The system noise is diagonal and the observations are sparse:
$$\Sigma_n = \sigma_\xi^2 I_d, \qquad d(o_i, o_j) > 2l,\ \forall i \ne j.$$
2) There is a $\lambda_A < r^{-1}$ such that (2.22) holds.
3) There are constants
$$0 < \delta_* < \min\{0.25,\ \tfrac12(\lambda_A^{-1} - r)\}, \qquad M_* \ge \frac{(r + 2\delta_*)\sigma_\xi^2}{1 - \lambda_A},$$
such that $\psi_{\lambda_A}(M_*, \delta_*) \le M_*$, with $\psi_{\lambda_A}$ given by (2.23).
4) Denote $n_* = 2L + \lceil\frac{\log 4\delta_*^{-1}}{\log\lambda_A^{-1}}\rceil$. The sample size $K$ exceeds
$$K > \max\Big\{\frac{1}{c\delta_*^2\lambda_A^{2L}}\log\big(16d^2n_*\delta_*^{-1}\big),\ \Gamma(2r\delta_*^{-1}, d)\Big\}. \tag{2.24}$$

Then the forecast ensemble covariance follows a stable localized structure $(M_n, \Phi, L)$ with $\Phi(x) = \lambda_A^x$. Specifically, the stochastic sequence $M_n$ is dissipative every $n_*$ steps:
$$\mathbb{E}_0M_{n_*} \le \tfrac12M_0 + (1 + 2\delta_*)M_*.$$
The long-time average condition (2.20) can be verified by
$$\frac1T\sum_{k=1}^T\mathbb{E}M_k \le \frac{2n_*}{T\lambda_A^L}\big(\mathbb{E}\|\widehat{C}_0\| + M_*\big) + 2(1 + \delta_*)M_*.$$
Remark 2.6. Note that
$$\psi_{\lambda_A}(M, 0) = \max\{r\lambda_AM(1 + \sigma_o^{-2}M)^2 + r\lambda_A\sigma_o^{-2}M^2,\ r\lambda_A^2M + r\sigma_\xi^2\}.$$
With sufficiently small $\lambda_A$ or $\sigma_o^{-1}$, $\psi_{\lambda_A}(M, \delta) < M$ can have a solution, so condition 3) holds.
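In practice, condition 3) can be checked by a direct numerical search: fix $\delta_*$ and scan for an $M_*$ with $\psi_{\lambda_A}(M_*, \delta_*) \le M_*$. The sketch below does this for illustrative parameter values; the numbers are placeholders of ours, not values used in the paper.

```python
import numpy as np

def psi(M, delta, lam, r, sigma_o, sigma_xi):
    """The map psi_{lambda_A}(M, delta) from (2.23)."""
    return (r + delta) * max(lam * M * (1 + M / sigma_o**2)**2 + lam * M**2 / sigma_o**2,
                             lam**2 * M + sigma_xi**2)

def find_M_star(delta, lam, r, sigma_o, sigma_xi, M_max=50.0):
    """Return the smallest grid value M with psi(M, delta) <= M, or None if none is found."""
    lower = (r + 2 * delta) * sigma_xi**2 / (1 - lam)   # lower bound from condition 3)
    for M in np.linspace(lower, M_max, 5000):
        if psi(M, delta, lam, r, sigma_o, sigma_xi) <= M:
            return M
    return None

# Illustrative check with hypothetical parameters (small lambda_A, large sigma_o).
print(find_M_star(delta=0.1, lam=0.3, r=1.05, sigma_o=10.0, sigma_xi=1.0))
```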
2.6 Localization radius
One important and difficult issue in LEnKF implementation is how to choose the localization radius $l$. The theoretical results above shed some qualitative light on this issue. It is worth noticing that this paper uses two localization radii: $l$ is used in the LEnKF formulation (2.11), and $L$ is used in the theoretical analysis of the filter error. Generally speaking, $L$ and $l$ should be picked so that $L \ge 4l$, so we focus only on $L$ in the following. We also assume $\Phi(x) = \lambda_A^x$ from Theorem 2.5 for a simpler discussion.
A smaller localization radius simplifies the sampling task by focusing on a smaller assimilation domain, and significantly reduces the necessary sample size. This comes from two perspectives. First, in order for the LEnKF to sample the correct localized covariance matrix, condition 5) of Theorem 2.4 requires the sample size to grow polynomially with $L$, since $\|\Phi\|_1$ sums over $B_L$ entries. Second, the localized covariance structure can be very delicate at the boundary, and to maintain it one needs the random forecast covariance to have sampling error of scale $\lambda_A^L$. This leads to the exponential dependence of $K$ on $L$, as in condition 4) of Theorem 2.5.
On the other hand, a larger localization radius $L$ reduces the size of the localization inconsistency. Based on Proposition 2.3, the localization inconsistency is of order $\Phi(L - 2l) = \lambda_A^{L-2l}$, because within inequality (2.19), $B_l$ is independent of $L$, and $B_{L,l}$ is also independent of $L$ if $i, j$ are taken from $\{1,\dots,d\}$. This becomes condition 4) of Theorem 2.4, where we need the localization radius to be large, so that the inconsistency is bounded by the tolerance.
2.7 LEnKF accuracy with small noises
In practice, with frequent and accurate observations, the system noises $\Sigma_n$ and $\sigma_o^2$ are often of scale $\epsilon$. In this scenario, the LEnKF has its error covariance scale with $\epsilon$ in the long run, showing accurate forecast skill. Moreover, there is no requirement that the initial ensemble have error of scale $\epsilon$, meaning the LEnKF can converge to the signal $X_n$ given enough time.
Theorem 2.7. Suppose the signal-observation system (2.1) satisfies the conditions of Theorem 2.5, and its LEnKF is tuned to satisfy the conditions of Theorem 2.4 except (2.20). Then if the same LEnKF is applied to the following system,
$$X^\epsilon_{n+1} = A_nX^\epsilon_n + b_n + \xi_{n+1}, \qquad \xi_{n+1} \sim N(0, \epsilon\sigma_\xi^2I_d),$$
$$Y^\epsilon_{n+1} = HX^\epsilon_{n+1} + \zeta_{n+1}, \qquad \zeta_{n+1} \sim N(0, \epsilon\sigma_o^2I_q),$$
it has small filter error covariance of scale $\epsilon$. In particular, the ensemble covariance is of scale $\epsilon$ in the long-time average:
$$\frac1T\sum_{n=1}^T\mathbb{E}\|\widehat{C}_n\|_\infty \le \frac{2n_*}{T\lambda_A^L}\big(\mathbb{E}\|\widehat{C}_0\| + \epsilon M_*\big) + 2(1 + \delta_*)\epsilon M_*.$$
Moreover, the real filter error covariance is dominated by $\widehat{C}_n$ with high probability:
$$1 - \frac1T\sum_{n=0}^{T-1}\mathbb{P}\big(\mathbb{E}_S\,\hat{e}_n\otimes\hat{e}_n \preceq r_*(\widehat{C}_n\circ D_L^{cut} + \epsilon\rho I_d)\big)$$
$$\le \frac{r_0}{T\epsilon\log r_*} + \frac{2\delta n_*(\mathbb{E}\|\widehat{C}_0\| + \epsilon M_*)}{T\epsilon\lambda_A^L\log r_*}\Big(\rho^{-1}B_l^2M_A^2 + \frac{2r^{1/3}}{(\rho\sigma_o)^{1/3}}\Big)$$
$$+ \frac{\delta}{\log r_*}\Big[2\Big(\rho^{-1}B_l^2M_A^2 + \frac{2r^{1/3}}{(\rho\sigma_o)^{1/3}}\Big)(1 + \delta_*)M_* + \rho^{-1}\sigma_\xi^2 + \frac{2r^{1/3}}{\rho^{1/3}}\sigma_o^{2/3}\Big].$$
Note that $\epsilon$ appears only in terms that converge to zero as $T \to \infty$.
Remark 2.8. We need the system to follow the conditions of Theorem 2.5 only to ensure that a stable localized structure exists. If one can find other conditions verifying that the LEnKF follows an $(M_n, \Phi, L)$-localized structure such that $M_n$ converges to a scale of $\epsilon$, the conditions from Theorem 2.5 can be replaced.
3 Numerical experiments
There is plenty of numerical evidence showing that LEnKF has good forecast skill even with
nonlinear dynamical systems. Moreover, this paper intends to understand LEnKF from a
theoretical perspective, not an empirical one. On the other hand, several new concepts and
conditions are introduced in our analysis framework. To understand their significance, we
conduct a few simple numerical experiments in this section.
3.1 Experiments setup: a stochastic turbulence model
We consider a stochastically forced dissipative advection equation on a one-dimensional periodic domain from Section 6.3 of [6]:
$$\frac{\partial u(x,t)}{\partial t} = c\frac{\partial u(x,t)}{\partial x} - \nu u(x,t) + \mu\frac{\partial^2 u(x,t)}{\partial x^2} + \sigma_x\dot{W}(x,t).$$
To transform it into a discrete linear system, we apply the centered difference formula with spatial grid size $h$, and the Euler scheme with time step $\Delta t$. We assume $W(x,t)$ is a white noise in both time and space. The discretized signal $[X_{n,1},\dots,X_{n,d}]^T$ follows
$$X_{n+1,i} = a_-X_{n,i-1} + a_0X_{n,i} + a_+X_{n,i+1} + \sigma_x\sqrt{\Delta t}\,W_{n+1,i}, \qquad i = 1,\dots,d; \tag{3.1}$$
$$a_- = \frac{\mu\Delta t}{h^2} - \frac{c\Delta t}{2h}, \qquad a_0 = 1 - \frac{2\mu\Delta t}{h^2} - \nu\Delta t, \qquad a_+ = \frac{\mu\Delta t}{h^2} + \frac{c\Delta t}{2h}.$$
The indices should be interpreted cyclically, that is, $X_{n,0} = X_{n,d}$ and $X_{n,d+1} = X_{n,1}$. The natural distance between indices is $d(i,j) = \min\{|i - j|,\ ||i - j| - d|\}$. The system noises $W_{n,i}$ are independent samples from $N(0,1)$. We also initialize $X_{0,i} \sim N(0,1)$ for simplicity.
Evidently, if we formulate (3.1) in the format of (2.1), the corresponding matrix $A_n$ is constant with bandwidth $l = 1$; in other words, it is tridiagonal. We assume one observation is made every $p$ components with independent Gaussian noise $B_{n,k} \sim N(0,1)$:
$$Y_{n,k} = X_{n,p(k-1)+1} + \sigma_oB_{n,k}.$$
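The discretized model (3.1) is straightforward to simulate; the sketch below builds the circulant tridiagonal propagation matrix and generates one trajectory. It is meant only to make the setup explicit, and the variable names are ours.

```python
import numpy as np

def advection_matrix(d, h, dt, c, nu, mu):
    """Circulant tridiagonal propagation matrix of the discretized model (3.1)."""
    a_minus = mu * dt / h**2 - c * dt / (2 * h)
    a_zero = 1 - 2 * mu * dt / h**2 - nu * dt
    a_plus = mu * dt / h**2 + c * dt / (2 * h)
    A = np.zeros((d, d))
    for i in range(d):
        A[i, (i - 1) % d] = a_minus
        A[i, i] = a_zero
        A[i, (i + 1) % d] = a_plus
    return A

def simulate(A, sigma_x, dt, n_steps, rng):
    """Generate one trajectory of (3.1), starting from X_{0,i} ~ N(0, 1)."""
    d = A.shape[0]
    X = rng.standard_normal(d)
    traj = [X]
    for _ in range(n_steps):
        X = A @ X + sigma_x * np.sqrt(dt) * rng.standard_normal(d)
        traj.append(X)
    return np.array(traj)

rng = np.random.default_rng(3)
A = advection_matrix(d=100, h=1.0, dt=0.1, c=0.1, nu=5.0, mu=0.1)   # Regime I values below
traj = simulate(A, sigma_x=1.0, dt=0.1, n_steps=100, rng=rng)
```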
A simple LEnKF with domain localization radius $l = 1$ and inflation $r = 1.1$ will be applied to recover $X_n$. A small sample size $K = 10$ is taken. As a comparison, we implement a standard EnKF with the same inflation, sample size and sample noise realization. A standard Kalman filter is also computed to indicate the optimal filter error. We are interested in the following questions:
• Does the LEnKF have close-to-optimal performance? Does localization play a key role?
• Is the filter performance robust against dimension increase?
• Does the filter performance scale with the noise strength?
• Does the LEnKF ensemble covariance localize, and is this structure stable?
• Do the a-priori conditions of Theorem 2.5 hold?

In the discussion below, we consider dimensions in a wide range, $d = 10, 100, 1000$. Yet we fix the grid size $h$ in each regime. This corresponds to a sequence of domains of increasing size, rather than a fixed domain with increasing refinement. Although the latter can also have very high dimension, localization is not a suitable tool for it; a proper projection onto the low effective dimension should be more effective [23]. It is also worth noticing that there are better ways to filter (3.1), such as Fourier domain filtering [6]. We run the LEnKF here just to support our theoretical analysis.
3.2 Regime I: strong dissipation
We first consider a regime of (3.1) with strong uniform damping and weak advection:
$$h = 1, \quad \Delta t = 0.1, \quad p = 5, \quad \nu = 5, \quad c = 0.1, \quad \mu = 0.1, \quad \sigma_x = \sigma_o = 1.$$
In this regime, the conditions of Theorem 2.5 can be verified. In particular, (2.22) can be formulated as
$$a_-\lambda_A^{-1} + a_0 + a_+\lambda_A^{-1} \le \lambda_A. \tag{3.2}$$
Direct numerical computation shows that $\lambda_A = 0.5186$ satisfies this relation. Furthermore, we can verify that $(\delta_*, M_*) = (0.128, 0.2187)$ satisfy condition 3) of Theorem 2.5. Theorem 2.5 then predicts that a stable stochastic sequence $M_n$ exists so that $\widehat{C}_n$ follows the localized structure $(M_n, \Phi, 4)$, where $\Phi(x) = \lambda_A^{x\wedge L}$, and $M_n$ has its mean bounded by 8.8959. On the other hand, Theorem 2.5 requires the sample size to be around $K = 2.8\times10^4$ for $d = 100$, and $K = 7.34\times10^4$ for $d = 10^6$. We will see that $K = 10$ is sufficient for the LEnKF to perform well numerically. The overestimate is reasonable, as theoretical analysis is often too conservative. The main point of the theoretical analysis is to show a logarithmic dependence of $K$ on the dimension.
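The value $\lambda_A = 0.5186$ quoted above can be reproduced by solving (3.2) as an equality: with the Regime I coefficients, $a_- + a_+ = 0.02$ and $a_0 = 0.48$, so $\lambda$ satisfies $\lambda^2 - a_0\lambda - (a_- + a_+) = 0$. A small check of ours, for illustration:

```python
import numpy as np

h, dt, c, nu, mu = 1.0, 0.1, 0.1, 5.0, 0.1          # Regime I parameters
a_minus = mu * dt / h**2 - c * dt / (2 * h)          # 0.005
a_zero = 1 - 2 * mu * dt / h**2 - nu * dt            # 0.48
a_plus = mu * dt / h**2 + c * dt / (2 * h)           # 0.015
# (3.2) with equality: (a_- + a_+)/lam + a_0 = lam  =>  lam^2 - a_0*lam - (a_- + a_+) = 0
lam = (a_zero + np.sqrt(a_zero**2 + 4 * (a_minus + a_plus))) / 2
print(round(lam, 4))                                 # -> 0.5186
```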
The numerical results are presented in Figure 3.1. In subplot a) the dimension-averaged square forecast error
$$\mathrm{DSE} := |X_n - \widehat{\overline{X}}_n|^2/d$$
of the LEnKF is plotted for 100 iterations. The time-mean DSE (MSE) is around 0.142 for $d = 100$. This is comparable with the optimal Kalman filter MSE of 0.129. Moreover, this performance is robust across all dimensions, MSE = 0.137 for $d = 10$ and MSE = 0.143 for $d = 1000$, while the oscillation is stronger in the $d = 10$ case due to averaging over a small number of components.

[Figure 3.1: Filter performance in stable Regime I. Panels: a) LEnKF DSE, b) EnKF DSE, c) stability of the localization structure, d) LEnKF MSE with small noise; curves for d = 10, 100, 1000 and the optimal Kalman filter.]
Since this regime is very stable, the EnKF without localization also has surprisingly good performance, as shown in subplot b). Its MSE is around 0.15, slightly worse than the LEnKF. This shows that, while the conditions of Theorem 2.5 are sufficient for the LEnKF to work well, they might be too strong. It would be interesting to find sharper working conditions for the LEnKF. It would also be interesting to show that such strong conditions already guarantee that the EnKF works without localization.
Two other properties predicted by our theory are also validated. In subplot c), the localization status $M_n$ is plotted for all three dimensions. All three time sequences are stable, and they are all bounded below the theoretical estimate 8.8959 from Theorem 2.5. We also test the LEnKF with small-scale system noises $\sigma_x^\epsilon = \sqrt{\epsilon}\,\sigma_x$, $\sigma_o^\epsilon = \sqrt{\epsilon}\,\sigma_o$. In subplot d), we plot the time-mean DSE for $\epsilon = 1, \frac12, \frac14, \dots, \frac1{32}$ on logarithmic scales. It is clear that the LEnKF has the correct MSE scaling with $\epsilon$, as Theorem 2.7 predicted.
3.3 Regime II: strong advection
The second regime we consider has strong advection, while the damping is weak:
$$h = 0.2, \quad \Delta t = 0.1, \quad p = 5, \quad \nu = 0.1, \quad c = 2, \quad \mu = 0.1, \quad \sigma_x = \sigma_o = 1.$$
This regime is close to unstable, since the linear system map $A_n$ has spectral norm 0.99. (3.2) does not have a solution below 1, so the conditions of Theorem 2.5 are not verifiable. Nevertheless, we find empirically that the LEnKF ensemble covariance matrices are localized. In Figure 3.2, we demonstrate this by plotting
$$\widehat{\Phi}(x) = \mathbb{E}\Big[\frac1d\Big(\sum_{i=1}^{d-x}|[\widehat{C}_n]_{i,i+x}| + \sum_{i=d-x+1}^{d}|[\widehat{C}_n]_{i,i+x-d}|\Big)\Big]$$
using the empirical average of 1000 samples with $d = 100$, $n = 100$. The clear transition in covariance strength around $x = 4$ indicates that the ensemble covariance is localized. Therefore Theorem 2.4 applies and predicts that the LEnKF will have good performance.

[Figure 3.2: Localization structure. The empirical covariance strength $\widehat{\Phi}(x)$ plotted against the distance $x$.]

This is indeed the case. In subplot a) of Figure 3.3, we see that the LEnKF has good forecast skill. The MSE is around 1.63 for $d = 100$, where the optimal Kalman filter MSE is 1.06. This performance does not change much with the dimension: MSE = 1.42 for $d = 10$, MSE = 1.72 for $d = 1000$. The EnKF, on the other hand, is highly unstable except for the low-dimensional $d = 10$ case. In subplot b), we see that for $d = 100$ and $1000$ the DSE of the EnKF grows exponentially to $10^{10}$. This is a phenomenon known as EnKF catastrophic filter divergence, previously studied by [6, 43]. This also demonstrates how important the localization technique is. Such divergence can be resolved by introducing an adaptive additive inflation, for which stability can be rigorously proved [42].

In this unstable regime, the LEnKF retains its stability and accuracy. Since the localization structure does not have a theoretical ground in this regime, subplot c) of Figure 3.3 plots only the largest matrix component of $\widehat{C}_n$. From it we see that the LEnKF ensemble covariance is stochastically stable for all three dimensions. As in Regime I, we also test the LEnKF with small-scale system noises $\sigma_x^\epsilon = \sqrt{\epsilon}\,\sigma_x$, $\sigma_o^\epsilon = \sqrt{\epsilon}\,\sigma_o$, where $\epsilon = 1, \frac12, \frac14, \dots, \frac1{32}$. Subplot d) indicates that the LEnKF has the correct MSE scaling with $\epsilon$.

[Figure 3.3: Filter performance in Regime II (strong advection). Panels: a) LEnKF DSE, b) EnKF DSE, c) largest covariance component of the ensemble covariance, d) LEnKF MSE with small noise; curves for d = 10, 100, 1000 and the optimal Kalman filter.]
4 Concentration of localized random matrices
In this section, we present the proof of Theorem 2.1. While part a) is more useful, it can be
established easily from part b), using a similar argument as in [21].
4.1 Entry-wise concentration
It is well known that averages of independent Gaussian variables concentrate around their expected values. Specifically, a simplified version of Theorem 1.1 from [44] is:

Theorem 4.1 (Hanson-Wright inequality). Let $\xi \sim N(0, I_n)$ and let $A$ be an $n\times n$ matrix. Then for any $t \ge 0$,
$$\mathbb{P}\big(|\xi^TA\xi - \mathbb{E}\xi^TA\xi| > t\big) \le 2\exp\Big(-c\min\Big\{\frac{t^2}{\|A\|_{HS}^2},\ \frac{t}{\|A\|}\Big\}\Big).$$
Here $c$ is a constant independent of other parameters. The Hilbert-Schmidt (Frobenius) norm is denoted by $\|A\|_{HS} = \big[\sum_{i,j}[A]_{i,j}^2\big]^{1/2}$.
This provides us a straight forward way to control the random matrix entries [Z]i,j in
Theorem 2.1.
Lemma 4.2. Under the conditions of Theorem 2.1, let ∆ = Z − EZ. There is an absolute
constant c such that for any t ≥ 0,
P(|[∆]i,j | > σa,z t) ≤ 8 exp(−cK min{t, t2 }).
Proof. For any vector u, denote ∆u = uT [Z − EZ]u. Then by symmetry,
1
[∆]i,j = (∆ei +ej − ∆ei −ej ).
4
Recall that ei is the i-th standard basis vector. So it suffices to find a concentration bound
for ∆u with u = ei ± ej . To do that, note that uT Σz u = EuT zk zkT u, so we can decompose
∆u
∆u = K
−1
= 2K
K
X
[hu, ak + zk ihu, ak + zk i − hu, ak ihu, ak i − Ehu, zk ihu, zk i]
k=1
K
X
−1
k=1
hu, ak ihu, zk i + K
−1
K
X
k=1
(hu, zk ihu, zk i − Ehu, zk ihu, zk i)
We denote ha, bi = aT b as the inner product, and the P
two summations above as I and II in
K
T
−1
2
T
the following. Notice that hu, zk i ∼ N (0, u Σz u), K
k=1 hej , ak i = u Σa u. Moreover for
u = ei ± ej ,
uT Σz u = [Σz ]i,i + [Σz ]j,j ± 2[Σz ]i,j ≤ 2([Σz ]i,i + [Σz ]j,j ) ≤ 4σa,z .
(4.1)
We have the same conclusion for uT Σa u. Because hu, ak i is a deterministic scalar,
hu, ak ihu, zk i ∼ N (0, uT ak aTk u · uT Σz u)
and
I = 2K
−1
K
X
k=1
hu, ak ihu, zk i ∼ N (0, 4K −1uT Σa u · uT Σz u).
2
Because by definition of σa,z , uT Σa u · uT Σz u ≤ 16σa,z
, by the Chernoff bound for Gaussian
distributions, there is a c1 > 0 so that
P(|I| > 12 σa,z t) ≤ 2 exp(−c1 Kt).
In order to deal with II, notice that
ξ := √
So
II = K
−1
1
uT Σz u
K
X
k=1
[hu, z1 i, · · · , hu, zK i]T ∼ N (0, IK ).
(hu, zk i2 − Ehu, zk i2 ) = ξ T Aξ − Eξ T Aξ,
where A = K1 (uT Σz u)IK . Clearly, kAk ≤ 4σKa,z , and kAk2HS ≤
4.1 there is a constant c2 so that for all s ≥ 0
2
16σa,z
.
K
Therefore, by Theorem
2
s
}).
P(|II| > 21 s) ≤ 2 exp(−c2 min K{ σs2 , σa,z
a,z
−1
Let t = σa,z
s, the inequality can be written as
P(|II| > 21 σa,z t) ≤ 2 exp(−c2 K min{t, t2 }).
Because |∆u | ≤ |I| + |II|, by the union bound, if we let c = min{c1 , c2 } ,
P(|∆u | > σa,z t) ≤ P(|I| > 21 σa,z t) + P(|II| > 21 σa,z t) ≤ 4 exp(−cK min{t, t2 }).
Finally, recall the bound above holds for all u = ei ± ej , so by (4.1)
P(|[∆]i,j | > σa,z t) ≤ P(|∆ei +ej | > σa,z t) + P(|∆ei −ej | > σa,z t) ≤ 8 exp(−cK min{t, t2 }).
Entry-wise concentration now comes as a direct corollary.
Proof of Theorem 2.1 b). Let ∆ = Z − EZ. Note that kZ − EZk∞ = maxi,j=1,...,d {|[∆]i,j |},
so using the previous lemma we have our claim by the union bound
X
P(kZ − EZk∞ > σa,z t) ≤
P(|[∆]i,j | > σa,z t) ≤ 8d2 exp(−cK min{t, t2 }).
i,j
4.2 Summation of entry-wise deviation
One simple fact about matrix norms is that $\|\Delta\| \le \|\Delta\|_1$; this is also exploited in [21].
Lemma 4.3. Given a matrix $\Delta$, the following hold:
a) If $\Delta$ is symmetric, then
$$\|\Delta\| \le \|\Delta\|_1 = \max_i\Big\{\sum_{j=1}^d|[\Delta]_{i,j}|\Big\}.$$
b) $\|\Delta\|_\infty \le \|\Delta\|$ always holds. If in addition $\Delta$ has bandwidth $l$, then $\|\Delta\| \le B_l\|\Delta\|_\infty$.
Proof. For a) part, recall that ei is the i-th standard basis vector. Notice that
±(ei eTj + ej eTi ) ei eTi + ej eTj .
Therefore
∆=
X
1X
[∆]i,j (ei eTj + ej eTi )
2 i6=j
[∆]i,i ei eTi +
i
d
X
[∆]i,i ei eTi
i=1
d
d
XX
1X
|[∆]i,j |(ei eTi + ej eTj ) =
|[∆]i,j |ei eTi k∆k1 Id .
+
2 i6=j
i=1 j=1
For the b) part, by the definition of operator norm, and kei k = kej k = 1, we have
|ei ∆eTj | ≤ k∆k.
Taking maximum among all i and j, we have P
k∆k∞ ≤ k∆k.
Next note that k∆k = k∆∆T k1/2 ≤ maxi j |[∆∆T ]i,j |, and if ∆ is of bandwidth l, by
part a)
X
X
X
|[∆∆T ]i,j | ≤
|[∆]i,k ||[∆]j,k | ≤ Bl2 k∆k2∞ .
j
k:d(i,k)≤l j:d(j,k)≤l
Therefore k∆k ≤ Bl k∆k∞ .
Now the Theorem 2.1 a) comes as a direct corollary:
Proof of Theorem 2.1 a). Let ∆ = Z − EZ. By Lemma 4.3 a),
( d
)
X
k∆◦DL k ≤ k∆◦DL k1 = max
[DL ]i,j |[∆]i,j | ≤ kDL k1 max |[∆]i,j | = kDL k1 kZ −EZk∞ .
i
i,j
j=1
Therefore by part b) of this theorem,
P(k∆ ◦ DL k ≥ kDL k1 σa,z t) ≤ P (kZ − EZk∞ ≥ σa,z t) ≤ 8 exp(2 log d − cK min{t, t2 }).
5 Error analysis of LEnKF

5.1 Localization inconsistency
Lemma 5.1. Fix an L > l, if matrix A is of bandwidth l, the difference caused by commuting
localization and bilinear product with A
∆ = A[C ◦ DLcut ]AT − [ACAT ] ◦ DLcut
has nonzero entries only for indices (i, j) with |d(i, j) − L| ≤ 2l.
If in addition, matrix C follows an (M, Φ, L)-localized structure, then
|[∆]i,j | ≤ MΦ(L − 2l)kAk2∞ Bl2 ,
L − 2l ≤ d(i, j) ≤ L.
Recall that Bl is the volume constant given by (2.8).
Proof. By the matrix product rule,
X
X
[∆]i,j =
[A]i,u [C]u,v [A]j,v − 1d(i,j)≤L
[A]i,u [C]u,v [A]j,v .
(5.1)
u,v
d(u,v)≤L
If d(i, j) > L + 2l, note that [A]i,u [A]j,v 6= 0 only when d(i, u) ≤ l, d(j, v) ≤ l. But for
these terms, by the triangular inequality d(u, v) > L, and they are not included in (5.1).
Therefore (5.1)= 0.
P
If d(i, j) ≤ L, it is easy to verify that [∆]i,j = − d(u,v)>L [A]i,u [C]u,v [A]j,v . Moreover,
[A]i,u [A]j,v 6= 0 only when d(i, u) ≤ l, d(j, v) ≤ l. So if d(i, j) < L − 2l, then by triangular
inequality d(u, v) < L and [∆]i,j = 0.
Next, we assume C follows an
P(M, Φ, L)-localized structure. If L < d(i, j), then among
the nonzero terms in [∆]i,j =
d(u,v)≤L [A]i,u [C]u,v [A]j,v , d(u, v) ≥ L − 2l by triangular
inequality. This leads to
X
|[∆]i,j | ≤
kAk2∞ MΦ(L − 2l) ≤ Bl2 kAk2∞ MΦ(L − 2l).
u,v:d(u,v)≤L,d(i,u)≤l,d(j,u)≤l
Here we used that
#{(u, v) : d(u, v) ≤ L, d(v, i), d(u, i) ≤ l} ≤ #{(u, v) : d(v, i), d(u, i) ≤ l} = Bl2 .
P
If L − 2l ≤ d(i, j) ≤ L, then by [∆]i,j = − d(u,v)>L [A]i,u [C]u,v [A]j,v ,
|[∆]i,j | ≤ MΦ(L)
X
d(u,v)>L
|[A]i,u ||[A]j,v | ≤ MΦ(L)kAk2∞ Bl2 ,
where we applied the inequality
#{(u, v) : d(v, i), d(u, i) ≤ l, d(u, v) > L} ≤ #{(u, v) : d(v, i), d(u, i) ≤ l} = Bl2 .
In either case, we have the bound we claim, since Φ(L) ≤ Φ(L − 2l).
Proof of Proposition 2.3. Since Schur product is a linear operation, we can decompose the
localization inconsistency as
b n H)][C
bn ◦ DL ][(I − K
b n H)T AT ]
∆loc =[An (I − K
cut
n
T
T
b n H)]C
bn [(I − K
b n H) A ] ◦ DL
− [An (I − K
n
cut
2
T
T
2
T
T
bnK
b A + Σn ] ◦ DL
bnK
b A + Σn ] − [σ An K
+ [σ An K
o
n
n
o
n
n
cut
b n and Σn are of bandwidth at most l, An K
bnK
b T AT has bandwidth at most 4l
Since both K
n n
by triangular inequality. Since L ≥ 4l, so
bnK
b T AT + Σn ] = [σ 2 An K
bnK
b T AT + Σn ] ◦ DL ,
[σo2 An K
n n
o
n n
cut
In other words, ∆loc is
b n H)][C
bn ◦ DL ][(I − K
b n H)T AT ] − [An (I − K
b n H)]C
bn [(I − K
b n H)T AT ] ◦ DL ,
[An (I − K
cut
n
n
cut
b n H)k∞ . Recall that
which can be applied by Lemma 5.1. Next, we try to bound kAn (I − K
kHk = 1, kAn k ≤ MA and Lemma 4.3 b),
b n H)k∞ ≤ kAn (I − K
b n H)k ≤ MA kI − K
b n Hk ≤ MA (1 + kK
b n Hk)
kAn (I − K
b n H has bandwidth l. To see this, note that
In domain localization (2.10), K
b n H]i,j = [K
b ni H]i,j = [C
bni H T (σo2 Iq + H C
bni H T )−1 H]i,j
[K
X
bi ]i,o [(σ 2 Iq + H C
b i H T )−1 ]k,m 1j=om .
=
[C
n
o
n
k
(5.2)
m,k
bi has nonzero entries only in Ii × Ii ,
Since C
n
bni H T )−1 ]k,m = σo−2 1k=m
[(σo2 Iq + H C
if d(ok , i) > l or d(om , i) > l.
b n H]i,j = 0 if d(i, j) > l.
bni ]i,o = 0 if d(ok , i) > l. Therefore, [K
Also [C
k
b n Hk ≤ Bl kK
b n Hk∞ . Since the i-th row of K
b n H is the i-th row of
By Lemma 4.3 b), kK
i
Kn H, so by Lemma 4.3 b),
b n Hk∞ ≤ max{kK i Hk∞ } ≤ max{kK i Hk}.
kK
n
n
i
i
Moreover, by definition (2.9) and Lemma 4.3 a)
bi kk(σ 2 I + H C
b i H T )−1 k ≤ σ −2 kC
bi k ≤ σ −2 kC
b i k1 .
kKni k ≤ kC
n
o
n
o
n
o
n
bi has nonzero entries only in Ii × Ii , by Lemma 4.3,
Note that C
n
bi k1 ≤ Bl kC
bi k∞ ≤ Bl kC
bn k∞ .
kC
n
n
bn follows an (Mn , Φ, L) structure, kC
bn k∞ ≤ Mn . Summing up, the domain
Moreover, since C
localized Kalman gain can be bounded by
b n H)k∞ ≤ MA (1 + σ −2 Bl Mn ).
kAn (I − K
o
Then by Lemma 5.1, the localization inconsistency matrix is bounded entry-wise
|[∆]i,j | ≤ Mn MA2 (1 + σo−2 Bl Mn )2 Bl2 Φ(L − 2l),
while |[∆]i,j | = 0 if |d(i, j)−L| > 2l. So there are at most BL,l = maxi #{j, |d(i, j)−L| ≤ 2l}
nonzero entries in each row.
As a consequence
k∆loc k ≤ k∆loc k1 ≤ Mn MA2 (1 + σo−2 Bl Mn )2 Bl2 BL,l Φ(L − 2l).
5.2 Component information gain through filtering
One of the fundamental properties of the Kalman filter is that the assimilation of an observation improves the estimation. Mathematically, this is represented by the fact that the forecast covariance matrix dominates the posterior covariance matrix. Unfortunately, with the LEnKF this natural property, $\widehat{C}_n \succeq (I - \widehat{K}_nH)\widehat{C}_n(I - \widehat{K}_nH)^T + \sigma_o^2\widehat{K}_n\widehat{K}_n^T$, may no longer hold. However, we can still show the dominance of the diagonal entries.
Proposition 5.2. The assimilation step lowers the variance at each component:
$$[\widehat{C}_n]_{i,i} \ge \big[(I - \widehat{K}_nH)\widehat{C}_n(I - \widehat{K}_nH)^T + \sigma_o^2\widehat{K}_n\widehat{K}_n^T\big]_{i,i}, \qquad i = 1,\dots,d.$$
bn is updated through the Kalman gain matrix
Proof. Recall that the i-th coordinate of ∆X
i
b
Kn . Therefore,
(k)
2 bi bi T
b n H)C
bn (I − K
b n H)T + σ 2 K
b bT
bi
b
bi T
[(I − K
o n Kn ]i,i = [(I − Kn H)Cn (I − Kn H) + σo Kn (Kn ) ]i,i
b ni H]i,j 6= 0 only when d(i, j) ≤ l, so
Moreover, in (5.2) we have shown that [K
b ni H)C
bn (I − K
b ni H)T + σo2 K
b ni (K
b ni )T ]i,i = [(I − K
b ni H)C
bni (I − K
b ni H)T + σo2 K
b ni (K
b ni )T ]i,i .
[(I − K
Note that the right side is the posterior Kalman covariance with the forecast covariance
bi . Therefore by
being C
n
b ni H)C
bni (I − K
b ni H)T + σo2 K
b ni (K
b ni )T = C
bni − C
bni H T (σo2 Iq + H C
bni H T )−1 H C
bni C
bni ,
(I − K
we have
b n H)C
bn (I − K
b n H)T + σ 2 K
b bT
bi
b
[(I − K
o n Kn ]i,i ≤ [Cn ]i,i = [Cn ]i,i .
5.3 Sampling error
First, we have the following general integral lemma.

Lemma 5.3. Let $Y$ be a nonnegative random variable that satisfies
$$\mathbb{P}(Y > Mt) \le 8d^2\exp(-cK\min\{t, t^2\}), \qquad c > 0,\ M \ge 1.$$
Then for any $\delta \in (0,1)$, if $K \ge \Gamma(M\delta^{-1}, d)$, where
$$\Gamma(x, d) = \max\Big\{9x^2,\ \frac{24}{c}x,\ \frac{18}{c}x^2\log d\Big\},$$
we have $\mathbb{E}Y \le \delta$ and $\mathbb{E}Y^2 \le 2M\delta$.
Proof. Let ǫ =
δ
,
3M
and X = Y /M, we have K ≥ max{ǫ−2 , cǫ8 , cǫ22 log d}, and
P(X > t) ≤ 8d2 exp(−cK min{t, t2 )),
We will show that EX ≤ 3ǫ and EX 2 ≤ 6ǫ, which are equivalent to our
R ∞ claims. Recall the
integration by part formula for nonnegative random variables, EX = 0 P(X > x)dx,
Z ǫ
Z ∞
EX =
P(X > x)dx +
P(X > x)dx
0
ǫ
Z ∞
≤ǫ+
P(X > t)dt
ǫ
Z ∞
Z 1
2
≤ǫ+8
d exp(−cKt)dt + 8
d2 exp(−cKt2 )dt
Z1 ∞
Zǫ ∞
2
≤ǫ+8
d exp(−cKt)dt + 8
d2 exp(−cKt2 )dt.
ǫ
ǫ
Note that with our requirement on K, d2 exp(−cKǫ) ≤ 1,
Z ∞
8
8d2
exp(−cKǫ) ≤
≤ ǫ.
8d2 exp(−cKt)dt =
cK
cK
ǫ
And for t > ǫ, 8 ≤ 2ǫcKt, so
Z ∞
Z
2
8 exp(−cKt )dt ≤ ǫ
ǫ
ǫ
∞
2cKt exp(−cKt2 )dt = ǫ exp(−cKǫ2 ) ≤ ǫ.
2
As for EX , we again apply the integration by part formula
Z ∞
2
EX =
2tP(X ≥ t)dt
0
Z ∞
≤ 2ǫ +
2tP(X ≥ t)dt
ǫ
Z ∞
Z ∞
2
2
≤ 2ǫ + 8d
2t exp(−cKt)dt + 8d
2t exp(−cKt2 )dt
ǫ
= 2ǫ +
≤ 2ǫ +
2
ǫ
+ c21K 2 )
16d exp(−cKǫ)( cK
16ǫ
8
+ c216
+ cK
≤ 6ǫ.
cK
K2
We used K ≥ max{ǫ−2 , cǫ8 , cǫ22 log d} in the last line.
+
ǫ
8d2
cK
exp(−cKǫ2 )
bn follows (Mn , Φ, L)-localized
Corollary 5.4. Under condition 1) of Theorem 2.4, suppose C
structure. For any ǫ ∈ (0, 1), if
a) K > Γ(BL ǫ−1 , d), then the sampling error
bn+1 − rRn (C
bn )) ◦ DL k ≤ ǫ(B2 M 2 Mn + MΣ ),
En k(C
cut
l
A
b) K > Γ(rCǫ−1 , d) for any C ≥ 1, then the entry-wise sampling error
bn+1 − rRn (C
bn )k∞ ≤ ǫC −1 kRn (C
bn )k∞ .
En kC
bn+1 − rRn (C
bn )k2 ≤ ǫ2C −1 kRn (C
bn )k2 .
En kC
∞
∞
Proof. We apply Theorem 2.1 with
√
b n H)∆X
b (k) ,
ak = rAn (I − K
n
√
zk =
and DL = DLcut . Then
b n H)C
bn (I − K
b n H)T AT ,
Σa = rAn (I − K
n
b n ζ (k) +
rAn K
n
√
rξn(k) ,
b nK
b T AT + rΣn .
Σz = rσo2 An K
n n
bn ), where recall
Note that Σa Σa + Σz and Σz Σa + Σz = rRn (C
bn ) = An Qn AT + Σn ,
Rn (C
n
Therefore
b n H)C
bn (I − K
b n H)T + σ 2 K
b bT
Qn := (I − K
o n Kn .
1/2
1/2
bn )k∞ .
σa,z ≤ r max{[Σa ]i,i , [Σa ]i,i [Σz ]j,j } ≤ r max[Σa + Σz ]i,i = rkRn (C
i
i,j
Moreover, since Qn is positive semidefinite (PSD), so
q
kQn k∞ = max |[Qn ]i,j | ≤ max[Qn ]i,i max[Qn ]j,j = max[Qn ]i,i ≤ kQn k∞ .
i,j
i
Moreover, by Proposition 5.2,
j
i
bn ]i,i ≤ Mn .
[Qn ]i,i ≤ [C
bn ) is PSD, and by Lemma 4.3 kAn k∞ ≤ kAn k ≤ MA ,
Since Rn (C
(
)
X
bn )k∞ ≤ max[Rn (C
bn )]i,i = max [Σn ]i,i +
kRn (C
[An ]i,j [Qn ]j,k [An ]i,k
i
i
≤
Apply Theorem 2.1, since kDLcut k1 = maxi
MA2 Bl2 Mn
P
j
+ MΣ .
j:d(i,j)<L
1 = BL , we have that
bn+1 − rRn (C
bn )) ◦ DL k/kRn (C
bn )k∞ > rBL t) ≤ 8d2 exp(−cK min{t, t2 }).
Pn (k(C
cut
bn+1 − rRn (C
bn )k∞ /kRn (C
bn )k∞ > rt) ≤ 8d2 exp(−cK min{t, t2 }).
Pn (kC
Pn denotes the probability conditioned on Fn . Apply Lemma 5.3 with the both of them,
but using δ = ǫ for the first inequality and δ = ǫC −1 for the second, we have our claimed
results.
5.4 Error analysis
Next, we proceed to prove Theorem 2.4.
Proof of Theorem 2.4. For each time n, let rn be the smallest number such that the following
hold,
bn ◦ DL + ρId ), rn ≥ 1.
ES ên ⊗ ên rn (C
cut
We will try to find a recursive upper bound of rn+1 in term of rn .
Step 1: tracking the filter error. Recall that the forecast error at time n + 1 is provided
by the (2.15), and its covariance conditioned on sample noise realization is
b n H)ES ên ⊗ ên (I − K
b n H)T + σ 2 K
b bT T
ES ên+1 ⊗ ên+1 = An [(I − K
o n Kn ]An + Σn
b n H)(C
bn ◦ DL )(I − K
b n H)T AT
rn An (I − K
cut
+
b nK
b T AT
[σo2 An K
n n
n
b n H)(I − K
b n H)T AT + Σn ].
+ rn ρAn (I − K
n
By Young’s inequality (a + b)(a + b)T 2aaT + 2bbT , and that HH T = Iq ,
b n H)(I − K
b n H)T AT ≤ 2(An AT + An K
b n HH T K
b T AT )
An (I − K
n
n
n n
T T
T
b
b
≤ 2(An A + An Kn K A ).
n
Moreover, An ATn MA2 Id
Furthermore,
2
MA
Σ .
mΣ n
Denote DΣ = max{
n
2
2MA
, 2 },
mΣ σo2
n
then
b n H)(I − K
b n H)T AT DΣ (Σn + σ 2 An K
bnK
b T AT ).
An (I − K
n
o
n n
b n H)(C
bn ◦ DL )(I − K
b n H)T AT + (1 + rn ρDΣ )(σ 2 An K
bnK
b T AT + Σn ).
ES ên+1 ⊗ ên+1 rn An (I − K
cut
n
o
n n
bn ) in (2.16) is
Recall that R′n (C
bn ) = An (I − K
b n H)(C
bn ◦ DL )(I − K
b n H)T AT + σ 2 An K
bnK
b T AT + Σn .
R′n (C
cut
n
o
n n
Therefore
bn ).
ES ên+1 ⊗ ên+1 max{1, rn /r, (1 + rn ρDΣ )/r} · rR′n (C
With our condition 2) on ρ,
(1 + rn ρDΣ )/r ≤
1
r
+
r−1 rn
r r
≤ max{1, rn /r},
bn ).
so ES ên+1 ⊗ ên+1 max{1, rn /r}rR′n (C
Step 2: difference between filter error covariance and its estimate.
bn+1 . Its conditional
The EnKF estimates the error covariance by the ensemble covariance C
expectation is
bn+1 = rRn (C
bn ) = r(An (I − K
b n H)C
bn (I − K
b n H)T AT + σ 2 An K
bnK
b T AT + Σn ).
En C
n
o
n n
(5.3)
In order to establish a control of the new filter error using localized ensemble covariance
matrix, consider the difference
bn+1 ◦ DL − rR′ (C
bn ) = (C
bn+1 ◦ DL − En C
bn+1 ◦ DL ) + r(Rn (C
bn ) ◦ DL − R′ (C
bn )).
C
cut
n
cut
cut
cut
n
The first part of (5.3) is the error caused by sampling. By Corollary 5.4, if we denote
bn+1 ◦ DL − En C
bn+1 ◦ DL k
µn+1 := kC
cut
cut
then En µn+1 ≤ (Bl2 MA2 Mn + MΣ )δ/r if K satisfies condition 5).
The second part of (5.3) is the localization inconsistency. By Proposition 2.3, we have
bn ) ◦ DL − R′ (C
bn )k ≤ Mn M 2 (1 + σ −2 Bl Mn )2 B2 BL,l Φ(L − 2l) =: νn+1 .
kRn (C
cut
n
A
o
l
Summing these two parts up,
bn ) C
bn+1 ◦ DL + r(µn+1 + νn+1 )Id .
rR′n (C
cut
Then
bn ) (1 + r (µn+1 + νn+1 ))(C
bn+1 ◦ DLcut + ρId ).
rR′n (C
ρ
bn ), so if we let rn+1 be the
Recall that in step 1, we have ES ên+1 êTn+1 max{1, rrn }rR′n (C
smallest number such that
bn+1 ◦ DLcut + ρId ),
ES ên+1 ⊗ ên+1 rn+1 (C
then
rn+1 ≥ 1,
rn+1 ≤ max{1, rrn }(1 + ρr (µn+1 + νn+1 )).
Step 3: long time stability analysis. Since r∗ ≤ r
max{0, log(rn /r)} ≤ max{0, log(rn /r∗ )} ≤ log rn − log r∗ 1rn ≥r∗ .
Taking the logarithm of (5.4), and using that log(1 + x + y 3 ) ≤ x + 2y for all x, y ≥ 0,
log rn+1 ≤ log rn − log r∗ 1rn ≥r∗ + log(1 + ρr (µn+1 + νn+1 ))
≤ log rn − log r∗ 1rn ≥r∗ + ρr µn+1 + 2( ρr νn+1 )1/3 .
Sum this inequality from n = 0, . . . , T − 1, we have
log r∗
T −1
X
n=0
1rn ≥r∗ ≤ log r0 − log rT +
T −1
X
( ρr µn+1 + 2( ρr νn+1 )1/3 ).
n=0
Because rT ≥ 1,
T −1
X
n=0
1rn ≥r∗
T −1
1 X r
log r0
( ρ µn+1 + 2( ρr νn+1 )1/3 ).
+
≤
log r∗ log r∗ n=0
(5.4)
Take expectation,
T −1
X
n=0
P(rn ≥ r∗ ) = E
T −1
X
1rn ≥r∗
n=0
T −1
log r0
1 X r
≤
+
( Eµn+1 + 2E( ρr νn+1 )1/3 ).
log r∗ log r∗ n=0 ρ
(5.5)
Step 4: Upper bounds for (5.5). Recall in step 2 we have that
T −1
X
r
Eµn+1
ρ
n=0
T −1
X
≤
δ
(Bl2 MA2 EMn
ρ
+ MΣ ).
n=0
Next, note the following holds because Bl ≥ 1
νn+1 = MA2 Bl2 BL,l Φ(L − 2l)Mn (1 + σo−2 Bl Mn )2 ≤ MA2 σo2 Bl3 BL,l Φ(L − 2l)(1 + σo−2 Bl Mn )3 .
With condition 4), we have
2/3
1/3
MA BL,l Bl2 Φ1/3 (L − 2l) ≤ δ,
so
1/3
2/3
1/3
Eνn+1 ≤ EMA σo2/3 BL,l Bl Φ1/3 (L − 2l)(1 + σo−2 Bl Mn ) ≤ δ(σo2/3 + σo−1/3 EMn ).
In conclusion,
1/3
2E( ρr νn+1 )1/3 ≤ 2δ ρr 1/3 (σo2/3 + σo−1/3 EMn ).
Plug these bounds to (5.5), and then use (2.20)
T −1
r0
δ
1X
P(rn ≥ r∗ ) ≤
+
(ρ−1 Bl2 MA2 +
T n=0
T log r∗ T log r∗
2r 1/3
)
(ρσo )1/3
T −1
X
EMn +
n=0
δ
1/3
(ρ−1 MΣ + 2 ρr 1/3 σo2/3 )
log r∗
r0
δ(B0 kC0 k + D0 ) −1 2 2
2r 1/3
≤
+
(ρ Bl MA + (ρσ
1/3 )
o)
T log r∗
T log r∗
δ −1 2 2
−1
2r 1/3
r 1/3 2/3
(ρ Bl MA + (ρσ
.
)M
+
ρ
M
+
2
σ
+
0
Σ
1/3
1/3
o
ρ
o)
log r∗
For our result, simply notice that
$$r_n \le r_* \quad\Longleftrightarrow\quad \mathbb{E}_S\,\hat{e}_n\otimes\hat{e}_n \preceq r_*(\widehat{C}_n\circ D_L^{cut} + \rho I_d).$$

6 Localized covariance for linear LEnKF systems
As discussed in the introduction, the existence of a localized covariance structure is often
assumed in practice to motivate the localization technique. Our result, Theorem 2.4, shows
that such a structure indeed can guarantee estimated performance, assuming the parameters
and sample size are properly tuned. Then it is natural to ask when does a stable localized
structure exist. This is an interesting and important question by itself, but to answer it
for general signal-observation systems with rigorous proof is beyond the scope of this paper.
Here we demonstrate how to verify a stable localized covariance for simple linear models.
6.1 Localized covariance propagation with weak local interactions
As discussed in Theorem 2.4, we require An to be of a short bandwidth l. In other words,
interaction in one time step exists only for components of distance l apart. When l = 1,
this type of interaction is often called nearest neighbor interaction, and it includes many
statistical physics models with proper spatial discretization.
Generally speaking, localized covariance is formed through weak local interactions. With
linear dynamics described by An , one way to enforce a weak local interaction is through
(2.22). We will show in this subsection that a weak local interaction propagates a localized covariance structure of the form $[\widehat{C}_n]_{i,j} \propto \lambda_A^{d(i,j)}$, from the diagonal entries of the covariance matrix to entries further away from the diagonal.
To describe the state of localization in the covariance matrices $\widehat{C}_n$ and $C_n$, we define the following quantities:
$$\widehat{M}_{n,l} = \max_{i,j}\big\{|[\widehat{C}_n]_{i,j}|\,\lambda_A^{-d(i,j)\wedge l}\big\}, \qquad M_{n,l} = \max_{i,j}\big\{|[C_n]_{i,j}|\,\lambda_A^{-d(i,j)\wedge l}\big\}. \tag{6.1}$$
Then clearly, the forecast covariance matrices follow the $(M_n, \lambda_A^x, L)$-localized structure with $M_n = \widehat{M}_{n,L}$. The goal of this section is to show that $\widehat{M}_{n,L}$ is a stable stochastic sequence. The following properties hold immediately because the matrices involved are PSD.
Lemma 6.1. Given positive semidefinite (PSD) matrices $C_n$, $\widehat{C}_n$, define $M_{n,l}$, $\widehat{M}_{n,l}$ as in (6.1). We have $\widehat{M}_{n,0} = \max_i[\widehat{C}_n]_{i,i}$, and
$$\widehat{M}_{n,0} \le \widehat{M}_{n,1} \le \cdots \le \widehat{M}_{n,k} \le \widehat{M}_{n,0}\lambda_A^{-k}.$$
The same properties also hold for $M_{n,k}$.
bn ]i,j is the ensemble covariance, so for i 6= j
Proof. Recall that [C
q
bn ]i,i ||[C
bn ]j,j | ≤ max[C
bn ]i,i .
b
|[Cn ]i,j | ≤ |[C
i
Therefore
cn,0 = max |[C
bn ]i,j | = max[Cbn ]i,i .
M
i,j
i
cn,k in k is quite obvious since d(i, j) ∧ k ≤ d(i, j) ∧ (k + 1), and
The monotonicity of M
o
n
−d(i,j)∧k
b
c
b
≤ λ−k
Mn,k = max |[Cn ]i,j |λA
A max |[Cn ]i,j |.
i,j
i,j
Next, we investigate how the forecast step changes the state of localization.
Proposition 6.2. Suppose $\Sigma_n = \sigma_\xi^2 I_d$ and the linear dynamics admits a weak local interaction satisfying (2.22). Then the forecast step propagates the localization in the covariance. In particular, given any covariance matrix $C_n$, let $\widehat{C}_{n+1} = A_nC_nA_n^T + \Sigma_n$; then the localization states described by (6.1) satisfy
$$\widehat{M}_{n+1,0} \le \lambda_A^2M_{n,0} + \sigma_\xi^2, \qquad \widehat{M}_{n+1,k} \le \max\{\lambda_A^2M_{n,k},\ \widehat{M}_{n+1,0}\}, \qquad \widehat{M}_{n+1,k+1} \le \max\{\lambda_AM_{n,k},\ \widehat{M}_{n+1,0}\}.$$
Proof. Note that $[\hat{C}_{n+1}]_{i,j} = [A_n C_n A_n^T]_{i,j} + \sigma_\xi^2 \mathbf{1}_{i=j}$. Moreover
$$|[A_n C_n A_n^T]_{i,j}| \le \sum_{m,m'} |[A_n]_{i,m}[A_n]_{j,m'}[C_n]_{m,m'}|
\le \sum_{m,m'} M_{n,k}\,\lambda_A^{d(m,m')\wedge k}\,|[A_n]_{i,m}|\,|[A_n]_{j,m'}|$$
$$\le \sum_{m,m'} M_{n,k}\,\lambda_A^{d(i,j)\wedge k}\,|[A_n]_{i,m}|\lambda_A^{-d(i,m)}\,|[A_n]_{j,m'}|\lambda_A^{-d(j,m')}
= M_{n,k}\,\lambda_A^{d(i,j)\wedge k}\Big(\sum_m |[A_n]_{i,m}|\lambda_A^{-d(i,m)}\Big)\Big(\sum_m |[A_n]_{j,m}|\lambda_A^{-d(j,m)}\Big),$$
which by (2.22) is bounded by $\lambda_A^2 M_{n,k}\,\lambda_A^{d(i,j)\wedge k}$.
By Lemma 6.1,
$$\widehat{M}_{n+1,0} = \max_i [\hat{C}_{n+1}]_{i,i} \le \lambda_A^2 M_{n,0} + \sigma_\xi^2.$$
Moreover,
$$\widehat{M}_{n+1,k} = \max\Big\{\max_{i\neq j}[\hat{C}_{n+1}]_{i,j}\lambda_A^{-d(i,j)\wedge k},\ \max_i[\hat{C}_{n+1}]_{i,i}\Big\} \le \max\Big\{\lambda_A^2 M_{n,k},\ \max_i[\hat{C}_{n+1}]_{i,i}\Big\},$$
$$\widehat{M}_{n+1,k+1} = \max\Big\{\max_{i\neq j}[\hat{C}_{n+1}]_{i,j}\lambda_A^{-d(i,j)\wedge(k+1)},\ \max_i[\hat{C}_{n+1}]_{i,i}\Big\} \le \max\Big\{\lambda_A M_{n,k},\ \max_i[\hat{C}_{n+1}]_{i,i}\Big\}.$$

6.2 Preserving a localized structure with sparse observations
From now on, we require the observations to be sparse in the sense that d(oi , oj ) > 2l for any
i ≠ j. Then for each location i ∈ {1, · · · , d}, there is at most one location o(i) ∈ {o1 , · · · , oq }
such that d(i, o(i)) ≤ l. If such an o(i) does not exist, we set o(i) = nil; the analysis step will
not update that component, and we will see that the discussion for these components is trivial.
With domain localization and sparse observations, the analysis step updates the information at the i-th component using only the observation at o(i). This significantly simplifies
the formulation of $(H\hat{C}_n^{i} H^T + \sigma_o^2 I_q)^{-1}$, which is diagonal with entries $(\sigma_o^2 + [\hat{C}_n]_{o_i,o_i})^{-1}$ in
$I_i \times I_i$. As a result, the Kalman update matrix has entries
$$[\widehat{K}_n H]_{i,j} = [\widehat{K}_n^{i} H]_{i,j} = \begin{cases} \dfrac{[\hat{C}_n]_{i,o(i)}}{\sigma_o^2 + [\hat{C}_n]_{o(i),o(i)}}, & j = o(i);\\[2mm] 0, & \text{else.}\end{cases}$$
In fact, if we apply the covariance localization scheme instead of domain localization, the
Kalman gain remains the same in this setting.
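The entrywise formula above can be sketched as follows; this is an illustrative implementation under the stated sparsity assumption, and the helper name and the passed-in distance function are ours, not notation from the paper.

    import numpy as np

    def localized_gain_times_H(C, obs_sites, sigma_o, l, dist):
        """Entries of K_n H under domain localization with sparse observations."""
        d = C.shape[0]
        KH = np.zeros((d, d))
        for i in range(d):
            near = [o for o in obs_sites if dist(i, o) <= l]   # at most one site by sparsity
            if near:                                           # otherwise o(i) = nil: no update
                o = near[0]
                KH[i, o] = C[i, o] / (sigma_o ** 2 + C[o, o])
        return KH

    # Example: a 12-point periodic grid, observations every 6 sites (so d(o_i, o_j) > 2l for l = 2).
    dist = lambda i, j: min(abs(i - j), 12 - abs(i - j))
    KH = localized_gain_times_H(np.eye(12), obs_sites=[0, 6], sigma_o=1.0, l=2, dist=dist)
    print(KH[0, 0], KH[3, 0])   # 0.5 and 0.0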
Below, we investigate how the assimilation step changes the state of localization.
Proposition 6.3. Given any covariance matrix $\hat{C}_n$, define $\widehat{K}_n$ as the Kalman gain in (2.10),
and let
$$C_n = (I - \widehat{K}_n H)\hat{C}_n (I - \widehat{K}_n H)^T + \sigma_o^2 \widehat{K}_n \widehat{K}_n^T.$$
Define the state of localization using (6.1). Then
$$M_{n,0} \le \widehat{M}_{n,0},\qquad M_{n,k} \le \phi(\widehat{M}_{n,k}),\qquad\text{where}\quad \phi(M) = M(1+\sigma_o^{-2}M)^2 + \sigma_o^{-2}M^2.$$
Proof. Based on Lemma 6.1, $M_{n,0} = \max_i |[C_n]_{i,i}|$ and $\widehat{M}_{n,0} = \max_i |[\hat{C}_n]_{i,i}|$, so $M_{n,0} \le \widehat{M}_{n,0}$
holds by Proposition 5.2. Next, we look at the off-diagonal terms:
$$[C_n]_{i,j} = [\hat{C}_n]_{i,j} - \frac{[\hat{C}_n]_{i,o(i)}[\hat{C}_n]_{j,o(i)}}{\sigma_o^2 + [\hat{C}_n]_{o(i),o(i)}} - \frac{[\hat{C}_n]_{i,o(j)}[\hat{C}_n]_{j,o(j)}}{\sigma_o^2 + [\hat{C}_n]_{o(j),o(j)}}
+ \frac{[\hat{C}_n]_{i,o(i)}[\hat{C}_n]_{j,o(j)}[\hat{C}_n]_{o(i),o(j)}}{(\sigma_o^2 + [\hat{C}_n]_{o(i),o(i)})(\sigma_o^2 + [\hat{C}_n]_{o(j),o(j)})}
+ \frac{\sigma_o^2 [\hat{C}_n]_{i,o(i)}[\hat{C}_n]_{j,o(i)}}{(\sigma_o^2 + [\hat{C}_n]_{o(i),o(i)})^2}\mathbf{1}_{o(i)=o(j)}. \tag{6.2}$$
We have the following bounds for each term in (6.2):
$$\frac{[\hat{C}_n]_{i,o(i)}[\hat{C}_n]_{j,o(i)}}{\sigma_o^2 + [\hat{C}_n]_{o(i),o(i)}} \le \sigma_o^{-2}\widehat{M}_{n,k}^2\,\lambda_A^{d(i,o(i))\wedge k + d(j,o(i))\wedge k} \le \sigma_o^{-2}\widehat{M}_{n,k}^2\,\lambda_A^{d(j,i)\wedge k},$$
$$\frac{[\hat{C}_n]_{i,o(i)}[\hat{C}_n]_{j,o(j)}[\hat{C}_n]_{o(i),o(j)}}{(\sigma_o^2 + [\hat{C}_n]_{o(i),o(i)})(\sigma_o^2 + [\hat{C}_n]_{o(j),o(j)})} \le \sigma_o^{-4}\widehat{M}_{n,k}^3\,\lambda_A^{d(i,o(i))\wedge k + d(j,o(j))\wedge k + d(o(j),o(i))\wedge k} \le \sigma_o^{-4}\widehat{M}_{n,k}^3\,\lambda_A^{d(i,j)\wedge k},$$
$$\frac{[\hat{C}_n]_{i,o(i)}[\hat{C}_n]_{j,o(i)}}{(\sigma_o^2 + [\hat{C}_n]_{o(i),o(i)})^2} \le \sigma_o^{-4}\widehat{M}_{n,k}^2\,\lambda_A^{d(i,o(i))\wedge k + d(j,o(i))\wedge k} \le \sigma_o^{-4}\widehat{M}_{n,k}^2\,\lambda_A^{d(i,j)\wedge k}.$$
In summary,
$$|[C_n]_{i,j}| \le \widehat{M}_{n,k}\big[(1+\sigma_o^{-2}\widehat{M}_{n,k})^2 + \sigma_o^{-2}\widehat{M}_{n,k}\big]\lambda_A^{d(i,j)\wedge k} = \phi(\widehat{M}_{n,k})\,\lambda_A^{d(i,j)\wedge k}.$$
Proposition 6.4. Denote $\delta_{n+1} = \lambda_A^{-L}\|\hat{C}_{n+1} - rR_n(\hat{C}_n)\|_\infty / \|R_n(\hat{C}_n)\|_\infty$, and
$$\psi_{\lambda_A}(M,\delta) = (r+\delta)\max\big\{\lambda_A M(1+\sigma_o^{-2}M)^2 + \lambda_A\sigma_o^{-2}M^2,\ \lambda_A^2 M + \sigma_\xi^2\big\}.$$
Then for $k \le L-1$,
$$\widehat{M}_{n+1,0} \le (r+\delta_{n+1})(\lambda_A^2 \widehat{M}_{n,0} + \sigma_\xi^2),\qquad
\widehat{M}_{n+1,k+1} \le \psi_{\lambda_A}(\widehat{M}_{n,k}, \delta_{n+1}).$$
Proof. Recall that
$$R_n(\hat{C}_n) = A_n\big[(I-\widehat{K}_n H)\hat{C}_n(I-\widehat{K}_n H)^T + \sigma_o^2 \widehat{K}_n\widehat{K}_n^T\big]A_n^T + \Sigma_n.$$
Following (6.1), we define its localized status:
$$R_{n,l} = \max_{i,j}\big\{|[(I-\widehat{K}_n H)\hat{C}_n(I-\widehat{K}_n H)^T + \sigma_o^2 \widehat{K}_n\widehat{K}_n^T]_{i,j}|\,\lambda_A^{-d(i,j)\wedge l}\big\},\qquad
\widehat{R}_{n+1,l} = \max_{i,j}\big\{|[R_n(\hat{C}_n)]_{i,j}|\,\lambda_A^{-d(i,j)\wedge l}\big\}.$$
Apply Proposition 6.3:
$$R_{n,0} \le \widehat{M}_{n,0},\qquad R_{n,k} \le \phi(\widehat{M}_{n,k}).$$
Then apply Proposition 6.2, and we find that
$$\widehat{R}_{n+1,0} = \|R_n(\hat{C}_n)\|_\infty \le \lambda_A^2 \widehat{M}_{n,0} + \sigma_\xi^2,\qquad
\widehat{R}_{n+1,k+1} \le \max\{\lambda_A \phi(\widehat{M}_{n,k}),\ \widehat{R}_{n+1,0}\}.$$
Finally by Lemma 6.1,
$$\widehat{M}_{n+1,0} = \|\hat{C}_{n+1}\|_\infty \le r\|R_n(\hat{C}_n)\|_\infty + \|\hat{C}_{n+1} - rR_n(\hat{C}_n)\|_\infty \le (r + \lambda_A^L\delta_{n+1})\|R_n(\hat{C}_n)\|_\infty.$$
Since $\|R_n(\hat{C}_n)\|_\infty \le \lambda_A^2 \widehat{M}_{n,0} + \sigma_\xi^2$, we have our bound for $\widehat{M}_{n+1,0}$. Likewise,
$$\widehat{M}_{n+1,k+1} = \max_{i,j}|[\hat{C}_{n+1}]_{i,j}|\,\lambda_A^{-d(i,j)\wedge(k+1)}
\le r\max_{i,j}|[R_n(\hat{C}_n)]_{i,j}|\,\lambda_A^{-d(i,j)\wedge(k+1)} + \max_{i,j}|[\hat{C}_{n+1}]_{i,j} - r[R_n(\hat{C}_n)]_{i,j}|\,\lambda_A^{-L}$$
$$= r\widehat{R}_{n+1,k+1} + \delta_{n+1}\|R_n(\hat{C}_n)\|_\infty
\le r\max\{\lambda_A\phi(\widehat{M}_{n,k}),\ \widehat{R}_{n+1,0}\} + \delta_{n+1}\|R_n(\hat{C}_n)\|_\infty \le \psi_{\lambda_A}(\widehat{M}_{n,k}, \delta_{n+1}).$$
6.3 Stability of localized structures
Lemma 6.5. Under the conditions of Theorem 2.5, when $K > \Gamma(r\epsilon^{-1}, d)$ with $\epsilon = \min\{\frac{1}{2\lambda_A} - \frac{r}{2},\ \frac{\delta}{2}\}$, the diagonal status defined by (6.1) satisfies
$$E_n \widehat{M}_{n+1,0} \le \lambda_A \widehat{M}_{n,0} + (r+\delta)\sigma_\xi^2\quad a.s.,\qquad
E_n \widehat{M}_{n+1,0}^2 \le \lambda_A \widehat{M}_{n,0}^2 + \frac{(r+\delta)^2\sigma_\xi^4}{1-\lambda_A}\quad a.s..$$
Therefore, by Gronwall's inequality,
$$E_0 \widehat{M}_{n,0} \le \lambda_A^n \widehat{M}_{0,0} + (r+\delta)\sigma_\xi^2\sum_{k=0}^{n}\lambda_A^k \le \lambda_A^n \widehat{M}_{0,0} + \frac{(r+\delta)\sigma_\xi^2}{1-\lambda_A}\quad a.s.,$$
$$E_0 \widehat{M}_{n,0}^2 \le \lambda_A^n \widehat{M}_{0,0}^2 + \frac{(r+\delta)^2\sigma_\xi^4}{(1-\lambda_A)^2}\quad a.s..$$
Proof. We apply Lemma 6.1 and Proposition 6.3 to find that
$$\|(I-\widehat{K}_n H)\hat{C}_n(I-\widehat{K}_n H)^T + \sigma_o^2\widehat{K}_n\widehat{K}_n^T\|_\infty = M_{n,0} \le \widehat{M}_{n,0} = \|\hat{C}_n\|_\infty,$$
and by the first claim of Proposition 6.2,
$$\|R_n(\hat{C}_n)\|_\infty \le \lambda_A^2\|(I-\widehat{K}_n H)\hat{C}_n(I-\widehat{K}_n H)^T + \sigma_o^2\widehat{K}_n\widehat{K}_n^T\|_\infty + \sigma_\xi^2 \le \lambda_A^2\|\hat{C}_n\|_\infty + \sigma_\xi^2.$$
Also by Young's inequality, one can show that
$$\|R_n(\hat{C}_n)\|_\infty^2 \le (\lambda_A^2\|\hat{C}_n\|_\infty + \sigma_\xi^2)^2 \le \lambda_A^3\|\hat{C}_n\|_\infty^2 + \frac{\sigma_\xi^4}{1-\lambda_A}.$$
With $\epsilon = \min\{\frac{1}{2\lambda_A} - \frac{r}{2},\ \frac{\delta}{2}\}$, when $K > \Gamma(r\epsilon^{-1}, d)$, by Corollary 5.4 b),
$$E_n\|\hat{C}_{n+1} - rR_n(\hat{C}_n)\|_\infty \le \epsilon\|R_n(\hat{C}_n)\|_\infty\quad a.s.,\qquad
E_n\|\hat{C}_{n+1} - rR_n(\hat{C}_n)\|_\infty^2 \le 2\epsilon r\|R_n(\hat{C}_n)\|_\infty^2\quad a.s..$$
By $\epsilon + r \le \lambda_A^{-1}$ and $\|R_n(\hat{C}_n)\|_\infty \le \lambda_A^2\|\hat{C}_n\|_\infty + \sigma_\xi^2$,
$$E_n\|\hat{C}_{n+1}\|_\infty \le E_n\|\hat{C}_{n+1} - rR_n(\hat{C}_n)\|_\infty + r\|R_n(\hat{C}_n)\|_\infty \le (r+\epsilon)\|R_n(\hat{C}_n)\|_\infty \le \lambda_A\|\hat{C}_n\|_\infty + (r+\delta)\sigma_\xi^2.$$
Likewise, because $(r+2\epsilon) \le \lambda_A^{-1}$,
$$E_n\|\hat{C}_{n+1}\|_\infty^2 \le E_n\|\hat{C}_{n+1} - rR_n(\hat{C}_n)\|_\infty^2 + r^2\|R_n(\hat{C}_n)\|_\infty^2 + 2r\|R_n(\hat{C}_n)\|_\infty E_n\|\hat{C}_{n+1} - rR_n(\hat{C}_n)\|_\infty$$
$$\le (2\epsilon r + r^2 + 2\epsilon r)\|R_n(\hat{C}_n)\|_\infty^2 \le (r+2\epsilon)^2\|R_n(\hat{C}_n)\|_\infty^2 \le \lambda_A\|\hat{C}_n\|_\infty^2 + \frac{(r+\delta)^2\sigma_\xi^4}{1-\lambda_A}.$$
Lemma 6.6. Suppose the following holds:
$$n_* \ge 2L + \frac{\log 4\delta_*^{-1}}{\log\lambda_A^{-1}},\qquad \delta_* \le \frac14,\qquad \delta_* \le \frac12(\lambda_A^{-1} - r),$$
and the sample size satisfies (2.24). Then
$$E_0\widehat{M}_{n_*,L} \le \frac12\widehat{M}_{0,L} + (1+2\delta_*)M_*\quad a.s..$$
Proof. Case 1: $\widehat{M}_{0,L} > \frac{4(r+\delta_*)\sigma_\xi^2}{\lambda_A^L(1-\lambda_A)}$. By Lemma 6.1,
$$\widehat{M}_{k,0} \le \widehat{M}_{k,L} \le \lambda_A^{-L}\widehat{M}_{k,0}.$$
Then by Lemma 6.5,
$$E_0\widehat{M}_{n,L} \le E_0\lambda_A^{-L}\widehat{M}_{n,0} \le \lambda_A^{n-L}\widehat{M}_{0,0} + \frac{(r+\delta_*)\sigma_\xi^2}{\lambda_A^L(1-\lambda_A)} \le \lambda_A^{n-L}\widehat{M}_{0,L} + \frac{(r+\delta_*)\sigma_\xi^2}{\lambda_A^L(1-\lambda_A)}\quad a.s..$$
By our choice of $n_*$, $\lambda_A^{n_*-L} \le \frac14$, so we have our claim, since
$$E_0\widehat{M}_{n_*,L} \le \frac14\widehat{M}_{0,L} + \frac{(r+\delta_*)\sigma_\xi^2}{\lambda_A^L(1-\lambda_A)} \le \frac12\widehat{M}_{0,L}\quad a.s..$$
Case 2: $\widehat{M}_{0,L} \le \frac{4(r+\delta_*)\sigma_\xi^2}{\lambda_A^L(1-\lambda_A)}$. Consider the event
$$U = \{\delta_k \le \delta_*,\ \forall k\le n_*\}.$$
Denote its complementary set as $U^c$. Then the expectation can be decomposed as
$$E_0\widehat{M}_{n_*,L} \le E_0\widehat{M}_{n_*,L}\mathbf{1}_U + E_0\widehat{M}_{n_*,L}\mathbf{1}_{U^c} \le E_0\widehat{M}_{n_*,L}\mathbf{1}_U + \sqrt{P_0(U^c)}\sqrt{E_0\widehat{M}_{n_*,L}^2},$$
where we applied the Cauchy inequality for the $U^c$ part, and $P_0$ is the probability conditioned
on $\mathcal{F}_0$. We will find a bound for each of the two parts.
If $U$ holds, then $\delta_{n+1} \le \delta_*$ for $n \le n_*-1$. By Proposition 6.4,
$$\widehat{M}_{n+1,0} \le (r+\delta_*)(\lambda_A^2\widehat{M}_{n,0} + \sigma_\xi^2) \le \lambda_A\widehat{M}_{n,0} + (r+\delta_*)\sigma_\xi^2.$$
Then by Gronwall's inequality, under $U$,
$$\widehat{M}_{n,0} \le \lambda_A^n\widehat{M}_{0,0} + \frac{(r+\delta_*)\sigma_\xi^2}{1-\lambda_A}.$$
Because $\widehat{M}_{0,0} \le \widehat{M}_{0,L} \le \frac{4(r+\delta_*)\sigma_\xi^2}{\lambda_A^L(1-\lambda_A)}$, after $n_0 = n_* - L \ge L + \lceil -\log(4r\delta_*^{-1}+4)/\log\lambda_A\rceil$ steps
$$\lambda_A^{n_0}\widehat{M}_{0,0} \le \frac{\delta_*\sigma_\xi^2}{1-\lambda_A},\qquad\text{so}\qquad \widehat{M}_{n_0,0} \le \frac{(r+2\delta_*)\sigma_\xi^2}{1-\lambda_A} \le M_*.$$
In the next $1 \le k \le L$ steps, since $\delta_n \le \delta_*$ when $U$ holds and $\psi_{\lambda_A}$ is increasing, by
Proposition 6.4
$$\widehat{M}_{n_0+k,k} \le \psi_{\lambda_A}(\widehat{M}_{n_0+k-1,k-1}, \delta_*),$$
and we can derive that $\widehat{M}_{n_0+L,L} \le M_*$. Therefore by $n_* = n_0 + L$,
$$E_0\widehat{M}_{n_*,L}\mathbf{1}_U \le M_*\quad a.s..$$
In order to conclude our claim, it suffices to show that
$$P_0(U^c)\,E_0\widehat{M}_{n_*,L}^2 \le \delta_*^2 M_*^2\quad a.s.. \tag{6.3}$$
Apply Lemma 6.5 with $\delta = \delta_*$; recall that $\widehat{M}_{0,L} \le \frac{4(r+\delta_*)\sigma_\xi^2}{\lambda_A^L(1-\lambda_A)}$ and $16\lambda_A^{n_*-2L} \le 1$, so
$$E_0\widehat{M}_{n_*,L}^2 \le \lambda_A^{-2L}E_0\widehat{M}_{n_*,0}^2 \le \lambda_A^{n_*-2L}\widehat{M}_{0,L}^2 + \frac{(r+\delta_*)^2\sigma_\xi^4}{(1-\lambda_A)^2} \le \lambda_A^{n_*}\frac{16(r+\delta_*)^2\sigma_\xi^4}{\lambda_A^{2L}(1-\lambda_A)^2} + \frac{(r+\delta_*)^2\sigma_\xi^4}{(1-\lambda_A)^2} \le 2M_*^2.$$
Moreover, by Theorem 2.1 b),
$$P(\delta_{n+1} > \delta_*\,|\,\mathcal{F}_n) \le 8d^2\exp(-cK\lambda_A^{2L}\delta_*^2) \le \frac{\delta_*^2}{2n_*},$$
where the final bound comes with the sample size $K$ satisfying (2.24). Therefore, by the law of
iterated expectation,
$$P_0(U^c) \le \sum_{k=1}^{n_*}P_0(\delta_k > \delta_*) = \sum_{k=0}^{n_*-1}E_0\, P(\delta_{k+1} > \delta_*\,|\,\mathcal{F}_k) \le \frac12\delta_*^2,$$
and (6.3) comes as a result.
Proof of Theorem 2.5. Recall that $M_n = \widehat{M}_{n,L}$. So
$$E_0 M_{n_*} \le \frac12 M_0 + (1+\delta_*)M_*$$
has been proved by Lemma 6.6. This leads to the following using Gronwall's inequality:
$$E_0 M_{jn_*} \le \frac{1}{2^j}M_0 + 2(1+\delta_*)M_*.$$
Next, for $k = 1,\cdots,n_*-1$, apply Lemma 6.5 with $\delta = \delta_*$:
$$E M_k = E\widehat{M}_{k,L} \le \lambda_A^{-L}E\widehat{M}_{k,0} \le \lambda_A^{-L}E\widehat{M}_{0,0} + \frac{(r+\delta_*)\sigma_\xi^2}{\lambda_A^L(1-\lambda_A)} \le \lambda_A^{-L}(E\|\hat{C}_0\| + M_*),$$
because $\widehat{M}_{0,0} = \|\hat{C}_0\|_\infty \le \|\hat{C}_0\|$ by Lemma 6.1. Then if $k + mn_* \le T$,
$$\sum_{j=0}^m E M_{k+jn_*} = \sum_{j=0}^m E\,E_k M_{k+jn_*} \le \sum_{j=0}^m\Big(\frac{1}{2^j}E M_k + 2(1+\delta_*)M_*\Big)
\le 2E\widehat{M}_{k,L} + 2(m+1)(1+\delta_*)M_* \le 2\lambda_A^{-L}(E\|\hat{C}_0\| + M_*) + 2(m+1)(1+\delta_*)M_*.$$
Summing the inequality above over $k = 0,\cdots,n_*-1$, we obtain our final claim.
6.4 Small noise scaling
Proof of Theorem 2.7. It suffices to verify the conditions of Theorems 2.4 and 2.5 under the
small noise scaling.
First, we check Theorem 2.5. Condition 1) is invariant except that Σn = ǫσξ2 Id . Condition
2) concerns only An , so it and λA are also invariant under small noise scaling. For condition
3), if it holds without small noise scaling, that is
$$(r+\delta_*)\max\big\{\lambda_A M_*(1+\sigma_o^{-2}M_*)^2 + \lambda_A\sigma_o^{-2}M_*^2,\ \lambda_A^2 M_* + \sigma_\xi^2\big\} \le M_*.$$
This leads to
$$(r+\delta_*)\max\big\{\lambda_A(\epsilon M_*)\big(1+(\epsilon\sigma_o^2)^{-1}(\epsilon M_*)\big)^2 + \lambda_A(\epsilon\sigma_o^2)^{-1}(\epsilon M_*)^2,\ \lambda_A^2\epsilon M_* + \epsilon\sigma_\xi^2\big\} \le \epsilon M_*.$$
Moreover, condition 3) requires that
$$M_* \ge \frac{(r+\delta_*)\sigma_\xi^2}{1-\lambda_A}\quad\Rightarrow\quad \epsilon M_* \ge \frac{(r+\delta_*)\epsilon\sigma_\xi^2}{1-\lambda_A}.$$
Therefore, with small scaling, condition 3) holds with the same δ∗ , while M∗ is replaced by
ǫM∗ . Condition 4) is invariant under the small noise scaling, since δ∗ and λA are invariant.
As a consequence, Theorem 2.5 implies the following:
$$\frac{1}{T}\sum_{k=1}^{T}E M_k \le \frac{2n_*}{T\lambda_A^L}\big(E\|\hat{C}_0\| + \epsilon M_*\big) + 2(1+\delta_*)\epsilon M_*. \tag{6.4}$$
This yields the first claimed result, since $M_k = \widehat{M}_{k,L} \ge \|\hat{C}_k\|_\infty$ by Lemma 6.1.
Next we check the conditions of Theorem 2.4. For condition 1), mΣ and MΣ need to
be replaced by ǫσξ2 since we assume Σn = ǫσξ2 Id . Condition 2) still holds with (r0 , ρ) →
(ǫ−1 r0 , ǫρ) since
$$E\,\hat{e}_0\otimes\hat{e}_0 \preceq r_0(\hat{C}_0 + \rho I_d)\quad\Rightarrow\quad E\,\hat{e}_0\otimes\hat{e}_0 \preceq (\epsilon^{-1}r_0)(\hat{C}_0 + \epsilon\rho I_d).$$
Condition 3) is guaranteed by (6.4) above, with M0 = 2(1 + δ∗ )ǫM∗ . Condition 4) and
condition 5) are both invariant, as they concern only geometric quantities. Finally it suffices
to plug in all the estimates for the result, and find
$$1 - \frac{1}{T}\sum_{n=0}^{T-1}P\big(E_S\,\hat{e}_n\otimes\hat{e}_n \preceq r_*(\hat{C}_n\circ D_{L_{cut}} + \epsilon\rho I_d)\big)$$
$$\le \frac{r_0}{T\epsilon\log r_*} + \frac{2\delta n_*\big(E\|\hat{C}_0\| + \epsilon M_*\big)}{T\epsilon\lambda_A^L\log r_*}\Big(\rho^{-1}B_l^2 M_A^2 + \frac{2r^{1/3}}{(\rho\sigma_o)^{1/3}}\Big)
+ \frac{\delta}{\log r_*}\Big[2\Big(\rho^{-1}B_l^2 M_A^2 + \frac{2r^{1/3}}{(\rho\sigma_o)^{1/3}}\Big)(1+\delta_*)M_* + \rho^{-1}\sigma_\xi^2 + 2\frac{r^{1/3}}{\rho^{1/3}}\sigma_o^{2/3}\Big].$$
Note that in the above some $\epsilon$ terms are upper-bounded by 1, so the inequality has a simpler
form.
7 Conclusion and discussion
Ensemble Kalman filter (EnKF) is a popular tool for high dimensional data assimilation
problems. Domain localization is an important EnKF technique that exploits the natural
localized covariance structure, and simplifies the associated sampling task. We rigorously
investigate the performance of localized EnKF (LEnKF) for linear systems. We show in
Theorem 2.4 that in order for the filter error covariance to be dominated by the ensemble
covariance, 1) the sample size K needs to exceed a constant that depends on the localization
radius and the logarithm of the state dimension, and 2) the forecast covariance needs to have a stable
localized structure. Condition 2) is necessary for an intrinsic localization inconsistency to be
bounded. This condition is usually assumed in LEnKF operations, but it can also be verified
for systems with weak local interaction and sparse observation by Theorem 2.5.
While the results here provide the first successful explanation of LEnKF performance
with an almost dimension-independent sample size, there are several issues that require further
study. Below we discuss a few of them.
1. There are several ways to apply the localization technique in EnKF. We discuss here
only the domain localization with standard EnKF procedures. In principle, our results
can be generalized to the covariance localization/tempering technique, and also the
popular ensemble square root implementation. But such generalization will not be
trivial, as the Kalman gain will not be of a small bandwidth, and localization techniques
will have unclear impact on the square root SVD operation.
2. This paper studies the sampling effect of LEnKF and shows the sampling error is controllable. Yet LEnKF without sampling error, in other words, LEnKF in the large
ensemble limit, is not well studied mathematically. The effect of the localization techniques on the classical Kalman filter controllability and observability condition is not
known. This may lead to practical guidelines in the choice of localization radius.
3. Theorem 2.5 provides the first proof that LEnKF covariance has a stable localized
structure. But the conditions we impose here are quite strong, while localized structure
is taken for granted in practice. How to show it in general nonlinear settings is a very
interesting question.
Acknowledgement
This research is supported by the NUS grant R-146-000-226-133, where X.T.T. is the principal investigator. The author thanks Andrew J. Majda, Lars Nerger and Ramon van Handel
for their discussion on various parts of this paper.
DroidGen: Constraint-based and Data-Driven
Policy Generation for Android
Mohamed Nassim Seghir1 and David Aspinall2
1 University College London
2 University of Edinburgh
arXiv:1612.07586v1 [cs.CR] 22 Dec 2016
Abstract. We present DroidGen, a tool for automatic anti-malware policy inference. DroidGen is data-driven: it uses a training set of malware and
benign applications and makes calls to a constraint solver to generate a
policy under which a maximum of malware is excluded and a maximum
of benign applications is allowed. Preliminary results are encouraging.
We are able to automatically generate a policy which filters out 91% of
the tested Android malware. Moreover, compared to black-box machine
learning classifiers, our method has the advantage of generating policies
in a declarative readable format. We illustrate our approach, describe its
implementation and report on experimental results.
1 Introduction
Security on Android is enforced via permissions giving access to resources on the
device. These permissions are often too coarse and their attribution is based on
an all-or-nothing decision in the vast majority of Android versions in actual use.
Additional security policies can be prescribed to impose a finer-grained control
over resources. However, some key questions must be addressed: who writes the
policies? What is the rationale behind them? An answer could be that policies
are written by experts based on intuition and prior knowledge. What can we
do then in the absence of expertise? Moreover, are we sure that they provide
enough coverage?
We present DroidGen, a tool for the systematic generation of anti-malware
policies. DroidGen is fully automatic and data-driven: it takes as input two
training sets of benign and malware applications and returns a policy as output.
The resulting policy represents an optimal solution for a constraint satisfaction
problem expressing that the discarded malware should be maximized while the
number of excluded benign applications must be minimized. The intuition behind
this is that the solution will capture the maximum of features which are specific
to malware and less common to benign applications. Our goal is to make the
generated policy as general as possible to the point of allowing us to make
decisions regarding new applications which are not part of the training set.
In addition to being fully push-button, DroidGen is able to generate a policy
that filters out 91% of malware from a representative testing set of Android
applications with only a false positive rate of 6%. Moreover, having the policies
in a declarative readable format can boost the effort of the malware analyst
by providing diagnosis and pointing her to suspicious parts of the application.
In what follows we present the main ingredients of DroidGen, describe their
functionality and report on experimental results.
2 Application Abstraction
DroidGen proceeds in several phases: application abstraction, constraint extraction and constraint solving, see Figure 1.

Fig. 1: Illustration of DroidGen’s Main Ingredients

Our goal is to infer policies that distinguish between good and bad behaviour. As it is not practical to have one policy
per malicious application, we need to identify common behaviours of applications. Hence the first phase of our approach is the derivation of specifications
(abstractions) which are general representations of applications. Given an application A, the corresponding high level specification Spec(A) consists of a set of
properties {p1 , . . . , pk } such that each property p has the following grammar:
p := c : r
c := entry point | activity | service | receiver
| onclick handler | ontouch handler | lc
lc := oncreate | onstart | onresume | . . .
r := perm | api
A property p describes a context part c in which a resource r is used. The resource
part can be either a permission perm or api referring to an api method identifier
which consists of the method name, its signature and the class it belongs to.
The context c can be entry point referring to all entry points of the app, activity representing methods belonging to activities, service for methods belonging
to service components3 , etc. We also have onclick handler and ontouch handler
respectively referring to click and touch event handlers. Moreover, c can be an
activity life-cycle callback such as oncreate, onstart, etc.4 Activity callbacks as
well as the touch and click event handlers are also entry points.
A property p of the form c : r belongs to the specification of an application A
if r (perm or api) is used within the context c in A. In other words: there exists at
least one method matching c from which r is transitively called (reachable). To
address such a query, we compute the transitive closure of the call graph [9]. We
propagate permissions (APIs) backwards from callees to callers until we reach a
fixpoint.
3 activity, service and receiver are some of the building blocks of Android applications.
4 Some components have a life-cycle governing their callback invocation.
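The backward propagation just described can be sketched as a standard fixpoint computation. The snippet below is illustrative only; the helper names and data layout are ours, not DroidGen's actual API.

    def propagate_permissions(callers_of, direct_perms):
        """callers_of: method -> set of calling methods; direct_perms: method -> permissions it uses."""
        perms = {m: set(ps) for m, ps in direct_perms.items()}
        changed = True
        while changed:                      # fixpoint: permissions only grow, so this terminates
            changed = False
            for callee, callers in callers_of.items():
                for caller in callers:
                    before = len(perms.setdefault(caller, set()))
                    perms[caller] |= perms.get(callee, set())
                    changed |= len(perms[caller]) > before
        return perms

    # Example mirroring Figure 2: onClick -> startRecording -> setOutputFile.
    callers = {"startRecording": {"onClick"}, "setOutputFile": {"startRecording"}}
    direct = {"setOutputFile": {"write external storage"}, "startRecording": set(), "onClick": set()}
    print(propagate_permissions(callers, direct)["onClick"])   # {'write external storage'}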
For illustration, let us consider the example in Figure 2. On the left hand
side, we have code snippets representing a simple audio recording application
named Recorder which inherits from an Activity component. On the right hand
side, we have the corresponding specifications in terms of APIs (Figure 2(a))
and in terms of permissions (Figure 2(b)). The method setAudioSource, which
sets the recording medium for the media recorder, is reachable (called) from
the Activity life-cycle method onCreate, hence we have the entry oncreate: setAudioSource in the specification map (a). We also have the entry oncreate:
record audio in the permission-based specification map (b) as the permission
record audio is associated with the API method setAudioSource according
to the Android framework implementation. Similarly, the API method setOutputFile is associated with the context onclick (a) as it is transitively reachable
(through startRecording) from the click handler method onClick. Hence permission write external storage, for writing the recording file on disk, is also
associated with onclick (b). Both APIs and permissions are also associated with
the context activity as they are reachable from methods which are activity members. We use results from [3] to associate APIs with the corresponding permissions.
public class Recorder extends Activity
        implements OnClickListener {
    private MediaRecorder myRecorder;
    ...
    public void onCreate(...) {
        myRecorder = new MediaRecorder();
        // uses RECORD_AUDIO permission
        myRecorder.setAudioSource(...);
    }

    private void startRecording() {
        // uses WRITE_EXTERNAL_STORAGE permission
        myRecorder.setOutputFile(...);
        myRecorder.start();
    }

    public void onClick(...) {
        startRecording();
    }
}

(a) Spec(Recorder)
    oncreate: setAudioSource
    onclick:  setOutputFile
    activity: setAudioSource
    activity: setOutputFile

(b) Spec(Recorder)
    oncreate: record audio
    onclick:  write external storage
    activity: record audio
    activity: write external storage
Fig. 2: Code snippets sketching a simple audio recording application together with the
corresponding specifications based on APIs (a) and based on permissions (b)
3 Specifications to Policies: an Optimisation Problem?
DroidGen tries to derive a set of rules (policy) under which a maximum number
of benign applications is allowed and a maximum of malware is excluded. This
is an optimization problem with two conflicting objectives. Consider
Spec(benign1 ) = {pa }
Spec(benign2 ) = {pc }
Spec(benign3 ) = {pb , pe }
Spec(malware1 ) = {pa , pb }
Spec(malware2 ) = {pa , pc }
Spec(malware3 ) = {pd }
Each application (benign or malware) is described by its specification consisting
of a set of properties (pi ’s). As seen previously, a property pi can be for example
activity : record audio, meaning that the permission record audio is used within an
activity. A policy excludes an application if it contradicts one of its properties.
We want to find the policy that allows the maximum of benign applications and
excludes the maximum of malware. This is formulated as:
$$\mathrm{Max}\Big[\underbrace{I(p_a) + I(p_c) + I(p_b \wedge p_e)}_{\text{benign}} \;-\; \underbrace{\big(I(p_a \wedge p_b) + I(p_a \wedge p_c) + I(p_d)\big)}_{\text{malware}}\Big]$$
where I(x) is the function that returns 1 if x is true and 0 otherwise. This type
of optimization problem, where we have a mixture of the theories of arithmetic and
logic, can be efficiently solved via an SMT solver such as Z3 [7]. It gives us the
solution: pa = 0, pb = 1, pc = 1, pd = 0 and pe = 1. Hence, the policy will
contain the two rules ¬pa and ¬pd which filter out all malware but also exclude
the benign application benign1 . A policy is violated if one of its rules is violated.
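To make the encoding concrete, the following sketch reproduces the toy instance above with the Z3 Python API (z3-solver). The variable names simply mirror the example; note that Z3 may return a different optimum of equal objective value.

    from z3 import Bools, Optimize, And, If, Sum, is_false

    pa, pb, pc, pd, pe = Bools('pa pb pc pd pe')

    benign = [pa, pc, And(pb, pe)]              # Spec(benign_1..3)
    malware = [And(pa, pb), And(pa, pc), pd]    # Spec(malware_1..3)

    I = lambda x: If(x, 1, 0)                   # indicator I(x)

    opt = Optimize()
    opt.maximize(Sum([I(b) for b in benign]) - Sum([I(m) for m in malware]))
    opt.check()
    m = opt.model()

    # Properties assigned False become policy rules "not p"; one optimal solution
    # yields pa = pd = False, i.e. the policy {not pa, not pd} discussed above.
    policy = [p for p in (pa, pb, pc, pd, pe)
              if is_false(m.eval(p, model_completion=True))]
    print(policy)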
Policy Verification and Diagnosis. Once we have inferred a policy, we want
to use it to filter out applications violating it. A policy P = {¬p1 , . . . , ¬pk }
is violated by an application A if {p1 , . . . , pk } ∩ Spec(A) ≠ ∅, meaning that A
contradicts (violates) at least one of the rules of P . In case of policy violation,
the violated rule, e.g. ¬p, can give some indication about a potential malicious
behaviour. DroidGen maps back the violated rule to the code in order to have
a view of the violation origin. For p = (c : r), a sequence of method invocations
m1 , . . . , mk is generated, such that m1 matches the context c and mk invokes r.
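The violation check itself reduces to a set intersection; here is a minimal sketch (our own helper, not DroidGen's API), where the intersection also names the rules to map back to code for diagnosis.

    def violated_rules(policy_props, spec):
        """policy_props: the properties pi whose negations form the policy; spec: Spec(A)."""
        return policy_props & spec

    spec_malware1 = {"pa", "pb"}                          # from the running example
    print(violated_rules({"pa", "pd"}, spec_malware1))    # {'pa'}: rule not-pa is violated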
4 Implementation and Experiments
DroidGen5 is written in Python and uses Androguard6 as front-end for parsing
and decompiling Android applications. DroidGen automatically builds abstractions for the applications which are directly accepted in APK binary format.
This process takes around 6 seconds per application. An optimization problem
in terms of constraints over the computed abstractions is then generated and
the Z3 SMT solver is called to solve it. Finally, the output of Z3 is interpreted
and translated to a readable format (policy). Policy generation takes about 7
seconds and its verification takes no more than 6 seconds per app on average.
5 www0.cs.ucl.ac.uk/staff/n.seghir/tools/DroidGen
6 https://github.com/androguard
We derived two kinds of policies based on a training set of 1000 malware
applications from Drebin7 and 1000 benign ones obtained from Intel Security
(McAfee). The first policy Pp is solely based on permissions and is composed
of 65 rules. The other policy Pa is exclusively based on APIs and contains 152
rules. Snippets from both policies are illustrated in the appendix. We have applied the two policies to a testing set of 1000 malware applications and 1000
benign ones (different from the training sets) from the same providers. Results
are summarised in Table 1. The policy Pa composed of rules over APIs performs
Policy            | Malware filtered out | Benign excluded
APIs (Pa)         | 910/1000             | 59/1000
Permission (Pp)   | 758/1000             | 179/1000
Table 1: Results for a permissions-based policy (Pp ) vs. an API-based one (Pa )
better than the one that uses permissions in terms of malware detection as it is
able to filter out 91% of malware while Pp is only able to detect 76%. It also has
a better false positive rate as it only excludes 6% of benign applications, while
Pp excludes 18%. Being able to detect 91% of malware is encouraging as it is
comparable to the results obtained with some of the professional security tools
(https://www.av-test.org/)8 . Moreover, our approach is fully automatic and the
actual implementation does not exploit the full expressiveness of the policy space
as we only generate policies in a simple conjunctive form. We plan to further
investigate the generation of policies in arbitrary propositional forms.
5 Related Work
Many tools for analysing various security aspects of Android have emerged [2,
5, 6, 8]. They either check or enforce certain security properties (policies). These
policies are either hard-coded or manually provided. Our work complements
such tools by providing the automatic means for inferring the properties to be
checked. Hence, DroidGen can serve as a front-end for a verification tool such
as EviCheck [9] to keep the user completely out of the loop.
Also various machine-learning-based approaches have been proposed for malware detection [1,4,10,11]. While some of them outperform our method, we did
not yet exploit the entire power of the policy language. We are planning to
allow more general forms for the rules, which could significantly improve the results. Moreover, many of the machine-learning-based approaches do not provide
further indications about potential malicious behaviours in an application. Our
approach returns policies in a declarative readable format which is helpful in
terms of code inspection and diagnosis. Some qualitative results are reported in
the appendix. To the best of our knowledge, our approach is unique for being
data-driven and using a constraint solver for inferring anti-malware policies.
7 http://user.informatik.uni-goettingen.de/~darp/drebin/
8 We refer to AV-TEST benchmarks dated September 2014 as our dataset was collected during the same period.
6 Conclusion and Future Work
We have presented DroidGen, a tool for the automatic generation of anti-malware
policies. It is data-driven and uses a constraint solver for policy inference. DroidGen is able to automatically infer an anti-malware policy which filters out 91% of
the tested malware with the additional benefit of being fully automatic. Having
the policies in declarative readable format can boost the effort of the malware
analyst by pointing her to suspicious parts of the application. As future work,
we plan to generate more expressive policies by not restricting their form to conjunctions of rules. We also plan to generate anti-malware policies for malware
families with the goal of obtaining semantics-based signatures (see appendix).
References
1. D. Arp, M. Spreitzenbarth, M. Hubner, H. Gascon, and K. Rieck. DREBIN: effective and explainable detection of android malware in your pocket. In NDSS,
2014.
2. S. Arzt, S. Rasthofer, C. Fritz, E. Bodden, A. Bartel, J. Klein, Y. L. Traon,
D. Octeau, and P. McDaniel. Flowdroid: precise context, flow, field, object-sensitive
and lifecycle-aware taint analysis for android apps. In PLDI, page 29, 2014.
3. K. W. Y. Au, Y. F. Zhou, Z. Huang, and D. Lie. Pscout: analyzing the Android
permission specification. In CCS, pages 217–228, 2012.
4. V. Avdiienko, K. Kuznetsov, A. Gorla, A. Zeller, S. Arzt, S. Rasthofer, and E. Bodden. Mining Apps for Abnormal Usage of Sensitive Data. In ICSE, pages 426–436,
2015.
5. M. Backes, S. Gerling, C. Hammer, M. Maffei, and P. von Styp-Rekowsky. Appguard - enforcing user requirements on Android apps. In TACAS, pages 543–548,
2013.
6. E. Chin, A. P. Felt, K. Greenwood, and D. Wagner. Analyzing inter-application
communication in Android. In MobiSys, pages 239–252, 2011.
7. L. M. de Moura and N. Bjørner. Z3: An efficient smt solver. In TACAS, pages
337–340, 2008.
8. S. Fahl, M. Harbach, T. Muders, M. Smith, L. Baumgärtner, and B. Freisleben.
Why Eve and Mallory love Android: an analysis of Android SSL (in)security. In
ACM Conference on Computer and Communications Security, pages 50–61, 2012.
9. M. N. Seghir and D. Aspinall. Evicheck: Digital evidence for android. In ATVA,
pages 221–227, 2015.
10. C. Yang, Z. Xu, G. Gu, V. Yegneswaran, and P. A. Porras. Droidminer: Automated mining and characterization of fine-grained malicious behaviors in android
applications. In ESORICS, pages 163–182, 2014.
11. W. Yang, X. Xiao, B. Andow, S. Li, T. Xie, and W. Enck. Appcontext: Differentiating malicious and benign mobile app behaviors using context. In ICSE, pages
303–313, 2015.
arXiv:1607.03790v1 [] 13 Jul 2016
ON THE FINITENESS OF THE CLASSIFYING SPACE FOR THE FAMILY OF
VIRTUALLY CYCLIC SUBGROUPS
TIMM VON PUTTKAMER AND XIAOLEI WU
Abstract. Given a group G, we consider its classifying space for the family of virtually
cyclic subgroups. We show for many groups, including for example, one-relator groups,
acylindrically hyperbolic groups, 3-manifold groups and CAT(0) cube groups, that they do
not admit a finite model for this classifying space unless they are virtually cyclic. This
settles a conjecture due to Juan-Pineda and Leary for these classes of groups.
Introduction
Given a group G, we denote by EG = EVCY (G) a G-CW-model for the classifying space
for the family of virtually cyclic subgroups. The space EG is characterized by the property
that the fixed point set EG H is non-empty and contractible for all virtually cyclic subgroups
H of G and empty otherwise. In the formulation of the Farrell–Jones Conjecture EG plays
an important role (see for example [7, 19] for more information). Due to this, there has
been a growing interest in studying EG, see for example [6, 13, 15, 17, 18]. Recall that a
G-CW-complex X is said to be finite if it has finitely many orbits of cells. Similarly, X is
said to be of finite type if it has finitely many orbits of cells of dimension n for any n. In
[13, Conjecture 1], Juan-Pineda and Leary formulated the following conjecture:
Conjecture A. [13, Juan-Pineda and Leary] Let G be a group admitting a finite model for
EG. Then G is virtually cyclic.
The conjecture may be surprising at the beginning as there are many examples of groups
with a finite model for the classifying space for the family consisting only of the trivial
subgroup or for the family of finite subgroups. Juan-Pineda and Leary demonstrated the
validity of the conjecture for hyperbolic groups [13, Corollary 12]. Later, Kochloukova,
Martı́nez-Pérez and Nucinkis verified the conjecture for elementary amenable groups [15].
Groves and Wilson gave a simplified proof for elementary amenable groups [10]. So far,
almost all the proofs boil down to analyzing whether EG has a finite 0-skeleton. It turns
out that having a finite 0-skeleton for EG is equivalent to the following purely algebraic
condition (see 1.3)
Date: July, 2016.
2010 Mathematics Subject Classification. 55R35, 20B07.
Key words and phrases. Classifying space, virtually cyclic subgroup, acylindrically hyperbolic groups, 3-manifold groups, HNN-extension, one-relator groups, CAT(0) cube groups.
(BVC) G has a finite set of virtually cyclic subgroups {V1 , V2 , . . . , Vn } such that every
virtually cyclic subgroup of G is conjugate to a subgroup of some Vi .
Following Groves and Wilson, we shall call this property BVC and the finite set {V1 , V2 ,
. . . , Vn } a witness to BVC for G. In this paper, we give a systematic study of the property
BVC. Our main theorem can be stated as follows
Theorem. The following classes of groups satisfy Conjecture A:
(a) HNN extensions of free groups of finite rank (2.11),
(b) one-relator groups (2.12),
(c) acylindrically hyperbolic groups (3.2),
(d) 3-manifold groups (3.6),
(e) CAT(0) groups that contain a rank-one isometry or Z2 (4.13), in particular CAT(0)
cube groups (4.16).
In fact, we show that any finitely generated group in these classes has BVC if and only
if it is virtually cyclic. Our result also suggests that the following conjecture could be true.
Conjecture B. Let G be a finitely presented group which has BVC. Then G is virtually
cyclic.
Remark.
(a) The assumption of having a finitely presented group is necessary here
since Osin [25] has constructed finitely generated torsion-free groups with exactly
two conjugacy classes. In particular these groups have BVC.
(b) We do not know whether Conjecture B holds for all elementary amenable groups.
Groves and Wilson showed that solvable groups satisfy it [10].
(c) If we know the Flat Closing Conjecture, then it would follow that any CAT(0) group
satisfies Conjecture B. See 4.14 for more information.
Our paper is organized as follows. In Section 1, we first summarize what we already
know about groups admitting a finite model for EG, then we study basic properties of
groups with BVC and deduce that many groups cannot have this property. In section 2,
we study HNN extension of groups and show for example that any HNN extension of a
finitely generated free group does not have BVC. Using this, we prove that any non-cyclic
one-relator group does not have BVC. In section 3, we show that acylindrically hyperbolic
groups and finitely generated non virtually cyclic 3-manifold groups do not have BVC. In
the last section, we study groups acting on CAT(0) spaces. We show for example, that
CAT(0) cube groups do not have BVC unless they are virtually cyclic.
Results of this paper will also appear as a part of the first author’s thesis.
Acknowledgements. The first author was supported by an IMPRS scholarship of the
Max Planck Society. The second author would like to thank the Max Planck Institute for
Mathematics at Bonn for its support.
1. Properties of groups admitting a finite model for EG
In this section we first review properties of groups admitting a finite model for EG.
We then proceed to prove many useful lemmas for groups with BVC. We also use these
lemmas to show many groups cannot have BVC.
We denote by EG, resp. EG the G-CW classifying space for the family of finite subgroups resp. for the family consisting only of the trivial subgroup. We summarize the
properties of groups admitting a finite model for EG as follows
Proposition 1.1. Let G be a group admitting a finite model for EG, then
(a) G has BVC.
(b) G admits a finite model for EG.
(c) For every finite subgroup of H ⊂ G, the Weyl group WG H is finitely presented and
of type FP∞ . Here WG H := NG (H)/H, where NG (H) is the normalizer of H in G.
(d) G admits a finite type model for EG. In particular, G is finitely presented.
(e) G has finitely many conjugacy classes of finite subgroups. In particular, the order
of finite subgroups of G is bounded.
Proof Note that if G has a finite or finite type model for EG, then G is finitely presented
and has finitely many conjugacy classes of finite subgroups [16, 4.2]. Now (a), (b), (c) can
be found for example in [15, Section 2]. Part (d) can be deduced from (c) by taking H to
be the trivial group.
Remark 1.2. If one replaces finite by finite type in the assumptions of the above proposition, then the conclusions still hold except one has to replace finite by finite type in (b).
The following lemma is well-known to experts.
Lemma 1.3. Let G be a group. Then there is a model for EG with finite 0-skeleton if and
only if G has BVC.
Proof Suppose X is a G-CW model for EG with a finite 0-skeleton. Let σ1 , σ2 , . . . , σn
be a set of representatives for each orbit of 0-cell. Let V1 , V2 , . . . , Vn be the corresponding
virtually cyclic stabilizers. Since for every virtually cyclic subgroup V, the set of fixed
points X V is non-empty, there exists some vertex of X that is fixed by V. Since this vertex
stabilizer is a conjugate to some Vi , the subgroup V is subconjugate to Vi . Conversely,
suppose G has BVC, and let V1 , V2 , . . . , Vn be witnesses. We can construct a model for
EG with finite 0-skeleton as follows. Consider the G-set S := ∐_{i=1}^{n} G/V_i. The complete
graph ∆(S) spanned by S, i.e., the simplicial graph that contains an edge for every two elements of S, carries a cocompact simplicial G-action. The first barycentric subdivision
∆̄(S) of ∆(S) is a G-CW-complex. Note that ∆̄(S) is again cocompact. Moreover, ∆̄(S)^H
is nonempty when H is virtually cyclic and empty otherwise. Now we can add equivariant cells of dimension ≥ 1 to ∆̄(S) and make ∆̄(S)^H contractible for all virtually cyclic
subgroups H by induction. Thus we get a model for EG at the end with finite 0-skeleton.
variant cells of dimension ≥ 1 to ∆(S
subgroup by induction. Thus we get a model for EG at the end with finite 0-skeleton.
The following structure theorem about virtually cyclic groups is well known, see for
example [13, Proposition 4] for a proof.
Lemma 1.4. Let G be a virtually cyclic group. Then G contains a unique maximal normal
finite subgroup F such that one of the following holds
(a) the finite case, G = F;
(b) the orientable case, G/F is the infinite cyclic group;
(c) the nonorientable case, G/F is the infinite dihedral group.
Note that the above lemma implies that a torsion-free virtually cyclic group is either
trivial or isomorphic to Z. Thus we have the following
Corollary 1.5. Let G be a torsion-free group, then G has BVC if and only if there exist
elements g1 , g2 , . . . gn in G such that every element in G is conjugate to a power of some gi .
The following lemma is key to many of our proofs.
Lemma 1.6. Let V be a virtually cyclic group and let g, h ∈ V be two elements of infinite
order, then there exist p, q ∈ Z such that g p = hq . Furthermore, there exists v0 ∈ V such
p
that for any v ∈ V of infinite order there exist nonzero p0 , p such that v0 0 = v p with
p0
p
∈ Z.
Proof By 1.4, there exists an finite normal subgroup of F such that V/F is isomorphic
to Z or Z ⋊ Z/2. We denote the quotient map by q. Then if g is of infinite order and
V/F Z ⋊ Z/2, then q(g) ∈ Z ⋊ {0}. Thus we can always assume V/F Z. In this case
V F ⋊ f Z, where Z acts on F via the automorphism f . Now for any g = (x, m) ∈ F ⋊ f Z
we can choose k = |F|| f | where |F| is the order of the finite group F and | f | is the order of
the automorphism f , then gk = (0, km). Let h = (y, n), then hk = (0, kn). Now we see that
k
gkn = hkm . If we choose v0 = (0, 1) ∈ F ⋊ f Z then for any v = (x, m) ∈ V, vmk
0 =v .
Note that in any virtually cyclic group there are only finitely many distinct finite subgroups up to conjugacy (using 1.4 or the fact that virtually cyclic groups are CAT(0)).
Using this fact one immediately obtains
Lemma 1.7. If a group G has BVC, then G has finitely many conjugacy classes of finite
subgroups. In particular, the order of finite subgroups in G is bounded.
In a group G, we call an element g primitive if it cannot be written as a proper power.
Then 1.5 implies the following
Lemma 1.8. Let G be a torsion-free group. If G has infinitely many conjugacy classes of
primitive elements, then G does not have BVC.
ON THE FINITENESS OF THE CLASSIFYING SPACE FOR THE FAMILY OF VIRTUALLY CYCLIC SUBGROUPS 5
Note that without the assumption of G being torsion-free, the previous lemma does not
hold. In fact, even a virtually cyclic group could contain infinitely many conjugacy classes
of primitive elements.
Example 1.9. Let S n be the symmetric group of order n with n ≥ 3, then S n × Z has
infinitely many primitive conjugacy classes. In fact, let (xi , 2i ) ∈ S n × Z with xi an odd
element in S n . Then (xi , 2i ) is primitive for any i ≥ 1. Since if (xi , 2i ) = (x, y)k for some
k > 1, then ky = 2i . In particular, k is even and thus xk cannot equal the odd element
xi . The elements (xi , 2i ) cannot be conjugate to each other since the second coordinate is
different.
Lemma 1.10. Let G = A ∗ B be a free product with A and B nontrivial, then G has BVC if
and only if G is virtually cyclic.
Proof If A and B are finite groups, then A ∗ B is a virtually free group and hence hyperbolic. So the lemma holds in this case. Now we assume that A is not finite, then A ∗ B is
not virtually cyclic. Let a1 , a2 . . . , an , . . . , be a sequence of different elements in A and let
b ∈ B be a non-trivial element. Then {ai b | i ≥ 1} in G form infinitely many conjugacy
classes of primitive elements in G. Moreover, when i , j, ai b and any conjugates of a j b
cannot lie in a virtually cyclic group. In fact, if this were the case, by 1.6 we would have
that (ai b)m is conjugate to (a j b)n in G for some m, n , 0. But this is impossible by the
choices of ai and b and [20, IV.2.8]. Hence G does not have BVC in this case.
Lemma 1.11. [15, 5.6] If a group G has BVC, then any finite index subgroup also has
BVC.
Combining this with the main result of [10], we have
Proposition 1.12. Virtually solvable groups have BVC if and only if they are virtually
cyclic.
Lemma 1.13. [10, 2.2] Let G be a group with property BVC. Then the following assertions
hold.
(a) The group G satisfies the ascending chain condition for normal subgroups.
(b) If L and M are normal subgroups of G with M < L and L/M a torsion group, then
there are only finitely many normal subgroups K of G such that M ≤ K ≤ L.
(c) Let
1 = Gn ≤ Gn−1 ≤ · · · ≤ G1 ≤ G0 = G
be a series of normal subgroups of G. Then the number of factors Gi /Gi−1 that are
not torsion groups is bounded by the number of infinite groups in a witness to BVC
for G.
The following lemma allows us to show that many groups cannot have BVC.
6
TIMM VON PUTTKAMER AND XIAOLEI WU
Lemma 1.14. Let π : G → Q be a surjective group homomorphism. If Q is a torsion-free
group that does not have BVC, then G does not have BVC.
Proof Suppose G has BVC and let V1 , V2 . . . , Vn be a witness for BVC of G. Note that
any quotient of a virtually cyclic group is again virtually cyclic. Hence π(Vi ) are virtually
cyclic subgroups in Q. Since Q is torsion-free and does not have BVC, we can find a
nontrivial element q ∈ Q such that q cannot be conjugated to π(Vi ) for any i. Now take
g ∈ G such that π(g) = q, then we can find c ∈ G such that cgc−1 ∈ Vi for some i. But then
we would have π(c)π(g)π(c−1) = π(c)qπ(c−1) ∈ π(Vi ) which is a contradiction.
Corollary 1.15. If G is a group having BVC, then the abelianization H1 (G, Z) is finitely
generated of rank at most one.
Proof Let A be the abelianization of G and let T be the torsion subgroup of A. By 1.14
the torsion-free abelian group A/T has to be trivial or infinite cyclic. Hence A = T or
A T × Z. Now 1.13(b) implies that the torsion group T has to be finite.
Example 1.16. The Thompson groups are a family of three finitely presented groups, F ⊂
T ⊂ V. Thompson’s group F can be defined by the presentation hA, B | [AB−1 , A−1 BA] =
[AB−1, A−2 BA2] = 1i. Since H1 (F) Z2 , it follows that F does not have BVC. Since the
order of finite subgroups in T and V is unbounded, we see that T and V also do not have
BVC. See [3] for more information about Thompson groups.
Recall that a group satisfies the strong Tits alternative if any finitely generated subgroup has a finite index subgroup which is either solvable, or has a non-abelian free quotient. Since virtually solvable groups and free groups have BVC if and only if they are
virtually cyclic, we have the following by 1.11 and 1.14,
Lemma 1.17. If a group G satisfies the strong Tits alternative, then a finitely generated
subgroup of G has BVC if and only if it is virtually cyclic.
Since Coxeter group and right angled Artin groups are known to satisfy the strong Tits
alternative by [24] and [1, 1.6], we have the following
Corollary 1.18. Let G be a Coxeter group or a right angled Artin group, then a finitely
generated subgroup of G satisfies BVC if and only if it is virtually cyclic.
Note that 1.15 reduces the study of finitely generated groups with BVC to the case
where H1 (G, Z) is of rank one or zero. If H1 (G, Z) is of rank one, then up to replacing G
by a finite index subgroup, we can assume H1 (G, Z) Z. Then G becomes a semidirect
product of the form H ⋊ Z. We proceed to study groups of this type.
Given an automorphism φ of G, we say that two elements g, h in G are φ-conjugate if
g = xhφ(x−1 ) for some x ∈ G. This is an equivalence relation whose equivalence classes
ON THE FINITENESS OF THE CLASSIFYING SPACE FOR THE FAMILY OF VIRTUALLY CYCLIC SUBGROUPS 7
are called φ-twisted conjugacy classes. For φ = idG one recovers the usual notion of
conjugacy.
Lemma 1.19. Let φ be an automorphism of H such that H has infinitely many φ-twisted
conjugacy classes, then the semidirect product H ⋊φ Z does not have BVC.
Proof Note that in H ⋊φ Z, the elements (g, 1) and (h, 1) are primitive and they are in
the same conjugacy class if and only if g and h are in the same φ-twisted conjugacy class
in H. In fact, (g, 1) is conjugate to (h, 1) in H ⋊φ Z if and only if we can find (x, k) such
that (x, k)(g, 1)(x, k)−1 = (xφk (g)φ(x−1 ), 1) = (h, 1). This is equivalent to saying that g is φconjugate to φk (g) in H. But g and φ(g) are φ-conjugate in H since φ(g) = g−1 gφ(g). Hence
g and h are in the same φ-twisted conjugacy class in H if and only if (g, 1) is conjugate to
(h, 1) in H ⋊φ Z.
Since H has infinitely many φ-twisted conjugacy classes, we have infinitely many primitive elements of the form (g, 1) ∈ H ⋊φ Z that are not conjugate to each other. If H ⋊φ Z has
BVC, then we can choose infinitely many elements (g1 , 1), (g2, 1), . . . , (gn , 1), . . . that are
not conjugate to each other, but they lie in the same virtually cyclic subgroup. In particular
the group V generated by (g1 , 1), (g2, 1), . . . , (gn , 1) . . . in G is virtually cyclic. But this is
impossible. Consider the quotient map q : H ⋊ Zφ → Z, which is onto when restricted to
V. Hence the kernel must be finite. This means V F ⋊ Z for some finite group F and the
image of (gi , 1) is of the form ( fi , 1). This leads to a contradiction since there are infinitely
many (gi , 1) but only finitely many ( fi , 1).
Recall that a group is said to have property R∞ if it has infinitely many φ-twisted conjugacy classes for any automorphism φ.
Corollary 1.20. Let G be a group with the property R∞ , then any semidirect product H⋊φ Z
does not have BVC.
Note that there are many groups with property R∞ , for example hyperbolic groups
that are not virtually cyclic, relatively hyperbolic groups and most generalzied BaumslagSolitar groups. For more information about groups with property R∞ and further examples,
see [8].
2. HNN extension of groups and one-relator groups
In this section, we study HNN extension of groups. We first give a quick review of
basic results concerning the structure of these types of groups. Then we prove that any
non-ascending HNN extension of a group and any HNN extension of a finitely generated
free group do not have BVC. Using this, we are able to show that one-relator groups have
BVC if and only if they are virtually cyclic.
Recall that given a group H and an isomorphsim θ : A → B between two subgroups A
and B of H, we can define a new group H∗θ , called the HNN extension of H along θ, by
8
TIMM VON PUTTKAMER AND XIAOLEI WU
the presentation hH, t | txt−1 = θ(x), x ∈ Ai. In the study of HNN extensions of groups,
Britton’s Lemma and Collin’s Lemma play an important rule. We give a quick review of
the two lemmas and refer to [20, IV.2] for more details.
Definition 2.1. A sequence g0 , tǫ1 , g1 , . . . , tǫn , gn of elements with gi ∈ H and ǫi ∈ {−1, +1}
is said to be reduced if there is no consecutive sequence t, gi , t−1 with gi ∈ A or t−1 , g j , t
with g j ∈ B.
Lemma 2.2 (Britton’s Lemma). If the sequence g0 , tǫ1 , g1 , . . . , tǫn , gn is reduced and n ≥ 1,
then g0 tǫ1 g1 · · · tǫn gn , 1 in H∗θ .
In the following we will not distinguish between a sequence of words as above and the
element it defines in the HNN extension H∗θ .
Give any g ∈ H∗θ , we can write g in a reduced form. Let w = g0 tǫ1 g1 · · · tǫn gn , 1
be any reduced word in H∗θ which represents g. The we define the length of g, written as |g|, to be the number n of occurences of t± in w. Moreover we call an element
w = g0 tǫ1 g1 · · · tǫn gn , 1 cyclically reduced if all cyclic permutations of the sequence
g0 , tǫ1 , g1 , . . . , tǫn , gn are reduced. Every element of H∗θ is conjugate to a cyclically reduced
element.
Lemma 2.3 (Collin’s Lemma). Let G = hH, t | txt−1 = θ(x), x ∈ Ai be an HNN extension.
Let u = g0 tǫ1 g1 · · · tǫn gn and v be cyclically reduced elements of G that are conjugate, n ≥ 1.
Then |u| = |v|, and u can be obtained from v by taking a suitable cyclic permutation v∗ of v,
which ends in tǫn , and then conjugating by an element z, where z ∈ A if ǫn = 1, and z ∈ B if
ǫn = −1.
We are ready to prove the following:
Lemma 2.4. Let H be a group and let θ : A → B be an isomorphism between two subgroups of H. If [H : A], [H : B] ≥ 2, then the corresponding HNN extension G = H∗θ does
not have BVC
Proof Choose α ∈ H \ B and β ∈ H \ A, define
wn = t−1 αtn+1 β
for n ≥ 1. Note that the elements wn are of infinite order and cyclically reduced. By
Collin’s Lemma, they are not conjugate to each other. If G had BVC, there would exist
a virtually cyclic subgroup V ⊆ G such that there is an infinite subsequence {wni } with
each w_{n_i} conjugate to an infinite order element of V. But this cannot happen as w_n^{p_n} is not conjugate to w_m^{p_m} for any p_n , p_m ≠ 0 when n ≠ m. In fact, first note that w_n^{p_n} is cyclically reduced for any n. So if w_n^{p_n} is conjugate to w_m^{p_m} , their lengths must coincide by Collin’s Lemma. Hence we have an equation |p_n |(n + 2) = |p_m |(m + 2). On the other hand, there is a canonical quotient map q : G → ⟨t⟩ ≅ Z. If w_n^{p_n} is conjugate to w_m^{p_m} , then q(w_n^{p_n} ) = q(w_m^{p_m} ).
This means pn n = pm m. But the two equations can never hold at the same time when
n, m ≥ 1 unless n = m. Hence we have a contradiction by 1.6.
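One way to spell out the last step: the relations |p_n |(n + 2) = |p_m |(m + 2) and p_n · n = p_m · m force p_n and p_m to have the same sign (since n, m ≥ 1), hence |p_n | · n = |p_m | · m; subtracting this from the first relation gives 2|p_n | = 2|p_m |, so |p_n | = |p_m | and therefore n + 2 = m + 2, i.e. n = m.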
Now when H = A or H = B, we would have an ascending HNN extension. It seems
to us that this case cannot be handled as easily as before. In the following we will analyze
the case of an ascending HNN extension of a free group F of finite rank in detail. We will
first deal with the case that θ : F → F is injective with its image lying in the commutator
subgroup of F. Given a group G, we will write G′ for the commutator subgroup [G, G]
and we will denote the r-th term in the lower central series by Γr (G) = [Γr−1 (G), G] where
Γ1 (G) = G. Let us first recall the following facts:
Lemma 2.5. The lower central series has the following properties:
(a) [Γr (G), Γs (G)] ⊆ Γr+s (G) for any group G.
(b) ∩_{r≥1} Γr (F) = {1} for any free group F.
(c) Γr (F)/Γr+1 (F) is a free abelian group for any r and any free group F of finite rank.
Proof (a) can be found in [11, Corollary 10.3.5]. (b) and (c) can be found in [11, 11].
Corollary 2.6. Let θ : F → F be an injective map of the finitely generated free group F
with the image of θ lying in the commutator subgroup of F. If x ∈ Γr (F), then θ(x) ∈ Γ2r (F).
Proof If x ∈ Γ1 (F) = F, then by assumption on θ we have θ(x) ∈ [F, F] = Γ2 (F). Let
r ≥ 2 and suppose that for any s < r the claim holds. If x ∈ Γr (F) = [Γr−1 (F), F], then by
induction and 2.5 we get θ(x) ∈ [Γ2(r−1) (F), Γ2 (F)] ≤ Γ2r (F).
Lemma 2.7. Let G = ⟨H, t | txt−1 = θ(x), x ∈ H⟩ be an ascending HNN extension of a group H. Then any element of G can be written in the form t−p htq with p, q ≥ 0 and h ∈ H. Moreover the normalizer NG (H) of H in G is given by NG (H) = ∪_{i≥0} t−i Hti .
Proof The claim about the form elements of G take follows since for any h ∈ H, th = θ(h)t and similarly ht−1 = t−1 θ(h) in G. For the second part, notice that t−i Hti ⊂ NG (H) for any i. Since G/NG (H) ≅ ⟨t⟩, we have that if g = t−p htq ∈ NG (H), then p = q. Thus NG (H) = ∪_{i≥0} t−i Hti .
Lemma 2.8. Let G = ⟨F, t | txt−1 = θ(x), x ∈ F⟩ be any ascending HNN extension of a free group F of finite rank with im(θ) ≤ [F, F]. Suppose that x, y ∈ F are non-primitive in G and generate a free subgroup of rank 2. Then xy is primitive in G.
Proof Suppose x, y and xy are all non-primitive. Let x = um , y = vn , xy = wl for some u, v, w ∈ G and m, n, l ≥ 2. Let q be the canonical quotient map from G to ⟨t⟩ ≅ Z mapping F to 0. Then u, v, w lie in the kernel since x and y lie in the kernel. Note that the kernel is just the normalizer of F in G. By 2.7, there exists some p ≥ 0 such that u, v, w lie in the free subgroup t−p Ft p . But by [21], the equation um vn = wl has a solution in a free group only if u, v, w generate a free subgroup of rank 1. This contradicts our hypothesis on x and y.
We need the following lemma.
Lemma 2.9. Let f : A → A be an automorphism of a free abelian group A. If f (ka) = la
for some a , 0 and positive integers k, l, then k = l.
Proof We can assume without loss of generality that A has infinite rank, so A ≅ ⊕_{i∈I} Z
for some infinite index set I. We call a non-trivial element a ∈ A prime if the common
divisor of its finitely many non-zero coordinates is trivial. Note that any non-trivial a ∈ A
can be written as a = d · x with x being prime. Since f is an automorphism, it will preserve
prime elements. So now suppose that f (ka) = la with k, l ∈ N and a , 0. We write a = dx
as above with x prime. Then k f (x) = lx and by cancelling common factors we might as
well assume that k and l are coprime. Since k divides all coordinates of the prime element
x, it has to equal one, and the same holds for l since f (x) is prime.
Proposition 2.10. Let G = hF, t | txt−1 = θ(x), x ∈ Fi be an ascending HNN extension
of a free group F of finite rank, where the image of θ lies in the commutator subgroup of
F. If x, y ∈ F \ [F, F] generate a free subgroup of rank 2 in F and x is primitive, then
the elements {xk yxk y−1 | k ≥ 2} form pairwise distinct primitive conjugacy classes. In
particular, G does not have BVC.
Proof Note first that xk yxk y−1 does not lie in [F, F] and is primitive when k ≥ 2 by 2.8.
Note that every element in G can be written in the form t−p wtq for some p ≥ 0, q ≥ 0 and w ∈ F by 2.7. Now if xk yxk y−1 is conjugate to xl yxl y−1 for some k ≠ l, then xk yxk y−1 = t−p wtq xl yxl y−1 t−q w−1 t p for some p, q ≥ 0 and w ∈ F. Hence θ p (xk yxk y−1 ) = wθq (xl yxl y−1 )w−1 .
If p ≠ q, the equation never holds. In fact, assume p > q. We can further assume θq (x) ∈ Γr (F) \ Γr+1 (F) for some r ≥ 2 by 2.5 (b). Then θ p (x) ∈ Γr+1 (F) by 2.6 and thus θ p (xk yxk y−1 ) ∈ Γr+1 (F). On the other hand, θq (xl ) ∈ Γr (F) \ Γr+1 (F) for any l > 0 since Γr (F)/Γr+1 (F) is a free abelian group by 2.5 (c). Now θq (xl yxl y−1 ) = θq (x2l )[θq (x−l ), θq (y)] and [θq (x−l ), θq (y)] ∈ Γr+1 (F) by 2.6, so we have θq (xl yxl y−1 ) ∈ Γr (F) \ Γr+1 (F). But then the right hand side wθq (xl yxl y−1 )w−1 also lies in Γr (F) \ Γr+1 (F), while the left hand side lies in Γr+1 (F), so the equation cannot hold.
If p = q, then the equation again cannot hold unless k = l. In fact, assume θ p (x) ∈
Γr (F)\Γr+1 (F), then both sides lie in Γr (F)\Γr+1 (F) by the same argument above. By taking
the quotient by Γr+1 (F) we obtain an equation in the free abelian group Γr (F)/Γr+1 (F).
Then we would have k([θ p (x)] + [θ p (yxy−1 )]) = l([wθ p (x)w−1 ] + [wθ p (yxy−1 )w−1 ]). Note
that Γr (F)/Γr+1 (F) is a free abelian group of infinite rank by 2.5 (c) and the action of w on
Γr (F)/Γr+1 (F) induced by conjugation is an isomorphism. Thus the equation k([θ p (x)] +
[θ p (yxy−1 )]) = l([wθ p (x)w−1 ] + [wθ p (yxy−1 )w−1 ]) can never hold unless k = l by 2.9.
We are ready to prove the following.
Theorem 2.11. Let G be an HNN extension of a free group of finite rank, then G does not
have BVC.
Proof By 2.4, we can assume G = ⟨Fn , t | txt−1 = θ(x), x ∈ Fn ⟩, where θ : Fn → Fn is
injective, Fn is a free group of rank n. For n = 1 the group G is solvable but not virtually
cyclic. Thus G does not have BVC by 1.12. So in the following we assume that Fn is a
free group of rank bigger than 1.
Note first that we have an induced map θ̄ : Fn /[Fn , Fn ] → Fn /[Fn , Fn ] ≅ Zn . Since
the rank of the abelian group is finite, there exists some k ≥ 1 such that rank(ker(θ̄k+1 )) =
rank(ker(θ̄k )). But since Zn is free abelian, it follows that ker(θ̄k+1 ) = ker(θ̄k ), and we will
denote this group by K. This implies that θ̄k induces an injective endomorphism of Zn /K.
If K is a proper subgroup of Zn , we consider the induced quotient map Fn ∗θk → (Zn /K)∗θ̄k .
Note that the quotient is a torsion-free metabelian group which is not virtually cyclic.
Hence Fn ∗θk does not have BVC by 1.12 and 1.14. As Fn ∗θk is a finite index subgroup
of Fn ∗θ (see for example [14, 2.2]) we conclude that the latter group does not have BVC
by 1.11.
If K = Zn , we are in the situation that the image of θk lies in the commutator subgroup
of Fn . By 2.10 the group Fn ∗θk does not have BVC. Again by 1.11 it follows that Fn ∗θ
does not have BVC.
We now want to apply the previous results to verify Conjecture B for the class of one-relator groups. Recall that a one-relator group is a group G which has a presentation with a single relation, so G = ⟨x1 , . . . , xn | r⟩ where r is a word in the free group F on the letters
x1 , . . . , xn . The group G is torsion-free precisely when r, as an element of the free group
F, is not a proper power. If r = sn for some maximal n ≥ 2 and s ∈ F, then s, considered
as an element in G, is of order n. In all cases there exists a finite G-CW model for EG, see
for example [17, 4.12].
For one-relator groups G with torsion, Newman’s Spelling Theorem [23] implies that
G is a hyperbolic group. In particular, the one-relator groups containing torsion satisfy
Conjecture B. However, our proof of the following theorem does not depend on this fact.
Theorem 2.12. A one-relator group has BVC if and only if it is cyclic.
Proof Let G be a one-relator group.
If the one-relator presentation of G contained three or more generators then G would
surject to Z2 , in particular G would not have BVC by 1.15. Thus we can restrict to the case
that G has two generators, so
G = ⟨a, b | R(a, b) = 1⟩
for some word R(a, b) in the free group on the two generators a, b. By [20, Lemma V.11.8]
we can moreover assume that the exponent sum of one of the generators in the single
relator equals zero, say for the generator a. The following rewriting procedure, which
we outline for the reader’s convenience, is standard. The proofs of the mentioned facts
can be found in [20, IV.5]. We let bi = ai ba−i for all i ∈ Z. Then R can be rewritten as
a cyclically reduced word R′ in terms of these, so R′ = R′ (bm , . . . , b M ) for some m ≤ M,
such that the elements bm , b M occur in R′ . If m = M, then R(a, b) = bm for some m ∈ Z
and thus G ≅ Z or G ≅ Z ∗ Z/|m| where |m| ≥ 2. Note that by 1.10 the latter group does not
have BVC. So in the following we can assume that m < M. We let
H = ⟨bm , . . . , b M | R′ (bm , . . . , b M ) = 1⟩.
Moreover we define A to be the subgroup of H generated by bm , . . . b M−1 and we let B to
be the subgroup of H generated by bm+1 , . . . b M . Then A and B are free subgroups of the
one-relator group H and G is isomorphic to the HNN extension H∗θ where θ : A → B is
the isomorphism defined by θ(bi ) = bi+1 for m ≤ i < M.
If [H : A] ≥ 2 and [H : B] ≥ 2, then G does not have BVC by 2.4. Otherwise G is
an ascending HNN extension, say with H = A. Since A was free, G is an ascending HNN
extension of a finitely generated free group. The claim now follows from 2.11.
3. Groups with some hyperbolicity
In this section, we first show that acylindrically hyperbolic groups do not have BVC.
Using this we show that any finitely generated 3-manifold group does not have BVC.
We first give a quick definition of acylindrically hyperbolic group and refer to [12] and
[26] for more information. Recall the action of a group G on a metric space S is called
acylindrical if for every ε > 0 there exist R, N > 0 such that for every two points x, y with
d(x, y) ≥ R, there are at most N elements g ∈ G satisfying d(x, gx) ≤ ε and d(y, gy) ≤ ε.
Given a hyperbolic space S , we use ∂S to denote its Gromov boundary.
Definition 3.1. A group G is called acylindrically hyperbolic if there exists a generating
set X of G such that the corresponding Cayley graph Γ(G, X) is hyperbolic, |∂Γ(G, X)| > 2,
and the natural action of G on Γ(G, X) is acylindrical.
Proposition 3.2. An acylindrically hyperbolic group does not have BVC.
Proof Let G be an acylindrically hyperbolic group. By [26, Theorem 1.2] this is equivalent to saying that G contains a proper infinite hyperbolically embedded subgroup. By [5,
Lemma 6.14], we further have a subgroup K = F2 × K(G) inside G which is hyperbolically
embedded, where K(G) is some maximal normal finite subgroup of G and F2 is the free
group of rank 2. The following two statements are copied from the proof of [12, VI.1.1]
(they can be easily deduced from [5, Proposition 4.33]): (1) every element of infinite
order which is primitive in K is also primitive in G; (2), if two elements of K of infinite
order are conjugate in G, they are conjugate in K. Now since K contains F2 as a direct
factor, there exist infinitely many primitive elements of infinite order g1 , g2 , . . . , gn , . . . in F2 ⊂ K such that gi is not conjugate to g_j or g_j^{−1} when i ≠ j. But any two such elements
or conjugates of them cannot lie in the same virtually cyclic subgroup. In fact, let gi , g j
be two such elements and suppose gi , xg j x−1 lie in the same virtually cyclic subgroup in
G, where x ∈ G. Then we have g_i^m = x g_j^n x^{−1} for some m and n by 1.6. Thus g_i^m and g_j^n
are conjugate in G. Hence they are also conjugate in F2 , which is a contradiction. Thus
these primitive elements of infinite order cannot lie in finitely many conjugacy classes of
virtually cyclic subgroups. Hence G does not have BVC.
Corollary 3.3. The following classes of groups do not have BVC:
(a) Hyperbolic groups that are not virtually cyclic;
(b) Non-elementary groups that are hyperbolic relative to proper subgroups;
(c) The mapping class group MCG(Σg , p) of a closed surface of genus g with p punctures except for g = 0 and p ≤ 3 (in these exceptional cases, MCG(Σg , p) is finite);
(d) Out(Fn ) where n ≥ 2;
(e) Groups which act properly on proper CAT(0) spaces and contain rank-one elements;
Proof These groups are all acylindrically hyperbolic, we refer to [12, I.1,page 4] and [26,
Section 8] for the detailed reference.
Corollary 3.4. Let φ : A → G be a surjective group homomorphism and suppose that G is
an acylindrically hyperbolic group. Then A does not have BVC.
Proof As in the proof of 3.2, there exist infinitely many primitive conjugacy classes of elements {gi }i≥1 inside a certain free group F2 ⊂ G such that g_i^{p_i} is not conjugate to any g_j^{p_j} for i ≠ j and p_i , p_j ≠ 0. Now we take any preimage g′_i of g_i in A. Then the g′_i are primitive and g′_i^{p′_i} is not conjugate to any g′_j^{p′_j} for i ≠ j and p′_i , p′_j ≠ 0. By 1.6, they cannot
lie in finitely many conjugacy classes of virtually cyclic subgroups in A. Hence A does not
have BVC.
By a 3-manifold group we mean a group that can be realized as the fundamental group
of a 3-manifold, which may be open or have boundary. Note that, by Scott’s theorem [29],
every finitely generated 3-manifold group is itself the fundamental group of a compact
3-manifold. Minasyan and Osin prove in [22, 2.8] the following:
Lemma 3.5. Let G be a subgroup of the fundamental group of a compact 3-manifold, then
exactly one of the following holds:
(a) G is acylindrically hyperbolic.
(b) G contains a normal infinite cyclic subgroup N such that G/N is acylindrically
hyperbolic.
(c) G is virtually polycyclic.
Now combining this with 1.12, 3.2 and 3.4, we have the following
Proposition 3.6. Let G be a subgroup of the fundamental group of a compact 3-manifold,
then G has BVC if and only if G is virtually cyclic. In particular, if G is a finitely generated
3-manifold group, then G has BVC if and only if G is virtually cyclic.
4. Groups acting on CAT(0) spaces
In this section, we study groups acting on CAT(0) spaces and show that many of them
do not have BVC. We first give a quick review of properties of CAT(0) spaces and groups
that we may need and refer to [2] for more details.
Definitions 4.1. [2, II.6.1] Let X be a metric space and let g be an isometry of X. The
displacement function of g is the function dg : X → R+ = {r ≥ 0 | r ∈ R} defined by
dg (x) = d(gx, x). The translation length of g is the number |g| := inf{dg (x) | x ∈ X}. The
set of points where dg attains this infimum will be denoted by Min(g). More generally, if G is a group acting by isometries on X, then Min(G) := ∩_{g∈G} Min(g). An isometry g
is called semi-simple if Min(g) is non-empty. An action of a group by isometries of X is
called semi-simple if all of its elements are semi-simple.
The following theorem is known as the Flat Torus Theorem [2, II.7.1].
Theorem 4.2. Let A be a free abelian group of rank n acting properly by semi-simple
isometries on a CAT(0) space X. Then:
(a) Min(A) = ∩_{α∈A} Min(α) is non-empty and splits as a product Y × En , here En denotes Rn equipped with the standard Euclidean metric.
(b) Every element α ∈ A leaves Min(A) invariant and respects the product decomposition; α acts as the identity on the first factor Y and as a translation on the second
factor En .
(c) The quotient of each n-flat Y × En by the action of A is an n-torus.
It is clear that the translation length is invariant under conjugation, i.e. |hgh−1| = |g| for
any g, h ∈ G. Moreover, for g semi-simple, we have that |gn | = |n| · |g| for any n ∈ Z, e.g.
by the Flat Torus Theorem. It turns out that the translation length can also be defined by
the following limit for g a semi-simple isometry
|g| = lim_{n→∞} (1/n) d(x, g^n x),
where x is an arbitrary point of the metric space X [2, II.6.6].
Note that if a group acts properly and cocompactly on a CAT(0) space via isometries, we call the group a CAT(0) group. In this case, the action is semi-simple.
Proposition 4.3. If a group G acts properly and cocompactly by isometries on a CAT(0)
space X, then
(a) G is finitely presented.
(b) G has only finitely many conjugacy classes of finite subgroups.
(c) Every solvable subgroup of G is virtually abelian.
(d) Virtually abelian subgroups of G satisfy the ascending chain condition.
(e) G has a finite model for EG.
(f) There is a finite-dimensional model for EG.
Proof (a) - (c) can be found in [2, III.Γ.1.1], (d) can be found in [2, II.7.5]. Since G acts
on X properly and cocompactly via isometry, X is proper by [2, I.8.4(1)]. With this (e) is
implied by [27, Proposition B]. The last statement was proven in [18].
Lemma 4.4. Let V be an infinite virtually cyclic group which acts on a CAT(0) space via
semi-simple isometries. Then there exists an element v0 ∈ V such that for any element v ∈ V,
the translation length |v| of v is an integer multiple of the translation length of v0 .
Proof When v is of finite order, then |v| = 0. So let us assume in the following that v is of infinite order; in this case the translation length is strictly positive. If v^k = w^{k′} for some w ∈ V and k, k′ ∈ Z, then |k||v| = |v^k | = |w^{k′} | = |k′ ||w|. Now by 1.6, there exists v0 ∈ V such that for any v ∈ V, there exist nonzero p, p0 such that v_0^{p_0} = v^p with p0 /p ∈ Z. This implies that |v| is a multiple of |v0 |.
The lemma leads us to define the following terminology.
Definition 4.5. We define a subset A of the real numbers to be finitely divisor dominated
if there are finitely many real numbers x1 , x2 , . . . , xn such that every a ∈ A can be written
in the form kxi for some k ∈ Z, or equivalently
A ⊆ ∪_{i=1}^{n} Z · xi .
In this case we say that A is finitely divisor dominated by x1 , x2 , . . . , xn .
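For a finite sample A and candidate dominators x1 , . . . , xm this property can of course be probed mechanically; the following C fragment is only an illustration (all names and the tolerance eps are ad hoc choices, not taken from the text).

#include <math.h>
#include <stdbool.h>
#include <stddef.h>

/* Does every element of A lie in Z*x_1 u ... u Z*x_m, up to a tolerance eps? */
static bool is_divisor_dominated(const double *A, size_t lenA,
                                 const double *x, size_t m, double eps)
{
    for (size_t i = 0; i < lenA; ++i) {
        bool covered = false;
        for (size_t j = 0; j < m && !covered; ++j) {
            if (x[j] <= 0.0) continue;              /* only positive dominators */
            double k = round(A[i] / x[j]);          /* nearest integer multiple */
            if (fabs(A[i] - k * x[j]) < eps)
                covered = true;
        }
        if (!covered) return false;
    }
    return true;
}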
Note that for a CAT(0) group the set of translation lengths { |g| | g ∈ G} is a discrete
subset of R [2, II.6.10 (3)]. We obtain first the key property of the set of translation lengths
for a group acting on a CAT(0) space with BVC.
Lemma 4.6. Let G be a group acting properly on a CAT(0) space via semi-simple isometries. Let L = { |g| | g ∈ G} ⊂ R≥0 be the set of translation lengths of G. If G has BVC, then
L is finitely divisor dominated.
Proof Note first that if g and h are conjugate, then they have the same translation length.
Now assume G has BVC, and let V1 , V2 , . . . , Vn be witnesses. We only need to consider
those Vi that are infinite since a finite order element has vanishing translation length.
By 4.4, we can choose for each Vi an element vi such that the translation length of any
infinite order element of Vi is a multiple of the translation length |vi |. Now L is finitely
divisor dominated by |v1 |, |v2 |, . . . , |vn |.
Remark 4.7. For a hyperbolic group acting on its Cayley graph with respect to some
fixed generating set S , we define the algebraic translation length using the limit |g| = lim_{n→∞} (1/n) d_S (1, g^n ) [2, III.Γ.3.13], where dS denotes the word metric with respect to S .
Gromov [2, III.Γ.3.17] showed that the set of algebraic translation lengths in this case is a
discrete subset of the rational numbers and the denominators are bounded. In particular,
the set of algebraic translation lengths of a hyperbolic group is finitely divisor dominated.
We need to do some preparation before we prove our main result in this section.
Lemma 4.8. Let x > 0, y ≥ 0 be two rational numbers and let
A = { λn | λn = √(x + (y + n)²), n ∈ N }.
Then A is not finitely divisor dominated.
Proof Since x and y are rational numbers, we can choose d to be the smallest positive
integer such that 2yd and (y2 + x)d are integers. Then we can consider the quadratic polynomial
f (n) = d(x + (y + n)2 ) = dn2 + 2ydn + (y2 + x)d
which has coprime integer coefficients. Note that f (n) is irreducible over R since d, x > 0.
Now by an old result of Ricci [28], there exist infinitely many positive integers n such that
the integer f (n) is square-free.
Now if A was finitely divisor dominated, there would exist finitely many positive real
numbers x1 , x2 , . . . , xm such that
A ⊆ ∪_{i=1}^{m} Z · xi .
In particular, there exists some i0 and an infinite sequence n1 , n2 , . . . , n j , . . . of natural
numbers such that λn j = k j xi0 , with k j ∈ Z and f (n j ) square-free. This implies there are
infinitely many k j > k1 such that
λ_{n_j}² / λ_{n_1}² = (x + (y + n_j )²) / (x + (y + n_1 )²) = k_j² / k_1² .
This further implies
n_j² + 2y n_j + y² + x = (k_j²/k_1²)(n_1² + 2y n_1 + y² + x).
Multiplying both sides by d, we obtain
d n_j² + 2yd n_j + (y² + x)d = (k_j²/k_1²)(d n_1² + 2yd n_1 + (y² + x)d).
Now since f (n) = d(x + (y + n)2 ) = dn2 + 2ydn + (y2 + x)d is a polynomial in n with positive
integer coefficients that have no common divisor, the left hand side of the above equation
must be a positive integer and f (n1 ) = dn21 + 2ydn1 + (y2 + x)d is also a positive integer.
But since k_j > k_1 are positive integers, the value of f (n_j ) = (k_j²/k_1²)(d n_1² + 2yd n_1 + (y² + x)d) is
not square-free. This leads to a contradiction as we have chosen the n j such that f (n j ) is
square-free.
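The square-freeness input to this argument is easy to probe experimentally. The following C program is merely an illustration: the concrete values x = 1/4 and y = 1/2 (hence d = 2 and f (n) = 2n² + 2n + 1) are an arbitrary choice, and squarefree() is a naive trial-division test.

#include <stdio.h>
#include <stdbool.h>

/* trial-division square-freeness test for small values */
static bool squarefree(long long v)
{
    if (v <= 0) return false;
    for (long long p = 2; p * p <= v; ++p) {
        if (v % (p * p) == 0) return false;
        while (v % p == 0) v /= p;
    }
    return true;
}

int main(void)
{
    long long d = 2, twoyd = 2, cterm = 1;   /* coefficients d, 2yd, (y^2+x)d */
    int hits = 0;
    for (long long n = 1; n <= 50; ++n) {
        long long f = d * n * n + twoyd * n + cterm;
        if (squarefree(f)) ++hits;
    }
    printf("square-free values of f(n) for n <= 50: %d\n", hits);
    return 0;
}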
Lemma 4.9. Let n0 , q0 ∈ N and x > 0, y ≥ 0 be two real numbers, such that there are infinitely many integers m > n0 with (x² + (y + m)²)/(x² + (y + n0 )²) = p²/q0² for some p ∈ N. Then y ∈ Q, x² ∈ Q.
Proof Let m1 , m2 , . . . , mi , . . . be an infinite sequence of positive integers greater than n0 such that (x² + (y + mi )²)/(x² + (y + n0 )²) = p_i²/q0² . Let us write x² + (y + n0 )² = q0² t for some t > 0; then x² + (y + mi )² = p_i² t. Subtracting the previous equation from this one, we get
(mi − n0 )(2y + mi + n0 ) = (p2i − q20 )t.
Now comparing this with the same equality for m j , we obtain
(4.1)    (mi − n0 )(2y + mi + n0 ) / ((m_j − n0 )(2y + m_j + n0 )) = (p_i² − q0²)/(p_j² − q0²)
Since mi , m j , n0 , pi , p j are all integers, we have y is rational unless
(p_i² − q0²)/(p_j² − q0²) = (m_i² − n0²)/(m_j² − n0²).
But this cannot happen as long as mi ≠ m j . In fact, note first that we can assume
without loss of generality that n0 = 0 (via shifting y by some integer). Now let r =
(p2i − q20 )/(p2j − q20 ). Then equation 4.1 above leads to
2y(mi − m j r) = rm2j − m2i
We cannot solve for y if mi = m j r. But if this happens, r = m_i²/m_j² = m_i /m_j , hence mi = m j .
This also immediately implies x2 ∈ Q using the equation x2 + (y + n0 )2 = q20 t.
Combining 4.8 and 4.9, we have the following
Corollary 4.10. Let x > 0, y ≥ 0 be two real numbers and let A = {λn | λn = √(x² + (y + n)²), n ∈ N}. Then A is not finitely divisor dominated.
Proposition 4.11. Let G be a group acting properly on a CAT(0) space X via semi-simple
isometries. If G contains Z2 as a subgroup, it does not have BVC.
Proof Assume G has BVC, then the set of translation lengths L = { |g| | g ∈ G} is finitely
divisor dominated by 4.6. On the other hand, by the Flat Torus Theorem, we have that Z2
acts on a flat plane E2 inside X and the translation length of any g = (z, w) ∈ Z2 is just d(x0 , gx0 ) for some base point x0 ∈ E2 . Let a be the translation vector of (1, 0) ∈ Z2 and b the translation vector of (0, 1) ∈ Z2 . Let (a1 , a2 ), (b1 , b2 ) be the coordinates of a and b in the Euclidean plane E2 . Without loss of generality, we can assume a1 > 0, a2 ≥ 0 and (b1 , b2 ) = (0, 1). Then
the translation length of (1, k) ∈ Z2 is the length of the vector a + kb = (a1 , a2 ) + k(0, 1) = (a1 , a2 + k), which is λ(1,k) = √(a1² + (a2 + k)²).
Now if a set is finitely divisor dominated, then any subset of it is also finitely divisor dominated. In particular, the subset {√(a1² + (a2 + k)²) | k ∈ N} is finitely divisor dominated for some a1 > 0, a2 ≥ 0. But this contradicts 4.10.
Recall that a semi-simple isometry is called hyperbolic if it has positive translation
length. Now if g acts properly on a CAT(0) space X via a hyperbolic isometry, by the Flat
Torus Theorem, we have an axis E1 on which g acts via translation by the length |g|.
Definition 4.12. Suppose g is a hyperbolic isometry of a CAT(0) space X. Then g is
called rank one if no axis of g bounds a flat half plane in X.
Note that if a group acts on a CAT(0) space X properly and cocompactly via isometries,
then X is proper [2, I.8.4(1)]. Combining this with 3.3 (e) and 4.11, we have the following
Theorem 4.13. Let G be a subgroup of a CAT(0) group which contains a subgroup isomorphic to Z2 or a rank-one isometry, then G does not have BVC.
Remark 4.14. The Flat Closing Conjecture [9, 6.B3] predicts that X contains a d-dimensional
flat if and only if G contains a copy of Zd as a subgroup. In particular, it implies that a
CAT(0) group is hyperbolic if and only if it does not contain a subgroup isomorphic to Z2 .
Thus the Flat Closing Conjecture together with 4.13 would also imply that a CAT(0) group
has BVC if and only if it is virtually cyclic.
Recall that a CAT(0) cube group is a group which acts properly and cocompactly on a
CAT(0) cube complex via isometries.
Lemma 4.15. Let G be a group which acts on a CAT(0) cube complex X properly and cocompactly via isometries and suppose that G is not virtually cyclic. Then either G contains
a rank one isometry or G contains a free abelian subgroup of rank 2.
Proof This is essentially due to Caprace and Sageev [4]. Note first that X is locally finite,
see for example [2, I.8.4(1)]. Note also that G acts on X without fixed points and essentially
(see [4, 1.1] for the terminology). Now by [4, Theorem A] and remarks below it, we have
that either X is a product of two unbounded CAT(0) cube subcomplexes or G contains an
element acting on X as a rank one isometry. Note that since G acts on X cocompactly, if X
is a product of two CAT(0) cube complexes, by [4, Corollary D], it follows that G contains a free abelian subgroup of rank 2.
Corollary 4.16. Let G be a CAT(0) cube group. Then G has BVC if and only if G is
virtually cyclic.
References
1. Y. Antolı́n, A. Minasyan. Tits Alternatives for graph products. J. Reine Angew. Math. 704 (2015), 55-83.
2. M. Bridson, A. Haefliger, Metric spaces of non-positive curvature, Springer-Verlag Berlin (1999).
3. J. W. Cannon, W. J. Floyd, W. R. Parry, Introductory notes on Richard Thompson’s groups. Enseign. Math.
(2) 42 (1996), no. 3-4, 215-256.
4. P.-E. Caprace, M. Sageev, Rank rigidity for CAT(0) cube complexes. Geom. Funct. Anal. 21 (2011), no. 4,
851-891.
5. F. Dahmani, V. Guirardel, D. Osin, Hyperbolically embedded subgroups and rotating families in groups
acting on hyperbolic spaces, Memoirs AMS, to appear, arXiv:1111.7048
6. D. Degrijse, N. Petrosyan, Geometric dimension of groups for the family of virtually cyclic subgroups,
Journal of Topology (2013): 177-203.
7. F. T. Farrell, L. E. Jones, Isomorphism conjectures in algebraic K-theory, J. Amer. Math. Soc., v. 6, 249-297,
1993.
8. A. Fel’shtyn, E. Troitsky, Aspects of the property R∞ . Journal of Group Theory 18 (2015).
9. M. Gromov, Asymptotic invariants of infinite groups, Geometric group theory, Vol. 2 (Sussex, 1991), London
Math. Soc. Lecture Note Ser., vol. 182, Cambridge Univ. Press, Cambridge, 1993, pp. 1-295.
10. J. R. J. Groves, J. S. Wilson, Soluble groups with a finiteness condition arising from Bredon cohomology.
Bull. Lond. Math. Soc. 45 (2013), no. 1, 89-92.
11. M. Hall, The theory of groups. The Macmillan Co., New York, N.Y. 1959.
12. M. Hull, Properties of acylindrically hyperbolic groups and their small cancellation quotients, Ph.D thesis,
Vanderbilt University, 2013.
13. D. Juan-Pineda and I. J. Leary, On classifying spaces for the family of virtually cyclic subgroups, Recent
developments in algebraic topology, 135-145, Contemp. Math., 407, Amer. Math. Soc., Providence, RI,
2006
14. I. Kapovich, Mapping tori of endomorphisms of free groups, Comm. in Algebra 28 (2000), no. 6, 2895-2917
15. D. H. Kochloukova, C. Martı́nez-Pérez, B. E. A. Nucinkis, Cohomological finiteness conditions in Bredon
cohomology. Bull. Lond. Math. Soc. 43 (2011), no. 1, 124-136.
16. W. Lück, The type of the classifying space for a family of subgroups. J. Pure Appl. Algebra 149 (2000), no.
2, 177-203.
17. W. Lück, Survey on Classifying Spaces for Families of Subgroups. Infinite Groups: Geometric, Combinatorial and Dynamical Aspects (2005), 269-322.
18. W. Lück, On the classifying space of the family of virtually cyclic subgroups for CAT(0)-groups, Münster J.
of Math. 2 (2009), 201-214.
19. W. Lück, H. Reich, The Baum-Connes and the Farrell-Jones conjectures in K- and L-theory, In Handbook of
K-theory. Vol. 1,2, 703-842. Springer, Berlin, 2005.
20. R. C. Lyndon, P. E. Schupp, Combinatorial group theory. Ergebnisse der Mathematik und ihrer Grenzgebiete,
Band 89. Springer-Verlag, Berlin-New York, 1977.
21. R. C. Lyndon, M. P. Schützenberger, The equation aM = bN cP in a free group. Michigan Math. J. 9, 289-298,
1962.
22. A. Minasyan, D. Osin, Acylindrical hyperbolicity of groups acting on trees, Math. Ann. 362 (2015), no. 3-4,
1055-1105.
23. B. B. Newman. Some results on one-relator groups. Bull. Amer. Math. Soc., 74: 568-571, 1968.
24. G. A. Noskov, E. B. Vinberg, Strong Tits alternative for subgroups of Coxeter groups. J. Lie Theory, 12:1
(2002), 259-264
25. D. V. Osin, Small cancellations over relatively hyperbolic groups and embedding theorems. Ann. of Math.
(2) 172 (2010), no. 1, 1-39.
26. D. V. Osin, Acylindrically hyperbolic groups. Trans. Amer. Math. Soc. 368 (2016), no. 2, 851-888.
27. P. Ontaneda, Cocompact CAT(0) spaces are almost geodesically complete. Topology 44 (2005), no. 1, 47-62.
28. G. Ricci, Ricerche aritmetiche sui polinomi, Rend. Circ. Mat. Palermo, 57 (1933), 433-475.
29. P. Scott, Compact submanifolds of 3-manifolds. J. London Math. Soc. 7 (1973), 246-250.
University of Bonn, Mathematical Institute, Endenicher Allee 60, 53115 Bonn, Germany
E-mail address: [email protected]
Max Planck Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany
E-mail address: [email protected]
ALLSAT compressed with wildcards. Part 4: An invitation for
C-programmers
arXiv:1712.00751v1 [] 3 Dec 2017
Marcel Wild
Abstract The model set of a general Boolean function in CNF is calculated in a
compressed format, using wildcards. This novel method can be explained in very
visual ways. Preliminary comparison with existing methods (BDD’s and ESOPs)
looks promising but our algorithm begs for a C encoding which would render it
comparable in more systematic ways.
1 Introduction
By definition for us an ALLSAT problem is the task to enumerate all models of a Boolean function
ϕ = ϕ(x1 , ..., xt ), often given by a CNF C1 ∧ . . . Cs with clauses Ci . The Boolean functions can
be of a specific kind (e.g. Horn formulae), or they can be general Boolean functions . The bit
’Part 4’ in the title refers to a planned series of articles dedicated to the use of wildcards in this
context. Two articles concern general Boolean functions ϕ : {0, 1}t → {0, 1}; one1 is Part 1, the
other Part 4 in front of you.
While much research has been devoted to SATISFIABILITY, the ALLSAT problem commanded
less attention. The seemingly first systematic comparison of half a dozen methods is carried out
in the article of Toda and Soh [TS]. It contains the following, unsurprising, finding. If there are
billions of models then the algorithms that put out their models one-by-one, stand no chance
against the only competitor offering compression. The latter is a method of Toda (referenced
in [TS]) that is based on Binary Decision Diagrams (BDD); see [K] for an introduction to
BDD’s. Likewise the method propagated in the present article has the potential for compression.
Whereas BDDs achieve their compression using the common don’t-care symbol * (to indicate
bits free to be 0 or 1), our method employs three further kinds of wildcards, and is entirely
different from BDDs. Referring to these wildcards we call it the men-algorithm. In a nutshell,
the men-algorithm retrieves the model set M od(ϕ) by imposing one clause after the other:
(1)
{0, 1}t ⊇ Mod(C1 ) ⊇ Mod(C1 ∧ C2 ) ⊇ · · · ⊇ Mod(C1 ∧ . . . ∧ Cn ) = Mod(ϕ)
The Section break up is as follows. In Section 2 the overall LIFO stack framework for achieving
(1) is explained. Actually the intermediate stages of shrinking {0, 1}t to M od(ϕ) don’t quite
1
The approaches taken in Part 1 and 4 are quite different, except for the common LIFO framework (discussed
in Section 2). Part 1 also contains a tentative account of the planned topics in the series, and it reviews wildcardrelated previous publications of the author. The appeal of Part 4 is its no-fuzz approach (for Theorems look in
Part 1) and its strong visual component.
match the n + 1 idealized stages in (1); what stages occur instead will emerge in Subsection 2.3.
Section 3 starts with a well-known Boolean tautology, which for k = 2 is x1 ∨ x2 ↔ x1 ∨ (x̄1 ∧ x2 ).
Generally the k terms to the right of ↔ are mutually exclusive, i.e. their model sets are disjoint.
The problem of keeping systems ri of bitstrings disjoint upon imposing clauses on them, is the
core technical difficulty of the present article. The main tools are wildcards and the above
tautology. While in Section 3 only positive, or only negative clauses are considered (leading to
dual kinds of wildcards), both kinds occur together in Section 4. This requires a third type of
wildcard, which in turn makes the systems ri more intricate. Fortunately (Section 5) this doesn’t
get out of hand. Being able to alternately impose positive clauses like x1 ∨x2 and negative clauses
like x3 ∨ x4 ∨ x5 , doesn’t enable one to impose the mixed clause x1 ∨ x2 ∨ x3 ∨ x4 ∨ x5 . But
it certainly helps (Section 6). In Section 7 we carry out by hand the men-algorithm on some
random moderate-size Boolean functions, and observe that the compression achieved compares
favorably to BDD’s and ESOP’s. We calculate the latter two by using the commands expr2bdd
of Python and BooleanConvert of Mathematica. Of course only systematic2 experiments will
show the precise benefits and deficiencies of both methods.
2 The overall LIFO-stack framework
2.1 For the time being it suffices to think of a 012men-row as a row (=vector) r that contains
some of the symbols, 0, 1, 2, m, e, n. Any such r of length t represents a certain set of length t
bitstrings (which will be fully explained and motivated in later Sections). As a sneak preview, the
number of length 10 bitstrings represented by r = (2, m, e, m, 1, n, e, e, 1, n) is 84. We say that
r is ϕ-infeasible with respect to a Boolean function ϕ if no bitstring in r satisfies ϕ. Otherwise
r is called ϕ-feasible. If all bitstrings in r satisfy a Boolean formula ψ then we say that r fulfills
ψ.
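Since the whole series is an invitation to C-programmers, here is a minimal sketch (the representation and all names are my own ad hoc choices, not taken from the text) of how a 012men-row could be stored and how many bitstrings it represents: every 2 contributes a factor 2, an e- or n-wildcard of length k a factor 2^k − 1, and an m-wildcard a factor 2^k − 2.

#include <stdio.h>

#define LEN 10

/* sym[i] is one of '0','1','2','e','n','m'; grp[i] distinguishes wildcards
   carrying different subscripts (it is ignored for '0','1','2').           */
static unsigned long long row_cardinality(const char *sym, const int *grp, int len)
{
    unsigned long long card = 1;
    for (int i = 0; i < len; ++i) {
        if (sym[i] == '0' || sym[i] == '1') continue;
        if (sym[i] == '2') { card *= 2; continue; }
        /* handle each wildcard group once, at its first position */
        int first = 1;
        for (int j = 0; j < i; ++j)
            if (sym[j] == sym[i] && grp[j] == grp[i]) { first = 0; break; }
        if (!first) continue;
        int k = 0;                                   /* size of the group */
        for (int j = i; j < len; ++j)
            if (sym[j] == sym[i] && grp[j] == grp[i]) ++k;
        unsigned long long full = 1ULL << k;
        card *= (sym[i] == 'm') ? full - 2 : full - 1;
    }
    return card;
}

int main(void)
{
    /* r = (2, m, e, m, 1, n, e, e, 1, n) from 2.1 */
    const char sym[LEN] = {'2','m','e','m','1','n','e','e','1','n'};
    const int  grp[LEN] = { 0,  1,  1,  1,  0,  1,  1,  1,  0,  1 };
    printf("%llu\n", row_cardinality(sym, grp, LEN));   /* prints 84 */
    return 0;
}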
2.2 The input of the men-algorithm is any Boolean function ϕ : {0, 1}t → {0, 1} given in CNF
format C1 ∧ C2 ∧ . . . ∧ Cs . Its output is the model set M od(ϕ), i.e. the set of bitstrings x
with ϕ(x) = 1. Here M od(ϕ) comes as a disjoint union of 012men-rows. The basic supporting
data-structure is a good old Last-In-First-Out (LIFO) stack, filled with (changing) 012men-rows.
Suppose by induction3 we obtained a LIFO stack as shown in Figure 1a (thus each ∗ is one of
the symbols 0, 1, 2, m, e, n). If there is no ambiguity we may simply speak of rows instead of
012men-rows.
2
The men-algorithm awaits implementation in either high-end Mathematica-code or in C. As to Mathematica,
this remains the only programming language I master. If any reader wants to implement the men-algorithm with
C, she is more than welcome to seize this offer on a silver tablet. The benefit (as opposed to pointless coding
efforts with Mathematica) is that the men-algorithm coded in C becomes comparable to the methods evaluated
in [TS], and likely others.
3
At the beginning the only 012men-row in the stack is (2, 2, ..., 2), thus the powerset {0, 1}t , see (1).
r = * * * * * * *    C5
    * * * * * * *    C5
    * * * * * * *    C5
    * * * * * * *    C4
    * * * * * * *    C4
    * * * * * * *    C3
    * * * * * * *    C2
    * * * * * * *    C2
Figure 1a: LIFO stack before imposing C5

r2 = * * * * * * *    C6
r4 = * * * * * * *    C14
     * * * * * * *    C5
     * * * * * * *    C5
     * * * * * * *    C4
     * * * * * * *    C4
     * * * * * * *    C3
     * * * * * * *    C2
     * * * * * * *    C2
Figure 1b: LIFO stack after imposing C5
The top row r fulfills C1 ∧ ... ∧ C4 , but not yet C5 , which hence is the pending clause. Similarly
the other rows have pending clauses as indicated. To ’impose’ C5 upon r means replacing r
by a few successor rows ri (called the sons of r) whose union is disjoint and contains exactly4
those bitstrings in r that satisfy C5 . Therefore each ri fulfills C5 in the sense of 2.1, but some
ri may incidentally fulfill C6 as well (and perhaps even C7 ). We come back to that in a moment but first handle the more serious issue that some ri ’s may be ϕ-infeasible. Fortunately
this can be detected as follows. Translate ri into a Boolean CNF σ. (As a sneak preview, if ri = (e, 0, e, 1, e), then σ = (x1 ∨ x3 ∨ x5 ) ∧ x̄2 ∧ x4 .) Evidently ri is ϕ-infeasible, if and only if ϕ ∧ σ is unsatisfiable. This can be determined with any off-the-shelf SAT-solver5 .
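Assuming the same ad hoc sym/grp representation as in the sketch of 2.1, the translation of a row into the CNF σ could look as follows (again only an illustration, with DIMACS-style output as an arbitrary choice): 1-entries and 0-entries become unit clauses, an e-group yields one positive clause, an n-group one negative clause, and an m-group contributes both.

#include <stdio.h>

/* print the CNF sigma of a 012men-row, one clause per line, DIMACS style */
static void row_to_cnf(const char *sym, const int *grp, int len)
{
    for (int i = 0; i < len; ++i) {
        if (sym[i] == '1') printf("%d 0\n", i + 1);
        if (sym[i] == '0') printf("%d 0\n", -(i + 1));
        if (sym[i] == '0' || sym[i] == '1' || sym[i] == '2') continue;
        int first = 1;                      /* emit each wildcard group once */
        for (int j = 0; j < i; ++j)
            if (sym[j] == sym[i] && grp[j] == grp[i]) { first = 0; break; }
        if (!first) continue;
        if (sym[i] == 'e' || sym[i] == 'm') {          /* at least one 1 */
            for (int j = i; j < len; ++j)
                if (sym[j] == sym[i] && grp[j] == grp[i]) printf("%d ", j + 1);
            printf("0\n");
        }
        if (sym[i] == 'n' || sym[i] == 'm') {          /* at least one 0 */
            for (int j = i; j < len; ++j)
                if (sym[j] == sym[i] && grp[j] == grp[i]) printf("%d ", -(j + 1));
            printf("0\n");
        }
    }
}

int main(void)
{
    /* ri = (e, 0, e, 1, e): prints the clauses of (x1 v x3 v x5) /\ -x2 /\ x4 */
    const char sym[5] = {'e','0','e','1','e'};
    const int  grp[5] = { 1,  0,  1,  0,  1 };
    row_to_cnf(sym, grp, 5);
    return 0;
}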
The infeasible rows ri are cancelled. For the remaining feasible rows rj it is very easy to
determine their pending clause. This is because for any 012men-row r and any given clause C it
is straightforward (Section 6.2) to check whether or not r fulfills C. For instance, suppose r in
Figure 1a decomposes as r = r1 ] r2 ] r3 ] r4 such that r1 is infeasible, r2 has pending clause C6 ,
r3 is final, r4 has pending clause C14 . Then r1 gets kicked, r3 is output (or stored elsewhere),
and r2 , r4 (in any order) take the place or r. Nothing happens to the other rows in the LIFO
stack, see Figure 1b. This finishes the imposition of C upon r.
2.3 By induction at all stages the union U of all final rows and of all rows in the LIFO stack
is disjoint and contains M od(ϕ). (Recall Footnote 3.) Whenever the pending clause of any top
row r has been imposed on r, the new set U has shrunk. Hence the procedure ends in finite time.
More specifically, when the LIFO stack becomes empty, the set U equals the disjoint union of
all final rows, which in turn equals M od(ϕ). See Section 6 for carrying out all of this with a
concrete Boolean function ϕ.
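The LIFO bookkeeping of 2.2 and 2.3 is independent of how rows and clauses are stored. A bare-bones C skeleton of the driver loop might look as follows; Row, impose_pending, is_feasible, is_final and output_final are placeholders for an actual implementation and are in no way prescribed by the text.

#include <stdlib.h>

typedef struct Row Row;          /* a 012men-row together with its pending clause */
typedef struct Node { Row *row; struct Node *next; } Node;

static Node *push(Node *top, Row *r) {
    Node *n = malloc(sizeof *n);
    n->row = r; n->next = top;
    return n;
}

static Node *pop(Node *top, Row **r) {
    Node *n = top; *r = n->row; top = n->next; free(n); return top;
}

extern int  impose_pending(Row *r, Row *sons[], int max_sons); /* sons of r  */
extern int  is_feasible(Row *r);      /* SAT check of phi /\ sigma(r)        */
extern int  is_final(Row *r);         /* all clauses of phi already fulfilled */
extern void output_final(Row *r);

void men_algorithm(Row *all_twos /* the row (2, ..., 2) */)
{
    Node *stack = push(NULL, all_twos);
    while (stack != NULL) {
        Row *r, *sons[64];
        stack = pop(stack, &r);
        int k = impose_pending(r, sons, 64);
        for (int i = 0; i < k; ++i) {
            if (!is_feasible(sons[i])) continue;   /* cancel infeasible sons */
            if (is_final(sons[i])) output_final(sons[i]);
            else stack = push(stack, sons[i]);
        }
    }
}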
4
This is the truly novel ingredient of the men-algorithm, as opposed to oldie-but-goldie LIFO and SAT-solvers.
Sections 3 to 6 deliver the fine details of how the sons ri get calculated. We will gradually increase the complexity
of r, and make ample use of pictures throughout. As to ’few successor rows’, in 6.2 we shall see that the number
of sons is at most the number of variables of the imposed clause.
5
All methods evaluated in [TS] use SAT-Solvers one way or another. Our use seems to be particularly economic,
but that could be author’s bias.
3 The Flag of Bosnia and how it befriends competing wildcards
3.1 The Flag of Bosnia6 (FoB) features a white main diagonal, the lower triangle is blue, and
the upper triangle yellow. Using 0, 1, 2 as colors the two (out of 3! imaginable) FoBes we care
about are rendered in Figure 2 and 3. Here Type 1 and 0 refer to the color of the diagonal.
1 2 2 2
0 1 2 2
0 0 1 2
0 0 0 1
Figure 2: FoB of Type 1

0 2 2 2
1 0 2 2
1 1 0 2
1 1 1 0
Figure 3: FoB of Type 0.
The FoB of Type 1 visualizes in obvious ways the tautology
(2)    (x1 ∨ x2 ∨ x3 ∨ x4 ) ↔ x1 ∨ (x̄1 ∧ x2 ) ∨ (x̄1 ∧ x̄2 ∧ x3 ) ∨ (x̄1 ∧ x̄2 ∧ x̄3 ∧ x4 )
The dimension 4 × 4 generalizes to any k × k, but only k ≥ 2 will be relevant. It is essential that
the four clauses on the right in (2) are mutually disjoint, i.e. their conjunction is insatisfiable.
Such DNFs are also known as exclusive sums of products (ESOP). Equation (2) (for any k ≥ 2)
is the key for many methods that transform (=orthogonalize) an arbitrary DNF into an ESOP;
see [B,p.327]. It will be essential for us as well, only that we are concerned with orthogonalizing
CNF’s into (fancy kinds of) ESOP’s.
As in previous publications we prefer to write 2 instead of the more common don’t-care symbol
*. Thus e.g. the 012-row (2, 0, 1, 2, 1) by definition is the set of bitstrings
{(0, 0, 1, 0, 1), (0, 0, 1, 1, 1), (1, 0, 1, 0, 1), (1, 0, 1, 1, 1)}.
In particular, in view of (2) the model set of x1 ∨ x2 ∨ x3 ∨ x4 is the disjoint union of the four
012-rows constituting the FoB in Figure 2. This is confirmed by a row-wise cardinality count:
8 + 4 + 2 + 1 = 2^4 − 1.
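For what it is worth, the Type 1 FoB rows of x1 ∨ · · · ∨ xk are trivial to generate for any k; the following few lines of C (an illustration only) print them.

#include <stdio.h>

/* print the k disjoint 012-rows of the Type 1 FoB, i.e. the
   orthogonalization of x1 v ... v xk given by tautology (2)  */
static void fob_type1(int k)
{
    for (int i = 0; i < k; ++i) {            /* i-th row: first 1 at position i */
        for (int j = 0; j < k; ++j)
            putchar(j < i ? '0' : (j == i ? '1' : '2'));
        putchar('\n');
    }
}

int main(void) { fob_type1(4); return 0; }   /* prints 1222, 0122, 0012, 0001 */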
Dually the FoB of Type 0 visualizes the tautology
(3)    (x̄1 ∨ x̄2 ∨ x̄3 ∨ x̄4 ) ↔ x̄1 ∨ (x1 ∧ x̄2 ) ∨ (x1 ∧ x2 ∧ x̄3 ) ∨ (x1 ∧ x2 ∧ x3 ∧ x̄4 )
3.2 Slightly more creative than writing 2 instead of *, is it to replace the whole FoB in
Figure 2 by the single wildcard (e, e, e, e) which by definition7 is the set of all length 4 bitstrings x = (x1 , x2 , x3 , x4 ) with ’at least one 1’. In other words, only (0,0,0,0) is forbidden. The e symbols need not8 be contiguous. Thus e.g (1, e, 0, e) is the set of bitstrings
{(1, 1, 0, 0), (1, 0, 0, 1), (1, 1, 0, 1)}. If several e-wildcards occur, they need to be distinguished
6
Strictly speaking Bosnia should be Bosnia-Herzegowina, but this gets too clumsy. Other national flags, such
as the previously used Flag of Papua, have similar patterns but miss out on relevant details.
7
Surprisingly, this idea seems to be new. Information to the contrary is welcome. The definition generalizes
to tuplets (e, e..., e) of length t ≥ 2. For simplicity we sometimes strip (e, e, ..., e) to ee...e. Observe that a single
e (which we forbid) would amount to 1.
8
But for better visualization we strive to have them contiguous in more complicated examples.
by subscripts. For instance9 r1 in Figure 4 represents the model set of the CNF
(x1 ∨ x2 ∨ x3 ∨ x4 ) ∧ (x5 ∨ x6 ∨ x7 ∨ x8 ).
(4)
The fewest number of disjoint 012-rows required to represent r1 seems to be a hefty sixteen;
they are obtained by ’multiplying out’ two FoBes of Type 1.
3.3 So far ee...e can be viewed as the slender enemy of the sluggish FoB in Figure 2. But can
the e-formalism handle overlapping clauses? It is here where the FoB reputation gets restored,
but the FoB needs to reinvent itself as a ’Meta-FoB’. To fix ideas, let F := M od(C1 ∧ C2 ∧ C3 ) ⊆
M od(C1 ∧ C2 ) = r1 , where
C1 ∧ C2 ∧ C3 := (x1 ∨ x2 ∨ x3 ∨ x4 ) ∧ (x5 ∨ x6 ∨ x7 ∨ x8 ) ∧ (x3 ∨ x4 ∨ x5 ∨ x6 ).
(5)
We claim that F is the disjoint union of the two e-rows r2 and r3 in Figure 4a, and shall refer to
the framed part as a Meta-FoB (of dimensions 2 × 2). Specifically, the bitstrings (x3 , x4 , x5 , x6 )
satisfying the overlapping clause x3 ∨x4 ∨x5 ∨x6 are collected in (e, e, e, e) and come in two sorts.
The ones with x3 = 1 or x4 = 1 are collected in (e, e, 2, 2), and the other ones are in (0, 0, e, e).
These two quadruplets constitute, up to some adjustments, the two rows of our Meta-FoB.
The first adjustment is that the right half of (e, e, 2, 2) gets erased by the left part of the
old constraint (e2 , e2 , e2 , e2 ) in r1 . The further adjustments do not concern the shape of the
Meta-FoB per se, but rather are repercussions caused by the Meta-FoB outside of it. Namely,
(e1 , e1 , e1 , e1 ) in r1 splits into (2, 2, e, e) (left half of r2 ) and (e1 , e1 , 0, 0) (left part of r3 ). It
should be clear why (e2 , e2 , e2 , e2 ) in r2 transforms differently: It stays the same in r2 (as
noticed already), and it becomes (e, e, 2, 2) in r3 . Because of its diagonal entries (shaded) our
Meta-FoB is10 a Meta-FoB of Type 1, e.
     1  2  3  4  5  6  7  8
r1 = e1 e1 e1 e1 e2 e2 e2 e2
r2 = 2  2  e  e  e2 e2 e2 e2
r3 = e1 e1 0  0  e  e  2  2
Figure 4a: Meta-FoB of Type 1, e

     1  2  3  4  5  6  7  8
r1 = e1 e1 e1 e1 e2 e2 e2 e2
r2 = 2  2  2  1  e2 e2 e2 e2
r3 = e1 e1 e1 0  1  2  2  2
Fig. 4b: Small Meta-FoB of Type 1, e
3.4 In dual fashion we define a second wildcard (n, n, ..., n) as the set of all bitstrings (of length
equal to the number of n0 s) that have ’at least one 0’. The definition of n-row is the obvious
one. Mutatis mutandis the same arguments as above show that by using a dual Meta-FoB of
Type 0, n one can impose (n, n, ..., n) upon disjoint constraints (ni , ni , ..., ni ). See Figure 5 which
shows that the model set of
9
As for 012-rows, not all symbols 0, 1, 2, e need to occur in a 012e-row.
10
Generally the lengths of the diagonal entries e..e match the cardinalities of the traces of the overlapping clause. For instance, imposing x4 ∨ x5 instead of x3 ∨ x4 ∨ x5 ∨ x6 triggers the Meta-FoB of Type 1, e in Figure 4b. We keep the terminology Type 1, e despite the fact that all diagonal entries are 1. (Because Type 1 is reserved for the kind in Figure 2.)
(5’)    (x̄1 ∨ x̄2 ∨ x̄3 ∨ x̄4 ) ∧ (x̄5 ∨ x̄6 ∨ x̄7 ∨ x̄8 ) ∧ (x̄3 ∨ x̄4 ∨ x̄5 ∨ x̄6 )
can be represented as disjoint union of the two n-rows r2 and r3 .
     1  2  3  4  5  6  7  8
r1 = n1 n1 n1 n1 n2 n2 n2 n2
r2 = 2  2  n  n  n2 n2 n2 n2
r3 = n1 n1 1  1  n  n  2  2
Figure 5: Meta-FoB of Type 0, n
3.5 It is compelling to use a third11 kind of wildcard (m, m, . . . , m) := (e, e, . . . , e)∩(n, n, . . . , n).
In other words, (m, m, . . . , m) is the set of all bitstrings with ’at least one 1 and at least one 0’.
This gadget will come in handy soon. The definition of a 012men-row is the obvious one.
4 Positive and negative clauses simultaneously
New issues arise if nnnn (or dually eeee) needs to be imposed on distinct types of wildcards, say n1 n1 n1 n1 and e1 e1 e1 e1 as occurring in the en-row r1 in Figure 6. More precisely, let n1 n1 n1 n1 model C1 = x̄1 ∨ x̄2 ∨ x̄3 ∨ x̄4 , let e1 e1 e1 e1 model C2 = x5 ∨ x6 ∨ x7 ∨ x8 , and nnnn model the overlapping clause C3 = x̄3 ∨ x̄4 ∨ x̄5 ∨ x̄6 . We need to sieve the model set F := Mod(C1 ∧ C2 ∧ C3 ) from r1 := Mod(C1 ∧ C2 ). To do so split r1 as r1 = r1 (+) ⊎ r1 (−), where
r1 (+) := {x ∈ r1 : x satisfies x̄3 ∨ x̄4 } = {x ∈ r1 : x3 = 0 or x4 = 0} and
r1 (−) := {x ∈ r1 : x violates x̄3 ∨ x̄4 } = {x ∈ r1 : x3 = x4 = 1}.
(6)    F = r2 ⊎ {x ∈ r2′ : x satisfies C3 }
11
The choice of letters n and m stems from ’nul’ and ’mixed’ respectively. The letter e stems from ’eins’ which
is German for ’one’.
      1  2  3  4  5  6  7  8
r1  = n1 n1 n1 n1 e1 e1 e1 e1
r2  = 2  2  n  n  e1 e1 e1 e1
r2′ = n1 n1 1  1  e1 e1 e1 e1
r2  = 2  2  n  n  e1 e1 e1 e1
r3  = n1 n1 1  1  0  0  e1 e1
r4  = n1 n1 1  1  m  m  2  2
Figure 6: Meta-FoB of Type 0, n, m
For instance x = (0, 0, 1, 1, 1, 1, 1, 1) is in r2′ but does not satisfy C3 . Recalling from 3.5 the
definition of mm...m a moment’s thought shows that the rightmost set in (6) is the disjoint
union of r3 and r4 in Figure 6. The framed part in Figure 6 constitutes a Meta-FoB of Type
0, n, m (which generalizes Type 0, n).
While ee...e and nn...n are duals of each other, mm...m is selfdual. Hence unsurprisingly also
ee...e can be imposed on distinct types of wildcards by virtue of a Meta-FoB of Type 1, e, m. See
Figure 7 where the imposed ee...e matches x3 ∨ x4 ∨ x5 ∨ x6 .
     1  2  3  4  5  6  7  8
r1 = n1 n1 n1 n1 e1 e1 e1 e1
r2 = n1 n1 1  1  e1 e1 e1 e1
r3 = 2  2  m  m  e1 e1 e1 e1
r4 = 2  2  0  0  e  e  2  2
Figure 7: Meta-FoB of Type 1, e, m
5 Imposing a positive or negative clause upon a 012men-row
Now that we have three wildcards the problem arises how to impose nn...n or ee...e upon a
men-row. Fortunately Meta-FoBes of Type 0, n, m respectively 1, e, m still suffice to do this. In
Figure 8 the imposition of x̄3 ∨ x̄4 ∨ x̄6 ∨ x̄7 ∨ · · · ∨ x̄14 (thus nn...n of length 11) upon the men-row r1 is carried out. This boils down to the imposition of the shorter clause x̄6 ∨ x̄7 ∨ · · · ∨ x̄14 since
each x ∈ r1 has x3 = x4 = 1. We omit the details of why the Meta-FoB of Type 0, n, m,
and its repercussions outside, look the way they look12 . This, for the most part, should be
self-explanatory in view of our deliberations so far.
     1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18
r1 = m1 m1 1  1  n  n  n  2  2  m1 m1 m2 e  e  e  e  m2 m2
r2 = m1 m1 1  1  2  n  n  2  2  m1 m1 m2 e  e  e  e  m2 m2
r3 = m1 m1 1  1  0  1  1  n  n  m1 m1 m2 e  e  e  e  m2 m2
r4 = e1 e1 1  1  0  1  1  1  1  0  0  m2 e  e  e  e  m2 m2
r5 = 2  2  1  1  0  1  1  1  1  m  m  m2 e  e  e  e  m2 m2
r6 = n1 n1 1  1  0  1  1  1  1  1  1  0  e  e  e  e  e2 e2
r7 = n1 n1 1  1  0  1  1  1  1  1  1  1  0  0  e  e  n2 n2
r8 = n1 n1 1  1  0  1  1  1  1  1  1  1  m  m  2  2  n2 n2
Figure 8: Another Meta-FoB of Type 0, n, m
Imposing the corresponding positive clause x3 ∨ x4 ∨ · · · ∨ x14 upon r1 would be trivial since
each bitstring x in r1 satisfies this clause in view of x3 = x4 = 1. Energetic readers may enjoy
imposing the shorter clause x6 ∨ · · · ∨ x14 (thus ee...e) upon r1 by virtue of a Meta-FoB of Type
1, e, m.
6 Handling general (=mixed) clauses
In contrast, we dissuade13 imposing mm...m upon r1 by virtue of some novel Meta-FoB. That’s
because what matters isn’t imposing mm...m, but rather our honed skill (Section 5) to either
impose ee...e or nn...n. This skill, as well as a clever ad hoc maneuver, will suffice to impose
any mixed clause upon any 012men-row. As opposed to Section 3 to 5 we won’t show how to
impose a mixed clause within an isolated environment, but rather within a toy example that
refreshes the LIFO framework of Section 2.
6.1 Let us thus embark on the compression of the model set of the CNF with clauses
(7) C1 = x1 ∨ x2 ∨ x3 , C2 = x4 ∨ x5 ∨ x6 ∨ x7 , C3 = x8 ∨ x9 ∨ x10 ,
C4 = x2 ∨ x3 ∨ x4 ∨ x5 ∨ x6 ∨ x7 ∨ x8 ∨ x9 , C5 = x1 ∨ x3 ∨ x4 ∨ x6 ∨ x7
12
Notice that x6 ∨ x7 · · · ∨ x14 has 9 literals whereas the induced Meta-FoB has 7 rows. Generally speaking the
shaded rectangles in any Meta-FoB arising from imposing a positive or negative clause upon r, are of dimensions
1 × t (any t ≥ 1 can occur) and 2 × 2. This implies that the number of rows in such a Meta-FoB (=number of sons
of r) is at most the number of literals in that clause. Although imposing a mixed clause is more difficult (Section
6), it is easy to see that the number of literals remains an upper bound to the number of sons.
13
More precisely, the dissuasion concerns our present focus on arbitrary CNFs. For special types of CNFs, e.g. such that the presence of xi ∨ x j ∨ · · · ∨ xk implies the presence of x̄i ∨ x̄ j ∨ · · · ∨ x̄k , it would be a different story.
It is clear that r1 in Figure 9 compresses the model set of C1 ∧ C2 ∧ C3 . Hence the pending
clause14 of r1 is C4 . In order to sieve F := Mod(C1 ∧ C2 ∧ C3 ∧ C4 ) from r1 let us split r1 as r1 = r1 (+) ⊎ r1 (−) where
(8) r1 (+) := {x ∈ r1 : x satisfies x̄2 ∨ x̄3 ∨ x̄4 ∨ x̄5 } and
r1 (−) := {x ∈ r1 : x violates x̄2 ∨ x̄3 ∨ x̄4 ∨ x̄5 }.
Then we have r1 (+) ⊆ F, and hence (akin to (6))
(9)
F = r1 (+) ⊎ {x ∈ r1 (−) : x satisfies x6 ∨ x7 ∨ x8 ∨ x9 }.
Trouble is, as opposed to (6), both systems of bitstrings in (9) are tougher to rewrite as disjoint
union of 012men-rows.
6.1.1 Enter the ’ad hoc maneuver’ mentioned above: Roughly speaking both bitstring systems
temporarily morph into ’overloaded’ 012men-rows. The latter get transformed, one after the
other (in 6.1.2 and 6.1.3), in disjoint collections of (ordinary) 012men-rows.
Two definitions are in order. If in a 012men-row r we bar any symbols, then the obtained
overloaded Type A row by definition consists of the bitstrings in r that feature at least on 0
on a barred location. It follows that r1 (+) equals the overloaded Type A row r2 in Figure 8.
Similarly, if in a row r we encircle, respectively decorate with stars, nonempty disjoint sets of
symbols, then the obtained overloaded Type B row by definition consists of the bitstrings in r
that feature 1’s at the encircled locations, and feature at least one 1 on the starred locations.
It follows that the rightmost set in (9) equals the overloaded Type B r3 in Figure 8. (We shall
see that merely starring, without encircling, also comes up. The definition of such an overloaded
Type C row is likewise.)
6.1.2 As to transforming r2 and r3 , we first turn to r2 , while carrying along the overloaded row
r3 . Transforming r2 simply amounts to imposing the negative part x̄2 ∨ x̄3 ∨ x̄4 ∨ x̄5 of clause C4 upon r1 , and hence works with the Meta-FoB of Type 0, n, m that stretches over r4 to r6 . As to r5 , it fulfills C5 (since each x ∈ r5 has x4 = 0), and so is final and leaves the LIFO stack (see
Section 2).
6.1.3 As to transforming r3 , the first step is to replace the encircled symbols by 1’s and to record
the ensuing repercussions. Some starred symbols may change in the process but they must keep
their star. The resulting overloaded Type C row still represents the same set of bitstrings r3 .
The second step is to impose the positive part x6 ∨ x7 ∨ x8 ∨ x9 of C4 by virtue of a Meta-FoB,
see r7 to r9 in Figure 9.
14
That some pending clauses in Figure 9 are rendered boldface merely indicates that such a clause is currently
being imposed.
Figure 9: The men-algorithm in action. Snapshots of the LIFO stack.
6.1.4 In likewise fashion the algorithm proceeds (Figure 10). In particular r10 , r11 are overloaded
rows of Type A and B. The men-algorithm ends after the last row in the LIFO stack, here the
final row r17 , gets removed.
Figure 10: Further snapshots of the LIFO stack.
Altogether there are ten (disjoint) final rows r5 , r8 , r9 , r12 , r13 , r14 , r11 , r15 , r16 , r17 . Their
union is M od(ϕ), which hence is of cardinality
|Mod(ϕ)| = 21 + 1 + 4 + 420 + 14 + 168 + 14 + 28 + 21 + 14 = 695
6.2 Here we justify the claim made in 2.3 that checking whether a 012men-row r fulfills a clause
C is straightforward. Indeed, focusing on the most elaborate case of a mixed clause C the
following holds.
(10) If C = x1 ∨ · · · ∨ xs ∨ x̄s+1 ∨ · · · ∨ x̄t and r = (a1 , .., as , as+1 , .., at , . . .) then r fulfills C iff
one of these cases occurs:
(i) For some 1 ≤ j ≤ s one has aj = 1;
(ii) {1, . . . , s} contains the position-set of a full e-wildcard or full m-wildcard;
(iii) For some s + 1 ≤ j ≤ t one has aj = 0;
(iv) {s + 1, . . . , t} contains the position-set of a full n-wildcard or full m-wildcard;
Proof of (10). It is evident that each of (i) to (iv) individually implies that all bitstrings x ∈ r
satisfy C. Conversely suppose that (i) to (iv) are false. We must pinpoint a bitstring in r that
violates C. To fix ideas, consider r of length 18 and the clause C = x1 ∨ · · · ∨ x6 ∨ x7 ∨ · · · ∨ x13 .
(For readability the disjunctions ∨ are omitted in Figure 11.) Properties (i) to (iv) are false15
for C. Accordingly one checks that rvio ⊆ r and that each x ∈ rvio violates C.
C    = x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13
r    = 2  1  e  n1 n1 m1 m1 m1 e  n2  n2  m2  m2  m2 m2 e  n1 n1
rvio = 1  1  1  1  1  1  0  0  0  0   0   0   0   e2 e2 2  n1 n1
Figure 11: The men-row r does not fulfill clause C.
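Criterion (10) translates directly into code. The following C sketch (my own illustration, reusing the ad hoc sym/grp row representation from the sketch in Section 2) returns true exactly when one of the cases (i)–(iv) applies; pos and neg hold the 0-based positions of the positive and negated literals of the clause.

#include <stdio.h>
#include <stdbool.h>

static bool in_set(int p, const int *set, int n)
{
    for (int i = 0; i < n; ++i) if (set[i] == p) return true;
    return false;
}

/* is the whole wildcard group of position pos0 contained in set? */
static bool group_inside(const char *sym, const int *grp, int len,
                         int pos0, const int *set, int n)
{
    for (int j = 0; j < len; ++j)
        if (sym[j] == sym[pos0] && grp[j] == grp[pos0] && !in_set(j, set, n))
            return false;
    return true;
}

static bool fulfills(const char *sym, const int *grp, int len,
                     const int *pos, int npos, const int *neg, int nneg)
{
    for (int i = 0; i < npos; ++i) {
        int p = pos[i];
        if (sym[p] == '1') return true;                              /* (i)   */
        if ((sym[p] == 'e' || sym[p] == 'm') &&
            group_inside(sym, grp, len, p, pos, npos)) return true;  /* (ii)  */
    }
    for (int i = 0; i < nneg; ++i) {
        int p = neg[i];
        if (sym[p] == '0') return true;                              /* (iii) */
        if ((sym[p] == 'n' || sym[p] == 'm') &&
            group_inside(sym, grp, len, p, neg, nneg)) return true;  /* (iv)  */
    }
    return false;
}

int main(void)
{
    const char sym[5] = {'e','0','e','1','e'};     /* the row (e, 0, e, 1, e) */
    const int  grp[5] = { 1,  0,  1,  0,  1 };
    int c1[] = {0, 2, 4};          /* x1 v x3 v x5 : fulfilled via (ii) */
    int c2[] = {0, 2};             /* x1 v x3      : not fulfilled      */
    printf("%d %d\n", fulfills(sym, grp, 5, c1, 3, NULL, 0),
                      fulfills(sym, grp, 5, c2, 2, NULL, 0));  /* 1 0 */
    return 0;
}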
7 Comparison with BDD’s and ESOP’s
We reiterate from Section 1 that the men-algorithm has not yet been implemented. That’s why we content ourselves to take two medium-size random CNFs and hand-calculate what the men-algorithm does with them. We compare the outcome with two competing paradigms: ESOP’s in 7.2, and BDD’s in 7.3. But first we warm up in 7.1 by looking how ESOP and BDD handle Mod(µt ) for µt = (x1 ∨ · · · ∨ xt ) ∧ (x̄1 ∨ · · · ∨ x̄t ). Recall that the men-algorithm achieves optimal compression here: Mod(µt ) = {(m, m, . . . , m)}.
7.1 One verifies at once that the BDD on the left in Figure 12 yields the function µ5 . As
for any BDD, each nonleaf node A yields ’its own’ Boolean function. (The Boolean function
commonly coupled to the whole BDD is obtained by letting A be the root.) For instance, there
15 For instance the position-set {6, 7, 8} of m1 m1 m1 is neither contained in {1, . . . , s} nor in {s + 1, . . . , t}. That
it is contained in their union is irrelevant.
are two nodes labelled with x2 . The left, call it A, yields a Boolean function α(x2 , x3 , x4 , x5 )
whose model set is the disjoint union of the four 012-rows in the top square in the Table on the
right. For instance, the bitstring (0, 0, 1, 0) belongs to (0, 0, 1, 2), and indeed it triggers (in the
usual way, [K]) a path that leads from A to ⊤. Similarly the right node labelled x2 yields some
Boolean function β(x2 , x3 , x4 , x5 ) whose model set is the disjoint union of the four 012-rows in
the bottom square in the Table on the right. It is now evident that the whole Table represents the
model set of the whole BDD, thus M od(µ5 ).
Generally each BDD naturally induces an ESOP; this has been previously observed, e.g. in
[B,p.327]. The converse does not16 hold. Furthermore an easy induction of the above kind
shows that the number of nodes in a BDD is a lower bound to the number of 012-rows it brings
forth.
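To make this induction concrete, here is a sketch (ours; it assumes a hypothetical tuple encoding of BDD nodes, not any particular BDD package) of the path enumeration that turns a BDD into an ESOP: every root-to-⊤ path yields one 012-row, and distinct paths yield disjoint rows.

```python
# Sketch: turn a BDD into an ESOP, i.e. a list of pairwise disjoint 012-rows.
# Hypothetical node encoding: a node is either the sink False/True or a tuple
# (i, low, high) meaning "branch on variable x_i" (1-based); low is the x_i = 0 child.

def bdd_to_rows(node, n):
    rows = []

    def walk(node, partial):
        if node is False:
            return                          # path ends in the 0-sink: no row
        if node is True:
            row = ['2'] * n                 # untested variables stay free
            for i, bit in partial.items():
                row[i - 1] = bit
            rows.append(row)
            return
        i, low, high = node
        walk(low, {**partial, i: '0'})
        walk(high, {**partial, i: '1'})

    walk(node, {})
    return rows

# Example: a 2-variable BDD for x1 AND x2 yields the single row ['1', '1'].
example = (1, False, (2, False, True))
assert bdd_to_rows(example, 2) == [['1', '1']]
```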
[Figure 12 here: on the left a BDD for µ5 with decision nodes x1 , . . . , x5 and sinks ⊥, ⊤; on the right a table of eight disjoint 012-rows whose union is M od(µ5 ).]
Figure 12: Each BDD readily yields an ESOP
7.2 Consider this random17 CNF:
(11) ϕ1 = (x5 ∨ x7 ∨ x10 ∨ x2 ∨ x4 ) ∧ (x1 ∨ x2 ∨ x9 ∨ x7 ∨ x5 )
∧ (x2 ∨ x3 ∨ x7 ∨ x4 ∨ x9 ) ∧ (x8 ∨ x9 ∨ x10 ∨ x4 ∨ x9 )
Table 13 shows the fourteen rows that the men-algorithm produces to compress M od(ϕ1 ).
One reads off that |M od(ϕ1 )| = 16 + 48 + · · · + 18 = 898.
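These cardinalities can be recomputed mechanically: every fixed '2' doubles the count, a k-symbol e- or n-wildcard contributes 2^k − 1, and a k-symbol m-wildcard contributes 2^k − 2. A small sketch (ours, using the same hypothetical row encoding as in the sketch of 6.2):

```python
# Sketch: number of bitstrings covered by a 012men-row.
def row_cardinality(row):
    count = 1
    groups = {}
    for entry in row:
        if entry == '2':
            count *= 2
        elif isinstance(entry, tuple):                 # (kind, gid) wildcard symbol
            groups[entry] = groups.get(entry, 0) + 1   # accumulate the group size
    for (kind, _gid), k in groups.items():
        count *= (2 ** k - 1) if kind in ('e', 'n') else (2 ** k - 2)
    return count

# First row of Table 13: (2,0,1,1,1,2,0,2,1,2) has four 2's, hence 16 bitstrings.
assert row_cardinality(['2', '0', '1', '1', '1', '2', '0', '2', '1', '2']) == 16
```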
16 The author didn’t manage to find out whether the particular ESOP-algorithm used in Mathematica stems
from a BDD. In any case, for all Boolean functions of type µt it yields the same 012-rows as the BDD of µt .
17 More specifically, all clauses have 3 positive and 2 negative literals, which were randomly chosen (but avoiding
xi ∨ xi ) from a set of 20 literals.
     1   2   3   4   5   6   7   8   9   10
     2   0   1   1   1   2   0   2   1   2      16
     2   0   e   1   0   2   e   2   1   2      48
     n   1   n   0   1   2   1   2   2   2      48
     1   1   1   0   1   2   1   e   e   e      14
     0   0   2   2   1   2   1   2   1   2      32
     1   0   0   2   1   2   1   2   2   2      32
     1   0   1   2   1   2   1   e   e   e      28
     n   1   n   1   e   2   e   2   2   e      168
     1   1   1   1   2   2   2   2   2   1      32
     1   1   1   1   e1  2   e1  e2  e2  0      18
     n1  2   n1  0   n2  2   n2  2   2   2      288
     1   2   1   0   n2  2   n2  e   e   e      84
     n1  0   n1  1   n2  2   n2  2   0   2      72
     1   0   1   1   n2  2   n2  e   0   e      18
Table 13: Applying the men-algorithm to ϕ1 in (11).
Using the Mathematica-command BooleanConvert[***,"ESOP"] transforms (11) to an ESOP
(x4 ∧ x9 ) ∨ (x1 ∧ x4 ∧ x8 ∧ x9 ) ∨ · · ·, which amounts to a union (2, 2, 2, 0, 2, 2, 2, 2, 1, 2)∪
(1, 2, 2, 0, 2, 2, 2, 1, 0, 2) ∪ · · · of 23 disjoint 012-rows. We note that the ESOP algorithm is quite
sensitive18 to the order of clauses. Incidentally the 23 rows above stem from one of the optimal
permutations of clauses; the worst would yield 36 rows. Adding the random clause (x5 ∨ x6 ∨
x8 ∨ x3 ∨ x9 ) to ϕ1 triggers twenty six 012men-rows, but between 27 and 56 many 012-rows with
the ESOP-algorithm.
The second example in (12) has longer clauses, all of them either positive or negative (for ease
of hand-calculation). Long clauses make our wildcards more effective still.
(12) ϕ2 = (x3 ∨ x4 ∨ x6 ∨ x7 ∨ x9 ∨ x14 ∨ x15 ∨ x16 ∨ x17 ∨ x18 )
∧ (x3 ∨ x5 ∨ x8 ∨ x9 ∨ x11 ∨ x12 ∨ x13 ∨ x14 ∨ x15 ∨ x17 )
∧ (x1 ∨ x4 ∨ x5 ∨ x6 ∨ x9 ∨ x12 ∨ x14 ∨ x15 ∨ x17 ∨ x18 )
∧ (x1 ∨ x2 ∨ x3 ∨ x8 ∨ x11 ∨ x13 ∨ x14 ∨ x16 ∨ x17 ∨ x18 )
∧ (x2 ∨ x3 ∨ x7 ∨ x8 ∨ x11 ∨ x13 ∨ x14 ∨ x16 ∨ x17 ∨ x18 )
Table 14 shows the ten rows the men-algorithm uses to compress M od(ϕ2 ). In contrast the
ESOP-algorithm uses between 85 and 168 many 012-rows, depending on the order of the clauses.
18 And so is the men-algorithm, but stuck with hand-calculation we didn’t experiment with that.
[Table 14 here: the ten 012men-rows over the eighteen variables of ϕ2 , together with their cardinalities.]
Table 14: Applying the men-algorithm to ϕ2 in (12).
7.3 As to BDD’s, the BDD coupled to ϕ1 in (11) has 27 nodes. Recall that hence the ESOP
induced by it has19 at least 27 terms (= 012-rows). Adding the clause (x5 ∨ x6 ∨ x8 ∨ x3 ∨ x9 )
to ϕ1 pushes the BDD size to 42. The BDD of ϕ2 in (12) has 102 nodes.
References
[B] E. Boros, Orthogonal forms and shellability, Section 7 in: Boolean Fuctions (ed. Y. Crama,
P.L. Hammer), Enc. Math. Appl. 142, Cambridge University Press 2011.
[K] D. Knuth, The art of computer programming, Volume 4 (Preprint), Section 7.14: Binary
decision diagrams, Addison-Wesley 2008.
[TS] Takahisa Toda and Takehide Soh. 2016. Implementing Efficient All Solutions SAT Solvers.
J. Exp. Algorithmics 21, Article 1.12 (2016), 44 pages. DOI: https://doi.org/10.1145/2975585
[W] M. Wild, ALLSAT compressed with wildcards. Part 1: Converting CNFs to orthogonal
DNFs, preliminary version is on ResearchGate.
19 As seen in 7.1 in order to find the exact number of 012-rows one needs to know the actual BDD, not just its
size. This is planned in a later version of the article, when certain problems with Python’s handling of BDD’s
are sorted out. For the same reason the author hasn’t experimented yet with different variable orders. As is well
known [K], they heavily influence the size of the BDD.
| 8 |
SUBGROUP GROWTH IN SOME PROFINITE CHEVALLEY GROUPS
arXiv:1511.04333v4 [] 2 Nov 2016
INNA CAPDEBOSCQ, KARINA KIRKINA, AND DMITRIY RUMYNIN
Abstract. In this article we improve the known uniform bound for subgroup growth of Chevalley groups G(Fp [[t]]). We introduce a new parameter, the ridgeline number v(G), and give new
bounds for the subgroup growth of G(Fp [[t]]) expressed through v(G). We achieve this by deriving a new estimate for the codimension of [U, V ] where U and V are vector subspaces in the
Lie algebra of G.
For a finitely generated group G, let an (G) be the number of subgroups of G of index n
and sn (G) the number of subgroups of G of index at most n. The “subgroup growth” of G is
the asymptotic behaviour of the sequence (an (G))n∈N . It turns out that the subgroup growth and
structure of G are not unrelated, and in fact the latter can sometimes be used as a characterisation
of G. (For complete details we refer the reader to a book by Lubotzky and Segal [LSe03].)
For example, Lubotzky and Mann [LM91] show that a group G is p-adic analytic if and only
if there exists a constant c > 0 such that an (G) < n^c . This inspiring result is followed by Shalev
who proves that if G is a pro-p group for which an (G) ≤ n^{c log_p n} for some constant c < 1/8, then G
is p-adic analytic [Sh92].
Answering a question of Mann, Barnea and Guralnick investigate the subgroup growth of
SL^1_2 (Fp [[t]]) for p > 2, and show that the supremum of the set of all those c such that a pro-p group G
is p-adic analytic provided that an (G) < n^{c log_p n} , is no bigger than 1/2. Thus one may see that not
only the growth type, but also the precise values of the constants involved are important when
studying the connection between subgroup growth and the structure of a group.
Later on Lubotzky and Shalev pioneer a study of the so-called Λ-standard groups [LSh94]. A
particular subclass of these groups are Λ-perfect groups for which they show existence of a constant
c > 0 such that
an (G) < nc logp n .
An important subclass of those groups are the congruence subgroups of Chevalley groups over
Fp [[t]]. Let G be a simple simply connected Chevalley group scheme, G(1) the first congruence
subgroup of G(Fp [[t]]). Abért, Nikolov and Szegedy show that if m is the dimension of G, then
s_{p^k}(G(1)) ≤ p^{(7/2)k^2 + mk},
that is, s_n(G(1)) ≤ n^{(7/2) log_p n + m} [ANS03].
In this article we improve their estimates (cf. [ANS03]). The Lie algebra gZ of G is defined
over integers. Let K be a field of characteristic p, which could be either zero or a prime. To state
the results we now introduce a new parameter of the Lie algebra g := gZ ⊗Z K. We fix an invariant
bilinear form η = h , i on g of maximal possible rank. Let g0 be its kernel. Notice that the nullity
r := dim g0 of η is independent of the choice of η.
Definition 0.1. Let l be the rank of g, m its dimension and s the maximal dimension of the
centraliser of a non-central element x ∈ g. We define the ridgeline number of g as
v(G) = v(g) := l / (m − s − r).
We discuss ridgeline numbers in Section 2. The values of v(g) can be found in the table in
Appendix A.
Date: November 2, 2016.
1991 Mathematics Subject Classification. Primary 20E07; Secondary 17B45, 17B70.
Key words and phrases. subgroup growth, Lie algebras, Chevalley groups.
Definition 0.2. The positive characteristic p of the field K is called good if p does not divide the
coefficients of the highest root. The positive characteristic p of the field K is called very good if p
is good and g is simple. We call the positive characteristic p tolerable if any proper ideal of g is
contained in its centre.
We discuss these restrictions in Section 2. We may now state our main result.
Theorem 0.3. Let G be a simple simply connected Chevalley group scheme of rank l ≥ 2. Suppose
p is a tolerable prime for G. Let G(1) be the first congruence subgroup of G(Fp [[t]]), that is
G(1) = ker(G(Fp [[t]]) ։ G(Fp )). If m := dim G, then
a_{p^k}(G(1)) ≤ p^{((3+4v(g))/2)k^2 + (m − 3/2 − 2v(g))k}.
If l = 2 and p is very good, then a stronger estimate holds:
a_{p^k}(G(1)) ≤ p^{(3/2)k^2 + (m − 3/2)k}.
Notice that as one can see from the table in Appendix A, with one exception (when G = Al
and p divides l + 1) the biggest possible value of v(g) is 1/2 (v(g) ≤ 2/3 in that special case). This
makes (3 + 4v(g))/2 ≤ 5/2 ((3 + 4v(g))/2 ≤ 17/6 correspondingly).
Our proof of Theorem 0.3 in many ways follows the ones of Barnea and Guralnick and of Abért,
Nikolov and Szegedy. The improvement in the result is due to the following new estimates.
Theorem 0.4. Let a be a Lie algebra over a field K. Suppose that the Lie algebra g = a ⊗K K is
a Chevalley Lie algebra of rank l ≥ 2 and that the characteristic of K is zero or tolerable. Then
for any two subspaces U and V of a, we have
codim([U, V ]) ≤ (1 + v(g))(codim(U ) + codim(V )).
If l = 2 and the characteristic of K is zero or very good, a stronger result holds:
codim([U, V ]) ≤ codim(U ) + codim(V ).
We conjecture that the second estimate holds for any Lie algebra g (if the characteristic of K
is zero or very good).
1. Proof of Theorem 0.3
The proof of Theorem 0.3 relies on Theorem 0.4 that will be proved later. We follow Abért,
Nikolov and Szegedy [ANS03, Theorem 2] and Barnea, Guralnick [BG01, Theorem 1.4].
Suppose that the hypotheses of Theorem 0.3 hold. We start with the following observation (cf.
[ANS03, Corollary 1 and Lemma 1] and [LSh94, Lemma 4.1] ).
Lemma 1.1. If H is an open subgroup of G(1) and d(H) is the minimal number of generators of
H, then
d(H) ≤ m + (3 + 4v(g)) logp |G(1) : H|.
Moreover, if l = 2 and p is very good, then
d(H) ≤ m + 3 logp |G(1) : H|.
Notice that in the second case g = A2 , C2 or G2 and m = 8, 10 or 14 correspondingly.
Proof. First of all recall that d(H) = logp |H : Φ(H)| ≤ logp |H : H ′ | where Φ(H) is the Frattini
subgroup. Because of the correspondence between the open subgroups of G(1) and subalgebras of
its graded Lie algebra L = L(G(1)) (see [LSh94]), logp |H : H ′ | ≤ dim H/H′ where H = L(H) is
the corresponding subalgebra of L. Hence it suffices to show that
dim H/H′ ≤ m + (3 + 4v(g)) dim L/H
in the general case, and that
dim H/H′ ≤ m + 3 dim L/H
in the very good rank 2 case.
Recall that the graded Lie algebra L is isomorphic to g ⊗F tF[t] where F = Fp . Since every
element a ∈ L can be uniquely written as a = Σ_{i=1}^∞ a_i ⊗ t^i with a_i ∈ g, one can define l(a) := a_s
where s is the smallest integer such that a_s ≠ 0, and in this case s := deg(a). Now set
Hi := ⟨l(a) | a ∈ H with deg(a) = i⟩.
Observe that Hi = {l(a) | a ∈ H with deg(a) = i} ∪ {0}. Then dim L/H = Σ_{i=1}^∞ dim g/Hi , and
this sum is finite as the left hand side is finite. Then
[Hi ⊗ t^i , Hj ⊗ t^j ] ⊆ [Hi , Hj ] ⊗ t^{i+j} ⊆ H′_{i+j} ⊗ t^{i+j}
where H′_{i+j} := ⟨l(a) | a ∈ H′ with deg(a) = i + j⟩, and so dim g/[Hi , Hj ] ≥ dim g/H′_{i+j} . Adding
up these inequalities for i = j and i = j + 1 we get
dim L/H′ = Σ_{i=1}^∞ dim g/H′_i ≤ dim g + Σ_{1≤i≤j≤i+1} dim g/[Hi , Hj ].
Now we use the estimates of Theorem 0.4:
dim L/H′ ≤ m + Σ_{1≤i≤j≤i+1} α(dim g/Hi + dim g/Hj ) ≤ m + 4α dim L/H,
where α = 1 + v(g) or 1 depending on the rank of g and p. The result follows immediately.
Now we apply an estimate [LSh94, Lemma 4.1]: a_{p^k}(G(1)) ≤ p^{g_1 + ... + g_{p^{k−1}}} where
g_{p^i} = g_{p^i}(G(1)) = max{d(H) | H ≤open G(1), |G(1) : H| = p^i }.
Using Lemma 1.1, in the general case (l ≥ 2) we have
a_{p^k}(G(1)) ≤ p^{Σ_{i=0}^{k−1}(m + (3+4v(g))i)} = p^{((3+4v(g))/2)k^2 + (m − 3/2 − 2v(g))k}.
For l = 2 and very good p, Lemma 1.1 gives us
a_{p^k}(G(1)) ≤ p^{Σ_{i=0}^{k−1}(m + 3i)} = p^{(3/2)k^2 + (m − 3/2)k}.
This finishes the proof of the theorem.
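For the reader's convenience we record the elementary summation behind the last two displays (an added check, not part of the original text):

```latex
\[
  \sum_{i=0}^{k-1}\bigl(m + c\,i\bigr)
  \;=\; mk + c\,\frac{k(k-1)}{2}
  \;=\; \frac{c}{2}\,k^{2} + \Bigl(m - \frac{c}{2}\Bigr)k ,
  \qquad c \in \{\,3+4v(\mathfrak{g}),\; 3\,\},
\]
```

which gives exactly the two exponents above.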
2. Ridgeline numbers and small primes
We adopt the notations of Definition 0.1. We prove that m − s = 2(h∨ − 1) where h∨ is the
dual Coxeter number of g (see Proposition 3.4). Therefore,
v(g) = l / (2(h^∨ − 1) − r).
We present the values of v(g) in Appendix A. We include only Lie algebras in tolerable characteristics (see Definition 0.2) where our method produces new results.
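As a quick illustration of how the table entries arise (a worked example added here for convenience, using the formula above and the data of Appendix A):

```latex
% Type G_2: l = 2, h^\vee = 4, r = 0, m = 14 (any tolerable p), so
\[
  v(\mathfrak{g}) \;=\; \frac{2}{2(4-1)-0} \;=\; \frac13 ,
  \qquad
  m - 2(h^\vee - 1) \;=\; 14 - 6 \;=\; 8 .
\]
% Type A_l with p \mid (l+1): h^\vee = l+1, r = 1, hence
% v(\mathfrak{g}) = l/(2l-1) \le 2/3, the exceptional case mentioned above.
```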
Let us remind the reader that the very good characteristics are p ∤ l + 1 in type Al , p 6= 2
in types Bl , Cl , Dl , p 6= 2, 3 in types E6 , E7 , F4 , G2 , and p 6= 2, 3, 5 in type E8 . If p is very
good, the Lie algebra g behaves as in characteristic zero. In particular, g is simple, its Killing
form is non-degenerate, etc. Let us contemplate what calamities betide the Lie algebra g in small
characteristics.
Suppose that p is tolerable but not very good. If p does not divide the determinant of the
Cartan matrix of g, the Lie algebra g is simple. This covers the following primes: p = 2 in types
E6 and G2 , p = 3 in types E7 and F4 , p = 2, 3, 5 in type E8 . In this scenario, the g-modules g
and g∗ are isomorphic, which immediately gives us a non-degenerate invariant bilinear form on g
[H95, 0.13].
If p divides the determinant of the Cartan matrix of g, there is more than one Chevalley Lie
algebra. We study the simply connected Lie algebra g, i.e., [g, g] = g and g/z is simple (where z is
the centre). There is a canonical map to the adjoint Lie algebra g♭ :
ϕ : g = h ⊕ (⊕_α gα) → g♭ = h♭ ⊕ (⊕_α gα).
The map ϕ is the identity on the root spaces gα . Let us describe it on the Cartan subalgebras.
The basis of the Cartan subalgebra h are the simple coroots hi = α^∨_i = [e_{αi} , e_{−αi} ]. The basis of
the Cartan subalgebra h♭ are the fundamental coweights yi = ̟i∨ defined by αi (yj ) = δi,j . Now
the map ϕ on the Cartan subalgebras is given by
ϕ(hi ) = Σ_j c_{j,i} yj ,
where cj,i are entries of the Cartan matrix of the coroot system of g. The image of ϕ is [g♭ , g♭ ].
The kernel of ϕ is the centre z. From our description z is the subspace of h equal to the null space
of the Cartan matrix. It is equal to g0 , the kernel of η. The dimension of z is at most 2 (see the
values of r in Appendix A).
The key dichotomy now is whether the Lie algebra g/z is simple or not. If g is simply-laced,
the algebra g/z is simple. This occurs when p | l + 1 in type Al , p = 2 in types Dl and E7 , p = 3
in type E6 . Notice that A1 in characteristic 2 needs to be excluded: g/z is abelian rather than
simple. In this scenario the g-modules g/z and (g/z)∗ are isomorphic. This gives us an invariant
bilinear form with the kernel z [H95, 0.13].
Let us look meticulously at g of type Dl when p = 2. The standard representation gives a
homomorphism of Lie algebras
ρ : g → so2l (K),   x ↦ ρ(x) = [ ρ11 (x)  ρ12 (x) ; ρ21 (x)  ρ22 (x) ],
where ρ22 (x) = ρ11 (x)t , while ρ12 (x) and ρ21 (x) are skew-symmetric l × l-matrices, which for
p = 2 is equivalent to symmetric with zeroes on the diagonal. The Lie algebra so2l (K) has a
1-dimensional centre spanned by the identity matrix. If l is odd, ρ is an isomorphism, and g has a
1-dimensional centre. However, if l is even, ρ has a 1-dimensional kernel, and g has a 2-dimensional
centre.
It is instructive to observe how the standard representation ρ equips g with an invariant form.
A skew-symmetric matrix Z can be written uniquely as a sum Z = Z L + Z U , where Z L is strictly
lower triangular and Z U is strictly upper triangular. Then the bilinear form is given by
η(x, y) := hρ(x), ρ(y)i := Tr (ρ11 (x)ρ11 (y) + ρ12 (x)L ρ21 (y)U + ρ21 (x)L ρ12 (y)U ).
This form η is a reduction of the form (1/2) Tr (ϕ(x)ϕ(y)) on so2l (Z), hence it is invariant.
Finally we suppose that p is not tolerable. This happens when p = 2 in types Bl , Cl and F4
or p = 3 in type G2 . In all these cases g is not simply-laced and the quotient algebra g/z is not
simple. The short root vectors generate a proper non-central ideal I. This ideal sits in the kernel
of any non-zero invariant form. Consequently, our method fails to produce any new result.
3. Proof of Theorem 0.4: the General Case
Let a be an m-dimensional Lie algebra over a field K of characteristic p (prime or zero). We
consider it as a topological space in the Zariski topology. We also consider a function dim ◦c : a → R
that for an element x ∈ a computes the dimension of its centraliser c(x).
Lemma 3.1. The function dim ◦c is upper semicontinuous, i.e., for any number n the set
{x ∈ a | dim(c(x)) ≤ n} is Zariski open.
Proof. Observe that c(x) is the kernel of the adjoint operator ad(x). Thus, dim(c(x)) ≤ n is
equivalent to rank(ad(x)) ≥ m − n. This is clearly an open condition, given by the non-vanishing
of one of the (m − n)-minors.
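In computational terms the proof says that dim c(x) = m − rank(ad x). The following sketch (ours, purely illustrative; the sl2 structure constants are hard-coded) computes this rank directly from structure constants:

```python
# Sketch: dim c(x) = m - rank(ad x) for a Lie algebra given by structure constants.
import numpy as np

def centraliser_dim(structure, x):
    """structure[i][j] = coordinates of [b_i, b_j]; x = coordinates of an element."""
    m = len(x)
    ad_x = np.zeros((m, m))
    for j in range(m):                     # column j of ad(x) holds [x, b_j]
        for i in range(m):
            ad_x[:, j] += x[i] * np.asarray(structure[i][j], dtype=float)
    return m - np.linalg.matrix_rank(ad_x)

# sl_2 over Q with basis (e, h, f): [e,f] = h, [h,e] = 2e, [h,f] = -2f.
sl2 = [[[0, 0, 0], [-2, 0, 0], [0, 1, 0]],
       [[2, 0, 0], [0, 0, 0],  [0, 0, -2]],
       [[0, -1, 0], [0, 0, 2], [0, 0, 0]]]
assert centraliser_dim(sl2, [0.0, 1.0, 0.0]) == 1   # c(h) = span(h) is 1-dimensional
```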
Now we move to K, the algebraic closure of K. Let ā = a ⊗K K. To distinguish centralisers in
a and ā we denote c(x) := ca (x) and c̄(x) := cā (x). Now we assume that ā is the Lie algebra of a
connected algebraic group A. Let Orb(x) be the A-orbit of an element x ∈ ā.
Lemma 3.2. Let x and y be elements of ā such that x ∈ Orb(y). Then dim c̄(x) ≥ dim c̄(y).
Proof. The orbit Orb(y) intersects any open neighbourhood of x, and, in particular, the set
X = {z ∈ ā | dim(c̄(z)) ≤ dim(c̄(x))}, which is open by Lemma 3.1. If z ∈ Orb(y) ∩ X, then
dim c̄(x) ≥ dim c̄(z) = dim c̄(y).
The stabiliser subscheme Ax is, in general, non-reduced in positive characteristic. It is reduced
(equivalently, smooth) if and only if the inclusion c(x) ⊇ Lie(Ax ) is an equality (cf. [H95, 1.10]).
If Ax is smooth, the orbit-stabiliser theorem implies that
dim(a) = m = dim Ax + dim Orb(x) = dim c̄(x) + dim Orb(x).
In particular, Lemma 3.2 follows from the inequality dim Orb(x) ≤ dim Orb(y).
Let us further assume that A = G is a simple connected simply-connected algebraic group
and ā = g is a simply-connected Chevalley Lie algebra. Let us fix a triangular decomposition
g = n− ⊕ h ⊕ n. An element x ∈ g is called semisimple if Orb(x) ∩ h 6= ∅. An element x ∈ g
is called nilpotent if Orb(x) ∩ n 6= ∅. We call a representation x = xs + xn a quasi-Jordan
decomposition if xs ∈ g(h) (image of h under g) and xn ∈ g(n) for the same g ∈ G.
Recall that a Jordan decomposition is a quasi-Jordan decomposition x = xs + xn such that
[xs , xn ] = 0. A Jordan decomposition exists and is unique if g admits a non-degenerate bilinear
form [KW, Theorem 4].
Notice that part (1) of the following lemma cannot be proved by the argument that the Lie subalgebra Kx is contained in a maximal soluble subalgebra: in characteristic 2 the Borel subalgebra
b = h ⊕ n is not maximal soluble.
Lemma 3.3. Assume that p 6= 2 or G is not of type Cl (in particular, this excludes C2 = B2 and
C1 = A1 ). Then the following statements hold.
(1) Every x ∈ g admits a (non-unique) quasi-Jordan decomposition x = xs + xn .
(2) xs belongs to the orbit closure Orb(x).
(3) If Orb(x) is closed, then x is semisimple.
(4) dim c̄(xs ) ≥ dim c̄(x).
Proof. (cf. [KW, Section 3].) (1) Our assumption on g assures the existence of a regular semisimple
element h ∈ h, i.e., an element such that c̄(h) = h. The differential d(e,h) a : g ⊕ h → g of the
action map a : G × h → g is given by the formula
d(e,h) a(x, k) = [x, h] + k.
Since the adjoint operator ad(h) is a diagonalizable operator whose 0-eigenspace is h, the kernel
of d(e,x) a is h ⊕ 0. Hence, the image of a contains an open subset of g. Since the set ∪g∈G g(b)
contains the image of a, it is a dense subset of g.
Let B be the Borel subgroup of G whose Lie algebra is b. The quotient space F = G/B is a
flag variety. Since F is projective, the projection map π : g × F → g is proper. The Springer
variety S = {(x, g(B)) | x ∈ g(b)} is closed in g × F . Hence, ∪g∈G g(b) = π(S) is closed in g. Thus,
∪g∈G g(b) = g. Choosing g such that x ∈ g(b) gives a decomposition.
(2) Suppose xs ∈ g(h). Let T be the torus whose Lie algebra is g(h). We decompose x over the
roots of T :
x = xs + xn = x0 + Σ_{α∈Y (T )} xα .
We can choose a basis of Y (T ) so that only positive roots appear. Hence, the action map a : T → g,
a(t) = t(x) extends along the embedding T ↪ K^l to a map â : K^l → g. Observe that xs = â(0).
Let U ∋ xs be an open subset of g. Then â^{−1}(U ) is open in K^l and T ∩ â^{−1}(U ) is not empty.
Pick t ∈ T ∩ â^{−1}(U ). Then a(t) = t(x) ∈ U , thus, xs ∈ T (x) ⊆ Orb(x).
(3) This immediately follows from (1) and (2).
(4) This immediately follows from (2) and Lemma 3.2.
If α is a long simple root, its root vector eα ∈ g = ā is known as the minimal nilpotent. The
dimension of Orb(eα ) is equal to 2(h∨ − 1) (cf. [W99]).
Proposition 3.4. Suppose that l ≥ 2 and that the characteristic p of the field K is tolerable for
g. Then for any noncentral x ∈ a
dim c(x) ≤ dim c̄(eα ) = m − 2(h∨ − 1).
Proof. Let x ∈ a (y ∈ g) be a noncentral element with c(x) (c̄(y) correspondingly) of the largest
possible dimension. Observe that dim c(x) ≤ dim c̄(x) ≤ dim c̄(y).
Let us examine a quasi-Jordan decomposition y = ys +yn . Since ys ∈ Orb(y), we conclude that
dim c̄(ys ) ≥ dim c̄(y). But dim c̄(y) is assumed to be maximal. There are two ways to reconcile
this: either dim c̄(ys ) = dim c̄(y), or ys is central.
Suppose ys is central. Then y and yn have the same centralisers. We may assume that y = yn
is nilpotent. Lemma 3.2 allows us to assume without loss of generality that the orbit Orb(y) is
minimal, that is, Orb(y) = Orb(y) ∪ {0}. On the other hand, the closure Orb(y) contains a root
vector eβ .
Let us prove the last statement. First, observe that K× y ⊆ Orb(y). If p is good, this immediately follows from Premet’s version of Jacobson-Morozov Theorem [P95]. If Orb(λy) 6= Orb(y) in
an exceptional Lie algebra in a bad tolerable characteristic, then we observe two distinct nilpotent
orbits with the same partition into Jordan blocks. It never occurs: all the partitions are listed in
the VIGRE paper [V05, section 6]. The remaining case of p = 2 and g is of type Dl is also settled
in the VIGRE paper [V05]. Now let y ∈ g(n), and T0 be the torus whose Lie algebra is g(h).
Consider T := T0 × K^× with the second factor acting on g via the vector space structure. Write
y = Σ_{β∈Y (T0 )} yβ using the roots of T0 . The closure of the orbit T (y) is contained in Orb(y). Let
us show that T (y) contains one of the yβ . Let us write T0 = Gm × Gm × . . . × Gm and decompose
y = yk + yk+1 + . . . + yn using the weights of the first factor Gm with yk ≠ 0. Then
T (y) ⊇ {(λ, 1, 1, . . . , 1, λ^{−k}) · y | λ ∈ K^×} = {yk + λ^1 yk+1 + . . . + λ^{n−k} yn | λ ∈ K^×}.
Hence, yk ∈ T (y). Repeat this argument with yk instead of y for the second factor of T0 , and so
on. At the end we arrive at nonzero yβ , hence, eβ ∈ Orb(y).
Without loss of generality we now assume that y = eβ for a simple root β. If p is good, then
dim(c̄(eβ )) does not depend on the field:
c̄(eβ ) = ker(dβ : h → K) ⊕ ⊕_{γ+β is not a root} gγ .
In particular, it is as in characteristic zero: the long root vector has a larger centraliser than the
short root vector and dim c̄(y) = dim c̄(eα ) = m − 2(h∨ − 1) [W99]. If p = 2 and g is of type
Dl , then a direct calculation gives the same formula for dim c̄(eα ). In the exceptional cases in
bad characteristics the orbits and their centralisers are computed in the VIGRE paper [V05]. One
goes through their tables and establishes the formula for dim c̄(y) in all the cases.
Now suppose dim c̄(ys ) = dim c̄(y). We may assume that y = ys is semisimple. Then y is in
some Cartan subalgebra g −1 (h) and dim c̄(g(y)) = dim c̄(y). Moreover,
c̄(g(y)) = h ⊕ ⊕_{α | α(g(y))=0} gα
is a reductive subalgebra. If ϕ : g → g♭ is the canonical map (see Section 2), then dim c̄(g(y)) =
dim cg♭ (ϕ(g(y))). It remains to examine the Lie algebras case by case and exhibit a non-zero
element in h♭ with the maximal dimension of centraliser. This is done in Appendix A.
Now we can give a crucial dimension estimate for the proof of Theorem 0.4.
Proposition 3.5. Let a be an m-dimensional Lie algebra with an associative bilinear form η,
whose kernel a0 is the centre of a. Suppose r = dim(a0 ) and k ≥ dim(c(x)) for any non-central
element x ∈ a. Finally, let U, V be subspaces of a such that dim(U ) + dim(V ) > m + k + r. Then
[U, V ] = a.
Proof. Suppose not. Let us consider the orthogonal complement W = [U, V ]⊥ 6= a0 under the
form η. Observe that U ⊆ [V, W ]⊥ since η is associative. But W admits a noncentral element
x ∈ W so that dim(c(x)) ≤ k. Hence
dim([V, W ]) ≥ dim(V ) − k and dim([V, W ]⊥ ) ≤ m + k + r − dim(V ).
Inevitably, dim(U ) ≤ m + k + r − dim(V ).
We may now prove the first part of Theorem 0.4. We use m, l, r and s as in Definition 0.1. If
dim(U ) + dim(V ) > m + s + r, we are done by Proposition 3.5:
codim([U, V ]) = 0 ≤ (1 + v(g))(codim(U ) + codim(V )).
Now we assume that dim(U ) + dim(V ) ≤ m + s + r. It is known [ANS03] that
codim([U, V ]) ≤ l + codim(U ) + codim(V ).
It remains to notice that l = v(g)(m−s−r) ≤ v(g)(codim(U )+codim(V )). The theorem is proved.
4. Proof of Theorem 0.4: Rank 2
In this section G is a Chevalley group scheme of rank 2. The characteristic p of the field K is
zero or very good for g. Let {α, β} be the set of simple roots of g with |β| ≤ |α|. If g is of type A2
then α and β have the same length. The group G = G(K) acts on g via the adjoint action. By
c(x) we denote the centraliser cg (x) in this section. Let us summarise some standard facts about
this adjoint action (cf. [H95]).
(1) If x ∈ g, the stabiliser Gx is smooth, i.e., its Lie algebra is the centraliser c(x).
(2) The dimensions dim(Orb(x)) = dim(G) − dim(c(x)) and dim(c(x)) are even.
(3) If x ≠ 0 is semisimple, dim(c(x)) ∈ {2, 4}. Hence, dim(Orb(x)) ∈ {m − 2, m − 4}.
(4) A truly mixed element x = xs + xn (with non-zero semisimple and nilpotent parts) is
regular, i.e., dim(c(x)) = 2 (cf. Lemma 3.3).
(5) x is nilpotent if and only if Orb(x) contains 0.
(6) There is a unique orbit of regular nilpotent elements Orb(er ) where er = eα + eβ . In
particular, dim(c(er )) = 2 and dim(Orb(er )) = m − 2.
(7) For two nilpotent elements x and y we write x ≽ y if Orb(x) ⊇ Orb(y). The following are
representatives of all the nilpotent orbits in g (in brackets we report [dim(Orb(x)), dim(c(x))]):
(a) If G is of type A2 , then er [6, 2] ≽ eα [4, 4] ≽ 0 [0, 8].
(b) If G is of type C2 , then eα and eβ are no longer in the same orbit and so we have
er [8, 2] ≽ eβ [6, 4] ≽ eα [4, 6] ≽ 0 [0, 10].
(c) If G is of type G2 , there is an additional subregular nilpotent orbit of an element
esr = e2α+3β + eβ . In this case we have
er [12, 2] ≽ esr [10, 4] ≽ eβ [8, 6] ≽ eα [6, 8] ≽ 0 [0, 14].
We will now prove Theorem 0.4 for groups of type A2 , C2 and G2 . We need to show that if U
and V are subspaces of g, then
(1)
dim([U, V ]) ≥ dim(U ) + dim(V ) − dim g.
We will use the following device repeatedly:
Lemma 4.1. The inequality
(2)
dim([U, V ]) ≥ dim(V ) − dim(V ∩ c(x))
holds for any x ∈ U . In particular, if there exists x ∈ U such that dim(U )+dim(V ∩c(x)) ≤ dim g,
then inequality (1) holds.
Proof. It immediately follows from the observation [U, V ] ⊇ [x, V ] ≅ V /(V ∩ c(x)).
Now we give a case-by-case proof of inequality (1). Without loss of generality we assume that
1 ≤ dim(U ) ≤ dim(V ) and that the field K is algebraically closed.
4.1. G = A2 . Using the standard facts, observe that if x ∈ g \ {0}, then dim(c(x)) ≤ 4. Moreover,
if dim(c(x)) = 4, then either x ∈ Orb(eα ), or x is semisimple. Since dim g = 8, we need to
establish that
dim([U, V ]) ≥ dim(U ) + dim(V ) − 8
Now we consider various possibilities.
Case 1: If dim(U ) ≤ 4, then dim(V ∩ c(x)) ≤ dim(c(x)) ≤ 4 ≤ 8 − dim(U ) for any nonzero
x ∈ U . We are done by Lemma 4.1.
Case 2: If dim(U ) + dim(V ) > 12, then the hypotheses of Proposition 3.5 hold. Hence,
[U, V ] = g that obviously implies the desired conclusion.
Therefore we may suppose that dim(U ) + dim(V ) ≤ 12 and dim U ≥ 5. This leaves us with the
following two cases.
Case 3: dim(U ) = 5 and dim(V ) ≤ 7. We need to show that
dim([U, V ]) ≥ dim(U ) + dim(V ) − 8 = dim(V ) − 3.
As dim(Orb(eα )) = 4, we may pick x ∈ U with x 6∈ Orb(eα ). If x is regular, we are done by
Lemma 4.1 since dim(V ∩ c(x)) ≤ dim(c(x)) = 2. If x is not regular, then dim(c(x)) = 4 and x is
semisimple. In particular, its centraliser c(x) contains a Cartan subalgebra g(h) of g.
Let us consider the intersection V ∩ c(x). If dim(V ∩ c(x)) ≤ 3, we are done by Lemma 4.1.
Otherwise, V ⊇ c(x) and V contains a regular semisimple element y ∈ g(h) ⊆ V . If U ⊇ c(y) =
g(h), then U ∋ y and we are done by Lemma 4.1 as in the previous paragraph. Otherwise,
dim(U ∩ c(y)) ≤ 1 and we finish the proof using Lemma 4.1:
dim([U, V ]) ≥ dim(U ) − dim(U ∩ c(y)) ≥ 5 − 1 = 4 ≥ dim(V ) − 3.
Case 4: dim(U ) = dim(V ) = 6. This time we must show that
dim([U, V ]) ≥ 4 = dim(V ) − 2.
By Lemma 4.1 it suffices to find a regular element in x ∈ U (or in V ) since dim(V ∩ c(x)) ≤
dim(c(x)) = 2. Observe that
dim(U ∩ V ) ≥ dim(U ) + dim(V ) − 8 = 4 = dim(Orb(eα )).
Since Orb(eα ) is an irreducible algebraic variety and not an affine space, there exists x ∈ U ∩ V
such that x 6∈ Orb(eα ). If x is regular, we are done. If x is not regular, x is semisimple and its
centraliser c(x) = Kx ⊕ l, a direct sum of Lie algebras Kx ∼
= K and l ∼
= sl2 (K).
Consider the intersection V ∩ c(x). If dim(V ∩ c(x)) ≤ 2, we are done by Lemma 4.1 as before.
Assume that dim(V ∩ c(x)) ≥ 3. If dim(V ∩ c(x)) = 4, V contains c(x) and consequently a regular
semisimple element y.
Finally, consider the case dim(V ∩ c(x)) = 3. Let π2 be the natural projection π2 : c(x) → l and
set W := π2 (V ∩ c(x)). Since Kx ⊆ V ∩ c(x), the subspace W of sl2 (K) is 2-dimensional. Clearly,
V ∩ c(x) ⊆ Kx ⊕ W . Since both spaces have dimension 3, V ∩ c(x) = Kx ⊕ W . Then W = a⊥
(with respect to the Killing form), where 0 6= a ∈ sl2 (K) is either semisimple or nilpotent. In
both cases W contains a nonzero nilpotent element z. Thus, we have found a regular element
x + z ∈ V ∩ c(x). This finishes the proof for A2 .
4.2. G = C2 . Notice that this time dim(c(x)) ≤ 6 for all 0 6= x ∈ g. Moreover, if dim(c(x)) = 6,
x ∈ Orb(eα ). Finally, the set Orb(eα ) = Orb(eα ) ∪ {0} is a 4-dimensional cone, and the set
Orb(eβ ) = Orb(eβ ) ∪ Orb(eα ) ∪ {0} is a 6-dimensional cone.
As dim g = 10, this time we need to show that
dim([U, V ]) ≥ dim(U ) + dim(V ) − 10 = dim(V ) − (10 − dim(U )).
Case 1: dim(U ) ≤ 4. We are done by Lemma 4.1 since for any 0 6= x ∈ U ,
dim(V ∩ c(x)) ≤ dim(c(x)) ≤ 6 ≤ 10 − dim(U ).
Case 2: 5 ≤ dim(U ) ≤ 6. Hence, we may choose x ∈ U such that x 6∈ Orb(eα ). We are done
by Lemma 4.1 since
dim(V ∩ c(x)) ≤ dim(c(x)) ≤ 4 ≤ 10 − dim(U ).
Case 3: If dim(U ) + dim(V ) > 16, then then the hypotheses of Proposition 3.5 hold. Hence,
[U, V ] = g, which implies the desired conclusion.
Therefore, we may assume that dim(U ) + dim(V ) ≤ 16 and dim(U ) ≥ 7. This leaves us with
the remaining two cases.
Case 4: dim(U ) = 7, dim(V ) ≤ 9. Now we must show that dim([U, V ]) ≥ dim(V ) − 3. By
Lemma 4.1 it suffices to pick x ∈ U with dim(V ∩ c(x)) ≤ 3. In particular, a regular element will
do.
Let us choose x ∈ U such that x 6∈ Orb(eβ ). If x is regular, we are done. If x is not regular, x
is semisimple. Hence, its centraliser c(x) contains a Cartan subalgebra g(h). Let us consider the
intersection V ∩ c(x). If dim(V ∩ c(x)) ≤ 3, we are done again. Assume that dim(V ∩ c(x)) = 4.
Consequently, V ⊇ c(x) and V contains a regular semisimple element y ∈ g(h) ⊆ V . Now if
U ⊇ c(y) = g(h), then we have found a regular element y ∈ U . Otherwise, dim(U ∩ c(y)) ≤ 1,
and so, as y ∈ V , we finish using inequality (2) of Lemma 4.1:
dim([U, V ]) ≥ dim(U ) − dim(U ∩ c(y)) ≥ 7 − 1 = 6 ≥ dim(V ) − 3.
Case 5: dim(U ) = dim(V ) = 8. Let us observe that
dim(U ∩ V ) ≥ dim(U ) + dim(V ) − 10 = 6 = dim(Orb(eβ )).
Since Orb(eβ ) is an irreducible algebraic variety and not an affine space, there exists x ∈ U ∩ V
such that x 6∈ Orb(eβ ). If x is regular, we are done by Lemma 4.1:
dim([U, V ]) ≥ dim(V ) − dim(V ∩ c(x)) ≥ 8 − 2 = 6 = dim(U ) + dim(V ) − 10.
If x is not regular, then x is semisimple and its centraliser c(x) = Kx ⊕ l, a direct sum of Lie
algebras K and l ∼
= sl2 (K). If dim(V ∩ c(x)) ≤ 2, then by Lemma 4.1
dim([U, V ]) ≥ dim(V ) − dim(V ∩ c(x)) ≥ 8 − 2 = 6.
Thus we may assume that dim(V ∩c(x)) ≥ 3. We now repeat the argument from the last paragraph
of § 4.1. This concludes § 4.2.
4.3. G = G2 . In this case dim(c(x)) ≤ 8 for all 0 6= x ∈ g. Moreover, if dim(c(x)) = 8, then
x ∈ Orb(eα ). The centre of c(eα ) is Keα . Finally, the set Orb(eα ) = Orb(eα ) ∪ {0} is a 6dimensional cone, the set Orb(eβ ) = Orb(eβ ) ∪ Orb(eα ) ∪ {0} is an 8-dimensional cone and the
set Orb(esr ) = Orb(esr ) ∪ Orb(eβ ) ∪ Orb(eα ) ∪ {0} is a 10-dimensional cone.
As dim g = 14, our goal now is to show that
dim([U, V ]) ≥ dim(U ) + dim(V ) − 14
In order to do so, as before, we are going to consider several mutually exclusive cases.
Case 1: dim(U ) ≤ 6. We are done by Lemma 4.1 since for any 0 6= x ∈ U ,
dim(V ∩ c(x)) ≤ dim(c(x)) ≤ 8 ≤ 14 − dim(U ).
Case 2: 7 ≤ dim(U ) ≤ 8. In this case we may choose x ∈ U such that x 6∈ Orb(eα ). We are
done by Lemma 4.1 since
dim(V ∩ c(x)) ≤ dim(c(x)) ≤ 6 ≤ 14 − dim(U ).
Case 3: 9 ≤ dim(U ) ≤ 10. Now we may pick x ∈ U such that x 6∈ Orb(eβ ). Again we are
done by Lemma 4.1 since
dim(V ∩ c(x)) ≤ dim(c(x)) ≤ 4 ≤ 14 − dim(U ).
Case 4: If dim(U ) + dim(V ) > 22, then [U, V ] = g by Proposition 3.5. This leaves us with a
single last possibility.
Case 5: dim(U ) = dim(V ) = 11. It remains to show that
dim([U, V ]) ≥ 8 = dim(V ) − 3.
By dimension considerations we can choose x ∈ U such that x 6∈ Orb(esr ). Then dim(c(x)) ≤ 4.
If dim(V ∩ c(x)) ≤ 3, we are done by by Lemma 4.1. Thus we may assume that dim(c(x)) = 4
and c(x) ⊆ V . Since x is not nilpotent, x must be semisimple. Hence, c(x) ⊆ V contains a
Cartan subalgebra g(h) and, therefore, a regular semisimple element y ∈ g(h). We are done by
Lemma 4.1:
dim(U ∩ c(y)) ≤ dim(c(y)) ≤ 2.
We have finished the proof of Theorem 0.4.
Appendix A. Ridgeline numbers and maximal dimensions of centralisers
Column 3 contains the nullity r of an invariant form. It is equal to dim z. Column 4 contains
the dual Coxeter number. Column 5 contains the ridgeline number. Column 6 contains dimension
of the centraliser of the minimal nilpotent. Column 7 contains a minimal non-central semisimple
element in g♭ , using simple coweights yi and the enumeration of roots in Bourbaki [Bo68]. Column
8 contains the dimension of the centraliser of the minimal semisimple element in g♭ .
type of g              p               r   h∨       v(g)                    m − 2(h∨ − 1)   y    dim c(y)
Al , l ≥ 2             (p, l + 1) = 1  0   l + 1    1/2                     l^2             y1   l^2
Al , l ≥ 2             p | (l + 1)     1   l + 1    l/(2l − 1)              l^2             y1   l^2
Bl , l > 3             p ≠ 2           0   2l − 1   (1/4)(1 + 1/(l − 1))    2l^2 − 3l + 4   y1   2l^2 − 3l + 2
Cl , l ≥ 2             p ≠ 2           0   l + 1    1/2                     2l^2 − l        y1   2l^2 − 3l + 2
Dl , l ≥ 4             p ≠ 2           0   2l − 2   (1/4)(1 + 3/(2l − 3))   2l^2 − 5l + 6   y1   2l^2 − 5l + 4
Dl , l = 2l0 ≥ 4       p = 2           2   2l − 2   (1/4)(1 + 2/(l − 2))    2l^2 − 5l + 6   y1   2l^2 − 5l + 4
Dl , l = 2l0 + 1 ≥ 4   p = 2           1   2l − 2   (1/4)(1 + 7/(4l − 7))   2l^2 − 5l + 6   y1   2l^2 − 5l + 4
G2                     p > 3           0   4        1/3                     8               y1   4
G2                     p = 2           0   4        1/3                     8               y1   6
F4                     p ≠ 2           0   9        1/4                     36              y1   22
E6                     p ≠ 3           0   12       3/11                    56              y1   46
E6                     p = 3           1   12       2/7                     56              y1   46
E7                     p ≠ 2           0   18       7/34                    99              y7   79
E7                     p = 2           1   18       7/33                    99              y7   79
E8                     p ≠ 2           0   30       4/29                    190             y8   134
E8                     p = 2           0   30       4/29                    190             y3   136
References
[ANS03] M. Abért, N. Nikolov, B. Szegedy, Congruence subgroup growth of arithmetic groups in positive characteristic, Duke Math. J., 117 (2003), no. 2, 367–383.
[BG01] Y. Barnea, R. Guralnick, Subgroup growth in some pro-p groups, Proc. Amer. Math. Soc. 130 (2001),
no. 3, 653–659.
[Bo68]
N. Bourbaki, Groupes et algébres de Lie, Ch. IV–VI, Hermann, Paris, 1968.
[H95]
J. Humphreys, Conjugacy Classes in Semisimple Algebraic Groups, Math. Surveys and Monographs,
v.43, Amer. Math. Soc., Providence, 1995.
[KW]
V. Kac, B. Weisfeiler, Coadjoint action of a semi-simple algebraic group and the center of the enveloping
algebra in characteristic p, Indag. Math. , 38 (1976), no. 2, 136–151.
[LM91] A. Lubotzky, A. Mann, On groups of polynomial subgroup growth, Invent. Math, 104 (1991), no. 3,
521–533 .
[LSh94] A. Lubotzky, A. Shalev, On some Λ-analytic pro-p groups, Israel J. Math., 85 (1994), no. 1–3, 307–337.
[LSe03] A. Lubotzky, D. Segal, Subgroup growth, Progress in Mathematics, vol. 212, Birkhäuser, Basel, 2003.
[P95]
A. Premet, An analogue of the Jacobson-Morozov theorem for Lie algebras of reductive groups of good
characteristic, Trans. Amer. Math. Soc. 347 (1995), no. 8, 2961–2988.
[Sh92]
A. Shalev, Growth functions, p-adic analytic groups, and groups of finite coclass, J. London Math. Soc.,
46 (1992), no. 1, 111–122.
[V05]
University of Georgia VIGRE Algebra Group, Varieties of nilpotent elements for simple Lie algebras II:
Bad primes, J. Algebra, 292 (2005), no. 1, 65–99.
[W99]
W. Wang, Dimension of a minimal nilpotent orbit, Proc. Amer. Math. Soc., 127 (1999), no. 3, 935–936.
E-mail address: [email protected], [email protected], [email protected]
| 4 |
BIASED PREDECESSOR SEARCH*
arXiv:1707.01182v1 [] 4 Jul 2017
Prosenjit Bose,† Rolf Fagerberg,‡ John Howat,† and Pat Morin†
Abstract. We consider the problem of performing predecessor searches in a bounded
universe while achieving query times that depend on the distribution of queries. We obtain
several data structures with various properties: in particular, we give data structures that
achieve expected query times logarithmic in the entropy of the distribution of queries but
with space bounded in terms of universe size, as well as data structures that use only
linear space but with query times that are higher (but still sublinear) functions of the
entropy. For these structures, the distribution is assumed to be known. We also consider
individual query times on universe elements with general weights, as well as the case when
the distribution is not known in advance.
Keywords: data structures, predecessor search, biased search trees, entropy
1 Introduction
The notion of biased searching has received significant attention in the literature on ordered
dictionaries. In ordered dictionaries, the central operation is predecessor queries—that is,
queries for the largest element stored in the data structure that is smaller than a given
query value. The setting is biased when each element i of the data structure has some
probability pi of being queried, and we wish for queries to take a time related to the inverse
of the probability of that query. For example, a biased search tree [5] can answer a query
for item i in time O(log 1/pi ).1 For biased predecessor queries, also the gaps between
consecutive elements of the data structure are assigned probabilities of being searched for
[11, 5, p. 564]. Recall that Σ_i pi log(1/pi ) is the entropy of the distribution of queries. In
terms of this quantity, we note that the expected query time in a biased search tree is linear
in the entropy of the query distribution, and that this is optimal for binary search trees [5,
Thm. A].2
Binary search trees work in the comparison-based setting where keys only can be
accessed by comparisons. Predecessor searches have also been researched extensively in
the context of bounded universes where keys are integers of bounded range whose bits may
* Partially supported by the Danish Council for Independent Research, Natural Sciences, grant DFF-132300247.
† Carleton University, {jit,jhowat,morin}@scs.carleton.ca
‡ University of Southern Denmark, [email protected]
1 In this paper, we define log x = log2 (x + 2).
2 As will be apparent from our results, in bounded universes this lower bound does not hold, and one can
achieve query times below it.
be accessed individually. More precisely, let U = {0, 1, . . . , U − 1} be the universe of possible
keys, and consider a static subset S = {s1 , s2 , . . . , sn } ⊆ U , where s1 < s2 < · · · < sn . Predecessor
searches in this context admit data structures with query times that are not only a function
of n, but also of U . For example, van Emde Boas trees [16] can answer predecessor queries
in time O(log log U).
A natural question—but one which has been basically unexplored—is how to combine these two areas of study to consider biased searches in bounded universes. In this
setting, we have a probability distribution D = {p0 , p1 , . . . , pU −1 } over the universe U such
that the probability of receiving i ∈ U as a query is pi and Σ_{i=0}^{U−1} pi = 1. We wish to preprocess U and S, given D, such that the time for a query is related to D.
The motivation for such a goal is the following. Let H = Σ_{i=0}^{U−1} pi log(1/pi ) be the
entropy of the distribution D. Recall that the entropy of a U -element distribution is between 0 and log U . Therefore, if an expected query time of O(logH) can be achieved, this
for any distribution will be at most O(log logU ), which matches the performance of van
Emde Boas trees [16]. However, for lower-entropy distributions, this will be faster—as a
concrete example, an exponential distribution (say, pi = Θ(1/2i )) has H = O(1) and will
yield support of queries in expected constant time. In other words, such a structure will
allow bias in the query sequence to be exploited for ordered dictionaries over bounded
universes. Hence, perhaps the most natural way to frame the line of research in this paper is by analogy: the results here are to biased search trees as van Emde Boas trees (and
similar structures) are to binary search trees.
Our results. The results presented here can be divided into four categories. In the first
we give two variants of a data structure that obtains O(log H) query time but space that
is bounded in terms of U . In the second we give a solution that obtains space that is
linear in n but has query time O(√H). In bounded universe problems, n is always smaller
than U (often substantially so), so these two categories can be seen as representing a timespace trade-off. In the third we consider individual query times on universe elements with
general weights. In the fourth we consider query times related to the working-set number
(which is defined as the number of distinct predecessors reported since the last time a
particular predecessor was reported), so that the query distribution need not be known
in advance. Our methods use hashing and existing (unbiased) predecessor structures for
bounded universes [3, 17] as building blocks.
Organization. The rest of the paper is organized in the following way. We first complete
the current section by reviewing related work. In Section 2 we show how to obtain good
query times at the expense of large space. In Section 3 we show how to obtain good space
at the expense of larger query times. We conclude in Section 4 with a summary of the
results obtained and possible directions for future research.
1.1 Related Work
It is a classical result that predecessor searches in bounded universes can be performed in
time O(log log U). This was first achieved by van Emde Boas trees [16], and later by y-fast
tries [17], and Mehlhorn and Näher [13]. Of these, van Emde Boas trees use O(U ) space,
while the other two structures use O(n) space.
These bounds can be improved to
O( min{ log log U / log log log U , √( log n / log log n ) } )
using n^{O(1)} space [3]. By paying an additional O(log log n) factor in the first half of this
bound, the space can be improved to O(n) [3]. Pătraşcu and Thorup later effectively settled
this line of research with a set of time-space trade-offs [14].
Departing the bounded universe model for a moment and considering only biased
search, perhaps the earliest such data structure is the optimum binary search tree [11],
which is constructed to be the best possible static binary search tree for a given distribution. Optimum binary search trees take a large amount of time to construct; in linear time,
however, it is possible to construct a binary search tree that answers queries in time that is
within a constant factor of optimal [12]. Even if the distribution is not known in advance,
it is still possible to achieve the latter result (e.g., [2, 15]).
Performing biased searches in a bounded universe is essentially unexplored, except for the case where the elements of S are drawn from D rather than the queries [4].
In that result, D need not be known, but must satisfy certain smoothness constraints,
and
time with high probability and
is given that supports O(1) query
p a data structure
1+ǫ
bits of space, which can be reO log n/ log log n worst-case query time, using O n
duced to O(n) space at the cost of a O(log log n) query time (with high probability). It is
worth noting that this data structure is also dynamic.
A related notion is to try to support query times related to the distribution in a
p
less direct way. For example, finger searching can be supported in time O log d/ log log d
where d is the number of keys stored between a finger pointing at a stored key and the
query key [1]. There is also a data structure that supports such searches in expected time
O(log log d) for a wide class of input distributions [10]. Finally, a query time of O(log log ∆),
where ∆ is the difference between the element queried and the element returned, can also
be obtained [7].
Other problems in bounded universes can also be solved in similar ways. A priority
queue that supports insertion and deletion in time O(log log d ′ ), where d ′ is the difference
between the successor and predecessor (in terms of priority) of the query, is known [9],
as well as a data structure for the temporal precedence problem, wherein the older of two
query elements must be determined, that supports query time O(log log δ), where δ is the
3
temporal distance between the given elements [8].
2 Supporting O(log H) Query Time
In this section, we describe how to achieve query time O(log H) using space that is bounded
in terms of U .
2.1
Using O(n + U ǫ ) Space
Let ǫ > 0. We place all elements i ∈ U with probability pi ≥ (1/U )ǫ into a hash table T , and
with each element we store a pointer to its predecessor in S (which never changes since
S is static). All elements of S are also placed into a y-fast trie over the universe U . Since
there are at most U ǫ elements with probability greater than (1/U )ǫ , it is clear that the hash
table requires O(U ǫ ) space. Since the y-fast trie requires O(n) space, we have that the total
space used by this structure is O(n + U ǫ ). To execute a search, we check the hash table
first. If the query (and thus the answer) is not stored there, then a search is performed in
the y-fast trie to answer the query.
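A minimal sketch of this two-level query (ours, not the paper's; the y-fast trie is replaced by binary search over the sorted set S, and all names are hypothetical):

```python
# Sketch of the Section 2.1 query: hash table for "heavy" universe elements
# (those with p_i >= (1/U)^eps), fallback predecessor structure for the rest.
import bisect

class BiasedPredecessor:
    def __init__(self, S, heavy):
        self.S = sorted(S)
        self.heavy = dict(heavy)          # maps heavy element i -> its predecessor in S

    def predecessor(self, q):
        if q in self.heavy:               # O(1) lookup for heavy queries
            return self.heavy[q]
        k = bisect.bisect_left(self.S, q) - 1   # stand-in for the y-fast trie search
        return self.S[k] if k >= 0 else None

# Tiny example with S = {2, 5, 11} and two heavy universe elements.
ds = BiasedPredecessor([2, 5, 11], heavy={6: 5, 12: 11})
assert ds.predecessor(6) == 5 and ds.predecessor(12) == 11 and ds.predecessor(10) == 5
```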
The expected query time is thus
Σ_{i∈T} pi O(1) + Σ_{i∈U\T} pi O(log log U)
= O(1) + Σ_{i∈U\T} pi O(log log U)
= O(1) + Σ_{i∈U\T} pi O(log log((U^ǫ)^{1/ǫ}))
= O(1) + Σ_{i∈U\T} pi O(log((1/ǫ) log U^ǫ))
= O(1) + Σ_{i∈U\T} pi O(log(1/ǫ)) + Σ_{i∈U\T} pi O(log log U^ǫ)
= O(1) + O(log(1/ǫ)) + Σ_{i∈U\T} pi O(log log(1/(1/U )^ǫ))
≤ O(1) + O(log(1/ǫ)) + Σ_{i∈U\T} pi O(log log(1/pi )).
The last step here follows from the fact that, if i ∈ U \T , then pi ≤ (1/U )ǫ , and so 1/(1/U )ǫ ≤
1/pi . Recall Jensen’s inequality, which states that for concave functions f , E[f (X)] ≤
f (E[X]). Since the logarithm is a concave function, we therefore have
Σ_{i∈U\T} pi O(log log(1/pi )) ≤ log( Σ_{i∈U\T} pi O(log(1/pi )) ) ≤ O(log H);
therefore, the expected query time is O(log(1/ǫ)) + O(log H) = O(log(H/ǫ)).
4
Theorem 1. Given a probability distribution with entropy H over the possible queries in a
universe of size U , it is possible to construct a data structure that performs predecessor searches
in expected time O(log(H/ǫ)) using O(n + U ǫ ) space for any ǫ > 0.
Theorem 1 is a first step towards our goal. For ǫ = 1/2, for example, we achieve
O(log H) query time, as desired, and our space usage is O(n) + o(U ). This dependency on
U , while sublinear, is still undesirable. In the next section, we will see how to reduce this
further.
2.2 Using O(n + 2^{log^ǫ U}) Space
To improve the space used by the data structure described in Theorem 1, one observation
is that we can more carefully select the threshold for “large probabilities” that we place
in the hash table. Instead of (1/U )^ǫ , we can use (1/2)^{log^ǫ U} for some 0 < ǫ < 1. The space
used by the hash table is thus O(2^{log^ǫ U}), which is o(U^ǫ ) for any ǫ > 0. The analysis of the
expected query times carries through as follows
Σ_{i∈T} pi O(1) + Σ_{i∈U\T} pi O(log log U) = O(1) + Σ_{i∈U\T} pi O(log log U)
= O(1) + Σ_{i∈U\T} pi ǫ(1/ǫ)O(log log U)
= O(1) + Σ_{i∈U\T} pi (1/ǫ)O(log(log^ǫ U))
= O(1) + Σ_{i∈U\T} pi (1/ǫ)O(log log 2^{log^ǫ U})
≤ O(1) + Σ_{i∈U\T} pi (1/ǫ)O(log log(1/pi ))
≤ O(1) + (1/ǫ) Σ_{i∈U\T} pi O(log log(1/pi ))
≤ O((1/ǫ) log H)
Theorem 2. Given a probability distribution with entropy H over the possible queries in a
universe of size U , it is possible to construct a data structure that performs predecessor searches
in expected time O((1/ǫ) log H) using O(n + 2^{log^ǫ U}) space for any 0 < ǫ < 1.
2.3
Individual Query Times for Elements
Observe that part of the proof of Theorem 2 is to show that an individual query for an
element i ∈ U \ T can be executed in time O((1/ǫ) log log1/pi ) time. Since the query time of
elements in T is O(1), the same holds for these. More generally, the structure can support
arbitrarily weighted elements in U . Suppose each element i ∈ U has a real-valued weight
P −1
wi > 0 and let W = U
i=0 wi . By assigning each element probability pi = wi /W , we achieve
5
an individual query time of O((1/ǫ) loglog(W /wi )), which is analogous to the O(logW /wi )
query time of biased search trees [5]. Since the structure is static, we can use perfect
hashing for the hash tables involved (T as well as those in the y-fast trie), hence the search
time is worst-case.
Theorem 3. Given a positive real weight wi for each element i in a universe of size U , such that
the sum of all weights is W , it is possible to construct a data structure that
a predecessor
performs
ǫ
log
U
space for any
search for item i in worst-case time O((1/ǫ) loglog(W /wi )) using O n + 2
0 < ǫ < 1.
3 Supporting O(n) Space
In this section, we describe how to achieve space O(n) by accepting a larger query time
O(√H). We begin with a brief note concerning input entropy vs. output entropy.
Input vs. Output Distribution. Until now, we have discussed the input distribution, i.e.,
the probability pi that i ∈ U is the query. We could also discuss the output distribution,
i.e., the probability pi∗ that i ∈ U is the answer to the query. This distribution is defined by
Psk+1 −1
pi∗ = 0 if i < S and pi∗ = j=s
pj if i ∈ S = {s1 , s2 , . . . , sn } with i = sk .
k
∗
Suppose we can answer a predecessor query for i in time O log log1/ppred(i)
where
pred(i) is the predecessor of i. Then the expected query time is
X
∗
pi O log log1/ppred(i)
i∈U
P
∗
Since pi ≤ ppred(i)
for all i, this is at most i∈U pi log log 1/pi , i.e., the entropy of the input
distribution. It therefore suffices to consider the output distribution.
Our data structure will use a series of data structures for predecessor search [3] that
increase doubly-exponentially in size in much the same way as the working-set structure
[2]. Recall from Section 1.1 that there exists a linear
that is able to
q
space data structure
execute predecessor search queries in time O min
log log n·log log U
log log log U ,
log n
log log n
[3]. We will
maintain several such structures D1 , D2 , . . ., where each Dj is over the universe U and stores
j
22 elements of S. In more detail, sorting the elements of S by probability into decreasing
1
2
order, we store the first 22 elements in D1 , the next 22 elements in D2 , etc. In general, Dj
j
contains the 22 elements of highest probability that are not contained in any Dk for k < j.
Note that here, “probability” refers to the output probability.
Searches are performed by doing a predecessor search in each of D1 , D2 , . . . until the
answer is found. Along with each element we store a pointer to its successor in S. When
we receive the predecessor of the query in Dj , we check its successor to see if that successor
is larger than the query. If so, the predecessor in Dj is also the real predecessor in S (i.e.,
the answer to the query), and we stop the process. Otherwise, the real predecessor in S is
6
somewhere between the predecessor in Dj and the query, and can be found by continuing
to Dj+1 , Dj+2 , . . .. This technique is known from [6].
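A sketch of this level-by-level search (ours, purely illustrative; each Dj is again replaced by binary search over a sorted list, and the successor pointers are implicit in the sorted order of S):

```python
# Sketch of the Section 3 search: D_1, D_2, ... hold 2^(2^j) keys each, ordered by
# decreasing (output) probability; stop at the first level whose local predecessor
# is certified by its successor in S.
import bisect

def build_levels(keys_by_probability):
    levels, j = [], 1
    while keys_by_probability:
        take = 2 ** (2 ** j)
        levels.append(sorted(keys_by_probability[:take]))
        keys_by_probability = keys_by_probability[take:]
        j += 1
    return levels

def search(levels, S_sorted, q):
    for level in levels:
        k = bisect.bisect_left(level, q) - 1
        if k < 0:
            continue
        cand = level[k]
        i = S_sorted.index(cand)
        succ = S_sorted[i + 1] if i + 1 < len(S_sorted) else None
        if succ is None or succ >= q:      # cand is the true predecessor in S
            return cand
    return None

S_sorted = [2, 5, 11, 20]
levels = build_levels([11, 2, 20, 5])      # keys listed by decreasing probability
assert search(levels, S_sorted, 12) == 11
```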
We now consider the search time in this data structure. Suppose the process stops
by finding the correct predecessor of the query i in Dj where j > 1 (otherwise, the prej−1
decessor was found in D1 in O(1) time). Dj−1 contains 22 elements all of which have
∗
(output) probability at least ppred(i)
. Since the sum of the probabilities of these elements
j−1
∗
∗
is at most one, it follows that ppred(i) ≤ 1/22 . Equivalently, j is O loglog 1/ppred(i)
. The
total time spent searching is bounded by
s
s
j
j r
q
k
2j
X
X
log 22
2k
= O log 1/p ∗
(1)
=
O
=
j
k
pred(i)
k
log log22
k=1
k=1
The second equality above follows because the terms of the summation are exponentially
∗
increasing and hence the last term dominates the entire sum. Therefore, since pi ≤ ppred(i)
for all i, the expected query time is
X q
X p
√
∗
pi log1/ppred(i)
≤
pi log 1/pi ≤ H
i∈U
i∈U
The final step above follows from Jensen’s inequality. To determine the space used by this
data structure, observe that every element stored in S is stored in exactly one Dj . Since
each Dj uses space linear in the number of elements stored in it, the total space usage is
O(n).
Theorem 4. Given a probability distribution with entropy H over the possible queries in a
universe of size U , it is possible to construct a data structure that performs predecessor searches
in expected time O(√H) using O(n) space.
Observe that we need not know the exact distribution D to achieve the result of
Theorem 4; it suffices to know the sorted order of the keys in terms of non-increasing
probabilities.
Also observe that like in Section 2.3, the structure here can support arbitrarily
weighted elements. Suppose each element i ∈ U has a real-valued weight wi > 0 and let
P −1
W = U
i=0 wi . By assigning each element probability pi = wi /W , we see that (1) and the
∗
fact that pi ≤ ppred(i)
for all i give the following.
Theorem 5. Given a positive real weight wi for each element i in a bounded universe, such
that the sum of all weights is W , it is possible to construct a data structure that performs a
predecessor search for item i in worst-case time O(√(log(W /wi ))) using O(n) space.
Furthermore, since the predecessor search structure used for the Dj ’s above is in
fact dynamic [3], we can even obtain a bound similar to the working-set property: a predecessor
search for item i can be answered in time O(√(log w(i))) where w(i) is the number
7
of distinct predecessors reported since the last time the predecessor of i was reported.
This can be accomplished using known techniques [2], similar to the data structure of
Theorem 4, except that instead of ordering the elements of S by their probabilities, we
order them in increasing order of their working-set numbers w(i). Whenever an element
from Dj is reported, we move the element to D1 and for k = 1, 2, . . . , j − 1 shift one element
from Dk to Dk+1 in order to fill the space left in Dj while keeping the ordering based on
j−1
w(i), just as in the working-set structure [2]. All 22 elements in Dj−1 have been reported
more recently than the current element reported from Dj , so an analysis similar to (1)
p
shows that queries are answered in O log w(i) time. The structure uses O(n) space.
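Sketched in Python, the promote-and-shift step just described could look as follows (illustrative only; each level is modelled as a plain list kept in order of recency, with levels[0] playing the role of D_1, whereas the real levels are dynamic predecessor structures).

    def promote(levels, j, element):
        """Report `element` from level j (0-indexed): move it to D_1 and shift."""
        levels[j].remove(element)
        carry = element
        for k in range(j):
            levels[k].insert(0, carry)   # carried element is now the most recent here
            carry = levels[k].pop()      # demote the least recently reported element
        levels[j].insert(0, carry)       # the demoted element fills the hole left in D_j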
Theorem 6. Let w(i) denote the number of distinct predecessors reported since the last time the predecessor of i was reported, or n if the predecessor of i has not yet been reported. It is possible to construct a data structure that performs a predecessor search for item i in worst-case time O(√(log w(i))) using O(n) space.
4 Conclusion
In this paper, we have introduced the idea of biased predecessor search in bounded universes. Two different categories of data structures were considered: one with query times
that are logarithmic in the entropy of the query distribution (with space that is a function
of U ), and one with linear space (with query times larger than logarithmic in the entropy).
We also considered the cases of individual query times on universe elements with general
weights and of query times related to the working-set number.
Our results leave open several possible directions for future research:
1. Is it possible to achieve O(log H) query time and O(n) space?
2. The reason for desiring a O(log H) query time comes from the fact that H ≤ log U
and the fact that the usual data structures for predecessor searching have query
time O(log log U ). Of course, this is not optimal: other results have since improved
this upper bound [3, 14]. Is it possible to achieve a query time of, for example,
O(log H/ log log U)?
3. What lower bounds can be stated in terms of either the input or output entropies?
Clearly O(U ) space suffices for O(1) query time, and so such lower bounds must
place restrictions on space usage.
References
[1] A. Andersson and M. Thorup. Dynamic ordered sets with exponential search trees.
Journal of the ACM, 54(3):Article 13, 2007.
[2] Mihai Bădoiu, Richard Cole, Erik D. Demaine, and John Iacono. A unified access
bound on comparison-based dynamic dictionaries. Theoretical Computer Science,
382(2):86–96, 2007.
[3] Paul Beame and Faith E. Fich. Optimal bounds for the predecessor problem and
related problems. Journal of Computer and System Sciences, 65(1):38–72, 2002.
[4] D. Belazzougui, A.C. Kaporis, and P.G. Spirakis. Random input helps searching predecessors. arXiv:1104.4353, 2011.
[5] Samuel W. Bent, Daniel D. Sleator, and Robert E. Tarjan. Biased search trees. SIAM
Journal on Computing, 14(3):545–568, 1985.
[6] Prosenjit Bose, John Howat, and Pat Morin. A distribution-sensitive dictionary with
low space overhead. In Proceedings of the 11th International Symposium on Algorithms
and Data Structures (WADS 2009), LNCS 5664, pages 110–118, 2009.
[7] Prosenjit Bose, Karim Douı̈eb, Vida Dujmović, John Howat, and Pat Morin. Fast local searches and updates in bounded universes. In Proceedings of the 22nd Canadian
Conference on Computational Geometry (CCCG 2010), pages 261–264, 2010.
[8] Gerth Stølting Brodal, Christos Makris, Spyros Sioutas, Athanasios Tsakalidis, and
Kostas Tsichlas. Optimal solutions for the temporal precedence problem. Algorithmica, 33(4):494–510, 2002.
[9] Donald B. Johnson. A priority queue in which initialization and queue operations
take O(log log D) time. Theory of Computing Systems, 15(1):295–309, 1981.
[10] Alexis Kaporis, Christos Makris, Spyros Sioutas, Athanasios Tsakalidis, Kostas
Tsichlas, and Christos Zaroliagis. Improved bounds for finger search on a RAM.
In ESA ’03: Proceedings of the 11th Annual European Symposium on Algorithms, LNCS
2832, pages 325–336, 2003.
[11] D.E. Knuth. Optimum binary search trees. Acta Informatica, 1(1):14–25, 1971.
[12] Kurt Mehlhorn. Nearly optimal binary search trees. Acta Informatica, 5(4):287–295,
1975.
[13] Kurt Mehlhorn and Stefan Näher. Bounded ordered dictionaries in O(log log N ) time
and O(n) space. Information Processing Letters, 35(4):183–189, 1990.
[14] Mihai Pătraşcu and Mikkel Thorup. Time-space trade-offs for predecessor search. In
STOC ’06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing,
pages 232–240, 2006.
[15] Daniel Dominic Sleator and Robert Endre Tarjan. Self-adjusting binary search trees.
Journal of the ACM, 32(3):652–686, 1985.
[16] P. van Emde Boas. Preserving order in a forest in less than logarithmic time and
linear space. Information Processing Letters, 6(3):80–82, 1977.
[17] Dan E. Willard. Log-logarithmic worst-case range queries are possible in space
Θ(log log N ). Information Processing Letters, 17(2):81–84, 1983.
A note on evolutionary stochastic portfolio optimization and
probabilistic constraints
Ronald Hochreiter
arXiv:1001.5421v1 [q-fin.PM] 29 Jan 2010
February 1, 2010
Abstract
In this note, we extend an evolutionary stochastic portfolio optimization framework to
include probabilistic constraints. Both the stochastic programming-based modeling environment as well as the evolutionary optimization environment are ideally suited for an
integration of various types of probabilistic constraints. We show how to integrate these
constraints. Numerical results using recent financial data substantiate the
applicability of the presented approach.
1 Introduction
Stochastic programming is a powerful method to solve optimization problems under uncertainty,
see Ruszczyński and Shapiro [2003] for theoretical properties and Wallace and Ziemba [2005]
for an overview of possible applications. One specific application area is portfolio optimization,
which was pioneered by Markowitz [1952]. The advantage of using the stochastic programming approach is that the optimization can be done without using a covariance matrix of the assets, which is on the one hand hard to estimate and on the other hand does not capture the uncertainty in sufficient detail, especially if there are many assets.
Instead of using the asset means and the covariance matrix, a stochastic programming approach uses a set of scenarios, each weighted by a certain probability. In the specific portfolio
optimization context one scenario is a set of one possible asset return per asset under consideration - see below for more details or e.g. see Chapter 16 of Cornuejols and Tütüncü [2007].
This note is organized as follows: Section 2 summarizes the evolutionary approach which was used to solve the stochastic portfolio optimization problems. Section 3 adds probabilistic constraints to the standard problem and shows how to integrate this type of constraint. Section 4 summarizes numerical results using recent financial data, while Section 5 concludes the note.
2 Evolutionary Stochastic Portfolio Optimization
We follow the approach taken by Hochreiter [2007] and Hochreiter [2008], which builds a stochastic programming-based environment on top of the general evolutionary portfolio optimization
findings reported by Streichert et al. [2003], Streichert et al. [2004a], and Streichert et al. [2004b].
2.1 Stochastic portfolio optimization
Let us define the stochastic portfolio optimization problem as follows. We consider a set of assets (or asset categories) A with cardinality a. Furthermore, we base our decision on a scenario set S, which contains a finite number s of scenarios, each containing one uncertain return per asset. Each scenario is equipped with a non-negative probability p_s, such that ∑_{s∈S} p_s = 1. The scenario set can be composed of historical data or might be the output of a scenario simulator.
For every portfolio x we can easily calculate the profit and loss distribution by multiplying
the scenario matrix with the portfolio weighted by the respective probability. Let us denote the
profit function of a certain portfolio x by π(x) and the loss function by `(x) = −π(x).
Every portfolio optimization procedure is a multi-objective optimization. In the traditional case it is a trade-off between return and risk of the profit and loss function. We do not want to employ a multi-objective approach; instead we follow the classical Markowitz approach and use the variance of the loss distribution for the risk dimension, which we want to minimize, and the expectation of the profit function for the reward dimension, on which we want to set a lower limit. Hence, the main optimization problem is shown in Eq. (1) below.
minimize     Variance(ℓ_x)
subject to   E(π_x) > µ        (1)
Furthermore we consider the classical constraints, i.e. budget normalization, as well as setting
an upper and lower limit on each asset position, as shown in Eq. (2). These are naturally fulfilled
by the evolutionary approach shown below, especially since we restrict short-selling in our case.
subject to   ∑_{a∈A} x_a = 1
             l ≤ x_a ≤ u    ∀ a ∈ A        (2)
2.2 Evolutionary stochastic portfolio optimization
The evolutionary algorithm chosen is based on the commonly agreed standard as surveyed by
Blum and Roli [2003] and is based on the literature cited at the beginning of this Section.
We use the following genetic encoding of a portfolio. Each gene consists of two parts: one that determines the amount of budget to be distributed to each selected asset and one that determines in which assets to invest. The first part g1 consists of a predefined number b of real values between 0 and 1, and the second part g2 is encoded as a bit-string whose length equals the number of assets.
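As an illustration, a chromosome could be decoded into portfolio weights roughly as in the following Python sketch (this is not the author's MATLAB code; in particular, the way the buckets of g1 are mapped to the assets selected by g2 is an assumption made here for concreteness).

    import numpy as np

    def decode_portfolio(g1, g2):
        """Decode a chromosome (g1: b reals in [0, 1], g2: 0/1 asset picks) into weights.
        Assumed decoding: every bucket of g1 is mapped to one of the selected assets,
        and an asset's weight is the share of buckets that land on it."""
        picks = np.flatnonzero(np.asarray(g2, dtype=bool))   # indices of selected assets
        if picks.size == 0:
            raise ValueError("chromosome selects no assets")
        weights = np.zeros(len(g2))
        idx = np.minimum((np.asarray(g1) * picks.size).astype(int), picks.size - 1)
        for target in idx:
            weights[picks[target]] += 1.0
        return weights / weights.sum()                       # budget normalization, Eq. (2)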
The following evolutionary operators have been implemented and used:
• Elitist selection. A certain number o1 of the best chromosomes will be used within the
next population.
• Crossover. A number o2 of crossovers will be added, both 1-point crossovers (g1 and g2 )
and intermediate crossovers (only g1 ).
• Mutation. o3 mutated chromosomes will be added.
• Random addition. Furthermore o4 randomly sampled chromosomes are added, which are
also used for creating the initial population.
The specific number of operators o = (o1 , o2 , o3 , o4 ) has to be determined for each concrete
number of assets a as well as the parameter b.
3 Probabilistic Constraints
The main advantage of the stochastic programming approach is that the complete distribution is naturally available and can be used for optimization purposes by integrating it directly into the constraint handling mechanism. In the simplest case, we want to require that the probability that the loss exceeds a certain value δ is lower than a specified probability ε. Given our profit function π_x, the constraint we want to add to our optimization model is given in Eq. (3).

subject to   P(π_x ≤ δ) ≤ ε.        (3)
We will not treat the probabilistic constraint as a hard constraint, but use the size of the violation to add a penalty to the objective function. Let the fitness value which we aim to minimize be f. We calculate the penalty p by

p = f × (P(π_x ≤ δ) − ε) × γ,

where γ is a factor to control the penalization level. The fitness value used for evolutionary optimization purposes is thus given by f' = f + p. Such a constraint can be implemented and handled conveniently.
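A short Python sketch of this penalized fitness evaluation is given below (illustrative, not the author's MATLAB implementation; it assumes the scenario returns are stored in an s × a matrix R with scenario probabilities p, it only penalizes actual violations by applying max(0, ·) to the probability excess, and it assumes the return target E(π_x) > µ of Eq. (1) is handled separately).

    import numpy as np

    def penalized_fitness(x, R, p, delta, eps, gamma):
        """Fitness f = Variance of the loss distribution plus the penalty p described above."""
        profit = R @ x                            # pi(x) in every scenario
        mean = np.dot(p, profit)                  # E(pi_x)
        f = np.dot(p, (profit - mean) ** 2)       # Variance(l_x) = Variance(pi_x)
        prob_below = np.dot(p, profit <= delta)   # P(pi_x <= delta)
        penalty = f * max(0.0, prob_below - eps) * gamma
        return f + penalty                        # f' = f + p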
4 Numerical Results
The program code was implemented using MatLab 2008b without using any further toolboxes.
We use data from the 30 stocks of the Dow Jones Industrial Average at the beginning of
2010, i.e. the ticker symbols AA, AXP, BA, BAC, CAT, CSCO, CVX, DD, DIS, GE, HD, HPQ,
IBM, INTC, JNJ, JPM, KFT, KO, MCD, MMM, MRK, MSFT, PFE, PG, T, TRV, UTX, VZ,
WMT, XOM. Daily data from each trading day in 2009 has been used. Weekly returns have
been calculated.
We are using b = 100 buckets which are distributed to the respective asset picks, such that
each chromosome has a length of b + a = 130. The initial population consists of 1000 random
chromosomes. The operator structure defined in Section 2.2 is o = (100, 420, 210, 100), such that
each follow-up population has a size of 830. This number is due to the different combinations
between the crossovers and mutations of g1 and g2 .
First, we optimize without using the probabilistic constraint, i.e. the main optimization problem given in Eq. (1) using µ = 0.001. Then we add the probabilistic constraint shown in Eq. (3) with δ = and ε = 0.1. These values have been chosen after analysing the resulting loss distribution.
Fig. 1 shows the optimal portfolio without applying the probabilistic constraint P1 (left) and
the optimal one using the probabilistic constraint P2 (right). The allocation changed considerably. The diversification has not changed, i.e. three assets (KFT, VZ, XOM) are dropped from
the portfolio, and three others are picked (HPQ, MSFT, MMM). The resulting loss distributions
are shown in Fig. 2, where the impact of the probabilistic constraint is immediately visible.
Furthermore, the statistical properties of the portfolios are shown in Table 1. In this table, the
naive 1/N portfolio P3 has been added for comparison purposes. While P3 has the highest return, it is also the most risky one in terms of both risk parameters - standard deviation and the
probability to fall below the specified threshold. Another interesting fact is that the probabilistic
constrained portfolio yields a higher expected return than the standard optimal portfolio. This
is partly due to the fact that the lower level µ has been set to a level below the expected return
of the standard portfolio but is definitely another indicator that the plain classical Markowitz
approach should not be used for contemporary portfolio optimization purposes.
Figure 1: P1 without P constraint (left) and P2 with P constraint (right).
Figure 2: Loss distribution of P1 (left) and P2 (right).
            P1 (no P)    P2        P3 (1/N)
Mean        0.0024       0.0051    0.0062
Std.Dev.    0.02         0.0225    0.0398
Prob.       0.1774       0.1089    0.2702

Table 1: Statistical properties of various portfolios.
5 Conclusion
In this note, an extension of an Evolutionary Stochastic Portfolio Optimization to include probabilistic constraints has been presented. It can be shown that the integration of such constraints
is straightforward as the underlying probability space is the main object considered for the optimization evaluation. Numerical results visualized the impact of using such constraints in the
area of financial engineering. Future extensions of this work include the integration of risk measures into the probabilistic constraint, e.g. constraining the maximum draw-down of the optimal
portfolio.
References
C. Blum and A. Roli. Metaheuristics in combinatorial optimization: Overview and conceptual
comparison. ACM Computing Surveys, 35(3):268–308, 2003.
Gerard Cornuejols and Reha Tütüncü. Optimization methods in finance. Mathematics, Finance
and Risk. Cambridge University Press, Cambridge, 2007.
R. Hochreiter. An evolutionary computation approach to scenario-based risk-return portfolio
optimization for general risk measures. In EvoWorkshops 2007, volume 4448 of Lecture Notes
in Computer Science, pages 199–207. Springer, 2007.
R. Hochreiter. Evolutionary stochastic portfolio optimization. In A. Brabazon and M. O’Neill,
editors, Natural Computing in Computational Finance, volume 100 of Studies in Computational
Intelligence, pages 67–87. Springer, 2008.
H. M. Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952.
A. Ruszczyński and A. Shapiro, editors. Stochastic programming, volume 10 of Handbooks in
Operations Research and Management Science. Elsevier Science B.V., Amsterdam, 2003.
F. Streichert, H. Ulmer, and A. Zell. Evolutionary algorithms and the cardinality constrained
portfolio selection problem. In Selected Papers of the International Conference on Operations
Research (OR 2003), pages 253–260. Springer, 2003.
F. Streichert, H. Ulmer, and A. Zell. Comparing discrete and continuous genotypes on the
constrained portfolio selection problem. In Kalyanmoy D. et al., editor, Genetic and Evolutionary Computation (GECCO 2004) - Proceedings, Part II, volume 3103 of Lecture Notes in
Computer Science, pages 1239–1250. Springer, 2004a.
F. Streichert, H. Ulmer, and A. Zell. Evaluating a hybrid encoding and three crossover operators on the constrained portfolio selection problem. In CEC2004. Congress on Evolutionary
Computation, 2004, volume 1, pages 932–939. IEEE Press, 2004b.
S. W. Wallace and W. T. Ziemba, editors. Applications of stochastic programming, volume 5 of
MPS/SIAM Series on Optimization. Society for Industrial and Applied Mathematics (SIAM),
2005.
Convolutional Recurrent Neural Networks for
Dynamic MR Image Reconstruction
arXiv:1712.01751v2 [] 26 Jan 2018
Chen Qin*† , Jo Schlemper*, Jose Caballero, Anthony N. Price, Joseph V. Hajnal and Daniel
Rueckert Fellow, IEEE
In order to accelerate MRI acquisition, most approaches
consider undersampling the data in k-space (frequency domain). Due to the violation of the Nyquist sampling theorem,
undersampling introduces aliasing artefacts in the image
domain. Images can be subsequently reconstructed by solving
an optimisation problem that regularises the solution with
assumptions on the underlying data, such as smoothness,
sparsity or, for the case of dynamic imaging, spatio-temporal
redundancy. Past literature has shown that exploiting spatiotemporal redundancy can greatly improve image reconstruction
quality compared to compressed sensing (CS) based single
frame reconstruction methods [1], [2]. However, the limitations
of these optimisation based approaches are the following: firstly,
it is non-trivial to propose an optimisation function without
introducing significant bias on the considered data. In addition,
the manual adjustments of hyperparameters are nontrivial.
Secondly, these optimisation problems often involve highly
nonlinear, non-convex terms. As a consequence, the majority
of approaches resort on iterative algorithms to reconstruct the
images, in which often neither attaining the global minimum
nor convergence to a solution is guaranteed. Furthermore,
the reconstruction speeds of these methods are often slow.
Proposing a robust iterative algorithm is still an active area of
Index Terms—Recurrent neural network, convolutional neural research.
network, dynamic magnetic resonance imaging, cardiac image
In comparison, deep learning methods are gaining popularity
reconstruction
for their accuracy and efficiency. Unlike traditional approaches,
the prior information and regularisation are learnt implicitly
from data, allowing the reconstruction to look more natural.
I. I NTRODUCTION
However, the limitation is that so far only a handful of
AGNETIC Resonance Imaging (MRI) is a non-invasive approaches exist [3], [4] for dynamic reconstruction. Hence, the
imaging technique which offers excellent spatial reso- applicability of deep learning models to this problem is yet to be
lution and soft tissue contrast and is widely used for clinical fully explored. In particular, a core question is how to optimally
diagnosis and research. Dynamic MRI attempts to reveal both exploit spatio-temporal redundancy. In addition, most deep
spatial and temporal profiles of the underlying anatomy, which learning methods do not exploit domain-specific knowledge,
has a variety of applications such as cardiovascular imaging which potentially enables the networks to efficiently learn data
and perfusion imaging. However, the acquisition speed is representation by regulating the mechanics of network layers,
fundamentally limited due to both hardware and physiological hence boosting their performances.
constraints as well as the requirement to satisfy the Nyquist
In this work, we propose a novel convolutional recurrent
sampling rate. Long acquisition times are not only a burden neural network (CRNN) method to reconstruct high quality
for patients but also make MRI susceptible to motion artefacts. dynamic MR image sequences from undersampled data, termed
CRNN-MRI, which aims to tackle the aforementioned prob† Corresponding author: Chen Qin. (Email address: [email protected])
lems of both traditional and deep learning methods. Firstly,
*These authors contributed equally to this work.
we
formulate a general optimisation problem for solving
C. Qin, J. Schlemper, J. Caballero and D. Rueckert are with the Biomedical
Image Analysis Group, Department of Computing, Imperial College London, accelerated dynamic MRI based on variable splitting and
SW7 2AZ London, UK.
alternate minimisation. We then show how this algorithm
J. V. Hajnal, and A. N. Price are with with the Division of Imaging Sciences
and Biomedical Engineering Department, King’s College London, St. Thomas’ can be seen as a network architecture. In particular, the
Hospital, SE1 7EH London, U.K.
proposed method consists of a CRNN block which acts as the
Abstract—Accelerating the data acquisition of dynamic magnetic resonance imaging (MRI) leads to a challenging ill-posed
inverse problem, which has received great interest from both
the signal processing and machine learning communities over
the last decades. The key ingredient to the problem is how
to exploit the temporal correlations of the MR sequence to
resolve aliasing artefacts. Traditionally, such observation led
to a formulation of a non-convex optimisation problem, which
was solved using iterative algorithms. Recently, however, deep
learning based-approaches have gained significant popularity
due to their ability to solve general inverse problems. In this
work, we propose a unique, novel convolutional recurrent neural
network (CRNN) architecture which reconstructs high quality
cardiac MR images from highly undersampled k-space data by
jointly exploiting the dependencies of the temporal sequences
as well as the iterative nature of the traditional optimisation
algorithms. In particular, the proposed architecture embeds
the structure of the traditional iterative algorithms, efficiently
modelling the recurrence of the iterative reconstruction stages
by using recurrent hidden connections over such iterations. In
addition, spatio-temporal dependencies are simultaneously learnt
by exploiting bidirectional recurrent hidden connections across
time sequences. The proposed method is able to learn both the
temporal dependency and the iterative reconstruction process
effectively with only a very small number of parameters, while
outperforming current MR reconstruction methods in terms of
computational complexity, reconstruction accuracy and speed.
M
proximal operator and a data consistency layer corresponding Furthermore, these methods are not able to exploit the prior
to the classical data fidelity term. In addition, the CRNN knowledge that can be learnt from the vast number of MRI
block employs recurrent connections across each iteration step, exams routinely performed, which should be helpful to further
allowing reconstruction information to be shared across the guide the reconstruction process.
multiple iterations of the process. Secondly, we incorporate
Recently, deep learning-based MR reconstruction has gained
bidirectional convolutional recurrent units evolving over time popularity due to its promising results for solving inverse and
to exploit the temporal dependency of the dynamic sequences compressed sensing problems. In particular, two paradigms
and effectively propagate the contextual information across have emerged: the first class of approaches proposes to use
time frames of the input. As a consequence, the unique CRNN convolutional neural networks (CNNs) to learn an end-to-end
architecture jointly learns representations in a recurrent fashion mapping, where architectures such as SRCNN [10] or U-net
evolving over both time sequences as well as iterations of the [11] are often chosen for MR image reconstruction [12], [13],
reconstruction process, effectively combining the benefits of [14], [15]. The second class of approaches attempts to make
traditional iterative methods and deep learning.
each stage of iterative optimisation learnable by unrolling the
To the best of our knowledge, this is the first work applying end-to-end pipeline into a deep network [9], [16], [17], [18],
RNNs for dynamic MRI reconstruction. The contributions of [19]. For instance, Hammernik et al. [9] introduced a trainable
this work are the following: Firstly, we view the optimisation formulation for accelerated parallel imaging (PI) based MRI
problem of dynamic data as a recurrent network and describe reconstruction termed variational network, which embedded a
a novel CRNN architecture which simultaneously incorporates CS concept within a deep learning approach. ADMM-Net [17]
the recurrent relationship of data over time and iterations. was proposed by reformulating an alternating direction method
Secondly, we demonstrate that the proposed method shows of multipliers (ADMM) algorithm to a deep network, where
promising results and improves upon the current state-of-the-art each stage of the architecture corresponds to an iteration in
dynamic MR reconstruction methods both in reconstruction the ADMM algorithm. More recently, Schlemper et al. [18]
accuracy and speed. Finally, we compare our architecture to 3D proposed a cascade network which simulated the iterative
CNN which does not impose the recurrent structure. We show reconstruction of dictionary learning-based methods and were
that the proposed method outperforms the CNN at different later extended for dynamic MR reconstructions [3]. Most
undersampling rates and speed, while requiring significantly approaches so far have focused on 2D images, whereas only a
few approaches exist for dynamic MR reconstruction [3], [4].
fewer parameters.
While they show promising results, the optimal architecture,
training scheme and configuration spaces are yet to be fully
II. R ELATED W ORK
explored.
One of the main challenges associated with recovering an
More recently, several ideas were proposed, which integrate
uncorrupted image is that both the undersampling strategy deep neural networks with model-based optimisation methods
and a-priori knowledge of appropriate properties of the image to solve inverse problems [20], [21]. In contrast to these papers,
need to be taken into account. Methods like k-t BLAST and which proposed to deal with a fidelity term and a regularisation
k-t SENSE [5] take advantage of a-priori information about term separately, we integrate the two terms in a single deep
the x-f support obtained from the training data set in order network, so that the network can be trained end-to-end. In
to prune a reconstruction to optimally reduce aliasing. An addition, as we will show, we integrate a hidden connection
alternative popular approach is to exploit temporal redundancy over the optimisation iteration to enable the information used
to unravel from the aliasing by using CS approaches [1], [6] or for the reconstruction at each iteration to be shared across all
CS combined with low-rank approaches [2], [7]. The class of stages of the reconstruction process, aiming for an iterative
methods which employ CS to the MRI reconstruction is termed algorithm that can fully benefit from information extracted at
as CS-MRI [8]. They assume that the image to be reconstructed all processing stages. As to the nature of the proposed RNN
has a sparse representation in a certain transform domain, and units, previous work involving RNNs only updated the hidden
they need to balance sparsity in the transform domain against state of the recurrent connection with a fixed input [22], [23],
consistency with the acquired undersampled k-space data. For [24], while the proposed architecture progressively updates
instance, an example of successful methods enforcing sparsity the input as the optimisation iteration increases. In addition,
in x-f domain is k-t FOCUSS [1]. A low rank and sparse previous work only modelled the recurrence of iteration or time
reconstruction scheme (k-t SLR) [2] introduces non-convex [25] exclusively, whereas the proposed method jointly exploits
spectral norms and uses a spatio-temporal total variation norm both dimensions, yielding a unique architecture suitable for
in recovering the dynamic signal matrix. Dictionary learning the dynamic reconstruction problem.
approaches were also proposed to train an over-complete basis
of atoms to optimally sparsify spatio-temporal data [6]. These
III. C ONVOLUTIONAL R ECURRENT N EURAL N ETWORK
methods offer great potential for accelerated imaging, however,
FOR MRI RECONSTRUCTION
they often impose strong assumptions on the underlying data,
requiring nontrivial manual adjustments of hyperparameters A. Problem Formulation
depending on the application. In addition, it has been observed
Let x ∈ CD denote a sequence of complex-valued MR
that these methods tend to result in blocky [9] and unnatural images to be reconstructed, represented as a vector with
reconstructions, and their reconstruction speed is often slow. D = Dx Dy T , and let y ∈ CM (M << D) represent the
undersampled k-space measurements, where Dx and Dy are
width and height of the frame respectively and T stands
for the number of frames. Our problem is to reconstruct x
from y, which is commonly formulated as an unconstrained
optimisation problem of the form:
argmin_x  R(x) + λ ||y − F_u x||_2^2        (1)
Here Fu is an undersampling Fourier encoding matrix, R
expresses regularisation terms on x and λ allows the adjustment
of data fidelity based on the noise level of the acquired
measurements y. For CS and low-rank based approaches, the
regularisation terms R often employed are `0 or `1 norms in
the sparsifying domain of x as well as the rank or nuclear
norm of x respectively. In general, Eq. 1 is a non-convex
function and hence, the variable splitting technique is usually
adopted to decouple the fidelity term and the regularisation
term. By introducing an auxiliary variable z that is constrained
to be equal to x, Eq. 1 can be reformulated to minimize the
following cost function via the penalty method:
argmin_{x,z}  R(z) + λ ||y − F_u x||_2^2 + µ ||x − z||_2^2        (2)
individually parameterise each step. In our formulation, we
model each optimisation stage (i) as a learnt, recurrent, forward
encoding step fi (x(i−1) , z(i−1) ; θ, y, λ, Ω). The difference
is that now we use one model which performs proximal
operator, however, it also allows itself to propagate information
across iteration, making it adaptable for the changes across the
optimisation steps. The detail will be discussed in the following
section. The different strategies are illustrated in Fig 1.
B. CRNN for MRI reconstruction
RNN is a class of neural networks that makes use of
sequential information to process sequences of inputs. They
maintain an internal state of the network acting as a "memory",
which allows RNNs to naturally lend themselves to the
processing of sequential data. Inspired by iterative optimisation
schemes of Eq. 3, we propose a novel convolutional RNN
(CRNN) network. In the most general scope, our neural
encoding model is defined as follows,
x_rec = f_N(f_{N−1}(· · · (f_1(x_u)))),        (5)
where µ is a penalty parameter. By applying alternate minimisation over x and z, Eq. 2 can be solved via the following
iterative procedures:
z^{(i+1)} = argmin_z  R(z) + µ ||x^{(i)} − z||_2^2        (3a)
x^{(i+1)} = argmin_x  λ ||y − F_u x||_2^2 + µ ||x − z^{(i+1)}||_2^2        (3b)
z
in which xrec denotes the prediction of the network, xu is the
sequence of undersampled images with length T and also the
input of the network, fi (xu ; θ, λ, Ω) is the network function
for each iteration of optimisation step, and N is the number
of iterations. We can compactly represent a single iteration fi
of our network as follows:
x
where x(0) = xu = FH
u y is the zero-filled reconstruction taken
as an initialisation and z can be seen as an intermediate state
of the optimisation process. For MRI reconstruction, Eq. 3b is
often regarded as a data consistency (DC) step where we can
obtain the following closed-form solution [18]:
(i−1)
x(i)
+ CRNN(x(i−1)
rnn = xrec
rec ),
x(i)
rec
=
DC(x(i)
rnn ; y, λ0 , Ω),
(6a)
(6b)
where CRNN is a learnable block explained hereafter, DC
(i)
is the data consistency step treated as a network layer, xrec
is the progressive reconstruction of the undersampled image
λ0
H
(i)
H
(i)
(0)
(i)
x(i+1) =
xu at iteration i with xrec = xu , xrnn is the intermediate
( DC(z ; y, λ0 , Ω) = F ΛFz + 1+λ0 Fu y,
reconstruction image before the DC layer, and y is the
1
if k 6∈ Ω
Λkk =
1
acquired k-space samples. Note that the variables xrec , xrnn are
if
k
∈
Ω
1+λ0
(4) analogous to x, z in Eq. 3 respectively. Here, we use CRNN
in which F is the full Fourier encoding matrix (a discrete to encode the update step, which can be seen as one step
Fourier transform in this case), λ0 = λ/µ is a ratio of of a gradient descent in the sense of objective minimisation,
regularization parameters from Eq. 4, Ω is an index set of the or a more general approximation function regressing the
(i+1)
− x(i) , i.e. the distance required to move
acquired k-space samples and Λ is a diagonal matrix. Please difference z
refer to [18] for more details of formulating Eq. 4 as a data to the next state. Moreover, note that in every iteration,
consistency layer in a neural network. Eq. 3a is the proximal CRNN updates its internal state H given an input which is
operator of the prior R, and instead of explicitly determining discussed shortly. As such, CRNN also allows information to
the form of the regularisation term, we propose to directly be propagated efficiently across iterations, in contrast to the
learn the proximal operator by using a convolutional recurrent sequential models using CNNs which collapse the intermediate
feature representation to z(i) .
neural network (CRNN).
In order to exploit the dynamic nature and the temporal
Previous deep learning approaches such as Deep-ADMM net
[17] and method proposed by Schlemper et al. [18] unroll the redundancy of our data, we further propose to jointly model the
traditional optimisation algorithm. Hence, their models learn a recurrence evolving over time for dynamic MRI reconstruction.
sequence of transition x(0) → z(1) → x(1) → · · · → z(N ) → The proposed CRNN-MRI network and CRNN block are shown
x(N ) to reconstruct the image, where each state transition at in Fig. 2(a), in which CRNN block comprised of 5 components:
stage (i) is an operation such as convolutions independently
1) bidirectional convolutional recurrent units evolving over
parameterised by θ, nonlinearities or a data consistency step.
time and iterations (BCRNN-t-i),
However, since the network implicitly learns some form of
2) convolutional recurrent units evolving over iterations only
proximal operator at each iteration, it may be redundant to
(CRNN-i),
Fig. 1: (a) Traditional optimisation algorithm using variable splitting and alternate minimisation approach, (b) the optimisation
unrolled into a deep convolutional network incorporating the data consistency step, and (c) the proposed architecture which
models optimisation recurrence.
Fig. 2: (a) The overall architecture of proposed CRNN-MRI network for MRI reconstruction. (b) The structure of the proposed
network when unfolded over iterations, in which x_rec^{(0)} = x_u. (c) The structure of BCRNN-t-i layer when unfolded over the time
sequence. The black arrows indicate feed-forward convolutions. The blue arrows and red arrows indicate recurrent convolutions
over iterations and the time sequence respectively.
3) 2D convolutional neural network (CNN),
4) residual connection and
5) DC layers.
We introduce details of the components of our network in the
following subsections.
1) CRNN-i: As aforementioned, we encapsulate the iterative
optimisation procedures explicitly with RNNs. In the CRNNi unit, the iteration step is viewed as the sequential step in
the vanilla RNN. If the network is unfolded over the iteration
dimension, the network can be illustrated as in Fig. 2(b), where
information is propagated between iterations. Here we use H to denote the feature representation of our sequence of frames throughout the network. H_l^{(i)} denotes the representation at layer l (subscript) and iteration step i (superscript). Therefore, at iteration (i), given the input H_{l−1}^{(i)} and the previous iteration's hidden state H_l^{(i−1)}, the hidden state H_l^{(i)} at layer l of a CRNN-i unit can be formulated as:

H_l^{(i)} = σ(W_l ∗ H_{l−1}^{(i)} + W_i ∗ H_l^{(i−1)} + B_l).        (7)
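For concreteness, a minimal PyTorch-style sketch of this update is given below (an illustration only, not the authors' Theano/Lasagne implementation; the per-frame tensor shape (batch, channels, D_x, D_y), the 3 × 3 kernels and the padding choice are assumptions made here).

    import torch
    import torch.nn.functional as F

    def crnn_i_step(h_below, h_prev_iter, W_l, W_i, b_l):
        """One CRNN-i update, Eq. (7): sigma(W_l * H_{l-1}^(i) + W_i * H_l^(i-1) + B_l).
        h_below:     H_{l-1}^(i), output of the layer below,  shape (N, C_in, Dx, Dy)
        h_prev_iter: H_l^(i-1),   this layer's previous state, shape (N, C_out, Dx, Dy)
        W_l, W_i:    input-to-hidden and hidden-to-hidden (iteration) filters
        b_l:         bias, shape (C_out,)"""
        pre = (F.conv2d(h_below, W_l, padding=1)
               + F.conv2d(h_prev_iter, W_i, padding=1)
               + b_l.view(1, -1, 1, 1))
        return torch.relu(pre)    # sigma is the ReLU nonlinearity, as stated in the text

    # toy usage: 64 feature maps on a 2-channel (real/imaginary) input frame
    h = crnn_i_step(torch.randn(1, 2, 32, 32), torch.zeros(1, 64, 32, 32),
                    torch.randn(64, 2, 3, 3), torch.randn(64, 64, 3, 3), torch.zeros(64))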
Here ∗ represents convolution operation, Wl and Wi
represent the filters of input-to-hidden convolutions and hiddento-hidden recurrent convolutions evolving over iterations
(i)
respectively, and Bl represents a bias term. Here Hl is backward direction. Therefore, for the formulation of BCRNNthe representation of the whole T sequence with shape t-i unit, given (1) the current input representation of the l-th
(batchsize, T, nc , Dx , Dy ), where nc is the number of channels layer at time frame t and iteration step i, which is the output
which is 2 at the input and output but is greater while processing representation from (l − 1)-th layer H(i)
l−1,t , (2) the previous
inside the network, and the convolutions are computed on the iteration’s hidden representation within the same layer H(i−1) ,
l,t
→
− (i)
last two dimensions. The latent features are activated by the
(3) the hidden representation of the past time frame H l,t−1 , and
rectifier linear unit (ReLU) as a choice of nonlinearity, i.e.
←
−(i)
the hidden representation of the future time frame H l,t+1 , then
σ(x) = max(0, x).
of the current l-th layer of time
The CRNN-i unit offers several advantages compared to the hidden state representation
(i)
frame
t
at
iteration
i,
H
with
shape (batchsize, nc , Dx , Dy ),
l,t
independently unrolling convolutional filters at each stage.
can
be
formulated
as:
Firstly, compared to CNNs where the latent representation
−
→(i)
−
→
−
→
(i)
(i−1)
from the previous state is not propagated, the hidden-to-hidden
H l,t = σ(Wl ∗ Hl−1,t + Wt ∗ H il,t−1 + Wi ∗ Hl,t + B l ),
←
−(i)
←
−(i)
←
−
iteration connections in CRNN-i units allow contextual spatial
(i)
(i−1)
H l,t = σ(Wl ∗ Hl−1,t + Wt ∗ H l,t+1 + Wi ∗ Hl,t + B l ),
information gathered at previous iterations to be passed to the
−
→(i) ←
−(i)
(i)
Hl,t = H l,t + H l,t ,
future iterations. This enables the reconstruction step at each
(8)
iteration to be optimised not only based on the output image
Similar
to
the
notation
in
Section
III-B1,
W
represents
the
t
but also based on the hidden features from previous iterations,
evolving over time. When
where the hidden connection convolutions can "memorise" the filters of recurrent convolutions
(1)
l
=
1
and
i
=
1,
H
=
x
,
that is the t-th frame of
u
t
0,t
useful features to avoid redundant computation. Secondly, as
undersampled
input
data,
and
when
l = 1 and i = 2, ...T ,
the iteration number increases, the effective receptive field of a
(i)
(i−1)
H
=
x
,
which
stands
for
the
t-th frame of the
rec
t
0,t
CRNN-i unit in the spatial domain also expands whereas CNN
intermediate
reconstruction
result
from
iteration
i − 1. For
resets it at each iteration. This property allows the network to
− (i)
←
−(i)
(0) →
H
,
and
,
they
are
set
to
be
zero
initial
hidden
H
H
l,t
l,0
l,T +1
further improve the reconstruction by allowing it to have better
states.
contextual support. In addition, since the weight parameters
The temporal connections of BCRNN-t-i allow information
are shared across iterations, it greatly reduces the number
to
be propagated across the whole T time frames, enabling it to
of parameters compared to CNNs, potentially offering better
learn
the differences and correlations of successive frames. The
generalization properties.
filter
responses of recurrent convolutions evolving over time
In this work, we use a vanilla RNN to model the recurrence
express dynamic changing biases, which focus on modelling
due to its simplicity. Note this can be naturally generalised
the temporal changes across frames, while the filter responses
to other RNN units, such as long short-term memory (LSTM)
of recurrent convolutions over iterations focus on learning the
and gated recurrent unit (GRU), which are considered to have
spatial refinement across consecutive iteration steps. In addition,
better memory properties, although using these units would
we
note that learning recurrent layers along the temporal
significantly increase computational complexity.
direction is different to using 3D convolution along the space
2) BCRNN-t-i: Dynamic MR images exhibit high temporal and temporal direction. 3D convolution seeks invariant features
redundancy, which is often exploited as a-priori knowledge to across space-time, hence several layers of 3D convolutions are
regularise the reconstruction. Hence, it is also beneficial for the required before the information from the whole sequence can
network to learn the dynamics of sequences. To this extent, we be propagated to a particular time frame. On the other hand,
propose a bidirectional convolutional recurrent unit (BCRNN- learning recurrent 2D convolutions enables the model to easily
t-i) to exploit both temporal and iteration dependencies jointly. and efficiently propagate the information through time, which
BCRNN-t-i includes three convolution layers: one on the input also yields fewer parameters and a lower computational cost.
which comes into the unit from the previous layer, one on
In summary, the set of hidden states for a CRNN block
−(i) →
− (i)
the hidden state from the past and future time frames and
(i)
(i) ←
to update at iteration i is H = {Hl , Hl,t , H l,t , H l,t }, for
the one on the hidden state from the previous iteration of
l = 1, . . . , L and t = 1, . . . , T , where L is the total number of
the unit (Fig. 2(c)). Note that we simultaneously consider
layers in the CRNN block and T is the total number of time
temporal dependencies from past and future time frames, and
frames.
the encoding weights are shared for both directions. The output
for the BCRNN-t-i layer is obtained by summing the feature
maps learned from both directions. The illustration figure of C. Network Learning
the unit when it is unfolded over time sequence is shown in
Given the training data S of input-target pairs (xu , xt ), the
Fig. 2(c).
network learning proceeds by minimizing the pixel-wise mean
As we need to propagate information along temporal squared error (MSE) between the predicted reconstructed MR
dimensions in this unit, here we introduce an additional index t image and the fully sampled ground truth data:
in the notation to represent the variables related with time frame
X
1
2
(i)
L (θ) =
kxt − xrec k2
(9)
t. Here Hl,t represents feature representations at l-th layer,
n
S
→
− (i)
(xu ,xt )∈S
time frame t, and at iteration i, H l,t denotes the representations
calculated when information is propagated forward inside the where θ = {Wl , Wi , Wt , Bl }, l = 1 . . . L, and nS stands
←
−(i)
BCRNN-t-i unit, and similarly, H l,t denotes the one in the for the number of samples in the training set S. Note that
the total number of time sequences T and iteration steps N
assumed by the network before performing the reconstruction
is a free parameter that must be specified in advance. The
network weights were initialised using He initialization [26]
and it was trained using the Adam optimiser [27]. During
training, gradients were hard-clipped to the range of [−5, 5]
to mitigate the gradient explosion problem. The network was
implemented in Python using Theano and Lasagne libraries.
aliasing artefact. The evaluation was done via a 3-fold cross
validation. The minibatch size during the training was set to
1, and we observed that the performance can reach a plateau
within 6 × 104 backpropagations.
B. Evaluation Method
We compared the proposed method with the representative
algorithms of the CS-based dynamic MRI reconstruction, such
as k-t FOCUSS [1] and k-t SLR [2], and two variants of
IV. E XPERIMENTS
3D CNN networks named 3D CNN-S and 3D CNN in our
experiments. The built baseline 3D CNN networks share the
A. Dataset and Implementation Details
same architecture with the proposed CRNN-MRI network but
The proposed method was evaluated using a complexall the recurrent units and 2D CNN units were replaced with
valued MR dataset consisting of 10 fully sampled short-axis
3D convolutional units, that is, in each iteration, the 3D CNN
cardiac cine MR scans. Each scan contains a single slice
block contain 5 layers of 3D convolutions, one DC layer and a
SSFP acquisition with 30 temporal frames, which have a
residual connection. Here 3D CNN-S refers to network sharing
320 × 320 mm field of view and 10 mm thickness. The
weights across iterations, however, this does not employ the
raw data consists of 32-channel data with sampling matrix
hidden-to-hidden connection as in the CRNN-i unit. The 3D
size 192 × 190, which was then zero-filled to the matrix size
CNN-S architecture was chosen so as to make a fair comparison
256 × 256. The raw multi-coil data was reconstructed using
with the proposed model using a comparable number of network
SENSE [28] with no undersampling and retrospective gating.
parameters. In contrast, 3D CNN refers to the network without
Coil sensitivity maps were normalized to a body coil image
weight sharing, in which the network capacity is N = 10 times
and used to produce a single complex-valued reconstructed
of that of 3D CNN-S, and approximately 12 times more than
image. In experiments, the complex valued images were backthat of our first proposed method (Proposed-A).
transformed to regenerate k-space samples, simulating a fully
Reconstruction results were evaluated based on the followsampled single-coil acquisition. The input undersampled image
ing quantitative metrics: MSE, peak-to-noise-ratio (PSNR),
sequences were generated by randomly undersampling the
structural similarity index (SSIM) [29] and high frequency
k-space samples using Cartesian undersampling masks, with
error norm (HFEN) [30]. The choice of the these metrics was
undersampling patterns adopted from [1]: for each frame the
made to evaluate the reconstruction results with complimentary
eight lowest spatial frequencies were acquired, and the sampling
emphasis. MSE and PSNR were chosen to evaluate the overall
probability of k-space lines along the phase-encoding direction
accuracy of the reconstruction quality. SSIM put emphasis
was determined by a zero-mean Gaussian distribution. Note
on image quality perception. HFEN was used to quantify the
that the undersampling rates are stated with respect to the
quality of the fine features and edges in the reconstructions,
matrix size of raw data, which is 192 × 190.
and here we employed the same filter specification as in [30],
The architecture of the proposed network used in the
[31] with the filter kernel size 15 × 15 pixels and a standard
experiment is shown in Fig. 2: each iteration of the CRNN
deviation of 1.5 pixels. For PSNR and SSIM, it is the higher
block contains five units: one layer of BCRNN-t-i, followed
the better, while for MSE and HFEN, it is the lower the better.
by three layers of CRNN-i units, and followed by a CNN
unit. For all CRNN-i and BCRNN-t-i units, we used a kernel
size k = 3 and the number of filters was set to nf = 64 for C. Results
Proposed-A and nf = 128 for Proposed-B in Table I. The CNN
The comparison results of all methods are reported in Table I,
after the CRNN-i units contains one convolution layer with where we evaluated the quantitative metrics, network capacity
k = 3 and nf = 2, which projects the extracted representation and reconstruction time. Numbers shown in Table I are mean
back to the image domain which contains complex-valued values of corresponding metrics with standard deviation of
images expressed using two channels. The output of the CRNN different subjects in parenthesis. Bold numbers in Table I
block is connected to the residual connection, which sums indicate the better performance of the proposed methods than
the output of the block with its input. Finally, we used DC the competing ones. Compared with the baseline method (k-t
layers on top of the CRNN output layers. During training, the FOCUSS and k-t SLR), the proposed methods outperform them
iteration step is set to be N = 10, and the time sequence by a considerable margin at different acceleration rates. When
for training is T = 30. Note that this architecture is by no compared with deep learning methods, note that the network
means optimal and more layers can be added to increase the capacity of Proposed-A is comparable with that of 3D CNN-S
ability of our network to better capture the data structures. and the capacity of Propose-B is around one third of that of 3D
While the original sequence has size 256 × 256 × T , we extract CNN. Though their capacities are much smaller, both Proposedpatches of size 256 × Dpatch × T , where Dpatch is the patch A and Proposed-B outperform 3D CNN-S and 3D CNN for
size and the direction of patch extraction corresponds to the all acceleration rates by a large margin, which shows the
frequency-encoding direction. Note that since we only consider competitiveness and effectiveness of our method. In addition,
Cartesian undersampling, the aliasing occurs only along the we can see a substantial improvement of the reconstruction
phase encoding direction, so patch extraction does not alter the results on all acceleration rates and in all metrics when the
TABLE I: Performance comparisons (MSE, PSNR:dB, SSIM, and HFEN) on dynamic cardiac data with different acceleration
rates. MSE is scaled to 10−3 . The bold numbers are better results of the proposed methods than that of the other methods.
Method      k-t FOCUSS       k-t SLR          3D CNN-S         3D CNN           Proposed-A       Proposed-B
Capacity    -                -                338,946          3,389,460        262,020          1,040,132

6×   MSE    0.592 (0.199)    0.371 (0.155)    0.385 (0.124)    0.275 (0.096)    0.261 (0.097)    0.201 (0.074)
     PSNR   32.506 (1.516)   34.632 (1.761)   34.370 (1.526)   35.841 (1.470)   36.096 (1.539)   37.230 (1.559)
     SSIM   0.953 (0.040)    0.970 (0.033)    0.976 (0.008)    0.983 (0.005)    0.985 (0.004)    0.988 (0.003)
     HFEN   0.211 (0.021)    0.161 (0.016)    0.170 (0.009)    0.138 (0.013)    0.131 (0.013)    0.112 (0.010)

9×   MSE    1.234 (0.801)    0.846 (0.572)    0.929 (0.474)    0.605 (0.324)    0.516 (0.255)    0.405 (0.206)
     PSNR   29.721 (2.339)   31.409 (2.404)   30.838 (2.246)   32.694 (2.179)   33.281 (1.912)   33.379 (2.017)
     SSIM   0.922 (0.043)    0.951 (0.025)    0.950 (0.016)    0.968 (0.010)    0.972 (0.009)    0.979 (0.007)
     HFEN   0.310 (0.041)    0.260 (0.034)    0.280 (0.034)    0.215 (0.021)    0.201 (0.025)    0.173 (0.021)

11×  MSE    1.909 (0.828)    1.237 (0.620)    1.472 (0.733)    0.742 (0.325)    0.688 (0.290)    0.610 (0.300)
     PSNR   27.593 (2.038)   29.577 (2.211)   28.803 (2.151)   31.695 (1.985)   31.986 (1.885)   32.575 (1.987)
     SSIM   0.880 (0.060)    0.924 (0.034)    0.925 (0.022)    0.960 (0.010)    0.964 (0.009)    0.968 (0.011)
     HFEN   0.390 (0.023)    0.327 (0.028)    0.363 (0.041)    0.257 (0.029)    0.248 (0.033)    0.227 (0.030)

Time        15s              451s             8s               8s               3s               6s
images and their corresponding error maps from different
reconstruction methods. As one can see, our proposed model
(Proposed-B) can produce more faithful reconstructions for
those parts of the image around the myocardium where there
are large temporal changes. This is reflected by the fact that
RNNs effectively use a larger receptive field to capture the
characteristics of aliasing seen within the anatomy. For the
3D CNN approaches, it is also observed that it is not able to
denoise the background region. This could be explained by the
fact that 3D CNN only exploits local information due to the
small filter sizes it used, while in contrast, the proposed CRNN
improves the denoising of the background region because of its
larger receptive field sizes. Their temporal profiles at x = 120
are shown in Fig. 5. Similarly, one can see that the proposed
model has overall much smaller error, faithfully modelling
the dynamic data. This suggests its capability to learn motion
Fig. 3: Mean PSNR values (Proposed-B) vary with the number compensation implicitly between frames although the network
of iterations at test time on data with different acceleration is trained for the dynamic image reconstruction. It could be
rates. Here ac stands for acceleration rate.
due to the fact that spatial and temporal features are learned
separately in the proposed model while 3D CNN seeks invariant
feature learning across space and time.
number of network parameters is increased for the proposed
In terms of speed, the proposed RNN-based reconstruction
method (Proposed-B), and therefore we will only show the is faster than the 3D CNN approaches because it only
results from Proposed-B in the following. The number of performs convolution along time once per iteration, removing
iterations used by the network at test time is set to be the the redundant 3D convolutions which are computationally
same as the training stage, which is N = 10, however, if expensive. Reconstruction time of 3D CNN and the proposed
the iteration number is increased up to N = 17, it shows an methods reported in Table I were calculated on a GPU GeForce
improvement of 0.324dB on average. Fig. 3 shows the model’s GTX 1080, and the time for k-t FOCUSS and k-t SLR were
performance varying with the number of iterations at test time. calculated on CPU.
In fact, for accelerated imaging, higher undersampling factors
significantly add aliasing to the initial zero-filled reconstruction,
V. D ISCUSSION AND CONCLUSION
making the reconstruction more challenging. This suggests that
while the 3D CNN possesses higher modelling capacity owing
In this work, we have demonstrated that the presented
to its large number of parameters, it may not necessarily be network is capable of producing faithful image reconstructions
an ideal architecture to perform dynamic MR reconstruction, from highly undersampled data, both in terms of various
presumably because the simple CNN is not as efficient as quantitative metrics as well as inspection of error maps.
propagating the information across the whole sequence.
In contrast to unrolled deep network architectures proposed
A comparison of the visualization results of a reconstruction previously, we modelled the recurrent nature of the optimisation
from 9× acceleration is shown in Fig. 4 with the reconstructed iteration using hidden representations with the ability to
Fig. 4: The comparison of reconstructions on spatial dimension with their error maps. (a) Ground Truth (b) Undersampled
image by acceleration factor 9 (c,d) Proposed-B (e,f) 3D CNN (g,h) 3D CNN-S (i,j) k-t FOCUSS (k,l) k-t SLR
Additionally, current analysis only considers a single coil setup.
In the future, we will also aim at investigating such methods in
a scenario where multiple coil data from parallel MR imaging
can be used jointly for higher acceleration acquisition.
To conclude, inspired by variable splitting and alternate
minimisation strategies, we have presented an end-to-end deep
learning solution, CRNN-MRI, for accelerated dynamic MRI
reconstruction, with a forward, CRNN block implicitly learning
iterative denoising interleaved by data consistency layers to
Fig. 5: The comparison of reconstructions along temporal enforce data fidelity. In particular, the CRNN architecture is
dimension with their error maps. (a) Ground Truth (b) Under- composed of the proposed novel variants of convolutional
sampled image by acceleration factor 9 (c,d) Proposed-B (e,f) recurrent unit which evolves over two dimensions: time and
iterations. The proposed network is able to learn both the
3D CNN (g,h) 3D CNN-S (i,j) k-t FOCUSS (k,l) k-t SLR
temporal dependency and the iterative reconstruction process
effectively, and outperformed the other competing methods in
retain and propagate information across the optimisation steps. terms of computational complexity, reconstruction accuracy
Compared with 3D CNN models, the proposed methods have a and speed for different undersampling rates.
much lower network capacity but still have a higher accuracy,
reflecting the effectiveness of our architecture. This is due to
R EFERENCES
the ability of the proposed RNN units to increase the receptive
[1] H. Jung, J. C. Ye, and E. Y. Kim, “Improved k–t BLAST and k–t SENSE
field size while iteration steps increase, as well as to efficiently
using FOCUSS,” Physics in medicine and biology, vol. 52, no. 11, p.
propagate information across the temporal direction. In addition,
3201, 2007.
our network also offers very fast reconstruction on a GPU
[2] S. G. Lingala, Y. Hu, E. Dibella, and M. Jacob, “Accelerated dynamic
MRI exploiting sparsity and low-rank structure: K-t SLR,” IEEE
GeForce GTX 1080 compared with other competing methods.
Transactions on Medical Imaging, vol. 30, no. 5, pp. 1042–1054, 2011.
In this work, we modeled the recurrence using the relatively
[3] J. Schlemper, J. Caballero, J. V. Hajnal, A. Price, and D. Rueckert, “A
simple (vanilla) RNN architecture. For the future work, we will
deep cascade of convolutional neural networks for dynamic MR image
reconstruction,” arXiv preprint arXiv:1704.02422, 2017.
explore other recurrent units such as LSTM or GRU. As they are
[4] D. Batenkov, Y. Romano, and M. Elad, “On the global-local dichotomy
trained to explicitly select what to remember, they may allow
in sparsity modeling,” arXiv preprint arXiv:1702.03446, 2017.
the units to better control the flow of information and could
[5] J. Tsao, P. Boesiger, and K. P. Pruessmann, “k-t BLAST and k-t
SENSE: Dynamic MRI with high frame rate exploiting spatiotemporal
reduce the number of iterations required for the network to
correlations,” Magnetic Resonance in Medicine, vol. 50, no. 5, pp. 1031–
generate high-quality output. In addition, we have found that the
1042, 2003.
majority of errors between the reconstructed image and the fully
[6] J. Caballero, A. N. Price, D. Rueckert, and J. V. Hajnal, “Dictionary
learning and time sparsity for dynamic MR data reconstruction,” IEEE
sampled image lie at the part where motion exists, indicating
Transactions on Medical Imaging, vol. 33, no. 4, pp. 979–994, 2014.
that motion exhibits a challenge for such dynamic sequence
[7] R. Otazo, E. Candès, and D. K. Sodickson, “Low-rank plus sparse
reconstruction, and CNNs or RNNs trained for reconstruction
matrix decomposition for accelerated dynamic MRI with separation of
background and dynamic components,” Magnetic Resonance in Medicine,
loss cannot fully learn such motion compensation implicitly
vol. 73, no. 3, pp. 1125–1136, 2015.
during training. Thus it will be interesting to explore the
[8] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed
benefits of doing simultaneous motion compensation and image
sensing mri,” IEEE signal processing magazine, vol. 25, no. 2, pp. 72–82,
reconstruction for the improvement in the dynamic region.
2008.
A Self-Taught Artificial Agent for Multi-Physics
Computational Model Personalization
Dominik Neumann a,c,∗, Tommaso Mansi b, Lucian Itu d,e, Bogdan Georgescu b, Elham Kayvanpour f, Farbod Sedaghat-Hamedani f, Ali Amr f, Jan Haas f, Hugo Katus f, Benjamin Meder f, Stefan Steidl c, Joachim Hornegger c, Dorin Comaniciu b

arXiv:1605.00303v1, 1 May 2016

a Medical Imaging Technologies, Siemens Healthcare GmbH, Erlangen, Germany
b Medical Imaging Technologies, Siemens Healthcare, Princeton, USA
c Pattern Recognition Lab, FAU Erlangen-Nürnberg, Erlangen, Germany
d Siemens Corporate Technology, Siemens SRL, Brasov, Romania
e Transilvania University of Brasov, Brasov, Romania
f Department of Internal Medicine III, University Hospital Heidelberg, Germany
Abstract
Personalization is the process of fitting a model to patient data, a critical
step towards application of multi-physics computational models in clinical
practice. Designing robust personalization algorithms is often a tedious,
time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts
manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent,
learns a representative decision process model through exploration of the
computational model: it learns how the model behaves under change of
parameters. The agent then automatically learns an optimal strategy for
on-line personalization. The algorithm is model-independent; applying it
to a new model requires only adjusting a few hyper-parameters of the agent
and defining the observations to match. The full knowledge of the model
itself is not required. Vito was tested in a synthetic scenario, showing that
it could learn how to optimize cost functions generically. Then Vito was
applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The obtained results suggested
that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and converging faster (up to seven times). Our artificial intelligence approach could thus make personalization algorithms generalizable
and self-adaptable to any patient and any model.
∗
Corresponding author
Email address: [email protected] (Dominik Neumann)
Preprint submitted to Medical Image Analysis
May 3, 2016
Keywords: Computational Modeling, Model Personalization,
Reinforcement Learning, Artificial Intelligence.
1. Introduction
Computational modeling attracted significant attention in cardiac research over the last decades (Clayton et al., 2011; Frangi et al., 2001; Hunter
and Borg, 2003; Kerckhoffs et al., 2008; Krishnamurthy et al., 2013; Kuijpers et al., 2012; Noble, 2002). It is believed that computational models
can improve patient stratification and therapy planning. They could become the enabling tool for predicting disease course and therapy outcome,
ultimately leading to improved clinical management of patients suffering
from cardiomyopathies (Kayvanpour et al., 2015). A crucial prerequisite for
achieving these goals is precise model personalization: the computational
model under consideration needs to be fitted to each patient. However, the
high complexity of cardiac models and the often noisy and sparse clinical
data still hinder this task.
A wide variety of manual and (semi-)automatic model parameter estimation approaches have been explored, including Aguado-Sierra et al. (2010,
2011); Augenstein et al. (2005); Chabiniok et al. (2012); Delingette et al.
(2012); Itu et al. (2014); Konukoglu et al. (2011); Le Folgoc et al. (2013);
Marchesseau et al. (2013); Neumann et al. (2014a,b); Prakosa et al. (2013);
Schmid et al. (2006); Seegerer et al. (2015); Sermesant et al. (2009); Wallman et al. (2014); Wang et al. (2009); Wong et al. (2015); Xi et al. (2013);
Zettinig et al. (2014). Most methods aim to iteratively reduce the misfit
between model output and measurements using optimization algorithms,
for instance variational (Delingette et al., 2012) or filtering (Marchesseau
et al., 2013) approaches. Applied blindly, those techniques could easily fail
on unseen data, if not supervised, due to parameter ambiguity, data noise
and local minima (Konukoglu et al., 2011; Neumann et al., 2014a; Wallman
et al., 2014). Therefore, complex algorithms have been designed combining
cascades of optimizers in a very specific way to achieve high levels of robustness, even on larger populations, i.e. 10 or more patients (Kayvanpour et al.,
2015; Neumann et al., 2014b; Seegerer et al., 2015). However, those methods
are often designed from tedious, trial-and-error-driven manual tuning, they
are model-specific rather than generic, and their generalization to varying
data quality cannot be guaranteed. On the contrary, if the personalization
task is assigned to an experienced human, given enough time, he almost always succeeds in manually personalizing a model for any subject (although
solution uniqueness is not guaranteed, but this is inherent to the problem).
There are several potential reasons why a human expert is often superior to standard automatic methods in terms of personalization accuracy
and success rates. First, an expert is likely to have an intuition of the
model’s behavior from his prior knowledge of the physiology of the modeled
organ. Second, knowledge about model design and assumptions, and model
limitations and implementation details certainly provide useful hints on the
“mechanics” of the model. Third, past personalization of other datasets
allows the expert to build up experience. The combination of prior knowledge, intuition and experience enables the expert to solve the personalization task more
effectively, even on unseen data.
Inspired by humans and contrary to previous works, we propose to address the personalization problem from an artificial intelligence (AI) perspective. In particular, we apply reinforcement learning (RL) methods (Sutton
and Barto, 1998) developed in the AI community to solve the parameter
estimation task for computational physiological models. With its roots in
control theory on the one hand, and neuroscience theories of learning on the
other hand, RL encompasses a set of approaches to make an artificial agent
learn from experience generated by interacting with its environment. Contrary to standard (supervised) machine learning (Bishop, 2006), where the
objective is to compute a direct mapping from input features to a classification label or regression output, RL aims to learn how to perform tasks. The
goal of RL is to compute an optimal problem-solving strategy (agent behavior), e.g. a strategy to play the game “tic-tac-toe” successfully. In the AI
field, such a behavior is often represented as a policy, a mapping from states,
describing the current “situation” the agent finds itself in (e.g. the current
locations of all “X” and “O” on the tic-tac-toe grid), to actions, which allow
the agent to interact with the environment (e.g. place “X” on an empty cell)
and thus influence that situation. The key underlying principle of RL is that
of reward (Kaelbling et al., 1996), which provides an objective means for the
agent to judge the outcome of its actions. In tic-tac-toe, the agent receives
a high, positive reward if the latest action led to a horizontal, vertical or
diagonal row full of “X” marks (winning), and a negative reward (punishment) if the latest action would allow the opponent to win in his next move.
Based on such rewards, the artificial agent learns an optimal winning policy
through trial-and-error interactions with the environment.
RL was first applied to game (e.g. Tesauro, 1994) or simple control
tasks. However, the past few years saw tremendous breakthroughs in RL for
more complex, real-world problems (e.g. Barreto et al., 2014; Kveton and
Theocharous, 2012; Nguyen-Tuong and Peters, 2011). Some noteworthy examples include Mülling et al. (2013), where the control entity of a robot
arm learned to select appropriate motor primitives to play table tennis, and
Mnih et al. (2015), where the authors combine RL with deep learning to
train an agent to play 49 Atari games, yielding better performance than an
expert in the majority of them.
Motivated by these recent successes and building on our previous work
(Neumann et al., 2015), we propose an RL-based personalization approach,
henceforth called Vito, with the goal of designing a framework that can, for
the first time to our knowledge, learn by itself how to estimate model parameters from clinical data while being model-independent. As illustrated in Fig. 1, first, like a human expert, Vito assimilates the behavior of the physiological model under consideration in an off-line, one-time only, data-driven exploration phase. From this knowledge, Vito learns the optimal strategy using RL (Sutton and Barto, 1998). The goal of Vito during the on-line personalization phase is then to sequentially choose actions that maximize future rewards, and therefore bring Vito to the state representing the solution of the personalization problem. To set up the algorithm, the user needs to define what observations need to be matched, the allowed actions, and a single hyper-parameter related to the desired granularity of the state-space. Then everything is learned automatically. The algorithm does not depend on the underlying model.

Figure 1: Overview of Vito: a self-taught artificial agent for computational model personalization, inspired by how human operators approach the personalization problem (off-line phase: assimilate model behavior, then learn the optimal personalization strategy; on-line phase: personalize the model for new patients).
Vito was evaluated on three different tasks. First, in a synthetic experiment, convergence properties of the algorithm were analyzed. Then,
two tasks involving real clinical data were evaluated: the inverse problem
of cardiac electrophysiology and the personalization of a lumped-parameter
model of whole-body circulation. The obtained results suggested that Vito
can achieve equivalent (or better) goodness of fit as standard optimization
methods, increased robustness and faster convergence rates.
A number of novelties and improvements over Neumann et al. (2015)
are featured in this manuscript. First, an automatic, data-driven statespace quantization method is introduced that replaces the previous manual
technique. Second, the need to provide user-defined initial parameter values
is eliminated by employing a new data-driven technique to initialize personalization of unseen data. Third, a stochastic personalization policy is
introduced, for which the previously used standard deterministic policy is
a special case. Fourth, the convergence properties are evaluated in parameter space using a synthetic personalization scenario. In addition, thorough
evaluation of Vito’s performance with increasing amount of training samples was conducted and personalization of the whole-body circulation model
was extended to several variants involving two to six parameters. Finally,
the patient database used for experimentation was extended from 28 to 83
patients for the cardiac electrophysiology experiments, and from 27 to 56
for the whole-body circulation experiments.
The remainder of this manuscript is organized as follows. Sec. 2 presents
the method. In Sec. 3, the experiments are described and the results are
presented. Sec. 4 concludes the manuscript with a summary and discussions
about potential limitations and extensions of the method.
2. Method
This section presents the reinforcement-learning (RL) framework for
computational model personalization. Sec. 2.1 introduces Markov decision
process (MDP). Sec. 2.2 defines the personalization problem and how it can
be reformulated in terms of an MDP. Sec. 2.3 describes how the artificial
agent, Vito, learns how the model behaves. Next, Sec. 2.4 provides details
about state-space quantization, and Sec. 2.5 describes how the model knowledge is encoded in the form of transition probabilities. All steps mentioned
so far are performed in an off-line training phase. Finally, Sec. 2.6 explains
how the learned knowledge is applied on-line to personalize unseen data.
2.1. Model-based Reinforcement Learning
2.1.1. MDP Definition
A crucial prerequisite for applying RL is that the problem of interest, here personalization, can be modeled as a Markov decision process
(MDP). An MDP is a mathematical framework for modeling decision making when the decision outcome is partly random and partly controlled by
a decision maker (Sutton and Barto, 1998). Formally, an MDP is a tuple
M = (S, A, T , R, γ), where:
• S is the finite set of states that describe the agent’s environment, nS
is the number of states, and st ∈ S is the state at time t.
• A is the finite set of actions, which allow the agent to interact with
the environment, nA is the number of actions, and at ∈ A denotes the
action performed at time t.
• T : S × A × S → [0; 1] is the stochastic transition function, where
T (st , at , st+1 ) describes the probability of arriving in state st+1 after
the agent performed action at in state st .
• R : S × A × S → R is the scalar reward function, where rt+1 =
R(st , at , st+1 ) is the immediate reward the agent receives at time t + 1
after performing action at in state st resulting in state st+1 .
• γ ∈ [0; 1] is the discount factor that controls the importance of future
versus immediate rewards.
2.1.2. Value Iteration
The value of a state, V ∗ (s), is the expected discounted reward the agent
accumulates when it starts in state s and acts optimally in each step:
V^*(s) = E\left\{ \sum_{k=0}^{\infty} \gamma^k \, r_{t+k+1} \mid s_t = s \right\},   (1)
where E{} denotes the expected value given the agent always selects the
optimal action, and t is any time step. Note that the discount factor γ is
a constant and the superscript k its exponent. V ∗ can be computed using
value iteration (Sutton and Barto, 1998), an iterative algorithm based on
dynamic programming. In the first iteration i = 0, let Vi : S → R denote
an initial guess for the value function that maps states to arbitrary values.
Further, let Q_i : S × A → R denote the i-th "state-action value function" guess, which is computed as:

Q_i(s, a) = \sum_{s' \in S} T(s, a, s') \left[ R(s, a, s') + \gamma V_i(s') \right].   (2)
Value iteration iteratively updates V_{i+1} from the previous Q_i:

\forall s \in S: \quad V_{i+1}(s) = \max_{a \in A} Q_i(s, a),   (3)
until the left- and right-hand side of Eq. 3 are equal for all s ∈ S; then
V ∗ ← Vi+1 and Q∗ ← Qi+1 . From this equality relation, also known as
the Bellman equation (Bellman, 1957), one can obtain an optimal problem-solving strategy for the problem described by the MDP (assuming that all
components of the MDP are known precisely). It is encoded in terms of a
deterministic optimal policy π ∗ : S → A:
\pi^*(s) = \arg\max_{a \in A} Q^*(s, a),   (4)
i.e. a mapping that tells the agent in each state the optimal action to take.
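For concreteness, the value-iteration update (Eqs. 2–3) and the greedy policy extraction (Eq. 4) can be sketched as follows. This is a minimal NumPy sketch, not the original implementation; the array layout, the convergence tolerance, and the toy two-state MDP in the demo are illustrative assumptions.

```python
import numpy as np

def value_iteration(T, R, gamma=0.99, tol=1e-8):
    """Tabular value iteration for an MDP with nS states and nA actions.

    T: (nS, nA, nS) array, T[s, a, s2] = probability of reaching s2 from s via a.
    R: (nS, nA, nS) array, R[s, a, s2] = immediate reward of that transition.
    Returns the optimal value function V* and state-action values Q*.
    """
    nS, nA, _ = T.shape
    V = np.zeros(nS)                                   # initial guess V_0
    while True:
        # Eq. 2: Q_i(s, a) = sum_s2 T(s, a, s2) * (R(s, a, s2) + gamma * V_i(s2))
        Q = np.einsum('ijk,ijk->ij', T, R + gamma * V[None, None, :])
        V_new = Q.max(axis=1)                          # Eq. 3
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q
        V = V_new

def greedy_policy(Q):
    """Eq. 4: deterministic policy pi*(s) = argmax_a Q*(s, a)."""
    return Q.argmax(axis=1)

if __name__ == '__main__':
    # Toy 2-state, 2-action MDP (illustrative values only): state 1 is "success".
    T = np.array([[[0.8, 0.2], [0.1, 0.9]],
                  [[0.5, 0.5], [0.0, 1.0]]])
    R = np.full((2, 2, 2), -1.0)
    R[:, :, 1] = 0.0
    V, Q = value_iteration(T, R)
    print(V, greedy_policy(Q))
```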
2.1.3. Stochastic Policy
In this work not all components of the MDP are known precisely; instead, some are approximated from training data. Value iteration, however,
assumes an exact MDP to guarantee optimality of the computed policy.
Therefore, instead of relying on the deterministic policy π ∗ (Eq. 4), a generalization to stochastic policies π̃ ∗ is proposed here to mitigate potential
issues due to approximations. Contrary to Eq. 4, where for each state only
the one action with maximum Q∗ -value is considered, a stochastic policy
stores several candidate actions with similarly high Q^*-value and returns one
of them through a random process each time it is queried. To this end, the Q^*(s, ·)-values for a given state s are first normalized:

\tilde{Q}^*_s(a) = \frac{Q^*(s, a) - \min_{a' \in A} [Q^*(s, a')]}{\max_{a' \in A} [Q^*(s, a')] - \min_{a' \in A} [Q^*(s, a')]} .   (5)

All actions whose normalized \tilde{Q}^*_s-value is below a fixed threshold (set empirically and used throughout the entire manuscript) are discarded, while actions with large values are stored as potential candidates. Each time the stochastic policy is queried, a = \tilde{\pi}^*(s), it returns one of the candidate actions a, selected randomly with probability proportional to its \tilde{Q}^*_s-value: \tilde{Q}^*_s(a) / \sum_{a'} \tilde{Q}^*_s(a'); the sum is over all candidate actions a'.

Figure 2: A computational model f is a dynamic system that maps model input parameters x to model state (output) variables y. The goal of personalization is to tune x such that the objectives c, defined as the misfit between y and the corresponding measured data z of a given patient, are optimized (the misfit is minimized).
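A possible implementation of this stochastic action selection is sketched below, assuming Q* is stored as an (nS, nA) array as in the value-iteration sketch above; the concrete cut-off value `eps` is an assumption here, since the paper only states that the threshold was set empirically.

```python
import numpy as np

def stochastic_policy(Q, s, eps=0.8, rng=None):
    """Sample an action for state s from the stochastic policy (Eq. 5).

    Q: (nS, nA) array of optimal state-action values.
    eps: cut-off on the normalized Q-values below which actions are discarded
         (illustrative value; the paper sets this threshold empirically).
    """
    rng = rng or np.random.default_rng()
    q = Q[s]
    span = q.max() - q.min()
    if span == 0:                          # degenerate state: all actions equal
        return int(rng.integers(len(q)))
    q_norm = (q - q.min()) / span          # Eq. 5: normalize to [0, 1]
    candidates = np.flatnonzero(q_norm >= eps)
    probs = q_norm[candidates] / q_norm[candidates].sum()
    return int(rng.choice(candidates, p=probs))
```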
2.2. Reformulation of the Model Personalization Problem into an MDP
2.2.1. Problem Definition
As illustrated in Fig. 2, any computational model f is governed by a set of
parameters x = (x1 , . . . , xnx )> , where nx denotes the number of parameters.
x is bounded within a physiologically plausible domain Ω, and characterized
by a number of ny (observable) state variables y = (y1 , . . . , yny )> . The
state variables can be used to estimate x. Note that some parameters may
be pre-estimated or assigned fixed values. The goal of personalization is to
optimize a set of nc objectives c = (c1 , . . . , cnc )> . The objectives are scalars
defined as ci = d(yi , zi ), where d is a measure of misfit, and zi denotes
the patient’s measured data (z) corresponding to yi . In this work d(yi , zi ) =
yi −zi . Personalization is considered successful if all user-defined convergence
criteria ψ = (ψ1 , . . . , ψnc )> are met. The criteria are defined in terms of
maximum acceptable misfit per objective: ∀i ∈ {1, . . . , nc } : |ci | < ψi .
2.2.2. Problem Reformulation
Personalization is mapped to a Markov decision process as follows:
States: An MDP state encodes the misfit between the computed model
state (outcome of forward model run) and the patient’s measurements.
Thus, MDP states carry the same type of information as objective vectors c,
yet the number of MDP states has to be finite (Sec. 2.1), while there are an
infinite number of different objective vectors due to their continuous nature.
Therefore the space of objective vectors in Rnc is reduced to a finite set of
representative states: the MDP states S, each s ∈ S covering a small region
in that space. One of those states, ŝ ∈ S, encodes personalization success
as it is designed such that it covers exactly the region where all convergence
criteria are satisfied. The goal of Vito is to learn how to reach that state.
Actions: Vito’s actions modify the parameters x to fulfill the objectives c.
An action a ∈ A consists in either in- or decrementing one parameter xi by
1×, 10× or 100× a user-specified reference value δi with δ = (δ1 , . . . , δnx )> .
This empirically defined quantization of the intrinsically continuous action
space yielded good results for the problems considered in this work.
Transition function: T encodes the agent's knowledge about the computational model f and is learned automatically as described in Sec. 2.5.
Rewards: Inspired by the “mountain car” benchmark (Sutton and Barto,
1998), the rewards are defined as always being equal to R(s, a, s') = −1 (punishment), except when the agent performs an action resulting in personalization success, i.e. when s' = ŝ. In that case, R(·, ·, ŝ) = 0 (no punishment).
Discount factor: The large discount factor γ = 0.99 encourages policies
that favor future over immediate rewards, as Vito should always prefer the
long-term goal of successful personalization to short-term appealing actions
in order to reduce the risk of local minima.
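The action set and reward described above could be realized, for instance, as follows; representing an action as a (parameter index, signed step) pair and the helper names are our own illustrative choices.

```python
import numpy as np

def build_action_set(delta, multipliers=(1, 10, 100)):
    """Enumerate the actions: in-/decrement one parameter x_i by m * delta_i."""
    actions = []
    for i in range(len(delta)):
        for m in multipliers:
            actions.append((i, +m * delta[i]))
            actions.append((i, -m * delta[i]))
    return actions

def apply_action(x, action):
    """Apply an action to a parameter vector, returning the modified copy."""
    i, step = action
    x_new = np.array(x, dtype=float)
    x_new[i] += step
    return x_new

def reward(next_state, success_state):
    """R(., ., s') = 0 if s' is the success state, -1 (punishment) otherwise."""
    return 0.0 if next_state == success_state else -1.0
```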
2.3. Learning Model Behavior through Model Exploration
Like a human operator, Vito first learns how the model “behaves” by
experimenting with it. This is done through a “self-guided sensitivity analysis”. A batch of sample transitions is collected through model exploration
episodes E p = {ep1 , ep2 , . . . }. An episode epi is a sequence of ne-steps consecutive transitions generated from the model f and the patient p for whom
the target measurements zp are known. An episode is initiated at time
t = 0 by generating random initial model parameters xt within the physiologically plausible domain Ω. From the outputs of a forward model run
yt = f (xt ), the misfits to the patient’s corresponding measurements are
computed, yielding the objectives vector ct = d(yt , zp ). Next, a random exploration policy πrand that selects an action according to a discrete uniform
probability distribution over the set of actions is employed. The obtained
at ∈ A is then applied to the current parameter vector, yielding modified
parameter values xt+1 = at (xt ). From the output of the forward model run
yt+1 = f (xt+1 ) the next objectives ct+1 are computed. The next action at+1
is then selected according to πrand , and this process is repeated ne-steps − 1
times. Hence, each episode can be seen as a set of consecutive tuples:
e = {(x_t, y_t, c_t, a_t, x_{t+1}, y_{t+1}, c_{t+1}),  t = 0, . . . , n_{e-steps} − 1}.   (6)
In this work, ne-steps = 100 transitions are created in each episode as a tradeoff between sufficient length of an episode to cover a real personalization
scenario and sufficient exploration of the parameter space.
The model is explored with many different training patients and the resulting episodes are combined into one large training episode set E = \bigcup_p E^p.
The underlying hypothesis (verified in experiments) is that the combined
E allows to cancel out peculiarities of individual patients, i.e. to abstract
from patient-specific to model-specific knowledge.
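The exploration phase can be sketched as below. Here `forward_model`, `measure_misfit`, and `sample_x` are placeholders for the computational model f, the misfit d(·, z^p) against one training patient, and random sampling within Ω; they are assumptions of this sketch, not functions defined in the paper.

```python
import numpy as np

def explore_episode(forward_model, measure_misfit, sample_x, actions,
                    apply_action, n_steps=100, rng=None):
    """Generate one exploration episode (Eq. 6) using the random policy pi_rand."""
    rng = rng or np.random.default_rng()
    episode = []
    x = sample_x()                                   # random start within Omega
    y = forward_model(x)
    c = measure_misfit(y)                            # objectives c_t = d(y_t, z^p)
    for _ in range(n_steps):
        a = actions[rng.integers(len(actions))]      # uniformly random action
        x_next = apply_action(x, a)
        y_next = forward_model(x_next)
        c_next = measure_misfit(y_next)
        episode.append((x, y, c, a, x_next, y_next, c_next))
        x, y, c = x_next, y_next, c_next
    return episode
```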
2.4. From Computed Objectives to Representative MDP State
As mentioned above, the continuous space of objective vectors is quantized into a finite set of representative MDP states S. A data-driven approach is proposed. First, all objective vectors observed during training are
clustered according to their distance to each other. Because the ranges of
possible values for the individual objectives can vary significantly depending on the selected measurements (due to different types of measurements,
different units, etc.), the objectives should be normalized during clustering to avoid bias towards objectives with relatively large typical values. In
this work the distance measure performs implicit normalization to account
for these differences: the distance between two objective vectors (c1 , c2 ) is
defined relative to the inverse of the convergence criteria ψ:
\|c_1 − c_2\|_\psi = \sqrt{(c_1 − c_2)^\top \, \mathrm{diag}(\psi)^{−1} \, (c_1 − c_2)},   (7)
where diag(ψ)^{−1} denotes a diagonal matrix with (1/ψ_1, 1/ψ_2, . . .) along its diagonal. The centroid of a cluster is the centroid of a representative state.
In addition, a special “success state” ŝ representing personalization success
is created, which covers the region in state-space where all objectives are
met: ∀i : |ci | < ψi . The full algorithm is described in Appendix A. Finally, an operator φ : Rnc → S that maps continuous objective vectors c to
representative MDP states is introduced:
\phi(c) = \arg\min_{s \in S} \|c − \xi_s\|_\psi,   (8)
where ξs denotes the centroid corresponding to state s. For an example
state-space quantization see Fig. 3.
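The distance of Eq. 7 and the mapping φ of Eq. 8 can be written down directly. The clustering itself is specified in Appendix A of the paper, which is not reproduced here; the plain k-means on ψ-scaled objective vectors below is only a stand-in for it, and the explicit handling of the success state ŝ is omitted for brevity.

```python
import numpy as np

def psi_distance(c1, c2, psi):
    """Eq. 7: ||c1 - c2||_psi = sqrt((c1 - c2)^T diag(psi)^-1 (c1 - c2))."""
    return float(np.linalg.norm((c1 - c2) / np.sqrt(psi)))

def quantize_states(C, psi, n_states, n_iter=50, rng=None):
    """Stand-in for the Appendix-A clustering: k-means on psi-scaled objectives.

    C: (n_samples, n_c) array of observed objective vectors.
    Returns representative-state centroids in the original objective space.
    """
    rng = rng or np.random.default_rng(0)
    X = C / np.sqrt(psi)
    centroids = X[rng.choice(len(X), n_states, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)                  # nearest centroid per sample
        for k in range(n_states):
            if np.any(labels == k):                 # keep old centroid if cluster empty
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids * np.sqrt(psi)

def phi(c, centroids, psi):
    """Eq. 8: map an objective vector to the index of its representative state."""
    return int(np.argmin([psi_distance(c, xi, psi) for xi in centroids]))
```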
2.5. Transition Function as Probabilistic Model Representation
In this work, the stochastic MDP transition function T encodes the
agent’s knowledge about the computational model f . It is learned from the
training data E. First, the individual samples (xt , yt , ct , at , xt+1 , yt+1 , ct+1 )
are converted to state-action-state transition tuples Ê = {(s, a, s')}, where s = φ(c_t), a = a_t and s' = φ(c_{t+1}). Then, T is approximated from statistics
over the observed transition samples:
T(s, a, s') = \frac{\left|\{(s, a, s') \in \hat{E}\}\right|}{\sum_{s'' \in S} \left|\{(s, a, s'') \in \hat{E}\}\right|},   (9)
where |{·}| denotes the cardinality of the set {·}. If n_S and n_A are large compared to the total number of samples, it may occur that some state-action combinations are not observed: |{(s, a, ·) ∈ Ê}| = 0. In that case uniformity is assumed: ∀s'' ∈ S : T(s, a, s'') = 1/n_S.

Figure 3: State-space quantization. Left: Example data-driven quantization of a two-dimensional state-space into nS = 120 representative states. The states are distributed according to the observed objective vectors c in one of the experiments in Sec. 3.2. The objectives were QRS duration [ms] (c1) and electrical axis [deg] (c2). The center rectangle (green region) denotes the success state ŝ where all objectives are met (∀i : |ci| < ψi); see text for details. Right: Manual quantization as used in Neumann et al. (2015).
M is now fully defined. Value iteration (Sec. 2.1) is applied and the
stochastic policy π̃∗ is computed, which completes the off-line phase.
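The counting estimate of Eq. 9 amounts to a few lines of code; the sketch below assumes state and action indices as produced by φ and the action set above, and the uniform fall-back for unobserved state-action pairs follows the text.

```python
import numpy as np

def estimate_transition_function(transitions, n_states, n_actions):
    """Estimate T(s, a, s') from observed (s, a, s') index tuples (Eq. 9)."""
    counts = np.zeros((n_states, n_actions, n_states))
    for s, a, s_next in transitions:
        counts[s, a, s_next] += 1.0
    totals = counts.sum(axis=2, keepdims=True)
    # Observed pairs: relative frequencies; unobserved pairs: uniform 1/nS.
    T = np.where(totals > 0, counts / np.maximum(totals, 1.0), 1.0 / n_states)
    return T
```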
2.6. On-line Model Personalization
On-line personalization, as illustrated in Fig. 4, can be seen as a two-step
procedure. First, Vito initializes the personalization of unseen patients from
training data. Second, Vito relies on the computed policy π̃∗ to guide the
personalization process.
2.6.1. Data-driven Initialization
Good initialization can be decisive for a successful personalization. Vito’s
strategy is to search for forward model runs in the training database E for
which the model state f (x) = y ≈ zp is similar to the patient’s measurements. To this end, Vito examines all parameters Ξ = {x ∈ E | f (x) ≈ zp }
that yielded model states similar to the patient’s measurements. Due to
ambiguities induced by the different training patients, data noise and model
assumptions, Ξ could contain significantly dissimilar parameters. Hence,
picking a single x ∈ Ξ might not yield the best initialization. Analyzing Ξ
probabilistically instead helps Vito to find likely initialization candidates.
The details of the initialization procedure are described in Appendix B.
Figure 4: Vito's probabilistic on-line personalization phase: starting from an unseen patient, the parameters are initialized, and Vito then iteratively runs the forward model, observes the state, selects an action and updates the parameters, while checking convergence and detecting oscillations, until the personalized parameters are obtained. See text for details.
Given the patient's measurements z^p, the procedure outputs a list of initialization candidates X_0 = (x'_0, x''_0, . . .). The list is sorted by likelihood, with the first element, x'_0, being the most likely one.
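Since the probabilistic procedure itself is given in Appendix B (not shown here), the sketch below only illustrates the basic idea as described in this section: rank stored forward-model runs by how closely their outputs match the patient's measurements and return the associated parameter vectors as initialization candidates. The scoring and the number of candidates are assumptions of this sketch.

```python
import numpy as np

def initialization_candidates(training_runs, z_patient, psi, n_candidates=5):
    """Return parameter vectors whose stored model outputs best match z_patient.

    training_runs: iterable of (x, y) pairs collected during exploration.
    This is a simplified stand-in for the procedure in Appendix B.
    """
    scored = []
    for x, y in training_runs:
        score = np.linalg.norm((np.asarray(y) - z_patient) / np.sqrt(psi))
        scored.append((score, np.asarray(x, dtype=float)))
    scored.sort(key=lambda item: item[0])      # most similar model output first
    return [x for _, x in scored[:n_candidates]]
```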
2.6.2. Probabilistic Personalization
The first personalization step initializes the model parameter vector x0
with the most likely among all initialization candidates, x0 ∈ X0 (see previous section for details). Then, as illustrated in Fig. 4, Vito computes the
forward model y0 = f (x0 ) and the misfit between the model output and the
patient’s measurements c0 = d(y0 , zp ) to derive the first state s0 = φ(c0 ).
Given s0 , Vito decides from its policy the first action to take a0 = π̃∗ (s0 ),
and walks through state-action-state sequences to personalize the computational model f by iteratively updating the model parameters through MDP
actions. Bad initialization could lead to oscillations between states as observed in previous RL works (Kveton and Theocharous, 2012; Neumann
et al., 2015). Therefore, upon detection of an oscillation, which is done by
monitoring the parameter traces to detect recurring sets of parameter values,
the personalization is re-initialized at the second-most-likely x0 ∈ X0 , etc. If
all |X0 | initialization candidates have been tested, a potential re-initialization
defaults to fully random within the physiologically plausible parameter domain Ω. The process terminates once Vito reaches state ŝ (success), or when
a pre-defined maximum number of iterations is reached (failure).
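Putting the pieces together, the on-line loop of Fig. 4 can be sketched as follows; oscillation detection is simplified here to re-initializing whenever a parameter vector is revisited, and the per-candidate iteration budget is an assumption of this sketch.

```python
import numpy as np

def personalize(forward_model, measure_misfit, phi, policy, apply_action,
                candidates, success_state, max_iters=100):
    """Simplified on-line personalization loop (cf. Fig. 4)."""
    x = None
    for x0 in candidates:                        # most likely candidate first
        x = np.asarray(x0, dtype=float)
        seen = set()
        for _ in range(max_iters):
            c = measure_misfit(forward_model(x))
            s = phi(c)
            if s == success_state:
                return x, True                   # personalization success
            key = tuple(np.round(x, 6))
            if key in seen:                      # oscillation: try next candidate
                break
            seen.add(key)
            x = apply_action(x, policy(s))
    return x, False                              # failure
```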
3. Experiments
Vito was applied to a synthetic parameter estimation problem and to two
challenging problems involving real clinical data: personalization of a cardiac electrophysiology (EP), and a whole-body-circulation (WBC) model.
All experiments were conducted using leave-one-out cross-validation. The
numbers of datasets and transition samples used for the different experiments are denoted ndatasets and nsamples , respectively.
3.1. Synthetic Experiment: the Rosenbrock Function
First, Vito was employed in a synthetic scenario, where the ground-truth
model parameters were known. The goals were to test the ability of Vito to
optimize cost functions generically, and to directly evaluate the performance in the parameter space.

Figure 5: Synthetic experiment. Left: Contour plot of the Rosenbrock function f^{α=1} with global minimum at x = (1, 1)^⊤ (red dot). The color scale is logarithmic for visualization purposes: the darker, the lower the function value. Mid: Maximum L2-error in parameter space after personalization over all functions for varying initial parameter values; yellow represents errors ≥ 5 (maximum observed error ≈ 110). See text for details. Right: Same as the mid panel, except the extended action set was used. The red dots are the 100 ground-truth parameters x = (α, α^2)^⊤ generated for random α.
3.1.1. Forward Model Description
The Rosenbrock function (Rosenbrock, 1960), see Fig. 5, left panel, is
a non-convex function that is often used to benchmark optimization algorithms. It was treated as the forward model in this experiment:
f^α(x_1, x_2) = (α − x_1)^2 + 100 · (x_2 − x_1^2)^2,   (10)
where x = (x1 , x2 )> were the model parameters to estimate for any α, and
f α : Ω → R. As described in Sec. 2.2.2, each of Vito’s actions a ∈ A in- or
decrements a parameter value by multiples (1×, 10×, 100×) of parameterspecific reference values. The reference values were set to δ = (0.01, 0.01)> ,
determined as 0.1% of the defined admissible parameter space per dimension,
Ω = [−5; 5]2 . The parameter α ∈ R defines a family of functions {f α }. The
goal was to find generically arg minx1 ,x2 f α (x1 , x2 ).
The Rosenbrock function has a unique global minimum at x = (α, α^2)^⊤, where both terms T_1 = (α − x_1) and T_2 = (x_2 − x_1^2) evaluate to 0. The personalization objectives were therefore defined as c = (|T_1 − 0|, |T_2 − 0|)^⊤, with the measured data z = (0, 0)^⊤ being zero for both objectives and the computed data y = (T_1, T_2)^⊤. The convergence criteria were set empirically to ψ = (0.05, 0.05)^⊤.
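The synthetic forward model and its objectives translate directly into code; the small self-check at the bottom is only an illustration.

```python
import numpy as np

def rosenbrock(x, alpha=1.0):
    """Eq. 10: f^alpha(x1, x2) = (alpha - x1)^2 + 100 * (x2 - x1^2)^2."""
    x1, x2 = x
    return (alpha - x1) ** 2 + 100.0 * (x2 - x1 ** 2) ** 2

def objectives(x, alpha=1.0):
    """c = (|T1 - 0|, |T2 - 0|): both terms vanish at the minimum (alpha, alpha^2)."""
    x1, x2 = x
    return np.array([abs(alpha - x1), abs(x2 - x1 ** 2)])

def converged(c, psi=(0.05, 0.05)):
    """Success criterion: every objective below its tolerance psi_i."""
    return bool(np.all(c < np.asarray(psi)))

if __name__ == '__main__':
    c = objectives((1.0, 1.0))
    print(rosenbrock((1.0, 1.0)), c, converged(c))
```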
3.1.2. Evaluation
Vito was evaluated on ndatasets = 100 functions f α with randomly generated α ∈ [−2, 2]. In the off-line phase, for each function, nsamples =
10 · ne-steps = 1000 samples, i.e. ten training episodes, each consisting in
ne-steps = 100 transitions (Sec. 2.3), were generated to learn the policy. The
number of representative states was set to nS = 100. To focus on Vito’s
on-line personalization capabilities, both the data-driven initialization and
the re-initialization on oscillation (Sec. 2.6) were disabled. In total, 441 experiments with different initializations (sampled on a 21 × 21 uniform grid
spanned in Ω) were conducted. For each experiment all 100 functions were
personalized using leave-one-family-function-out cross validation, and the
error value from the function exhibiting the maximum L2 -error (worst-case
scenario) between ground-truth (α, α2 ) and estimated parameters was plotted. As one can see from the large blue region in Fig. 5, mid panel, for the
majority of initial parameter values Vito always converged to the solution
(maximum L2 -error < 0.25; the maximum achievable accuracy depended on
the specified convergence criteria ψ and on the reference values δ, which
“discretized” the parameter space). However, especially for initializations
far from the ground-truth (near border regions of Ω), Vito was unable to personalize some functions properly, which was likely due to the high similarity
of the Rosenbrock function shape in these regions.
To investigate this issue, the experiment was repeated after additional
larger parameter steps were added to the set of available actions: A0 =
A ∪ {±500δ1 ; ±500δ2 }. As shown in Fig. 5, right panel, Vito could now
personalize successfully starting from any point in Ω. The single spot with
larger maximum error (bright spot at approximately x = (−1, 2)> ) can be
explained by Vito’s stochastic behavior: Vito may have become unlucky if
it selected many unfavorable actions in sequence due to the randomness
introduced by the stochastic policy. Enabling re-initialization on oscillation
solved this issue entirely. In conclusion, this experiment showed that Vito
can learn how to minimize a cost function generically.
3.2. Personalization of Cardiac Electrophysiology Model
Vito was then tested in a scenario involving a complex model of cardiac
electrophysiology coupled with 12-lead ECG. Personalization was performed
for real patients from actual clinical data. A total of ndatasets = 83 patients were available for experimentation. For each patient, the end-diastolic
bi-ventricular anatomy was segmented from short-axis cine magnetic resonance imaging (MRI) stacks as described in Zheng et al. (2008). A tetrahedral anatomical model including myofibers was estimated and a torso atlas
affinely registered to the patient based on MRI scout images. See Zettinig
et al. (2014) for more details.
3.2.1. Forward Model Description
The depolarization time at each node of the tetrahedral anatomical
model was computed using a shortest-path graph-based algorithm, similar
to the one proposed in Wallman et al. (2012). Tissue anisotropy was modeled by modifying the edge costs to take into account fiber orientation. A
time-varying voltage map was then derived according to the depolarization
time: at a given time t, mesh nodes whose depolarization time was higher
than t were assigned a trans-membrane potential of −70 mV, 30 mV otherwise. The time-varying potentials were then propagated to a torso model
where 12-lead ECG acquisition was simulated, and QRS duration (QRSd)
and electrical axis (EA) were derived (Zettinig et al., 2014). The model
was controlled by the conduction velocities (in m/s) of myocardial tissue
and left and right Purkinje network: x = (vMyo , vLV , vRV )> . The latter two
domains were modeled as fast endocardial conducting tissue. The admissible parameter space Ω was set to [200; 1000] for vMyo and [500; 5000] for
both vLV and vRV . Reference increment values to build the action set A
were set to δ = (5, 5, 5)> m/s for the three model parameters. The goal of
EP personalization was to estimate x from the measured QRSd and EA.
Accounting for uncertainty in the measurements and errors in the model,
a patient was considered personalized if QRSd and EA misfits were below
ψ = (5 ms, 10◦ )> , respectively.
3.2.2. Number of Representative States
In contrast to Neumann et al. (2015), where state-space quantization
required manual tuning of various threshold values, the proposed approach
relies on a single hyper-parameter only: nS , the number of representative
states (Sec. 2.4). To specify nS , eight patients were selected for scouting.
Exhaustive search was performed for nS ∈ {10, 20, . . . , 490, 500} representative states. The goodness of a given configuration was evaluated based
on the success rate (relative number of successfully personalized cases according to convergence criteria ψ) over five independent, consecutive, leave-one-patient-out cross-validated personalization runs of the eight patients. Furthermore, the average number of required forward model runs was considered. To this end, 100 training episodes (100 · n_e-steps = 10^4 transition samples) per patient were generated for each personalization run as described in
Sec. 2.3. As one can see from Fig. 6, good performance was achieved from 50
to 300 representative states. The large range of well performing nS indicates
a certain level of robustness with respect to that hyper-parameter. A slight
performance peak at 120 representative states was observed. Therefore,
nS = 120 was selected for further experimentation as compromise between
maintaining a low number of states and sufficient state granularity. An example quantization with nS = 120 is visualized in Fig. 3. The eight scouting
datasets were discarded for the following experiments to avoid bias in the
analysis.
Figure 6: Hyper-parameter scouting. Vito's performance for varying number of representative states nS on eight scouting datasets. The solid and dashed curves represent success rate and average number of forward runs until convergence, respectively, aggregated over five personalization runs with varying training data.
3.2.3. Reference Methods
Vito’s results were compared to two standard personalization methods
based on BOBYQA (Powell, 2009), a widely-used gradient-free optimizer
known for its robust performance and fast convergence. The first approach,
“BOBYQA simple”, mimicked the most basic estimation setup, where only
the minimum level of model and problem knowledge was assumed. The objective function was the sum of absolute QRSd and EA errors: \sum_{i=1}^{n_c} |c_i|.
It was minimized in a single optimizer run where all three parameters in
x were tuned simultaneously. The algorithm terminated once all convergence criteria ψ were satisfied (success) or if the number of forward model
evaluations exceeded 100 (failure). The second approach, “BOBYQA cascade”, implemented an advanced estimator with strong focus on robustness,
which computed the optimum parameters in a multi-step iterative fashion.
It is based on Seegerer et al. (2015), where tedious manual algorithm and
cost function tuning was performed on a subset of the data used in this
manuscript. In a first step, the myocardial conduction velocity was tuned to
yield good match between computed and measured QRS duration. Second,
left and right endocardial conduction velocities were optimized to minimize
electrical axis error. Both steps were repeated until all parameter estimates
were stable.
To remove bias towards the choice of initial parameter values, for each
of the two methods all datasets were personalized 100 times with different
random initializations within the range of physiologically plausible values
Ω. The differences in performance were striking: only by changing the
initialization, the number of successfully personalized cases varied from 13 to
37 for BOBYQA simple, and from 31 to 51 for BOBYQA cascade (variability
of more than 25% of the total number of patients). These results highlight
the non-convexity of the cost function to minimize.
Figure 7: Absolute errors over all patients after initialization with fixed parameter values (blue), after data-driven initialization for increasing amount of training data (white), and after full personalization with Vito (green). Data-driven initialization yielded significantly reduced errors if sufficient training data were available (> 10^2) compared to initialization with fixed values. Full personalization further reduced the errors by a significant margin. The red bar and the box edges indicate the median absolute error, and the 25 and 75 percentiles, respectively. Left: QRS duration errors. Right: Electrical axis errors.
3.2.4. Full Personalization Performance
First, Vito’s overall performance was evaluated. The full personalization
pipeline consisting in off-line learning, initialization, and on-line personalization was run on all patients with leave-one-patient-out cross-validation
using 1000 training episodes (n_samples = 1000 · n_e-steps = 10^5 transition samples) per patient. The maximum number of iterations was set to 100. The
green box plots in the two panels of Fig. 7 summarize the results. The
mean absolute errors were 4.1 ± 5.6 ms and 12.4 ± 13.3◦ in terms of QRSd
and EA, respectively, a significant improvement over the residual error after
initialization. In comparison to the reference methods, the best BOBYQA
simple run yielded absolute errors of 4.4 ± 10.8 ms QRSd and 15.5 ± 18.6◦
EA on average, and the best BOBYQA cascade run 0.1 ± 0.2 ms QRSd and
11.2 ± 15.8◦ EA, respectively. Thus, in terms of EA error all three methods
yielded comparable performance, and while BOBYQA simple and Vito performed similarly in terms of QRSd, BOBYQA cascade outperformed both
in this regard. However, considering success rates, i.e. successfully personalized patients according to the defined convergence criteria (ψ) divided by
total number of patients, both the performance of Vito (67%) and BOBYQA
cascade (68%) were equivalent, while BOBYQA simple reached only 49% or
less. In terms of run-time, i.e. average number of forward model runs until convergence, Vito (31.8) almost reached the high efficiency of BOBYQA
simple (best: 20.1 iterations) and clearly outperformed BOBYQA cascade
(best: 86.6 iterations), which means Vito was ≈ 2.5× faster.
3.2.5. Residual Error after Initialization
A major advantage over standard methods such as the two BOBYQA approaches is Vito’s automated, data-driven initialization method (Sec. 2.6.1),
which eliminates the need for user-provided initial parameter values. To
evaluate the utility of this step, personalization using Vito was stopped directly after initialization (the most likely x0 was used) and the errors in terms
of QRSd and EA resulting from a forward model run f with the computed
initial parameter values were quantified. This experiment was repeated for
increasing number of transition samples per dataset: n_samples = 10^0 . . . 10^5,
and the results were compared to the error after initialization when fixed
initial values were used (the initialization of the best performing BOBYQA
experiment was used). As one can see from Fig. 7, with increasing amount
of training data both errors decreased notably. As few as 10^2 transitions per
dataset already provided more accurate initialization than the best tested
fixed initial values. Thus, not only does this procedure simplify the setup
of Vito for new problems (no user-defined initialization needed), this experiment showed that it can reduce initial errors by a large margin, even
when only few training transitions were available. It should be noted that
Vito further improves the model fit in its normal operating mode (continue
personalization after initialization), as shown in the previous experiment.
3.2.6. Convergence Analysis
An important question in any RL application relates to the amount of
training needed until convergence of the artificial agent’s behavior. For Vito
in particular, this translates to the amount of transition samples required
to accurately estimate the MDP transition function T to compute a solid
policy on the one hand, and to have enough training data for reliable parameter initialization on the other hand. To this end, Vito’s overall performance
(off-line learning, initialization, personalization) was evaluated for varying
number of training transition samples per dataset. As one can see from the
results in Fig. 8, with increasing amount of training data the performance
increased, suggesting that the learning process was working properly. Even
with relatively limited training data of only n_samples = 10^2 samples per patient, Vito outperformed the best version of BOBYQA simple (49% success
rate). Starting from nsamples ≈ 3000, a plateau at ≈66% success rate was
reached, which remained approximately constant until the maximum tested
number of samples. This was almost on par with the top BOBYQA cascade
performance (68% success rate). Also the run-time performance increased
with more training data. For instance, Vito’s average number of iterations
was 36.2 at 10^3 samples, 31.5 at 10^4 samples, or 31.8 at 10^5 samples.
These results suggested that not only Vito can achieve similar performance as an advanced, manually engineered method, but also the number
of required training samples was not excessive. In fact, a rather limited
and thus well manageable amount of data, which can be computed in a
reasonable time-frame, sufficed.

Figure 8: EP personalization results. Personalization success rate in blue and average number of iterations in red. Left: Vito's performance for increasing number of training transition samples per dataset. Each dot represents results from one experiment (cross-validated personalization of all 75 datasets), the solid/dashed line is the low-pass filtered mean, and shaded areas represent 0.5× and 1× standard deviation. Right: Performance of both reference methods. Each shade represents 10% of the results, sorted by performance.
3.3. Personalization of Whole-Body Circulation Model
Next, Vito was asked to personalize a lumped-parameter whole-body
circulation (WBC) model from pressure catheterization and volume data.
A subset of ndatasets = 56 patients from the EP experiments were used for
experimentation. The discrepancy was due to missing catheterization data
for some patients, which was required for WBC personalization only. For
each patient, the bi-ventricular anatomy was segmented and tracked from
short-axis cine MRI stacks throughout one full heart cycle using shape constraints, learned motion models and diffeomorphic registration (Wang
et al., 2013). From the time-varying endocardial meshes, ventricular volume
curves were derived. Manual editing was performed whenever necessary.
3.3.1. Forward Model Description
The WBC model to personalize was based on Itu et al. (2014). It contained a heart model (left ventricle (LV) and atrium, right ventricle and
atrium, valves), the systemic circulation (arteries, capillaries, veins) and the
pulmonary circulation (arteries, capillaries, veins). Time-varying elastance
models were used for all four chambers of the heart. The valves were modeled
through a resistance and an inertance. A three-element Windkessel model
was used for the systemic and pulmonary arterial circulation, while a two-element Windkessel model was used for the systemic and pulmonary venous
circulation. We refer the reader to Itu et al. (2014); Neumann et al. (2015);
Westerhof et al. (1971) for more details. Personalization was performed with
respect to the patient’s heart rate as measured during catheterization.
The goal of this experiment was to compare Vito’s personalization performance for the systemic part of the model in setups with increasing number of parameters to tune and objectives to match. To this end, Vito was
employed on setups with two to six parameters (2p, 3p, 5p, 6p): initial blood volume, LV maximum elastance, time until maximum elastance is reached, total aortic resistance and compliance, and LV dead volume. The reference values δ to define Vito's allowed actions A were set to 0.5% of the admissible parameter range Ω for each individual parameter, see Table 1 for details. The personalization objectives were MRI-derived end-diastolic and end-systolic LV volume, ejection time (the time duration during which the aortic valve is open and blood is ejected), and peak-systolic, end-diastolic, and mean aortic blood pressures as measured during cardiac catheterization, see Fig. 9. To account for measurement noise, personalization was considered successful if the misfits per objective were below acceptable threshold values ψ as listed in Table 2.

Parameter x         Default value              Domain Ω                       Setups
Initial volume      400 mL                     [200; 1000] mL                 6, 5, 3, 2
LV max. elastance   2.4 mmHg/mL                [0.2; 5] mmHg/mL               6, 5, 3, 2
Aortic resistance   1100 g/(cm^4 s)            [500; 2500] g/(cm^4 s)         6, 5, 3
Aortic compliance   1.4 · 10^9 cm^4 s^2/g      [0.5; 6] · 10^9 cm^4 s^2/g     6, 5
Dead volume         10 mL                      [−50; 500] mL                  6, 5
Time to Emax        300 ms                     [100; 600] ms                  6

Table 1: WBC parameters x, their default values and domain Ω. The last column denotes the experiment setups in which a parameter was personalized (e.g. "5": parameter was among the estimated parameters in the 5p experiment). Default values were used in experiments where the respective parameters were not personalized.

Objective c                    ψ          Measured range     Setups
End-diastolic LV volume        20 mL      [129; 647] mL      6, 5, 3, 2
End-systolic LV volume         20 mL      [63; 529] mL       6, 5, 3, 2
Mean aortic pressure           10 mmHg    [68; 121] mmHg     6, 5, 3
Peak-systolic aortic pressure  10 mmHg    [83; 182] mmHg     6, 5
End-diastolic aortic pressure  10 mmHg    [48; 99] mmHg      6, 5
Ejection time                  50 ms      [115; 514] ms      6

Table 2: WBC objectives c, their convergence criteria ψ and range of measured values in the patient population used for experimentation.
3.3.2. Number of Representative States
Along the same lines as Sec. 3.2.2, the hyper-parameter for state-space
quantization was tuned based on the eight scouting patients. The larger the
dimensionality of the state-space, the more representative states were needed
to yield good performance. In particular, for the different WBC setups, the
numbers of representative states (nS ) yielding the best scouting performance
were 70, 150, 400 and 600 for the 2p, 3p, 5p and 6p setup, respectively. The
scouting datasets were discarded for the following experiments.
Figure 9: Goodness of fit in terms of time-varying LV volume and aortic pressure for Vito personalizing an example patient based on the different WBC setups (2p, 3p, 5p, 6p). The added objectives per setup are highlighted in the respective column. With increasing number of parameters and objectives, Vito manages to improve the fit between model and measurements.
3.3.3. Reference Method
A gradient-free optimizer (Lagarias et al., 1998) based on the simplex
method was used to benchmark Vito. The objective function was the sum
of squared differences between computed and measured values, weighted by
the inverse of the convergence criteria to counter the different ranges of
objective values (e.g. due to different types of measurements and different
units): ‖c‖_ψ (Eq. 7). Compared to non-normalized optimization, the algorithm converged up to 20% faster and success rates increased by up to
8% under otherwise identical conditions. Personalization was terminated
once all convergence criteria were satisfied (success), or when the maximum
number of iterations was reached (failure). To account for the increasing
complexity of optimization with increasing number of parameters nx , the
maximum number of iterations was set to 50 · nx for the different setups.
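A minimal sketch of such a reference optimizer is given below. It assumes SciPy’s Nelder–Mead implementation as a stand-in for the optimizer actually used; `run_forward_model`, `measured` and `psi` are placeholders for the forward WBC model and the patient’s data, and the normalization is one plausible reading of ‖c‖_ψ.

```python
# Hedged sketch of the reference personalization method (not the authors' code):
# Nelder-Mead minimization of the psi-normalized sum of squared misfits.
import numpy as np
from scipy.optimize import minimize

def normalized_misfit(x, run_forward_model, measured, psi):
    computed = run_forward_model(x)          # one forward model run for parameters x
    c = (computed - measured) / psi          # misfit per objective, scaled by 1/psi
    return float(np.sum(c ** 2))             # squared, normalized misfit

def personalize_reference(run_forward_model, measured, psi, x0):
    n_x = len(x0)
    result = minimize(
        normalized_misfit, x0,
        args=(run_forward_model, measured, psi),
        method="Nelder-Mead",
        options={"maxiter": 50 * n_x},       # iteration budget grows with #parameters
    )
    # success: all misfits below their convergence criteria
    success = bool(np.all(np.abs(run_forward_model(result.x) - measured) < psi))
    return result.x, success
```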
As one can see from Fig. 10, right panels, with increasing number of
parameters to be estimated, the performance in terms of success rate and
number of forward model runs decreased slightly. This is expected as the
problem becomes harder. To suppress bias originating from (potentially
poor) initialization, the reference method was run 100 times per setup (as
in EP experiments), each time with a different, randomly generated set of
initial parameter values. The individual performances varied significantly
for all setups.
[Figure 10 (plot): success rate (top) and average number of forward model runs until convergence (bottom) for the 2p, 3p, 5p and 6p setups; left panels show Vito as a function of the number of training samples per dataset (log scale), right panels show the reference method per number of parameters.]
Figure 10: WBC model personalization results (top: success rate, bottom: average number
of forward model runs until convergence) for various estimation setups (different colors),
see text for details. Left: Vito’s performance for increasing number of training transition
samples per dataset. Each dot represents results from one experiment (cross-validated
personalization of all 48 datasets), solid/dashed lines are low-pass filtered mean, shaded
areas represent 0.5× and 1× standard deviation. Right: Performance of reference method.
Each shade represents 10% of the results, sorted by performance; darkest shade: best 10%.
3.3.4. Convergence Analysis
For each WBC setup the full Vito personalization pipeline was evaluated
for increasing training data (nsamples = 10^0 . . . 10^5) using leave-one-patient-out cross-validation. The same iteration limits as for the reference method
were used. The results are presented in Fig. 10, left panels. With increasing
data, Vito’s performance, both in terms of success rate and run-time (iterations until convergence), increased steadily until reaching a plateau. As
one would expect, the more complex the problem, i.e. the more parameters
and objectives involved in the personalization, the more training data was
needed to reach the same level of performance. For instance, Vito reached
80% success rate with less than nsamples = 50 training samples per dataset in
the 2p setup, whereas almost 90× as many samples were required to achieve
the same performance in the 6p setup.
Compared to the reference method, given enough training data, Vito
reached equivalent or better success rates (e.g. up to 11% higher success
rate for 6p) while significantly outperforming the reference method in terms
of run-time performance. In the most basic setup (2p), if nsamples ≥ 10^3, Vito
converged after 3.0 iterations on average, while the best reference method
run required 22.6 iterations on average, i.e. Vito was seven times faster. For
the more complex setups (3p, 5p, 6p), the speed-up was not as drastic. Yet,
in all cases Vito outperformed even the best run of the reference method by
a factor of 1.8 or larger.
4. Conclusion
4.1. Summary and Discussion
In this manuscript, a novel personalization approach called Vito has been
presented. To our knowledge, it is the first time that biophysical model
personalization is addressed using artificial intelligence concepts. Inspired
by how humans approach the personalization problem, Vito first learns the
characteristics of the computational model under consideration using a data-driven approach. This knowledge is then utilized to learn how to personalize
the model using reinforcement learning. Vito is generic in the sense that it
requires only minimal and intuitive user input (parameter ranges, authorized
actions, number of representative states) to learn by itself how to personalize
a model.
Vito was applied to a synthetic scenario and to two challenging personalization tasks in cardiac computational modeling. The problem setups
and hyper-parameter configurations are listed in Table 3. In most setups
the majority of hyper-parameters were identical and only few (nS ) required
manual tuning, suggesting good generalization properties of Vito. Another
key result was that Vito was up to 11% more robust (higher success rates)
compared to standard personalization methods. Vito’s ability to generalize
Application       nx   nc   ndatasets   nS    nA/nx   nplateau
Rosenbrock        2    2    100         100   6       n/a
Rosenbrock ext.   2    2    100         100   8       n/a
EP                3    2    83 (75)     120   6       3 000
WBC 2p            2    2    56 (48)     70    6       450
WBC 3p            3    3    56 (48)     150   6       2 000
WBC 5p            5    5    56 (48)     400   6       3 500
WBC 6p            6    6    56 (48)     600   6       20 000

Table 3: Applications considered in this manuscript described in terms of the number of parameters (nx), objectives (nc) and datasets (ndatasets) used for experimentation (in brackets: excluding scouting patients, if applicable); and Vito’s hyper-parameters: the number of representative MDP states (nS) and the number of actions per parameter (nA/nx). The last column (nplateau) denotes the approximate number of samples needed to reach the performance “plateau” (see convergence analyses in Sec. 3.2.6 and Sec. 3.3.4).
the knowledge obtained from a set of training patients to personalize unseen patients was shown as all experiments reported in this manuscript were
based on cross-validation. Furthermore, Vito’s robustness against training
patients for whom we could not find a solution was tested. In particular, for
about 20% of the patients, in none of the electrophysiology experiments in
Sec. 3.2 any personalization (neither Vito nor the reference methods) could
produce a result that satisfied all convergence criteria. Hence, for some
patients no solution may exist under the given electrophysiology model configuration1. Still, all patients were used to train Vito, and surprisingly Vito
was able to achieve an equivalent success rate to that of the manually engineered personalization approach for cardiac EP.
Generating training data could be considered Vito’s computational bottleneck. However, training is i) performed off-line and one-time only, and
ii) it is independent for each training episode and each patient. Therefore,
large computing clusters could be employed to perform rapid training by
parallelizing this phase. On-line personalization, by contrast, is not parallelizable in its current form: the parameters for each forward model run
depend on the outcome of the previous iteration. Since the forward computations are the same for every “standard” personalization method (not
including surrogate-based approaches), the number of forward model runs
until convergence was used for benchmarking: Vito was up to seven times
faster compared to the reference methods. The on-line overhead introduced
by Vito (convert data into an MDP state, then query policy) is negligible.
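Although not the authors’ implementation, the on-line loop just described can be sketched as follows; `policy`, `nearest_state`, `apply_action` and `run_forward_model` are hypothetical placeholders for the learned policy, the state quantization of Appendix A, the action application, and the forward model, respectively.

```python
# Hedged sketch of Vito's on-line personalization loop: convert the current
# misfits into an MDP state, query the policy for an action, re-run the model.
import numpy as np

def personalize_online(x0, measured, psi, run_forward_model, nearest_state,
                       policy, apply_action, max_iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        c = run_forward_model(x) - measured   # current objective vector (misfits)
        if np.all(np.abs(c) < psi):           # success state reached
            return x, True
        s = nearest_state(c)                  # quantized MDP state
        a = policy(s)                         # action proposed by the learned policy
        x = apply_action(x, a)                # modify one parameter by +/- delta
    return x, False
```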
As such, Vito could become a unified framework for personalization of
any computational physiological model, potentially eliminating the need for
1 Potential solution non-existence may be due to possibly invalid assumptions of the employed EP model for patients with complex pathologies.
an expert operator with in-depth knowledge to design and engineer complex
optimization procedures.
4.2. Challenges and Outlook
Important challenges still remain, such as the incorporation of continuous
actions, the definition of states and their quantization. In this work we propose a data-driven state-space quantization strategy. Contrary to Neumann
et al. (2015), where a threshold-based state-quantization involving several
manually tuned threshold values (Fig. 3) was employed, the new method
is based on a single hyper-parameter only: the number of representative
states. Although it simplifies the setup of Vito, this quantization strategy
may still not be optimal, especially if only little training data is available.
Therefore, advanced approaches for continuous reinforcement learning with
value function approximation (Mnih et al., 2015; Sutton and Barto, 1998)
could be integrated to fully circumvent quantization issues.
At the same time, such methods could improve Vito’s scalability towards
high-dimensional estimation tasks. In this work we showed that Vito can be
applied to typical problems emerging in cardiac modeling, which could be
described as medium-scale problems with moderate number of parameters to
personalize and objectives to match. In unreported experiments involving
>10 parameters, however, Vito could no longer reach satisfactory performance, which is likely due to the steeply increasing number of transition
samples needed to sample the continuous state-space of increasing dimensionality sufficiently during training. The trends in Sec. 3.3 confirm the need
for more data. In the future, experience replay (Adam et al., 2012; Lin, 1993)
or similar techniques could be employed to increase training data efficiency.
Furthermore, massively parallel approaches (Nair et al., 2015) are starting
to emerge, opening up new avenues for large-scale reinforcement learning.
Although the employed reinforcement learning techniques guarantee convergence to an optimal policy, the computed personalization strategy may
not be optimal for the model under consideration as the environment is only
partially observable and the personalization problem ill-posed: there is no
guarantee for solution existence or uniqueness. Yet, we showed that Vito
can solve personalization more robustly and more effectively than standard
methods under the same conditions. However, a theoretical analysis in terms
of convergence guarantees and general stability of the method would be desirable, in particular with regards to the proposed re-initialization strategy.
As a first step towards this goal, in preliminary (unreported) experiments on
the EP and the WBC model we observed that the number of patients which
do not require re-initialization (due to oscillation) to converge to a successful
personalization consistently increased with increasing training data.
The data-driven initialization proposed in this work simplifies Vito’s
setup by eliminating the need for user-provided initialization. However,
currently there is no guarantee that the first initialization candidate is the
one that will yield the “best” personalization outcome. Therefore, one could
investigate the benefits of a fuzzy personalization scheme: many personalization processes could be run in parallel starting from the different initialization candidates. Parameter uncertainty quantification techniques (Neumann
et al., 2014a) could then be applied to compute a probability density function over the space of model parameters. Such approaches aim to gather
complete information about the solution-space, which can be used to study
solution uniqueness and other interesting properties.
An important characteristic of any personalization algorithm is its stability against small variations of the measured data. A preliminary experiment indicated good stability of Vito: the computed parameters from
several personalization runs, each involving small random perturbations of
the measurements, were consistent. Yet in a small group of patients some
parameter variability was observed; however, it remained below the variability of
the reference method under the same conditions. To what extent certain
degrees of variability will impact other properties of the personalized model
such as its predictive power will be subject of future research. We will also
investigate strategies to improve Vito’s stability further. For instance, the
granularity of the state-space could provide some flexibility to tune the stability: fewer representative states mean a larger region in state space per
state, thus small variations in the measured data might have less impact
on personalization outcome. However, this could in turn have undesirable
effects on other properties of Vito such as success rate or convergence speed
(see Sec. 3.2.2).
Beyond these challenges, Vito showed promising performance and versatility, making it a first step towards an automated, self-taught model personalization agent. The next step will be to investigate the predictive power
of the personalized models, for instance for predicting acute or long-term
response in cardiac resynchronization therapy (Kayvanpour et al., 2015; Sermesant et al., 2009).
Appendix A. Data-driven State-Space Quantization
This section describes the details of the proposed data-driven quantization approach to define the set of representative MDP states S (see Sec. 2.4).
It is based on clustering, in particular on the weighted k-means algorithm
described in Arthur and Vassilvitskii (2007). To this end, all objective vectors C = {c ∈ E} are extracted from the training data (Sec. 2.3). C ⊂ R^{n_c}
represents all observed “continuous states”. The goal is to convert C into
the finite set of representative MDP states S while taking into account that
Vito relies on a special “success state” ŝ encoding personalization success.
The success state ŝ does not depend on the data, but on the maximum
acceptable misfit ψ. In particular, since personalization success implies that
all objectives are met, ŝ should approximate a hyperrectangle centered at
Figure A.11: Preprocessing of k-means input data to enforce the success state ŝ. Left:
Continuous state-space with observed objective vectors c (blue points). The points with
dashed outline will be canceled out. Right: Delineation of ŝ in green, enforced by inserted
vectors (green / red points) with large weights. See text for details.
0 and bounded at ±ψ, i.e. a small region in R^{n_c} where ∀i : |c_i| < ψ_i. To
enforce ŝ, the input to weighted k-means is preprocessed as follows.
First, the 0-vector is inserted into C, along with two vectors per dimension i, where all components are zero except the i-th component, which is set
to ±2ψ_i. These 2n_c + 1 inserted vectors are later converted into centroids
of representative states to delineate the desired hyperrectangle for ŝ as illustrated in Fig. A.11. Furthermore, to avoid malformation of ŝ, no other
representative state should emerge within that region. Therefore, all vectors
c ∈ C, where ∀i : |c_i| < 2ψ_i (except for the inserted vectors), are canceled
out by assigning zero weight, while the inserted vectors are assigned large
weights → ∞ and all remaining vectors weights of 1.
Next, k-means is initialized by placing a subset of the initial centroids at
the locations of the inserted states, and the remaining n_S − 2n_c − 1 centroids
at random vectors in C. Both the large weight and the custom initialization
enforce the algorithm to converge to a solution where one cluster centroid
is located at each inserted vector, while the other centroids are distributed
according to the training data. To ensure equal contribution of all objectives
(cancel out different units, etc.), similarity is defined relative to the inverse
of the user-defined convergence criteria (Eq. 7).
Finally, after k-means has converged, the resulting centroids, denoted ξ_s, are
used to delineate the region in R^{n_c} assigned to a representative state s.
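A minimal sketch of this quantization is shown below, assuming scikit-learn’s weighted k-means as a stand-in for the implementation used in the paper; the large-weight value and the random selection of the remaining initial centroids are illustrative choices, not specified by the authors.

```python
# Hedged sketch of the data-driven state-space quantization (Appendix A):
# insert 2*n_c + 1 heavily weighted anchor vectors delineating the success state,
# cancel training vectors inside the enlarged success region, run weighted k-means.
import numpy as np
from sklearn.cluster import KMeans

def quantize_state_space(C, psi, n_states, large_weight=1e6, seed=0):
    C = np.asarray(C) / psi                      # distances scaled by 1/psi (Eq. 7)
    n_c = C.shape[1]
    anchors = [np.zeros(n_c)]                    # centroid for the success state
    for i in range(n_c):                         # two delineating anchors per dimension,
        for sign in (+2.0, -2.0):                # at +/- 2*psi_i (scaled units)
            v = np.zeros(n_c); v[i] = sign
            anchors.append(v)
    anchors = np.array(anchors)

    weights = np.ones(len(C))
    weights[np.all(np.abs(C) < 2.0, axis=1)] = 0.0   # cancel vectors inside the region
    X = np.vstack([anchors, C])
    w = np.concatenate([np.full(len(anchors), large_weight), weights])

    rng = np.random.default_rng(seed)
    rest = C[rng.choice(len(C), n_states - len(anchors), replace=False)]
    init = np.vstack([anchors, rest])            # custom initialization with anchors
    km = KMeans(n_clusters=n_states, init=init, n_init=1).fit(X, sample_weight=w)
    return km.cluster_centers_ * psi             # centroids xi_s in original units
```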
Appendix B. Data-driven Initialization
This section describes the details of the proposed data-driven initialization approach to compute a list of candidate initialization parameter vectors
X_0 = (x_0′, x_0′′, . . .) for a new patient p based on the patient’s measurements
z_p and the training database E (see Sec. 2.6.1).
First, all model states are extracted from the training database: Υ =
{y ∈ E}. Next, Υ is fed to a clustering algorithm (e.g. k-means). As in
Appendix A, the distance measure is defined relative to the inverse of the
convergence criteria (Eq. 7). The output is a set of centroids (for simplicity,
in this work the number of centroids was set to n_S), and each vector is
assigned to one cluster based on its closest centroid. Let Υ_p ⊆ Υ denote the
members of the cluster whose centroid is closest to z_p and Ξ_p = {x ∈ E |
f(x) ∈ Υ_p} the set of corresponding model parameters. For each cluster, an
approximation of the likelihood over the generating parameters is computed
in terms of a probability density function. In this work a Gaussian mixture
model is assumed:
GMM_p(x) = \sum_{m=1}^{M} \nu_m \, \mathcal{N}(x; \mu_m, \Sigma_m)        (B.1)
The parameter vectors in Ξ_p are treated as random samples drawn from
GMM_p. Its properties, namely the number of mixture components M, their
weights ν_m, and their means µ_m and covariance matrices Σ_m, are estimated
from these samples using a multivariate kernel density estimator with automated kernel bandwidth estimation, see Kristan et al. (2011) for more
details. Finally, the M estimated means are selected as initialization candidates and stored in a list X_0 = (µ_{m′}, µ_{m′′}, . . .). The elements of X_0 are
sorted in descending order according to their corresponding ν_m-values to prioritize more likely initializations: µ_{m′} is the mean with m′ = arg max_m ν_m.
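The following sketch illustrates this initialization, assuming scikit-learn; note that a standard Gaussian mixture fit is used here as a stand-in for the online kernel density estimator of Kristan et al. (2011), and `Y`, `X`, `z_p`, `psi` are placeholders for training model states, their parameter vectors, the new patient’s measurements and the convergence criteria.

```python
# Hedged sketch of the data-driven initialization (Appendix B), not the authors' code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def initialization_candidates(Y, X, z_p, psi, n_clusters, n_components=3, seed=0):
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(np.asarray(Y) / psi)
    p_cluster = km.predict((np.asarray(z_p) / psi).reshape(1, -1))[0]  # closest centroid
    X_p = np.asarray(X)[km.labels_ == p_cluster]   # parameters of states in that cluster

    gmm = GaussianMixture(n_components=min(n_components, len(X_p)),
                          random_state=seed).fit(X_p)
    order = np.argsort(gmm.weights_)[::-1]         # most likely components first
    return [gmm.means_[m] for m in order]          # candidate list X_0 = (x_0', x_0'', ...)
```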
References
Adam, S., Busoniu, L., Babuska, R., 2012. Experience replay for real-time
reinforcement learning control. IEEE Sys. Man. Cybern. 42 (2), 201–212.
Aguado-Sierra, J., Kerckhoffs, R. C. P., Lionetti, F., Hunt, D., Villongco,
C., Gonzales, M., Campbell, S. G., McCulloch, A. D., 2010. A computational framework for patient-specific multi-scale cardiac modeling. In:
Kerckhoffs, R. C. (Ed.), Patient-Specific Modeling of the Cardiovascular
System. Springer, pp. 203–223.
Aguado-Sierra, J., Krishnamurthy, A., Villongco, C., Chuang, J., Howard,
E., Gonzales, M. J., Omens, J., Krummen, D. E., Narayan, S., Kerckhoffs,
R. C. P., McCulloch, A. D., 2011. Patient-specific modeling of dyssynchronous heart failure: a case study. Prog. Biophys. Mol. Bio. 107 (1),
147–155.
Arthur, D., Vassilvitskii, S., 2007. k-means++: The advantages of careful
seeding. In: ACM-SIAM Symp. Discr. Algorithm. pp. 1027–1035.
Augenstein, K. F., Cowan, B. R., LeGrice, I. J., Nielsen, P. M., Young, A. A.,
2005. Method and apparatus for soft tissue material parameter estimation
using tissue tagged magnetic resonance imaging. J. Biomech. Eng. 127 (1),
148–157.
Barreto, A., Precup, D., Pineau, J., 2014. Practical kernel-based reinforcement learning. arXiv preprint arXiv:1407.5358.
Bellman, R., 1957. Dynamic Programming. Princeton University Press.
Bishop, C. M., 2006. Pattern recognition and machine learning. Vol. 4.
Springer New York.
Chabiniok, R., Moireau, P., Lesault, P.-F., Rahmouni, A., Deux, J.-F.,
Chapelle, D., 2012. Estimation of tissue contractility from cardiac cine MRI using a biomechanical heart model. Biomech. Model. Mechan. 11 (5),
609–630.
Clayton, R., Bernus, O., Cherry, E., Dierckx, H., Fenton, F., Mirabella,
L., Panfilov, A., Sachse, F., Seemann, G., Zhang, H., 2011. Models of
cardiac tissue electrophysiology: progress, challenges and open questions.
Prog. Biophys. Mol. Bio. 104 (1), 22–48.
Delingette, H., Billet, F., Wong, K. C. L., Sermesant, M., Rhode, K., Ginks,
M., Rinaldi, C. A., Razavi, R., Ayache, N., 2012. Personalization of cardiac motion and contractility from images using variational data assimilation. IEEE T. Biomed. Eng. 59 (1), 20–24.
Frangi, A. F., Niessen, W. J., Viergever, M., 2001. Three-dimensional modeling for functional analysis of cardiac images, a review. IEEE T. Med. Imaging 20 (1), 2–5.
Hunter, P. J., Borg, T. K., 2003. Integration from proteins to organs: the
physiome project. Nat. Rev. Mol. Cell Bio. 4 (3), 237–243.
Itu, L., Sharma, P., Georgescu, B., Kamen, A., Suciu, C., Comaniciu, D.,
2014. Model based non-invasive estimation of PV loop from echocardiography. IEEE Eng. Med. Biol. Soc., 6774–6777.
Kaelbling, L. P., Littman, M. L., Moore, A. W., 1996. Reinforcement learning: A survey. J. Artif. Intell. Res., 237–285.
Kayvanpour, E., Mansi, T., Sedaghat-Hamedani, F., Amr, A., Neumann, D.,
Georgescu, B., Seegerer, P., Kamen, A., Haas, J., Frese, K. S., Irawati, M.,
Wirsz, E., King, V., Buss, S., Mereles, D., Zitron, E., Keller, A., Katus,
H. A., Comaniciu, D., Meder, B., 2015. Towards personalized cardiology:
Multi-scale modeling of the failing heart. PLoS ONE 10 (7), e0134869.
Kerckhoffs, R. C. P., Lumens, J., Vernooy, K., Omens, J., Mulligan, L., Delhaas, T., Arts, T., McCulloch, A., Prinzen, F., 2008. Cardiac resynchronization: insight from experimental and computational models. Prog. Biophys. Mol. Bio. 97 (2), 543–561.
Konukoglu, E., Relan, J., Cilingir, U., Menze, B. H., Chinchapatnam, P.,
Jadidi, A., Cochet, H., Hocini, M., Delingette, H., Jaı̈s, P., Haı̈ssaguerre,
M., Ayache, N., Sermesant, M., 2011. Efficient probabilistic model personalization integrating uncertainty on data and parameters: Application to eikonal-diffusion models in cardiac electrophysiology. Prog. Biophys. Mol. Bio. 107 (1), 134–146.
Krishnamurthy, A., Villongco, C. T., Chuang, J., Frank, L. R., Nigam, V.,
Belezzuoli, E., Stark, P., Krummen, D. E., Narayan, S., Omens, J. H.,
McCulloch, A. D., Kerckhoffs, R. C. P., 2013. Patient-specific models of
cardiac biomechanics. J. Comput. Phys. 244, 4–21.
Kristan, M., Leonardis, A., Skočaj, D., 2011. Multivariate online kernel
density estimation with Gaussian kernels. Pattern Recogn. 44 (10), 2630–
2642.
Kuijpers, N. H., Hermeling, E., Bovendeerd, P. H., Delhaas, T., Prinzen,
F. W., 2012. Modeling cardiac electromechanics and mechanoelectrical
coupling in dyssynchronous and failing hearts. J. Cardiovasc. Transl. Res.
5 (2), 159–169.
Kveton, B., Theocharous, G., 2012. Kernel-based reinforcement learning on
representative states. In: Association for the Advancement of Artificial
Intelligence. pp. 977–983.
Lagarias, J. C., Reeds, J. A., Wright, M. H., Wright, P. E., 1998. Convergence properties of the Nelder-Mead simplex method in low dimensions.
SIAM J. Optimiz. 9 (1), 112–147.
Le Folgoc, L., Delingette, H., Criminisi, A., Ayache, N., 2013. Current-based
4D shape analysis for the mechanical personalization of heart models. In:
Medical Computer Vision. Recognition Techniques and Applications in
Medical Imaging. Vol. 7766 of LNCS. Springer, pp. 283–292.
Lin, L.-J., 1993. Reinforcement learning for robots using neural networks.
Tech. rep., DTIC Document.
Marchesseau, S., Delingette, H., Sermesant, M., Cabrera-Lozoya, R., TobonGomez, C., Moireau, P., Figueras i Ventura, R. M., Lekadir, K., Hernandez, A., Garreau, M., Donal, E., Leclercq, C., Duckett, S. G., Rhode,
K., Rinaldi, C. A., Frangi, A. F., Razavi, R., Chapelle, D., Ayache, N.,
2013. Personalization of a cardiac electromechanical model using reduced
order unscented Kalman filtering from regional volumes. Med. Image Anal.
17 (7), 816–829.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare,
M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D.,
Wierstra, D., Legg, S., Hassabis, D., 2015. Human-level control through
deep reinforcement learning. Nature 518 (7540), 529–533.
Mülling, K., Kober, J., Kroemer, O., Peters, J., 2013. Learning to select and
generalize striking movements in robot table tennis. Int. J. Robot. Res.
32 (3), 263–279.
Nair, A., Srinivasan, P., Blackwell, S., Alcicek, C., Fearon, R., De Maria,
A., Panneershelvam, V., Suleyman, M., Beattie, C., Petersen, S., Legg, S.,
Mnih, V., Kavukcuoglu, K., Silver, D., 2015. Massively parallel methods
for deep reinforcement learning. arXiv:1507.04296.
Neumann, D., Mansi, T., Georgescu, B., Kamen, A., Kayvanpour, E., Amr,
A., Sedaghat-Hamedani, F., Haas, J., Katus, H., Meder, B., Hornegger,
J., Comaniciu, D., 2014a. Robust image-based estimation of cardiac tissue
parameters and their uncertainty from noisy data. In: MICCAI. Vol. 8674
of LNCS. Springer, pp. 9–16.
Neumann, D., Mansi, T., Grbic, S., Voigt, I., Georgescu, B., Kayvanpour,
E., Amr, A., Sedaghat-Hamedani, F., Haas, J., Katus, H., Meder, B.,
Hornegger, J., Kamen, A., Comaniciu, D., 2014b. Automatic image-tomodel framework for patient-specific electromechanical modeling of the
heart. In: IEEE Int. Symp. Biomed. Imaging. pp. 935–938.
Neumann, D., Mansi, T., Itu, L., Georgescu, B., Kayvanpour, E., SedaghatHamedani, F., Haas, J., Katus, H., Meder, B., Steidl, S., Hornegger,
J., Comaniciu, D., 2015. Vito – a generic agent for multi-physics model
personalization: Application to heart modeling. In: MICCAI. Vol. 9350
of LNCS. Springer, pp. 442–449.
Nguyen-Tuong, D., Peters, J., 2011. Model learning for robot control: a
survey. Cogn. Process. 12 (4), 319–340.
Noble, D., 2002. Modeling the heart – from genes to cells to the whole organ.
Science 295 (5560), 1678–1682.
Powell, M. J., 2009. The BOBYQA algorithm for bound constrained optimization without derivatives. Cambridge NA Report NA2009/06.
Prakosa, A., Sermesant, M., Allain, P., Villain, N., Rinaldi, C., Rhode,
K., Razavi, R., Delingette, H., Ayache, N., 2013. Cardiac electrophysiological activation pattern estimation from images using a patient-specific
database of synthetic image sequences. IEEE T. Biomed. Eng.
Rosenbrock, H., 1960. An automatic method for finding the greatest or least
value of a function. Comput. J. 3 (3), 175–184.
Schmid, H., Nash, M., Young, A., Hunter, P., 2006. Myocardial material parameter estimation – a comparative study for simple shear.
J. Biomech. Eng. 128 (5), 742–750.
Seegerer, P., Mansi, T., Jolly, M.-P., Neumann, D., Georgescu, B., Kamen,
A., Kayvanpour, E., Amr, A., Sedaghat-Hamedani, F., Haas, J., Katus, H., Meder, B., Comaniciu, D., 2015. Estimation of regional electrical
properties of the heart from 12-lead ECG and images. In: Statistical Atlases and Computational Models of the Heart – Imaging and Modelling
Challenges. Vol. 8896 of LNCS. Springer, pp. 204–212.
Sermesant, M., Billet, F., Chabiniok, R., Mansi, T., Chinchapatnam, P.,
Moireau, P., Peyrat, J.-M., Rhode, K., Ginks, M., Lambiase, P., Arridge,
S., Delingette, H., Sorine, M., Rinaldi, C. A., Chapelle, D., Razavi, R.,
Ayache, N., 2009. Personalised electromechanical model of the heart for
the prediction of the acute effects of cardiac resynchronisation therapy.
In: Functional Imaging and Modeling of the Heart. Vol. 5528 of LNCS.
Springer, pp. 239–248.
Sutton, R. S., Barto, A. G., 1998. Reinforcement learning: An introduction.
Vol. 1. MIT press Cambridge.
Tesauro, G., 1994. TD-Gammon, a self-teaching backgammon program,
achieves master-level play. Neural Comput. 6 (2), 215–219.
Wallman, M., Smith, N. P., Rodriguez, B., 2012. A comparative study of
graph-based, eikonal, and monodomain simulations for the estimation of
cardiac activation times. IEEE T. Biomed. Eng. 59 (6), 1739–1748.
Wallman, M., Smith, N. P., Rodriguez, B., 2014. Computational methods
to reduce uncertainty in the estimation of cardiac conduction properties
from electroanatomical recordings. Med. Image Anal. 18 (1), 228–240.
Wang, V. Y., Lam, H., Ennis, D. B., Cowan, B. R., Young, A. A., Nash,
M. P., 2009. Modelling passive diastolic mechanics with quantitative mri
of cardiac structure and function. Med. Image. Anal. 13 (5), 773–784.
Wang, Y., Georgescu, B., Chen, T., Wu, W., Wang, P., Lu, X., Ionasec, R.,
Zheng, Y., Comaniciu, D., 2013. Learning-based detection and tracking
in medical imaging: a probabilistic approach. In: Deformation Models.
Vol. 7 of LNCVB. Springer, pp. 209–235.
Westerhof, N., Elzinga, G., Sipkema, P., 1971. An artificial arterial system
for pumping hearts. J. Appl. Physiol. 31 (5), 776–781.
Wong, K. C., Sermesant, M., Rhode, K., Ginks, M., Rinaldi, C. A.,
Razavi, R., Delingette, H., Ayache, N., 2015. Velocity-based cardiac contractility personalization from images using derivative-free optimization.
J. Mech. Behav. Biomed. 43, 35–52.
Xi, J., Lamata, P., Niederer, S., Land, S., Shi, W., Zhuang, X., Ourselin,
S., Duckett, S. G., Shetty, A. K., Rinaldi, C. A., Rueckert, D., Razavi,
R., Smith, N. P., 2013. The estimation of patient-specific cardiac diastolic
functions from clinical measurements. Med. Image. Anal. 17 (2), 133–146.
Zettinig, O., Mansi, T., Neumann, D., Georgescu, B., Rapaka, S., Seegerer,
P., Kayvanpour, E., Sedaghat-Hamedani, F., Amr, A., Haas, J., Steen,
H., Katus, H., Meder, B., Navab, N., Kamen, A., Comaniciu, D., 2014.
Data-driven estimation of cardiac electrical diffusivity from 12-lead ECG
signals. Med. Image Anal., 1361–1376.
Zheng, Y., Barbu, A., Georgescu, B., Scheuering, M., Comaniciu, D., 2008.
Four-chamber heart modeling and automatic segmentation for 3-D cardiac
CT volumes using marginal space learning and steerable features. IEEE
T. Med. Imaging 27 (11), 1668–1681.
An Effectful Treatment of Dependent Types∗
Matthijs Vákár1
1 University of Oxford, Dept. Computer Science, [email protected]
arXiv:1603.04298v1 [] 14 Mar 2016
Abstract
We extend Levy’s call-by-push-value (CBPV) analysis from simple to dependent type theory
(DTT) in order to study the interaction between computational effects and dependent types. We
define the naive system of dependently typed CBPV, dCBPV-, and its extension with a principle
of Kleisli extensions for dependent functions, dCBPV+. We investigate these systems from the
points of view of syntax, categorical semantics, concrete models and operational semantics, in
presence of a range of effects. We observe that, while the expressive power of dCBPV+ is needed
if we want well-defined call-by-value (CBV) and call-by-name (CBN) translations of DTT, it
is a less straightforward system than dCBPV-, in presence of some effects. Indeed, to be able
to construct specific models and to retain the subject reduction property in the operational
semantics, we are required to impose certain subtyping conditions, the idea being that the type
of a computation may only become more (not less) specified as certain effects are executed.
1998 ACM Subject Classification F.3.2 Semantics of Programming Languages
Keywords and phrases Dependent types, effects, call-by-push-value, linear dependent types
Digital Object Identifier 10.4230/LIPIcs.xxx.yyy.p
1 Introduction
Dependent types [1] are slowly being taken up by the functional programming community and
are in the transition from a quirky academic hobby to a practical approach to building certified
software. Purely functional dependently typed languages like Coq [2] and Agda [3] have
existed for a long time. If the technology is to become more widely used in practice, however,
it is crucial that dependent types can be smoothly combined with the wide range of effects
that programmers make use of in their day to day work, like non-termination and recursion,
mutable state, input and output, non-determinism, probability and non-local control.
Although some languages exist which combine dependent types and effects, like Cayenne
[4], ΠΣ [5], Zombie [6], Idris [7], Dependent ML [8] and F⋆ [9], there have always been
some strict limitations. For instance, the first four only combine dependent types with first
class recursion (although Idris has good support for emulating other effects), Dependent ML
constrains types to depend only on static natural numbers and F⋆ does not allow types to
depend on effectful terms at all (including non-termination). Somewhat different is Hoare
Type Theory (HTT) [10], which defines a programming language for writing effectful programs
as well as a separation logic encoded in a system of dependent types for reasoning about these
programs. We note that the programming fragment is not merely an extension of the logical
one, which would be the elegant solution suggested by the Curry-Howard correspondence.
The sentiment of most papers discussing the marriage of these ideas seems to be that
dependent types and effects form a difficult though not impossible combination. However, as
far as we are aware, treatment has so far been on a case-by-case basis and no general theoretical
analysis has been given which discusses, on a conceptual level, the possibilities, difficulties
and impossibilities of combining general computational effects and dependent types.
∗ Data types, that is. The applicability of the results of this paper to the care of individuals with a dependent personality disorder may be rather limited.
An Effectful Treatment of Dependent Types
In a somewhat different vein, there has long been an interest in combining linearity and
dependent types. This was first done in Cervesato and Pfenning’s LLF [11]. Recently, a
semantic analysis of LLF was given by the author in [12, 13] which has proved important e.g.
in the development of a game semantics for dependent types. One aspect that this abstract
semantics as well as the study of particular models highlight is - more so than in the simply
typed case - the added insight and flexibility obtained by decomposing the !-comonad into
an adjunction1 . This corresponds to working with dependently typed version of Benton’s
LNL-calculus [14] rather than Barber and Plotkin’s DILL [15], as was done in [16].
Similarly, it has proved problematic to give a dependently typed version of Moggi’s
monadic metalanguage [17]. We hope that this paper illustrates that also in this case a
decomposed adjunction perspective, like that of CBPV [18], is more flexible than a monadic
perspective. Recall from [19] that if we decompose both linear logic and the monadic
metalanguage into an adjunction, we can see the former to be a restricted case of the latter
which only describes (certain) commutative effects.
In this paper, we show that the analysis of linear DTT of [13, 16, 12] generalises straightforwardly to general (non-commutative) effects to give a dependently typed CBPV calculus that
we call dCBPV-, which allows types to depend on values (including thunks of computations)
but which lacks a Kleisli extension (or sequencing) principle for dependent functions. This
calculus is closely related to Harper and Licata’s dependently typed polarized intuitionistic
logic [20]. Its categorical semantics is obtained from that for linear dependent types, by
relaxing a condition on the adjunction which would normally imply, among other things, the
commutativity of the effects described. It straightforwardly generalises Levy’s adjunction
models for CBPV [21] (from locally indexed categories to more general comprehension categories [22]) and, in a way, simplifies Moggi’s strong monad models for the monadic metalanguage
[17], as was already anticipated by Plotkin in the late 80s: in a dependently typed setting
the monad strength follows straightforwardly from the natural demand that its adjunction
is compatible with substitution and, similarly, the distributivity of coproducts follows from
their compatibility with substitution. In fact, we believe the categorical semantics of CBPV
is most naturally understood as a special case of that of dCBPV-. Particular examples of
models are given by models of linear DTT and by Eilenberg-Moore adjunctions for strict2
indexed monads on models of pure DTT. The small-step operational semantics for CBPV of
[18] transfers to dCBPV- without any difficulties with the expected subject reduction and
(depending on the effects considered) strong normalization and determinacy results.
When formulating candidate CBV- and CBN-translations of DTT into dCBPV-, it
becomes apparent that the latter is only well-defined if we work with the weak (non-dependent)
elimination rules for positive connectives, while the former is ill-defined altogether. To obtain
a CBV-translation and the CBN-translation in all its glory, we have to add a principle of
Kleisli extensions (or sequencing) for dependent functions to dCBPV-. We call the resulting
calculus dCBPV+, to which we can easily extend our categorical and operational semantics.
Normalization and determinacy results for the operational semantics remain the same.
However, depending on the effects we consider, we may need to add extra coercion rules to
the calculus to salvage subject reduction. These embody the idea that a computation can only
obtain a more (not less) precise type after certain effects, like non-deterministic branching,
1 Indeed, connectives seem to be most naturally formulated on either the linear or cartesian side: Σ- and Id-constructors operate on cartesian types while Π-constructors operate on linear types.
2 For brevity, from now on we shall drop the modifier "strict" for indexed structures. For instance, if we mention an indexed honey badger, we shall really mean a strict indexed honey badger.
are executed. We analyse on a case-by-case basis the principle of dependent Kleisli extensions
in dCBPV- models of a range of effects. This leads us to the same subtyping conditions.
Before concluding, we discuss of the possibility of adding some additional connectives. In
particular, we address the matter of a dependently typed enriched effect calculus (EEC) [23].
On the one hand, we hope this analysis gives a helpful theoretical framework in which we
can study various combinations of dependent types and effects from an algebraic, denotational
and operational point of view. It gives a robust motivation for the equations we should
expect to hold in both CBV- and CBN-versions of effectful DTT, through their translations
into dCBPV, and it guides us in modelling dependent types in effectful settings like game
semantics. Moreover, it explains why combinations of dependent types and effects are slightly
more straightforward in CBN than in CBV, as dependent Kleisli extensions are not required.
On the other, noting that not all effects correspond to sound logical principles, an expressive system like CBPV or a monadic language, with fine control over where effects occur, is an
excellent combination with dependent types as it allows us to use the language both for
writing effectful programs and pure logical proofs about these programs. Similar to HTT in
aim, but different in implementation, we hope that dCBPV can be expanded in future to an
elegant language, serving both for writing effectful programs and for reasoning about them.
2 A Very Brief Recap of Simple Call-by-Push-Value
We briefly recall the spirit of Levy’s CBPV analysis of simple type theory [18, 24, 21]. The
long version of this paper [25] includes a detailed exposition, aimed at generalising smoothly
to dependent types. CBPV roughly gives an adjunction decomposition of Moggi’s monadic
metalanguage [17]. This allows one to not only fully and faithfully encode Moggi’s metalanguage and with that CBV λ-calculi, but also CBN ones. CBPV has a natural small-step
operational semantics, satisfying subject reduction and, depending on the effects considered,
determinism at each step and strong normalization. For terms of ground type, this reproduces
the usual CBV and CBN operational semantics in presence of a wide range of effects.
CBPV makes a clear distinction between the worlds of value types (inhabited by values)
and computation types (the home of stacks and, in particular, computations), between which
we have an adjunction of half-modalities F a U , which yield as special cases Moggi’s modality
T = U F and the linear logic exponential ! = F U . F A can be read as the type of computations
that can return values of type A and U B as the type of thunks of computations of type B.
By only including positive type formers like inductive types on value types and negative ones
like coinductive types on computation types, one is able to retain the entire βη-equational
theory even in the presence of effects. As a consequence, we obtain an elegant categorical
semantics in terms of adjunctions of locally indexed categories.
3 A Syntax for Dependently Typed Call-by-Push-Value (dCBPV)
We generalise CBPV, allowing types to depend on values, but not computations (cf. linear [11]
or polarised [20] DTT, where types can only depend on cartesian or positive terms, respectively). dCBPV makes the following judgements: well-formedness of contexts ⊢ Γ; ∆ ctxt,
where Γ is a list of distinct identifier declarations of value types and ∆ is a list of declarations of
computation type (writing · for the empty list and Γ as a shorthand for Γ; ·), well-formedness
of value types Γ ⊢ A vtype and computation types Γ ⊢ B ctype (sometimes underlined to be
explicit), typing of values Γ ⊢v V : A, computations Γ ⊢c M : B and stacks Γ; ∆ ⊢k K : B,
equality judgements for contexts, types (in context) and (typed) terms (in context).
As types can depend on terms in a dependently typed system, we define both in one
inductive definition. A discussion of the syntactic subtleties of DTT can be found in [1].
To start with, we have rules, which we shall not list, stating that all judgemental
equalities are equivalence relations and that all term, type and context constructors as well as
substitutions respect judgemental equality. In similar vein, we have conversion rules, stating
that we may swap contexts and types for judgementally equal ones in all judgements. To
form contexts, we have the rules of figure 1 and, to form types, those of figure 2.
  ⊢ ·; · ctxt
  ⊢ Γ; · ctxt,  Γ ⊢ B ctype   ⟹   ⊢ Γ; B ctxt
  ⊢ Γ; ∆ ctxt,  Γ ⊢ A vtype   ⟹   ⊢ Γ, x : A; ∆ ctxt

Figure 1 Rules for contexts of dCBPV (written premises ⟹ conclusion), where x is assumed to be a fresh identifier.
  Γ, x : A, Γ′ ⊢ A′ vtype,  Γ ⊢v V : A   ⟹   Γ, Γ′[V/x] ⊢ A′[V/x] vtype
  Γ, x : A, Γ′ ⊢ B ctype,   Γ ⊢v V : A   ⟹   Γ, Γ′[V/x] ⊢ B[V/x] ctype
  Γ ⊢ B ctype                            ⟹   Γ ⊢ U B vtype
  Γ ⊢ A vtype                            ⟹   Γ ⊢ F A ctype
  ⊢ Γ ctxt                               ⟹   Γ ⊢ 1 vtype
  Γ ⊢v V : A,  Γ ⊢v V′ : A               ⟹   Γ ⊢ Id_A(V, V′) vtype
  {Γ ⊢ A_i vtype}_{1≤i≤n}                ⟹   Γ ⊢ Σ_{1≤i≤n} A_i vtype
  Γ, x : A ⊢ A′ vtype                    ⟹   Γ ⊢ Σ_{x:A} A′ vtype
  {Γ ⊢ B_i ctype}_{1≤i≤n}                ⟹   Γ ⊢ Π_{1≤i≤n} B_i ctype
  Γ, x : A ⊢ B ctype                     ⟹   Γ ⊢ Π_{x:A} B ctype

Figure 2 Rules for forming the types of dCBPV.
For these types, we consider the values and computations formed using the rules of figure 3,
to which we could add the obvious admissible weakening and exchange rules.
  ⊢ Γ, x : A, Γ′ ctxt   ⟹   Γ, x : A, Γ′ ⊢v x : A
  Γ ⊢v V : A,  Γ, x : A, Γ′ ⊢v/c R : B   ⟹   Γ, Γ′[V/x] ⊢v/c let x be V in R : B[V/x]
  Γ ⊢v V : A   ⟹   Γ ⊢c return V : F A
  Γ ⊢c M : F A,  Γ, z : U F A, Γ′ ⊢ B ctype,  Γ, x : A, Γ′[tr x/z] ⊢c N : B[tr x/z]
      ⟹   Γ, Γ′[thunk M/z] ⊢c M to x in N : B[thunk M/z]
  Γ ⊢c M : B   ⟹   Γ ⊢v thunk M : U B
  Γ ⊢v V : U B   ⟹   Γ ⊢c force V : B
  ⊢ Γ ctxt   ⟹   Γ ⊢v ⟨⟩ : 1
  Γ ⊢v V : 1,  Γ ⊢v/c R : B[⟨⟩/z]   ⟹   Γ ⊢v/c pm V as ⟨⟩ in R : B[V/z]
  Γ ⊢v V_i : A_i   ⟹   Γ ⊢v ⟨i, V_i⟩ : Σ_{1≤i≤n} A_i
  Γ ⊢v V : Σ_{1≤i≤n} A_i,  {Γ, x : A_i ⊢v/c R_i : B[⟨i, x⟩/z]}_{1≤i≤n}
      ⟹   Γ ⊢v/c pm V as ⟨i, x⟩ in R_i : B[V/z]
  Γ ⊢v V_1 : A_1,  Γ ⊢v V_2 : A_2[V_1/x]   ⟹   Γ ⊢v ⟨V_1, V_2⟩ : Σ_{x:A_1} A_2
  Γ ⊢v V : Σ_{x:A_1} A_2,  Γ, x : A_1, y : A_2 ⊢v/c R : B[⟨x, y⟩/z]
      ⟹   Γ ⊢v/c pm V as ⟨x, y⟩ in R : B[V/z]
  Γ ⊢v V : A   ⟹   Γ ⊢v refl V : Id_A(V, V)
  Γ ⊢v V : Id_A(V_1, V_2),  Γ, x : A ⊢v/c R : B[x/x′, refl x/p]
      ⟹   Γ ⊢v/c pm V as (refl x) in R : B[V_1/x, V_2/x′, V/p]
  {Γ ⊢c M_i : B_i}_{1≤i≤n}   ⟹   Γ ⊢c λ_i M_i : Π_{1≤i≤n} B_i
  Γ ⊢c M : Π_{1≤i≤n} B_i   ⟹   Γ ⊢c i‘M : B_i
  Γ, x : A ⊢c M : B   ⟹   Γ ⊢c λx M : Π_{x:A} B
  Γ ⊢v V : A,  Γ ⊢c M : Π_{x:A} B   ⟹   Γ ⊢c V‘M : B[V/x]

Figure 3 Values and computations of dCBPV+. Those of dCBPV- are obtained by demanding the side condition that z is not free in Γ′; B in the rule for forming M to x in N. A rule involving ⊢v/c is a shorthand for two rules: one with ⊢v and one with ⊢c in both the hypothesis and conclusion. Indices i are a slight abuse of notation: e.g. λ_i M_i is an abbreviation for λ_{(1,...,n)}(M_1, . . . , M_n).
As anticipated by Levy [18], the only rule that requires care is that for forming a sequenced
computation M to x in N. He suggested that z should not be free in the context Γ′; B. We
call the resulting system with this restriction dCBPV- and the more permissive system where
we drop this restriction and allow Kleisli extensions of dependent functions dCBPV+. We
leave the discussion of stacks until section 6.
We generate judgemental equalities for values and computations through the rules of
figure 4. Note that we are using extensional Id-types, in the sense of Id-types with an η-rule.
This is only done for the aesthetics of the categorical semantics. They may not be suitable for
an implementation, however, as they make type checking undecidable for the usual reasons
[1]. The syntax and semantics can just as easily be adapted to intensional Id-types, the
obvious choice for an implementation.
  (return V) to x in M = M[V/x]                        M =#x M to x in return x
  force thunk M = M                                    V = thunk force V
  let x be V in R = R[V/x]
  pm ⟨⟩ as ⟨⟩ in R = R                                 R[V/z] = pm V as ⟨⟩ in R[⟨⟩/z]
  pm ⟨i, V⟩ as ⟨i, x⟩ in R_i = R_i[V/x]                R[V/z] =#x pm V as ⟨i, x⟩ in R[⟨i, x⟩/z]
  pm ⟨V, V′⟩ as ⟨x, y⟩ in R = R[V/x, V′/y]             R[V/z] =#x,y pm V as ⟨x, y⟩ in R[⟨x, y⟩/z]
  pm (refl V) as (refl x) in R = R[V/x]
  R[V_1/x, V_2/y, V/z] =#w pm V as (refl w) in R[w/x, w/y, (refl w)/z]
  i‘λ_j M_j = M_i                                      M = λ_i i‘M
  V‘λx M = M[V/x]                                      M =#x λx x‘M
  (M to x in N) to y in N′ = M to x in (N to y in N′)
  M to x in λ_i N_i = λ_i (M to x in N_i)
  M to x in λy N =#y λy (M to x in N)

Figure 4 Equations of dCBPV. These should be read as equations of typed terms: we impose them if we can derive that both sides of the equation are terms of the same type (in context). We write =#x1,...,xn to indicate that, for the equation to hold, the identifiers x1, . . . , xn should, in both terms being equated, be replaced by fresh identifiers, in order to avoid unwanted variable bindings.
Figures 5 and 6 show the natural candidate CBV- and CBN-translations of DTT with
some of the usual connectives into dCBPV: we treat sums, projection products, pattern
matching dependent products and unit types, dependent function types and identity types.
We can define the translations for projection dependent products as we have for their simple
relatives, if we add projection dependent products to dCBPV as is sketched in section 8.
It turns out that without Kleisli extensions for dependent functions, the CBV-translation
is not well-defined as it results in untypable terms. The CBN-translation is, but only
if we restrict to the weak (non-dependent) elimination rules for Σ1≤i≤n -, 1-, Σ- and Id-types, meaning that the type we are eliminating into does not depend on the type being
eliminated from. One would expect the CBV-translation to factorise as a translation into
a dependently typed version of Moggi’s monadic metalanguage without dependent Kleisli
extensions, followed by a translation of this into dCBPV-. While the latter is definable, the
former is ill-defined3 . Perhaps this is a (partial) explanation of why all (CBV) dependently
typed languages with effects have encapsulated the effects in a monad. The exceptions are
non-termination and recursion. As we shall see in section 6, dependent Kleisli extensions are
straightforward in that case without imposing subtyping conditions, which means we can
treat these as a first class effects and do not have to encapsulate them in a monad.
It seems likely that one could obtain soundness and completeness results for these
translations with respect to the correct equational theories for CBV- and CBN-DTT. As
we are not aware of any such equational theories being described in the literature, we propose
to define these through their translations into dCBPV. Both translations might result in a
broken η-law for Id-types, in presence of effects, even if we assume one in dCBPV.
3 Although this may remind the reader of the situation of dependently typed dual intuitionistic linear logic (a comonadic language), where the Girard translation (essentially the CBN-translation) from DTT fails, we appear to be observing a qualitatively different phenomenon rather than a mere mirror image.
CBV type  Γ ⊢ A type                  CBPV type  U F Γ^v ⊢ A^v vtype
  Σ_{1≤i≤n} A_i                         Σ_{1≤i≤n} A_i^v
  Π_{1≤i≤n} A_i                         U Π_{1≤i≤n} F A_i^v
  Π_{x:A} A′                            U (Π_{x:A^v} F A′^v[tr x/z])
  1                                     1
  Σ_{x:A} A′                            Σ_{x:A^v} A′^v[tr x/z]
  Id_A(M, N)                            Id_{U F A^v}(thunk M^v, thunk N^v)

CBV term  x_1 : A_1, . . . , x_m : A_m ⊢ M : A
CBPV term  x_1 : A_1^v, . . . , x_m : A_m^v[. . . tr x_i/z_i . . .] ⊢c M^v : F (A^v[tr x_1/z_1, . . . , tr x_n/z_n])
  x                                     return x
  let x be M in N                       M^v to x in N^v
  ⟨i, M⟩                                M^v to x in return ⟨i, x⟩
  pm M as ⟨i, x⟩ in N_i                 M^v to z in (pm z as ⟨i, x⟩ in N_i^v)
  λ_i M_i                               return thunk (λ_i M_i^v)
  i‘N                                   N^v to z in (i‘force z)
  λx M                                  return thunk λx M^v
  M‘N                                   M^v to x in (N^v to z in (x‘force z))
  ⟨⟩                                    return ⟨⟩
  pm M as ⟨⟩ in N                       M^v to z in (pm z as ⟨⟩ in N^v)
  ⟨M, N⟩                                M^v to x in (N^v to y in return ⟨x, y⟩)
  pm M as ⟨x, y⟩ in N                   M^v to z in (pm z as ⟨x, y⟩ in N^v)
  refl M                                M^v to z in return refl tr z
  pm M as (refl x) in N                 M^v to z in (pm z as (refl y) in (force y to x in N^v))

Figure 5 A translation of dependently typed CBV into dCBPV. We write tr as an abbreviation for thunk return and U F Γ := z_1 : U F A_1, . . . , z_n : U F A_n for a context Γ = x_1 : A_1, . . . , x_n : A_n.
CBN type  Γ ⊢ B type                  CBPV type  U Γ^n ⊢ B^n ctype
  Σ_{1≤i≤n} B_i                         F Σ_{1≤i≤n} U B_i^n
  Π_{1≤i≤n} B_i                         Π_{1≤i≤n} B_i^n
  Π_{x:B} B′                            Π_{x:U B^n} B′^n
  1                                     F 1
  Σ_{x:B} B′                            F (Σ_{x:U B^n} U B′^n)
  Id_B(M, M′)                           F (Id_{U B^n}(thunk M^n, thunk M′^n))

CBN term  x_1 : B_1, . . . , x_m : B_m ⊢ M : B
CBPV term  x_1 : U B_1^n, . . . , x_m : U B_m^n ⊢c M^n : B^n
  x                                     force x
  let x be M in N                       let x be (thunk M^n) in N^n
  ⟨i, M⟩                                return ⟨i, thunk M^n⟩
  pm M as ⟨i, x⟩ in N_i                 M^n to z in (pm z as ⟨i, x⟩ in N_i^n)
  λ_i M_i                               λ_i M_i^n
  i‘M                                   i‘M^n
  λx M                                  λx M^n
  N‘M                                   (thunk N^n)‘M^n
  ⟨⟩                                    return ⟨⟩
  pm M as ⟨⟩ in N                       M^n to z in (pm z as ⟨⟩ in N^n)
  ⟨M, N⟩                                return ⟨thunk M^n, thunk N^n⟩
  pm M as ⟨x, y⟩ in N                   M^n to z in (pm z as ⟨x, y⟩ in N^n)
  refl M                                return refl thunk M^n
  pm M as (refl x) in N                 M^n to z in (pm z as (refl x) in N^n)

Figure 6 A translation of dependently typed CBN into dCBPV. We write U Γ := x_1 : U A_1, . . . , x_n : U A_n for a context Γ = x_1 : A_1, . . . , x_n : A_n.
4 Abstract Categorical Semantics
We have reached the point in the story that was our initial motivation to study dCBPV: its
very natural categorical semantics. To formulate our dCBPV models, we recall the notion of
an indexed category with full and faithful democratic comprehension, which is equivalent to
Jacobs’ notion of a split full democratic comprehension category with unit [22].
C
I Definition 1 (Comprehension Axiom). Let B op −→ Cat be an indexed category (a functor
f
to the category Cat of small categories). Given B 0 −→ B in B, let us write the change of
base functor C(B) −→ C(B 0 ) as −{f }. Recall that C satisfies the comprehension axiom if
B has a terminal object · and all fibres C(B) have terminal objects 1B that are stable
under change of base;
f
the presheaves (B 0 −→ B) - C(B 0 )(1, C{f }) on B/B are representable: we have rephf,−i
pB,C
resenting objects B.C −→ B and natural bijections C(B 0 )(1, C{f }) −→ B/B(f, pB,C ).
Write vB,C for the (universal) element of C(B.C)(1, C{pB,C }) corresponding to idpB,C . Define
B.C
diagB,C := hidB.C , vB,C i
C(B)(C 0 , C)
pB,−
}i
- B.C.C{pB,C }, B 0 .C{f } qf,C := hpB ,C{f } ; f, vB ,C{fB.C and
:= hpB,C 0 , vB,C 0 ; −{pB,C 0}i
B/B(pB,C 0 , pB,C ). When these last maps pB,−
0
0
are full and faithful for all B, we call the comprehension full and faithful, respectively. When
∼ B, we call it democratic.
the comprehension induces an equivalence C(·) =
I Definition 2 (dCBPV- Model). By a dCBPV- model, we shall mean the following data.
C
An indexed category B op −→ Cat with full and faithful democratic comprehension;
D
An indexed category B op −→ Cat;
An indexed adjunction F a U : D C (adjunctions compatible with change of base);
Π-types in D: right adjoints −{pA,B } a ΠB : D(A.B) D(A) satisfying the right
Beck-Chevalley condition (a technical condition, compatibility with substitution [22]);
Finite products (Π1≤i≤n Di ) in the fibres of D, stable under change of base;
Σ-types in C: objects ΣC D of C(B) such that pB,ΣC D = pB.C,D ; pB,C ;
Id-types in C: objects IdC of C(B.C.C) such that pB.C.C,IdC = diagB,C ;
0, +-types in C: finite coproducts (Σ1≤i≤n Ci ) in the fibres of C, stable under change of
base, such that the following canonical morphisms are bijections, for E ∈ {C, D},
E(C.Σ1≤i≤n Ci )(X, Y ) −→ Π1≤i≤n E(C.Ci )(X{pC,hi,idCi i }, Y {pC,hi,idCi i });
I Definition 3 (dCBPV+ Model). By a dCBPV+ model, we mean a dCBPV- model with
(−)∗
specified maps C(Γ.A.Γ0 {pΓ,ηA })(1, U B{qpΓ,ηA ,Γ0 }) −→ C(Γ.U F A.Γ0 )(1, U B), called dependent Kleisli extensions, where η is the unit of F a U , such that (−)∗ agrees with the usual
Kleisli extensions of F a U and is compatible with η and −{−} in the obvious sense.
I Remark. Note that, for both notion of model, we retrieve (up to equivalence) Levy’s
adjunction models for CBPV if we restrict to C = self(B), where self(B)(B)(B 0 , B 00 ) := B(B ×
B 0 , B 00 ), and D that are locally B-indexed in the sense that change of base is the identity on
objects and we drop the requirement of identity types (which then corresponds to objects 1/B
such that 1/B × B ∼
= 1). In particular, stability of the adjunction under change of base then
implies the strength of the monad T = U F and stability of coproducts becomes distributivity.
sA,B
More generally, in a dCBPV- model, we can define a strength ΣA T B −→ T ΣA B, while a
s0A,B
tA,B
costrength ΣT A B −→ T ΣA T B{pΓ,ηA } (hence a pairing ΣT A T B −→ T ΣA B{pΓ,ηA }) can
only be defined in dCBPV+ models.
This semantics is sound and complete.
I Theorem 4 (dCBPV Semantics). We have a sound interpretation of dCBPV- in a dCBPVmodel and of dCBPV+ in a dCBPV+ model, where we also give the interpretation of the
stack judgement of section 6 (together with the obvious interpretation of terms, e.g. interpreting M to x in N using Kleisli extensions, which we leave to the imagination of the reader):
[[·]] = ·
[[Γ, x : A]] = [[Γ]].[[A]]
[[Γ `v A]] = C([[Γ]])(1, [[A]])
[[Γ `c B]] = D([[Γ]])(F 1, [[B]])
[[Γ; B `k C]] = D([[Γ]])([[B]], [[C]])
[[A[V /x]]] = [[A]]{hhid[[Γ]] , [[V ]]i, id[[Γ0 [V /x]]] i}
[[B[V /x]]] = [[B]]{hhid[[Γ]] , [[V ]]i, id[[Γ0 [V /x]]] i}
[[IdA (V, V 0 )]] = Id[[A]] {hhid[[Γ]] , [[V ]]i, [[V 0 ]]i}
[[Σ1≤i≤n Ai ]] = Σ1≤i≤n [[Ai ]]
[[Σx:A A0 ]] = Σ[[A]] [[A0 ]]
[[1]] = 1
[[Π1≤i≤n B i ]] = Π1≤i≤n [[B i ]]
[[Πx:A B]] = Π[[A]] [[B]]
[[F A]] = F [[A]]
[[U B]] = U [[B]].
The interpretations in such categories are complete: an equality of values or computations holds in all interpretations iff it is provable in the syntax of dCBPV. In fact, if we add
the obvious (admissible) weakening and exchange rules to dCBPV, the interpretation defines
an onto correspondence between categorical models and syntactic theories in dCBPV which
satisfy mutual soundness and completeness results. This correspondence becomes 1-1 and we
obtain completeness for the stack judgement if we include complex stacks.
Proof (sketch). The proof goes almost entirely along the lines of the soundness and completeness proofs for linear DTT in [12]. For completeness result, we build a syntactic category,
after conservatively extending our syntax with complex stacks as in [21]. ◀
5 Concrete Models
We can first note that if we restrict to the identity adjunction, both dCBPV- and dCBPV+
reduce to a reformulation of Jacobs’ notion of a full split democratic comprehension category
with Σ-, Σ1≤i≤n -, Π- and extensional Id-types, which is a standard notion of model of pure
DTT [22]. An example is the usual families of sets model Fam(Set) of pure DTT. (Recall
that Fam(Set) is defined as the restriction to Setop ⊆ Catop of the Cat-enriched hom-functor
into Set.) In particular, this shows consistency of the calculi.
I Theorem 5. Both dCBPV- and dCBPV+ are consistent.
More interestingly, any model of linear DTT supporting the appropriate connectives [12, 13]
gives rise to a model of dCBPV-, modelling commutative effects.
I Theorem 6. The notion of a model given by [12] for the dependently typed linear-nonlinear logic of [16] with the additional connectives of finite additive disjunctions is precisely a
dCBPV- model such that we have symmetric monoidal closed structures on the fibres of D,
stable under change of base, s.t. F consists of strong monoidal functors (sending nullary and
binary products in C to I and ⊗ in D) and which supports Σ^⊗_{F −}-types (see section 8).
By analogy with the simply typed case, models of DTT on which we have an indexed monad
are again a source of dCBPV- models (indexing of the monad corresponding to a strength).
I Theorem 7. Let C : B^op −→ Cat be a model of pure DTT (as above) on which we have an indexed monad T (a family of monads, stable under change of base). Then, the indexed Eilenberg–Moore adjunction C ⇄ C^T gives a model of dCBPV-.
Proof (sketch). As in the simply typed setting, a product of algebras is just the product of their carriers equipped with the obvious algebra structure. Indeed, it is a basic fact from category theory that the forgetful functor from the Eilenberg–Moore category creates limits.
Given an object k : T B −→ B of C^T(Γ.A), we note that we also obtain a canonical T-algebra structure on Π-types of carriers (starting from the identity on Π_A B); our definition of Π_A k is the composite
C(Γ)(Π_A B, Π_A B) ≅ C(Γ.A)((Π_A B){p_{Γ,A}}, B) −→ C(Γ.A)(T((Π_A B){p_{Γ,A}}), T B) ≅ C(Γ.A)((T Π_A B){p_{Γ,A}}, T B) −→ C(Γ.A)((T Π_A B){p_{Γ,A}}, B) ≅ C(Γ)(T Π_A B, Π_A B),
where the first arrow applies T and the second postcomposes with k.
We leave the verification of the T -algebra axioms to the reader.
J
A concrete example to which we can apply the previous theorem is obtained for any
monad T on Set. Indeed, we can lift T (pointwise) to an indexed monad on Fam(Set). In
a different vein, given a model C of pure DTT, the usual exception, global state, reader,
writer and continuation monads (which we form using objects of C(·)) give rise to indexed
monads, hence we obtain models of dCBPV-. More exotic examples are the many indexed
monads that arise from homotopy type theory, like truncation and cohesion (shape and sharp)
modalities [26, 27, 28]. A caveat there is that the identity types are intensional and that
many equations are often only assumed up to propositional rather than judgemental equality.
Models of dCBPV+ are harder to come by. Not every dCBPV- model generated by
theorem 7 admits dependent Kleisli extensions. Their existence needs to be assessed on a
case-by-case basis. Moreover, one indexed monad might be given dependent Kleisli extensions
in several inequivalent ways. Therefore, we treat some dCBPV- models for common effects
and discuss the (im)possibility of dependent Kleisli extensions.
Fortunately, the exceptions (and divergence) monads (−) + E on Set admit dependent Kleisli extensions, hence give rise to models of dCBPV+. Indeed, for a dependent function f ∈ Π_{a∈A} B(a) + E, we can define f* ∈ Π_{x∈A+E} B(x) + E by setting f*(a) := f(a) for a ∈ A and f*(e) := e otherwise. (Generally, it suffices to give dependent Kleisli extensions for B = F A′.)
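To make this concrete, here is a minimal Python sketch (ours, not part of the original development) of the exceptions monad on sets and of the dependent Kleisli extension just described; the tagged-pair encoding of A + E and all names are illustrative assumptions.

```python
# Minimal sketch (ours): the exceptions monad T X = X + E on Set, with elements
# of T X encoded as ("ok", x) or ("err", e).

def ret(x):                                # unit of the monad: A -> T A
    return ("ok", x)

def dependent_kleisli(f):
    """Given f with f(a) in B(a) + E for every a in A, return f* with
    f*(t) in B(t) + E for every t in A + E: f*(ok a) = f(a), f*(err e) = err e."""
    def f_star(t):
        tag, payload = t
        return f(payload) if tag == "ok" else t
    return f_star

# Example: A = int, B(a) = strings of length a; f itself may raise an exception.
def f(a):
    return ("ok", "x" * a) if a >= 0 else ("err", "negative_length")

f_star = dependent_kleisli(f)
print(f_star(ret(3)))                      # ('ok', 'xxx')
print(f_star(("err", "division_by_zero"))) # the exception is propagated unchanged
```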
Similarly, the domain semantics of DTT with recursion of [29] yields a model of dCBPV+ with intensional Id-types. C is defined to be an indexed category of continuous families of Scott predomains (disjoint unions of Scott domains) with continuous functions. Σ-types are interpreted in the usual set-theoretic way and are endowed with the product order, Σ_{1≤i≤n}-types are disjoint unions, and Id_A(x, y) := {z ∈ A | z ≤ x, y} with the order induced from A. D is defined to be the indexed category, over the category of Scott predomains, of continuous families of Scott domains with strict continuous functions. U is defined to be the usual inclusion, while F is the lift. Π_A B is defined to be the set of continuous sections of p_{A,U B} : Σ_A U B −→ A with the pointwise order. We note that for f ∈ Π_A U B, we can define f*(a) := f(a) for a ∈ A and f*(⊥) := ⊥.
By contrast, a non-trivial writer monad M × − (for a monoid M) on Set does not admit dependent Kleisli extensions. Indeed, we could take B to be a predicate on M × A such that B(1_M, a) = {∗} and otherwise B(m, a) = ∅ (which expresses that nothing is being written). Then ∗ ∈ Π_{a∈A} B(1_M, a) does not have a dependent Kleisli extension unless A = ∅.
Similarly, a non-trivial reader monad (−)^S on Set does not admit dependent Kleisli extensions. Indeed, we can define predicates B to specify that all s ∈ S yield the same value. An analogous argument applies to non-trivial global state monads (S × −)^S on Set.
For a continuation monad R^(R^−) on Set, one would hope to define dependent Kleisli extensions for f ∈ Π_{a∈A} R^(R^{B(ev_a)}) as f*(t)(k) := t(λa. f(a)(k)). However, this is only well-defined if we have ∀_{a∈A(c)} R^{B(c,t)} ⊆ R^{B(c,ev_a)}. In particular, we would have that B(c, ev_a) = B(c, ev_{a′}) for all a, a′ ∈ A(c).
Somewhat dubiously, we can define dependent Kleisli extensions for the erratic choice monad P on Set as f*(t) := (⋃_{a∈t} f(a)) ∩ B(t), for f ∈ Π_{a∈A} P B(a). This might not correspond to the desired operational behaviour: indeed, non-deterministic branching is not accumulated if it contradicts typing.
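The following small Python sketch (ours, for illustration only, with invented toy fibres) makes the "dubious" definition concrete: branches of the nondeterministic computation whose results fall outside the expected fibre B(t) are silently discarded.

```python
# Sketch (ours) of the erratic-choice (finite powerset) monad on Set with the
# dependent Kleisli extension f*(t) = (union of f(a) for a in t) intersected with B(t).
from functools import reduce

def dependent_kleisli_powerset(f, B):
    """f(a) is a subset of B({a}); B maps a finite set t to the fibre B(t)."""
    def f_star(t):
        union = reduce(set.union, (f(a) for a in t), set())
        return union & B(frozenset(t))     # branches that contradict the typing are dropped
    return f_star

# Toy fibres: B(t) = {0, ..., min(t)}, so enlarging t can shrink the fibre.
B = lambda t: set(range(min(t) + 1)) if t else set()
f = lambda a: {a - 1, a}                   # for a >= 1, f(a) is contained in B({a}) = {0, ..., a}

f_star = dependent_kleisli_powerset(f, B)
print(f_star({3}))                         # {2, 3}
print(f_star({1, 3}))                      # {0, 1}: the branches 2 and 3 from a = 3 are lost
```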
Many of the non-examples of dCBPV+ models above could be mended by restricting the class of predicates we consider, which would involve passing to a less simplistic model of dependent types than mere families of sets. For example, for the writer monad, we want to demand for types B depending on M × A the inclusion B(1_M, a) ⊆ B(m, a). We will see this idea reflected in the subject reduction property of the operational semantics of dCBPV+.
6
Operational Semantics
Importantly, CBPV admits a natural operational semantics that, for ground terms, reproduces
the usual operational semantics of CBV and CBN under the specified translations into CBPV
[18]. This can easily be extended to dCBPV.
We define the operational semantics on dCBPV terms not involving complex values, which unnecessarily complicate the presentation of the operational semantics. Complex values are defined to be values containing pm-as-in- and let-be-in-constructs. As the normalization of values does not produce effects, all reasonable evaluation strategies for them are observationally indistinguishable and we could choose our favourite. However, we may still want to allow complex values to occur in types, as this greatly increases the expressivity of our type system. To make sure we lose no expressive power when excluding complex values, we add the rule of figure 7 to dCBPV. We note that this rule introduces no new terms to dCBPV, at least up to judgemental equality, as M to x in N = return V to x in N = let V be x in N. This means that it is invisible to the categorical semantics. It allows us to define the operational semantics for a wider range of terms, however. In particular, it allows us to eliminate complex values from computations (even at types built with complex values!).
From Γ ⊢c M = return V : F A and Γ, x : A, Γ′ ⊢c N : B, infer Γ, Γ′[V/x] ⊢c M to x in N : B[V/x].
Figure 7 An extra rule to add to dCBPV which does not introduce any new terms.
I Theorem 8 (Redundancy of Complex Values for Computations). For any dCBPV- or dCBPV+ computation Γ ⊢c M : B, there is a computation Γ ⊢c M̃ : B (of dCBPV- or dCBPV+, respectively) which does not contain complex values, such that Γ ⊢c M = M̃ : B. Moreover, both the CBV- and CBN-translations only produce complex-value-free computations.
We present a small-step operational semantics for (complex value-free) dCBPV computations
in terms of a simple abstract machine that Levy calls the CK-machine. The configuration of
such a machine consists of a pair M, K where Γ `c M : B is a complex value-free computation
and Γ; B `k K : C is a compatible (simple) stack. We call C the type of the configuration.
Stacks4 are formed according to the rules of figure 8.
` Γ; C ctxt
k
Γ; C ` nil : C
Γ, x : A `c M : B
k
Γ; Bj `k K : C
Γ; B `k K : C
Γ; F A ` [·] to x in M :: K : C
k
Γ; Π1≤i≤n Bi ` j :: K : C
Γ `v V : A
Γ; B[V /x] `k K : C
Γ; Πx:A B `k V :: K : C
Figure 8 The rules for forming (simple) stacks.
The initial configurations, transitions (which embody directed versions of the β-rules
of our equational theory) and terminal configurations in the evaluation of a computation
Γ `c M : C on the CK-machine are specified by figure 9.
The operational semantics of dCBPV- and dCBPV+ satisfy the following basic properties.
I Theorem 9 (Pure dCBPV Operational Properties). For every configuration of the CK-machine, at most one transition applies. No transition applies precisely when the configuration
is terminal. Every configuration of type C reduces, in a finite number of transitions, to a
unique terminal configuration of type C.
Proof (sketch). The proof for dCBPV- is no different from the simply typed situation [18],
as the transitions are defined on untyped configurations and are easily seen to preserve types.
Note that types only depend on values, which never get reduced in our transitions.
Only the subject reduction property for dCBPV+ requires some thought. Suppose we
start from a configuration M, K with Γ `c M : B and Γ; B `k K : C. What could conceivably
4
To be precise, these are what Levy calls simple stacks [18]. Analogous to complex values, one can also
conservatively extend the calculus with so-called complex stacks, e.g. by allowing pattern matching into
stacks. This gives us a 1-1 correspondence between categorical models and syntactic theories.
Initial Configuration: M , nil

Terminal Configurations:
return V , nil
λi M_i , nil
λx M , nil
force z , K
pm z as ⟨i, x⟩ in M_i , K
pm z as ⟨⟩ in M , K
pm z as ⟨x, y⟩ in M , K
pm z as (refl x) in M , K

Transitions:
let V be x in M , K                ⇝  M[V/x] , K
M to x in N , K                    ⇝  M , [·] to x in N :: K
return V , [·] to x in N :: K      ⇝  N[V/x] , K
force thunk M , K                  ⇝  M , K
pm ⟨j, V⟩ as ⟨i, x⟩ in M_i , K     ⇝  M_j[V/x] , K
pm ⟨⟩ as ⟨⟩ in M , K               ⇝  M , K
pm ⟨V, W⟩ as ⟨x, y⟩ in M , K       ⇝  M[V/x, W/y] , K
pm (refl V) as (refl x) in M , K   ⇝  M[V/x] , K
j‘M , K                            ⇝  M , j :: K
λi M_i , j :: K                    ⇝  M_j , K
V‘M , K                            ⇝  M , V :: K
λx M , V :: K                      ⇝  M[V/x] , K

Figure 9 The behaviour of the CK-machine in the evaluation of a computation Γ ⊢c M : C.
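To make the transition table concrete, the following Python sketch (ours, covering only the let, sequencing, return, force/thunk, λ and application rules of figure 9) runs a toy CK-machine over an invented first-order term representation; the constructors and helper names are assumptions made for the example, and capture-avoiding substitution is elided.

```python
# Toy CK-machine for a fragment of CBPV, following figure 9.
# Terms: ("let", V, x, M), ("to", M, x, N), ("return", V), ("force", ("thunk", M)),
# ("lam", x, M), ("app", V, M) for V`M, and variables ("var", x).
# Stack frames: ("to", x, N) for [.] to x in N :: K, and ("arg", V) for V :: K.

def subst(term, x, v):
    """Naive substitution (no capture avoidance; fine for the closed example below)."""
    if term == ("var", x):
        return v
    if isinstance(term, tuple):
        return tuple(subst(t, x, v) if isinstance(t, tuple) else t for t in term)
    return term

def step(M, K):
    tag = M[0]
    if tag == "let":                                   # let V be x in M'  ~>  M'[V/x]
        _, V, x, body = M; return subst(body, x, V), K
    if tag == "to":                                    # M' to x in N  ~>  M' , [.] to x in N :: K
        _, Mp, x, N = M; return Mp, [("to", x, N)] + K
    if tag == "return" and K and K[0][0] == "to":      # return V , [.] to x in N :: K  ~>  N[V/x] , K
        _, x, N = K[0]; return subst(N, x, M[1]), K[1:]
    if tag == "force":                                 # force (thunk M')  ~>  M'
        return M[1][1], K
    if tag == "app":                                   # V`M'  ~>  M' , V :: K
        _, V, Mp = M; return Mp, [("arg", V)] + K
    if tag == "lam" and K and K[0][0] == "arg":        # λx M , V :: K  ~>  M[V/x] , K
        _, x, body = M; return subst(body, x, K[0][1]), K[1:]
    return None                                        # terminal configuration

def run(M):
    K = []                                             # initial configuration M , nil
    while (nxt := step(M, K)) is not None:
        M, K = nxt
    return M, K

# (return 5) to x in (let x be y in return y)
prog = ("to", ("return", 5), "x", ("let", ("var", "x"), "y", ("return", ("var", "y"))))
print(run(prog))                                       # (('return', 5), [])
```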
happen during a transition is that the resulting configuration M′, K′ can be given types Γ ⊢c M′ : B′ and Γ; B″ ⊢k K′ : C, but not Γ ⊢ B′ = B″.
The only transition that can possibly cause this problem is the one whose initial computation is return V (see figure 9). Indeed, what could have happened is that we started from a configuration M to x in N, K, with Γ ⊢c M to x in N : B[thunk M/z], which transitions to M, [·] to x in N :: K. After this, we continue reducing M, until we end up in a configuration return V, [·] to x in N :: K. It is now critical, when we apply one more transition (the dangerous one), that for the resulting configuration N[V/x], K we again have that Γ ⊢ N[V/x] : B[thunk M/z], while we know that Γ ⊢ N[V/x] : B[thunk (return V)/z].
This is indeed the case in pure dCBPV+, as all minimal sequences of transitions of the shape M, K ⇝ M′, K just consist of an application of a directed version (left to right) of one of the judgemental equalities of figure 4, hence M = return V, so Γ ⊢ N[V/x] : B[thunk M/z]. J
7
Adding Effects
We show by example how one adds effects to dCBPV, focussing on the operational semantics.
Figure 10 gives examples of effects one could consider: divergence, recursion, printing elements
m of some monoid M, erratic choice from finitely many alternatives, errors e from some set
E, writing a global state s ∈ S and reading the global state. The framework fits many
more examples like probabilistic erratic choice, local references and control operators [18].
Γ ⊢c diverge : B (no premises).
Γ ⊢c error e : B (no premises).
From Γ, z : U B ⊢c M : B, infer Γ ⊢c µz M : B.
From Γ ⊢c M : B, infer Γ ⊢c print m . M : B.
From {Γ ⊢c M_i : B}_{1≤i≤n}, infer Γ ⊢c choose_i(M_i) : B.
From Γ ⊢c M : B, infer Γ ⊢c write s . M : B.
From {Γ ⊢c M_s : B}_{s∈S}, infer Γ ⊢c readto_s(M_s) : B.
Figure 10 Some examples of effects we could add to CBPV.
For the operational semantics of printing and state, we need to add some hardware to our machine. Therefore, a configuration will now consist of a quadruple M, K, m, s where M, K are as before, m is an element of a printing monoid (M, ε, ∗) modelling some channel for output, and s is an element of a finite pointed set of states (S, s0) holding the current value of our storage cell. We lift the operational semantics of all existing language constructs to this setting by specifying that they do not modify m and s, that terminal configurations can have any value of m and s, and that initial configurations always have value m = ε and s = s0 for the fixed initial state s0. Figure 11 specifies the operational semantics for our effects.
We extend the results of the previous section to this effectful setting. While we will have
to add some extra rules to dCBPV+, dCBPV- satisfies theorem 10 without any further rules.
Transitions:
diverge , K , m , s                  ⇝  diverge , K , m , s
µz M , K , m , s                     ⇝  M[thunk µz M/z] , K , m , s
choose_i(M_i) , K , m , s            ⇝  M_j , K , m , s
print n . M , K , m , s              ⇝  M , K , m ∗ n , s
write s′ . M , K , m , s             ⇝  M , K , m , s′
readto_{s′}(M_{s′}) , K , m , s      ⇝  M_s , K , m , s

Terminal Configurations:
error e , K , m , s

Figure 11 The operational semantics for divergence, recursion, erratic choice, errors, printing and writing and reading global state.
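A minimal sketch (ours) of how the printing and state transitions of figure 11 extend such a machine: configurations carry an output value m and a store value s, and only the effect constructors touch them. The term encoding continues the invented representation from the earlier sketch, with strings under concatenation standing in for the printing monoid.

```python
# Sketch (ours): effect transitions from figure 11 on configurations (M, K, m, s).
# ("print", n, M) prints n and continues as M; ("write", s1, M) updates the store;
# ("readto", table) continues as table[s] for the current store value s;
# ("choose", [M1, ..., Mn]) may pick any branch (here: the first, for determinism).

def step_effects(M, K, m, s):
    tag = M[0]
    if tag == "print":                     # print n . M , K, m, s  ~>  M , K, m*n, s
        return M[2], K, m + M[1], s
    if tag == "write":                     # write s' . M  ~>  M with store s'
        return M[2], K, m, M[1]
    if tag == "readto":                    # readto_{s'}(M_{s'})  ~>  M_s for current s
        return M[1][s], K, m, s
    if tag == "choose":                    # erratic choice
        return M[1][0], K, m, s
    return None                            # defer to the pure transitions / terminal

M = ("print", "a", ("write", 1, ("readto", {0: ("return", "zero"), 1: ("return", "one")})))
conf = (M, [], "", 0)                      # initial: empty output, initial state s0 = 0
while (nxt := step_effects(*conf)) is not None:
    conf = nxt
print(conf)                                # (('return', 'one'), [], 'a', 1)
```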
I Theorem 10 (Effectful dCBPV Operational Properties). Transitions respect the type of the
configuration. No transition applies precisely when we are in a terminal configuration. In the absence of erratic choice, at most one transition applies to each configuration. In the absence of divergence and recursion, every configuration reduces to a terminal one in a finite number of steps.
As highlighted by the proof of theorem 9, to guarantee subject reduction for dCBPV+, we again need to establish that for every transition M, K, m, s ⇝ M′, K, m′, s′ of figure 11, Γ ⊢c N : B[thunk M′/z] implies that Γ ⊢c N : B[thunk M/z]. A sufficient condition would be to add the equations M = M′. For divergence and errors, these equations would be vacuous, and for recursion, it gives us the usual equation µz M = M[thunk µz M/z]. For printing, erratic choice and global state, however, these equations are not reasonable to demand. Instead, for these effects, we suggest to add explicitly the rules: from Γ ⊢c N : B[thunk M′/z], infer Γ ⊢c N : B[thunk M/z].
Specifically, we see that a computation type in which we substitute (the thunk of) a
computation with multiple branches (erratic choice, reading state) has to contain each of the
types obtained by substituting any of the branches instead. Similarly, a type in which we
substitute (the thunk of) a computation which writes (printing, writing a state) and then
performs a computation M has to contain the type obtained by just substituting M . In
short: a type can only shrink as the computation unfolds. As anticipated, these rules restore
the subject reduction property for effectful dCBPV+ and make theorem 10 hold for it too.
We can extend the CBV- and CBN-translations to the effectful setting following figure 12. By analogy with the simply typed situation, this should induce the expected operational semantics, at least for terms of ground type, although we are not aware of such an operational semantics ever being outlined in the literature.
CBV term M               CBPV term M^v
op(M_1, . . . , M_m)     op(M_1^v, . . . , M_m^v)
µx M                     µz (force z to x in M^v)

CBN term M               CBPV term M^n
op(M_1, . . . , M_m)     op(M_1^n, . . . , M_m^n)
µz M                     µz M^n
Figure 12 The CBV- and CBN-translations for effectful terms. Here, we treat all algebraic
effects (in particular, divergence, errors, erratic choice, printing and global state) uniformly by
letting op range over their operations. z is assumed to be fresh in the CBV-translation µx M .
I Remark (Type Checking). One could argue that a type checker is as important an operational
aspect to the implementation of a DTT as a small-step semantics. We leave the description of
a type checking algorithm to future work. We note that the core step in the implementation
of a type checker is a normalization algorithm for directed versions (from left to right) of the
judgemental equalities, as this would give us a normalization procedure for types. We hope
to be able to construct such an algorithm for dCBPV- using normalization by evaluation
by combining the techniques of [30] and [31]. Our hope is that this will lead to a proof
of decidable type checking of the system at least in absence of the η-law for Id-types and
without recursion. We note that the complexity of a type checking algorithm can vary widely
depending on which equations we include for effects. The idea is that one only includes
a basic set of program equations (perhaps including algebraicity equations for effects) as
judgemental equalities to be able to decide type checking and to postulate other equations
(like the Plotkin-Power equations for state) as propositional equalities, which can be used
for manual or tactic-aided reasoning about effectful programs. Type checking for effectful
dCBPV+ seems more challenging, as the system essentially features a form of subtyping.
8
More Connectives?
We very briefly outline how to add more connectives to dCBPV.
A first class of connectives we consider are projection dependent products, or Σ-types
on computations. Given a context Γ, z1 : U B1 , . . . , zn : U Bn , we can form these types
Γ ⊢ Π^dep_{1≤i≤n} B_i ctype and we construct and destruct their terms using the rules of figure 13.
From {Γ ⊢c M_i : B_i[thunk M_1/z_1, . . . , thunk M_{i−1}/z_{i−1}]}_{1≤i≤n}, infer Γ ⊢c λi M_i : Π^dep_{1≤i≤n} B_i.
From Γ ⊢c M : Π^dep_{1≤i≤n} B_i, infer Γ ⊢c i‘M : B_i[thunk 1‘M/z_1, . . . , thunk (i − 1)‘M/z_{i−1}].
Figure 13 Rules for projection dependent products, to which we add the obvious β- and η-laws.
Their categorical semantics corresponds to having strong n-ary Σ-types in D, in the sense of objects Π^dep_{1≤i≤n} B_i such that p_{Γ,U Π^dep_{1≤i≤n} B_i} = p_{Γ.U B_1.···.U B_{n−1}, U B_n}; . . . ; p_{Γ,U B_1}. In particular, we see that U Π^dep_{1≤i≤2} B_i ≅ Σ_{U B_1} U B_2. Similarly, we can define equivalents of the other positive connectives R, like Σ_{1≤i≤n}- and Id-types on computation types, in the sense of connectives R′(B_1, . . . , B_n) such that U R′(B_1, . . . , B_n) ≅ R(U B_1, . . . , U B_n). These correspond to e.g.
additive Σ- and Id-types in linear DTT. In all cases, we give an operational semantics where
destructors push to the stack and constructors pop the stack and substitute.
Another connective from linear DTT that generalizes to dCBPV is the multiplicative Σ-type Γ ⊢ Σ^⊗_{F A} B ctype, for Γ, x : A ⊢ B ctype. These type formers are dependently typed generalizations of the F(−) ⊗ −-connective of the EEC (there written !(−) ⊗ −). Their categorical semantics, as in linear DTT, is given by left adjoint functors Σ^⊗_{F A} ⊣ −{p_{Γ,A}} to change of base in D, satisfying the left Beck–Chevalley condition. In particular, they satisfy the equation F Σ_A A′ ≅ Σ_{F A} F A′.
Finally, to build a practically useful system, strong type forming mechanisms like inductive families and induction-recursion (including universes) should probably be added to dCBPV as constructions on value types. While it seems believable that we then still get CBV- and CBN-translations into dCBPV+ for inductive families, the situation for universes is less straightforward. Another question is whether such translations are desirable or if we are better off working with the simpler system dCBPV- in the first place.
9
Future Work
In this paper, we gave an extension to the realm of dependent types of Levy's CBPV analysis of CBV- and CBN-λ-calculi. We hope, on the one hand, that this can shed some light on the theoretical puzzle of how the relationship between effects and dependent types can be understood. On the other hand, we hope it can provide some guidance in the daunting but worthwhile challenge of combining dependent types with effects in a system which is simultaneously mathematically elegant and practically useful for writing certified real-world software. To further these two goals, we have the following future work in mind.
A first priority is the implementation of a type checker for dCBPV- (and perhaps
dCBPV+). A second question to be addressed is what needs to be done to make dCBPV into
a practically useful language for certified effectful programming. Does dCBPV- suffice for
this or do we need to turn to dCBPV+? Thirdly, the recent CBN game semantics for DTT
of [32], which can be extended to model recursion, local ground store and control operators,
should give rise to a model of dCBPV+. Fourthly, we want to investigate if the CBV- and
CBN-translations into dCBPV+ extend to more expressive type forming mechanisms like
inductive families and inductive-recursive definitions like type universes. In particular, we
hope this will lead to a better understanding of the rather enigmatic nature of CBV-type
dependency. Finally, it remains to be seen what the status is of subject reduction in dCBPV+
in presence of other effects like control operators, local state and general references. We hope
the game semantics of DTT can be of use here.
Related Work This paper is based on the preprint [25], which provides more context, proofs
and discussion. Since this appeared, independent work by Ahman, Ghani and Plotkin [33]
has been made public which partly overlaps with sections 3 and 4 of this paper. It describes a
dependently typed EEC, closely related to dCBPV- extended with complex stacks and Σ^⊗_{F −}-types, and its categorical semantics which (modulo details) is a fibrational reformulation of
our semantics for this calculus. Additionally, our sections 3 and 4 consider a more expressive
calculus dCBPV+ where we can substitute effectful computations in dependent functions.
This allows us to define CBV- and CBN-translations into it. Also relevant is the work on
linear [16, 13] and polarised [20] DTT and on domain and game semantics for DTT [29, 32].
Acknowledgements I want to thank Tim Zakian, Alex Kavvos and Sam Staton for many
interesting discussions and Paul Blain Levy for his explanations. I am grateful to Samson
Abramsky for his support. The author is funded by the EPSRC and the Clarendon Fund.
References
1 Martin Hofmann. Extensional Constructs in Intensional Type Theory. Springer, 1997.
2 The Coq development team. The Coq proof assistant reference manual. LogiCal Project, 2004. Version 8.0.
3 Ulf Norell. Towards a practical programming language based on dependent type theory, volume 32. Chalmers University of Technology, 2007.
4 Lennart Augustsson. Cayenne — a language with dependent types. In ACM SIGPLAN Notices, volume 34, pages 239–250. ACM, 1998.
5 Thorsten Altenkirch, Nils Anders Danielsson, Andres Löh, and Nicolas Oury. ΠΣ: Dependent types without the sugar. In Functional and Logic Programming, pages 40–55. Springer Berlin Heidelberg, 2010.
6 Chris Casinghino, Vilhelm Sjöberg, and Stephanie Weirich. Combining proofs and programs in a dependently typed language. ACM SIGPLAN Notices, 49(1):33–45, 2014.
7 Edwin Brady. Idris, a general-purpose dependently typed programming language: Design and implementation. Journal of Functional Programming, 23(05):552–593, 2013.
8 Hongwei Xi and Frank Pfenning. Dependent types in practical programming. In Proceedings of the 26th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 214–227. ACM, 1999.
9 Nikhil Swamy, Catalin Hritcu, Chantal Keller, Aseem Rastogi, Antoine Delignat-Lavaud, Simon Forest, Karthikeyan Bhargavan, Cédric Fournet, Pierre-Yves Strub, Markulf Kohlweiss, et al. Dependent types and multi-monadic effects in F*, 2015.
10 Aleksandar Nanevski, Greg Morrisett, and Lars Birkedal. Polymorphism and separation in Hoare type theory. In ACM SIGPLAN Notices, volume 41, pages 62–73. ACM, 2006.
11 Iliano Cervesato and Frank Pfenning. A linear logical framework. In LICS'96 Proceedings, pages 264–275. IEEE, 1996.
12 Matthijs Vákár. Syntax and semantics of linear dependent types. arXiv preprint arXiv:1405.0033, 2014.
13 Matthijs Vákár. A categorical semantics for linear logical frameworks. pages 102–116, 2015.
14 P. Nick Benton. A mixed linear and non-linear logic: Proofs, terms and models. In Computer Science Logic, pages 121–135. Springer, 1995.
15 Andrew Barber and Gordon Plotkin. Dual intuitionistic linear logic. University of Edinburgh, Department of Computer Science, 1996.
16 Neelakantan R. Krishnaswami, Pierre Pradic, and Nick Benton. Integrating linear and dependent types. In Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 17–30. ACM, 2015.
17 Eugenio Moggi. Notions of computation and monads. Information and Computation, 93(1):55–92, 1991.
18 Paul Blain Levy. Call-by-push-value: A Functional/Imperative Synthesis, volume 2. Springer Science & Business Media, 2012.
19 Nick Benton and Philip Wadler. Linear logic, monads and the lambda calculus. In Logic in Computer Science, 1996, LICS'96 Proceedings, Eleventh Annual IEEE Symposium on, pages 420–431. IEEE, 1996.
20 Daniel R. Licata and Robert Harper. Positively dependent types. In Proceedings of the 3rd Workshop on Programming Languages Meets Program Verification, pages 3–14. ACM, 2009.
21 Paul Blain Levy. Adjunction models for call-by-push-value with stacks. Theory and Applications of Categories, 14(5):75–110, 2005.
22 Bart Jacobs. Comprehension categories and the semantics of type dependency. Theoretical Computer Science, 107(2):169–207, 1993.
23 Jeff Egger, Rasmus Ejlers Møgelberg, and Alex Simpson. Enriching an effect calculus with linear types. In CSL, volume 5771, pages 240–254. Springer, 2009.
24 Paul Blain Levy. Call-by-push-value: Decomposing call-by-value and call-by-name. Higher-Order and Symbolic Computation, 19(4):377–414, 2006.
25 Matthijs Vákár. A framework for dependent types and effects. arXiv preprint arXiv:1512.08009, 2015.
26 U. HoTTbaki. Homotopy Type Theory: Univalent Foundations of Mathematics. http://homotopytypetheory.org/book, Institute for Advanced Study, 2013.
27 Michael Shulman. Brouwer's fixed-point theorem in real-cohesive homotopy type theory. arXiv preprint arXiv:1509.07584, 2015.
28 Urs Schreiber and Michael Shulman. Quantum gauge field theory in cohesive homotopy type theory. arXiv preprint arXiv:1408.0054, 2014.
29 Erik Palmgren and Viggo Stoltenberg-Hansen. Domain interpretations of Martin-Löf's partial type theory. Annals of Pure and Applied Logic, 48(2):135–196, 1990.
30 Andreas Abel, Klaus Aehlig, and Peter Dybjer. Normalization by evaluation for Martin-Löf type theory with one universe. Electronic Notes in Theoretical Computer Science, 173:17–39, 2007.
31 Danel Ahman and Sam Staton. Normalization by evaluation and algebraic effects. Electronic Notes in Theoretical Computer Science, 298:51–69, 2013.
32 Samson Abramsky, Radha Jagadeesan, and Matthijs Vákár. Games for dependent types. In Automata, Languages, and Programming, pages 31–43. Springer, 2015.
33 Danel Ahman, Neil Ghani, and Gordon Plotkin. Dependent types and fibred computational effects. Springer, 2016. To appear in Foundations of Software Science and Computation Structures, available at http://homepages.inf.ed.ac.uk/s1225336/.
Structured Triplet Learning with POS-tag Guided Attention
for Visual Question Answering
Zhe Wang1 Xiaoyi Liu2 Liangjian Chen1 Limin Wang3 Yu Qiao4 Xiaohui Xie1 Charless Fowlkes1
arXiv:1801.07853v1 [] 24 Jan 2018
1 Dept. of CS, UC Irvine   2 Microsoft   3 CVL, ETH Zurich   4 SIAT, CAS
Abstract
Visual question answering (VQA) is of significant interest due to its potential to be a strong test of image understanding systems and to probe the connection between language and vision. Despite much recent progress, general
VQA is far from a solved problem. In this paper, we focus
on the VQA multiple-choice task, and provide some good
practices for designing an effective VQA model that can
capture language-vision interactions and perform joint reasoning. We explore mechanisms of incorporating part-of-speech (POS) tag guided attention, convolutional n-grams,
triplet attention interactions between the image, question
and candidate answer, and structured learning for triplets
based on image-question pairs 1 . We evaluate our models
on two popular datasets: Visual7W and VQA Real Multiple Choice. Our final model achieves the state-of-the-art
performance of 68.2% on Visual7W, and a very competitive performance of 69.6% on the test-standard split of VQA
Real Multiple Choice.
1. Introduction
The rapid development of deep learning approaches
[9, 40, 39, 41] has resulted in great success in the areas of
computer vision [11, 32, 31, 28, 29, 30] and natural language processing [24, 18]. Recently, Visual Question Answer (VQA) [3, 10] has attracted increasing attention, since
it evaluates the capacity of vision systems for a deeper semantic image understanding, and is inspiring the development of techniques for bridging computer vision and natural language processing to allow joint reasoning [3, 42].
VQA also forms the basis for practical applications such as
tools for education, assistance for the visually-impaired, or
support for intelligence analysts to actively elicit visual information through language-based interfaces [4].
Given an image, a typical VQA task is to either gen-
1 Code: https://github.com/wangzheallen/STL-VQA. Contact: [email protected]
Q: Why was the hand of the woman
over the left shoulder of the man?
A: They were together and engaging in
affection.
A: The woman was trying to get the
man’s attention.
A: The woman was trying to scare the
man.
A: The woman was holding on to the
man for balance.
Figure 1. An example of the multiple-choice VQA. The red answer is the ground-truth, and the green ones are human-generated
wrong answers.
erate an answer as a free-form response to an open-ended
question, or pick from a list of candidate answers (multiple-choice) [3, 42]. Similar to [13] and [27], we mainly focus on the multiple-choice task in this paper, an example of
which is shown in Fig. 1. A good VQA system should be
able to interpret the question semantically, extract the key
information (i.e., objects, scenes, actions, etc.) presented in
the given image, and then select a reasonable answer after
jointly reasoning over the language-visual interactions.
In this paper, we propose a simple but effective VQA
model that performs surprisingly well on two popular
datasets: Visual7W Telling and VQA Real Multiple Choice. We start with the architecture in [27], which combines word features from the question and answer sentences
as well as hierarchical CNN features from the input image.
Our insights on “good practice” are fourfold: (i) To precisely capture the semantics in questions and answers, we
propose to exploit a part-of-speech (POS) tag guided attention model to ignore less meaningful words (e.g., coordinating conjunctions such as “for”, “and”, “or”) and put more
emphasis on the important words such as nouns, verbs and
adjectives. (ii) We leverage a convolutional n-gram model
[16] to capture local context needed for phrase-level or even
sentence-level meaning in questions and answers. (iii) To
integrate the vision component (represented by the CNN
features extracted from the pre-trained deep residual network (ResNet)), we introduce a triplet attention mechanism
based on the affinity matrix constructed by the dot product
of vector representations of each word in the question or answer and each sub-region in the image, which measures the
matching quality between them. After appropriate pooling
and normalization, we linearly combine the attention coefficients from questions and answers to weight relevant visual
features. (iv) To encourage the learning to be more discriminative, we mine hard negative samples by sending all answers corresponding to the same question to the network simultaneously. By enforcing a margin between the correct answer and the hardest negative answer, we observe improved performance.
Our proposed methods achieve state-of-the-art performance of 68.2% on the Visual7W benchmark, and a competitive performance of 69.6% on the test-standard split of VQA
Real Multiple Choice. Our approach offers simple insights
for effectively building high performance VQA systems.
2. Related Work
Models in VQA: Existing VQA solutions vary from symbolic approaches [21, 33], neural-based approaches [20, 25,
38, 23], memory-approaches [34, 14], to attention-based approaches [7, 19, 26, 17, 35]. In addition to these models,
some efforts have been spent on better understanding the
behavior of existing VQA models [1]. Jabri et al. [13]
proposed a simple model that takes answers as input and
performs binary predictions. Their model simply averages
word vectors to get sentence representations, but competes
well with other more complex VQA systems (e.g. LSTM).
Our work proposes another language representation (i.e.,
Convolutional n-grams) and achieves better performances
on both the Visual7W dataset [42] and the VQA dataset [3].
Attention in VQA: A number of recent works have explored image attention models for VQA [7, 19, 36]. Zhu
et al. [42] added spatial attention to the standard LSTM
model for pointing and grounded QA. Andreas et al. [2]
proposed a compositional scheme that exploited a language
parser to predict which neural module network should be
instantiated to answer the question. Fukui et al. [7] applied
multi-modal compact bilinear pooling to attend to images using questions. Gan et al. [8] link the COCO segmentation and captioning tasks, and add segmentation-like attention to their VQA system. Unlike
these works, we propose a POS tag guided attention, and
utilize both question and answer to drive visual attention.
Learning in VQA model: VQA models use either a softmax [3] or a binary loss [13]. Softmax-loss-based models
formulate VQA as a classification problem and construct a
large fixed answer database. Binary-loss-based models formulate multi-choice VQA as a binary classification problem by sending image, question and candidate answer as a
triplet into network. They are all based on independently
classifying (image, question, answer) triplets. Unlike our
model, none incorporate the other triplets corresponding to
the same (image, question) pair during training.
POS Tag Usage in Vision and Language: Our POS tag guided attention is different from [37]. They use POS tags to guide a parser that parses the whole sentence, model the LSTM in a hierarchical manner, and apply it to the image captioning task. Instead, we directly parse the POS tag of each word and utilize the POS tag as a learning mask on the word's Glove vector. We divide POS tags into 7 more fine-grained categories, as described in Sec. 3.2.1. There are also other papers using POS tags in different ways. [6] calculates average precision for different POS tags. [12] discovers that POS tags are very effective cues for guiding a Long Short-Term Memory (LSTM) based word generator.
3. Model Architecture
3.1. Architecture Overview
For the multiple-choice VQA, each provided training
sample consists of one image I, one question Q and N
candidate answers A_1, . . . , A_N, where A_1 is the ground-truth answer. We formulate this as a binary classification task by outputting a target prediction for each candidate
triple {“image”: I, “question”:Q, “candidate answer”:
Ai , “target”: ti }, where, for example, t1 = 1 and ti = 0
for i = 2, . . . , N .
Different from previous work [38, 13], we adopt the architecture in Fig. 2, which combines features from question, answer and image in a hierarchical manner. For a fair
comparison between our model and [13], we have implemented both and obtained the same performance of 64.8%
as [13] on Visual7W, but we find that the hierarchical model
training converges much faster (in less than 20 epochs).
Denote the final vector representations of the question, image and the i-th candidate answer by x_Q, x_I and x_Ai. In the first stage, x_Q and x_I are combined via the Hadamard product (element-wise multiplication) to obtain a joint representation of question and image:
x_QI = x_Q ⊙ x_I.
In the second stage, x_QI and x_Ai are fused as:
x_QIAi = tanh(W_QI x_QI + b_QI) ⊙ x_Ai.
In the last stage, a binary classifier is applied to predict the probability of the i-th candidate answer being correct:
p_i = sigmoid(W_QIA x_QIAi + b_QIA).
We jointly train all the weights, biases and embedding matrices to minimize the cross-entropy or structured loss. For inference on test samples, we calculate the probability of correctness for each candidate answer, and select the one with the highest probability as the predicted correct answer: i* = arg max_{i=1,...,N} p_i.
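A minimal NumPy sketch (ours) of the three fusion stages just described; the dimensions, random initialization, and the assumption that the answer vector already has the hidden size are placeholders, not the trained model.

```python
import numpy as np

d, h = 300, 4096                          # embedding size and hidden size (illustrative)
rng = np.random.default_rng(0)
W_QI, b_QI = rng.normal(0, 0.01, (h, d)), np.zeros(h)
W_QIA, b_QIA = rng.normal(0, 0.01, (1, h)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score_answer(x_Q, x_I, x_A):
    """Stage 1: Hadamard product of question and image; stage 2: fuse with the
    answer vector; stage 3: sigmoid score for 'this candidate answer is correct'."""
    x_QI = x_Q * x_I                                  # element-wise product
    x_QIA = np.tanh(W_QI @ x_QI + b_QI) * x_A         # x_A assumed to have size h here
    return sigmoid(W_QIA @ x_QIA + b_QIA)[0]

x_Q, x_I = rng.normal(size=d), rng.normal(size=d)
answers = [rng.normal(size=h) for _ in range(4)]      # 4 candidate answer vectors
scores = [score_answer(x_Q, x_I, a) for a in answers]
print(int(np.argmax(scores)))                         # index of the predicted answer
```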
Figure 2. Illustration of our pipeline for VQA. Best viewed in color. We first extract Glove vector representations of each word in the
question and answer, which are weighted by a POS tag guided attention for each word. We transform each sentence using a convolutional
n-gram to encode contextual information and average to get QUESTION-vec. For visual features, we utilize a standard CNN model
and conduct weighted summation to IMAGE-vec by applying triplet attention. Finally we combine QUESTION-vec, IMAGE-vec and
ANSWER-vec to score the quality of the proposed answer. Structured learning on triplets is applied after obtaining the scores of all answers corresponding to each image-question pair.
3.2. Model Details
In this subsection, we will describe how to obtain the final vector representations x_Q, x_I and x_Ai. We use the same mechanism for obtaining x_Q and x_Ai, and thus focus on the former. We represent each question with words q_1, q_2, . . . , q_M by X_Q = [e_1, e_2, . . . , e_M], where e_m is the corresponding vector for word q_m in the embedding matrix E.
3.2.1
POS Tag Guided Attention
For each question, it is expected that some words (i.e.,
nouns, verbs and adjectives) should matter more than the
others (e.g., the conjunctions). Hence, we propose to assign different weight to each word based on its POS tag to
impose different attentions.
In practice, we find that it works better to group the original 45 POS tags into a smaller number of categories. Specifically, we consider the following seven categories:
1. CD for cardinal numbers;
2. J (including JJ, JJR and JJS) for adjectives;
3. N (including NN, NNS, NNP and NNPS) for nouns;
4. V (including VB, VBD, VBG, VBN, VBP and VBZ)
for verbs;
5. WP (including WP and WP$) for Wh-pronouns;
6. WRB for Wh-adverb;
7. O for Others.
An example of one question and its POS tag categories is
shown in Fig. 2. For the question “Why was the hand of the
woman over the left shoulder of the man”, the POS tags are
given as “WRB, V, O, N, O, O, N, O, O, J, N, O, O, N”. Each
category is assigned one attention coefficient POS_i, which will be learned during training. In this way, each word is represented by ê_i = e_i × POS_i, and the question is represented by [ê_1, ê_2, . . . , ê_M].
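A small sketch (ours) of this step using NLTK's off-the-shelf tagger; the grouping follows the seven categories above, while the per-category weights and the embedding lookup are dummy stand-ins for the coefficients and Glove vectors learned or loaded in practice.

```python
import numpy as np
import nltk                                # assumes nltk and its tagger data are installed

GROUPS = {"CD": "CD", "JJ": "J", "JJR": "J", "JJS": "J",
          "NN": "N", "NNS": "N", "NNP": "N", "NNPS": "N",
          "VB": "V", "VBD": "V", "VBG": "V", "VBN": "V", "VBP": "V", "VBZ": "V",
          "WP": "WP", "WP$": "WP", "WRB": "WRB"}
# One learnable coefficient per category (here: dummy initial values in [0, 2)).
pos_weight = {g: w for g, w in zip(["CD", "J", "N", "V", "WP", "WRB", "O"],
                                   np.random.uniform(0, 2, 7))}

def pos_guided_embeddings(words, embed):
    """Scale each word's embedding by the weight of its POS-tag category."""
    tagged = nltk.pos_tag(words)
    return [embed(w) * pos_weight[GROUPS.get(tag, "O")] for w, tag in tagged]

words = "why was the hand of the woman over the left shoulder of the man".split()
embed = lambda w: np.ones(300)             # placeholder for a Glove lookup
weighted = pos_guided_embeddings(words, embed)
print(len(weighted), weighted[0].shape)    # 14 (300,)
```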
3.2.2
Convolutional N-Gram
We propose using a convolutional n-gram to combine contextual information over multiple words represented as vectors. Specifically, we utilize multiple window sizes for a one-dimensional convolutional neural network. For a window of size L, we apply the corresponding filter F_L for each word ê_i, obtaining F_L(ê_{i−(L−1)/2}, . . . , ê_i, . . . , ê_{i+(L−1)/2}) when L is odd, and F_L(ê_{i−L/2}, . . . , ê_i, . . . , ê_{i+L/2−1}) when L is even. Therefore, we not only consider the i-th word, but also the context within the window. Practically, we apply the filters with window sizes from 1 to L, and then max-pool all of them along each word to obtain a new representation
ẽ_i = maxpool(F_L, F_{L−1}, . . . , F_1).
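A NumPy sketch (ours) of the convolutional n-gram: for each window size we slide a filter over the word axis with the window placement given above and take an element-wise max across window sizes. The filters here are random placeholders for the learned ones, and the tanh nonlinearity is an assumption.

```python
import numpy as np

def conv_ngram(X, filters):
    """X: (M, d) word embeddings. filters[L]: (L*d, d) filter for window size L.
    Returns (M, d): for each word, the element-wise max over the n-gram responses."""
    M, d = X.shape
    responses = []
    for L, F in filters.items():
        pad_left, pad_right = L // 2, (L - 1) // 2       # window placement as in the text
        Xp = np.vstack([np.zeros((pad_left, d)), X, np.zeros((pad_right, d))])
        windows = np.stack([Xp[i:i + L].reshape(-1) for i in range(M)])  # (M, L*d)
        responses.append(np.tanh(windows @ F))
    return np.max(np.stack(responses), axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(14, 300))                           # a 14-word question
filters = {L: rng.normal(0, 0.01, (L * 300, 300)) for L in (1, 2, 3)}
out = conv_ngram(X, filters)
print(out.shape)                                         # (14, 300)
sentence_vec = out.mean(axis=0)                          # averaging yields the sentence vector
```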
3.2.3
Final Sentence Representation
An efficient and effective way to compute a sentence embedding is to average the embeddings of its constituent words [15]. Therefore, we let the final question vector representation be x_Q = (1/M) Σ_{i=1}^M ẽ_i.
Remark 1: In our experiments, we have found that the simple average sentence embedding shows much better performance than the widely used RNNs or LSTMs on both the Visual7W and VQA datasets. The reason might be that the questions and candidate answers in the majority of the two VQA datasets tend to be short and have simple dependency structure. RNNs or LSTMs should be more successful when the questions or candidate answers in VQA are more complicated, require deeper reasoning, and come with more data (e.g., Visual Genome). Here we only compare models directly trained on Visual7W or VQA.
3.2.4
Triplet Attention
An effective attention mechanism should closely reflect the interactions between the question/answer and the image. Our proposed triplet attention mechanism is given as
att_I = norm(λ_1 × att_{Q−I} + att_{A_i−I}),
where norm(x) = x / Σ(x), and att_{Q−I} and att_{A_i−I} are the attention weights from the question and the candidate answer to the image, respectively. λ_1 is a learned coefficient to balance the influences imposed by the question and the candidate answer on the image features.
For a given image, the raw CNN features for each sub-region X_{I,raw} = [c_1, c_2, . . . , c_K] are transformed as X_I = relu(W_I X_{I,raw} + b_I). With the previously obtained representation X_Q = [ẽ_1, . . . , ẽ_M] for the associated question, an affinity matrix is obtained as
A = softmax(X_Q^T × X_I).
The i-th column of A reflects the closeness of matching between each word in the question and the i-th sub-region. Via max-pooling, we can find the word that matches most closely to the i-th sub-region. Thus, the attention weights from the question to the image are obtained as
att_{Q−I} = maxpool(A).
Similarly, att_{A_i−I} is obtained as the attention weights from the candidate answer to the image. Then, the final vector representation for the image is
x_I = X_I × att_I^T.
Remark 2: In our experiments, we find that the combination of using relu only for the image features and tanh for the others is most effective.
3.2.5
Structured Learning for Triplets
For the multiple-choice VQA, each provided training sample consists of one image I, one question Q and N candidate answers A_1, . . . , A_N, where A_1 is the ground-truth answer. We formulate this as a binary classification task by outputting a target prediction for each candidate triple {I, Q, A_i, t_i}, where, e.g., t_1 = 1 and t_i = 0 for i = 2, . . . , N. The output of our model for the i-th candidate answer (as discussed above) is given by:
p_i = sigmoid(W_QIA x_QIAi + b_QIA).
The standard binary classification loss seeks to minimize:
L_b = − Σ_{i=1}^N t_i log p_i.
To improve the model's discriminative ability, we introduce a structured learning loss that imposes a margin between the correct answer and any incorrect answer. We simultaneously compute scores for all candidate answers corresponding to the same image-question pair, and encourage a gap between the score of the target positive answer and that of the hardest negative answer (the highest-scoring negative). The structured learning loss is given by:
L_s = max_i(max(margin + p_i − p_1, 0)),
where margin is a scalar that encourages the network's ability to distinguish correct triplets from incorrect ones. Thus the final loss to minimize is:
L = L_b + λ_2 L_s,
where λ2 is applied here to balance these two loss functions.
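A NumPy sketch (ours) of the triplet attention and of the combined loss; the softmax axis and the use of a standard binary cross-entropy for the classification term are placeholder choices where the equations above leave details open, and all parameters are random stand-ins.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def triplet_attention(X_Q, X_A, X_I, lam1=0.5):
    """X_Q: (M, d) question words, X_A: (P, d) answer words, X_I: (d, K) image regions.
    Returns the attended image vector of size d."""
    att_Q = softmax(X_Q @ X_I, axis=1).max(axis=0)     # (K,) best-matching word per region
    att_A = softmax(X_A @ X_I, axis=1).max(axis=0)
    att = lam1 * att_Q + att_A
    att = att / att.sum()                              # normalize to sum to one
    return X_I @ att                                   # weighted sum of region features

def combined_loss(p, margin=0.2, lam2=0.5):
    """p: scores for the N candidates of one question, p[0] being the ground truth.
    Binary cross-entropy plus a hinge on the hardest negative."""
    t = np.zeros_like(p); t[0] = 1.0
    eps = 1e-8
    l_b = -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
    l_s = max(margin + p[1:].max() - p[0], 0.0)        # hardest-negative hinge
    return l_b + lam2 * l_s

rng = np.random.default_rng(0)
x_img = triplet_attention(rng.normal(size=(14, 300)), rng.normal(size=(6, 300)),
                          rng.normal(size=(300, 49)))
print(x_img.shape, combined_loss(np.array([0.8, 0.6, 0.3, 0.7])))
```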
4. Experiments
In the following, we first introduce the datasets for evaluation and the implementation details of our approach. Then,
we explore the proposed good practices for handling the visual question answering problem and perform an ablation study to verify the effectiveness of each step. After this, we compare with the state of the art. Finally, we visualize the learned POS tag guided attention for word embeddings and the triplet attention for images, and compare the attention obtained with wrong answers and with right answers.
4.1. Dataset and Evaluation protocol
Visual7W [42]: The dataset includes 69,817 training
questions, 28,020 validation questions, 42,031 test questions, 14,366 training images, 5,678 validation images and
8,609 testing images. There are a total of 139,868 QA pairs.
Each question has 4 candidate answers. The performance
is measured by the percentage of correctly answered questions.
VQA Real Multiple Choice [3]: The dataset includes
248,349 questions for training, 121,512 for validation, and
244,302 for testing. Each question has 18 candidate answers. We follow the VQA evaluation protocol in [3]. In
the following, we mainly report our results on Visual7W,
since the results on VQA are similar.
4.2. Experiment Setup
We use Tensorflow [5] to develop our model. The Adam
optimizer is adopted with a base learning rate of 0.0002 for
the language embedding matrix and 0.0001 for others. The
momentum is set to be 0.99, and the batch size is 18 for Visual7W and 576 for VQA, since the latter is a much larger
and more imbalanced dataset. Our model is trained within
20 epochs with early stopping if the validation accuracy has
not improved in the last 5 epochs. Word embedding is extracted using Glove [24], and the dimension is set as 300.2
The sizes of the hidden layers QI and QIA are both 4,096. The dimension of the image embedding is 2,048. relu is used for the image embedding, while tanh is used for the others. Batch normalization is applied just before the last fully connected layer for classification, and no dropout is used.
In our experiments, when attention is not applied, we
resize each image to be 256×256, crop the center area of
224×224, and take the activation after last pooling layer
of the pre-trained ResNet 200-layer model [11] as the extracted CNN feature (size 2048). When the proposed triplet
attention is applied (ablation study), we rescale the image
to be 224×224, and take the activation from the last pooling layer as the extracted features of its sub-regions (size
7 × 7 × 2048). For the full model we rescale the image to be 448×448, and take the activation from the last pooling layer as the extracted features of its sub-regions (size 14 × 14 × 2048).
2 In our experiments, Glove achieves better performance than Word2Vec.
4.3. Evaluation on Good Practices
Exploration of Network Training: As most datasets for
VQA are relatively small and rather imbalanced, training
deep networks is challenging due to the risk of learning
dataset biases and over-fitting. To mitigate these problems,
we devise several strategies for training as follows:
Handling Imbalanced Data. As shown in Section 3.1,
we reformulate the learning as a binary classification problem. The positive and negative examples for each question are quite unbalanced, i.e., 1:3 for Visual7W and 1:17
for VQA. Thus during training, for each epoch, we sample a certain number of negative examples corresponding to each question. Figure 3(a) illustrates our exploration on Visual7W and suggests two negative samples for each positive one.
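A small sketch (ours) of this per-epoch negative sampling (two sampled negatives per positive); the dataset record layout, with the ground-truth answer stored first, is a hypothetical convenience for the example.

```python
import random

def epoch_triplets(dataset, num_neg=2, seed=0):
    """dataset: iterable of records with keys 'image', 'question', 'answers',
    where answers[0] is the ground truth (hypothetical layout). Yields
    (image, question, answer, target) with num_neg sampled negatives per positive."""
    rng = random.Random(seed)
    for rec in dataset:
        pos, negs = rec["answers"][0], rec["answers"][1:]
        yield rec["image"], rec["question"], pos, 1
        for a in rng.sample(negs, min(num_neg, len(negs))):
            yield rec["image"], rec["question"], a, 0

demo = [{"image": "img1", "question": "what color is the train?",
         "answers": ["red", "blue", "green", "yellow"]}]
for t in epoch_triplets(demo):
    print(t)
```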
Appropriate Training Batch Size. Handling data imbalance results in changes that require further adjustment of
optimization parameters such as batch size. We explore the
effect of training batch size after handling data imbalance
and show the results in Figure 3(b). We can see that a training batch size of 18 achieves the best performance for Visual7W.
Batch normalization. Batch Normalization (BN) is an
important component that can speed up the convergence of
training. We explore where to add BN, such as the source
features (i.e., x Q , x I or x Ai ), after the first stage (i.e, x QI ),
or after the second stage (i.e., x QIAi ). Figure 3(d) shows
that adding BN to x QIAi not only maintains efficient training speed, but also improves the performance.
Evaluation on POS Tag guided Attention: POS tags provide an intuitive mechanism to guide word-level attention.
But to achieve the optimal performance, a few practical
concerns have to be taken care of. Specifically, we found
weight initialization was important. And the best performance is gained when the POS tag guided attention weights
are initialized with a uniform distribution between 0 and 2.
The performance comparison before and after adding POS
tag guided attention is in Table 1. POS tag guided attention
alone helps improve performance by 0.7% on Visual7W.
Evaluation on Convolutional N-Gram: We propose the
convolutional n-gram mechanism to incorporate the contextual information in each word location. Specifically, we apply 1D convolution with window size of 1 word, 2 words,
3 words, etc, and use max pooling along each dimension
between different convolutional n-grams. The exploration
of window size is illustrated in Figure 3(c). We adopt a
convolutional 3-gram with window sizes 1, 2 and 3 for its efficiency and effectiveness. As shown in Table 1, the convolutional 3-gram alone helps improve performance by 0.6% on
Visual7W.
Figure 3. Exploration of good practices: (a) data imbalance (positive-to-negative sampling ratio), (b) training batch size, (c) convolutional n-gram window size, (d) batch normalization placement (with training time and accuracy), (e) λ2, and (f) margin.
Method                                                                      Visual7W   VQA validation
Our Baseline                                                                65.6       58.3
+POS tag guided attention (POS-Att)                                         66.3       58.7
+Convolutional N-Gram (Conv N-Gram)                                         66.2       59.3
+POS-Att +Conv N-Gram                                                       66.6       59.5
+POS-Att +Conv N-Gram +Triplet attention-Q                                  66.8       60.1
+POS-Att +Conv N-Gram +Triplet attention-A                                  67.0       60.1
+POS-Att +Conv N-Gram +Triplet attention-Q+A                                67.3       60.2
+POS-Att +Conv N-Gram +Triplet attention-Q+A +Structured Learning Triplet   67.5       60.3
Table 1. Performance of our model on the Visual7W and VQA validation sets. For the fast ablation study, we use the 7×7 image features instead of the 14×14 features, by feeding the image at half its original height and width to the network.
Evaluation on Triplet Attention: Our triplet attention
model spatially attends to image regions based on both
question and answer. We try to add only question attention
att Q−I , only answer attention att A−I , and both question
and answer attention att_I (initial λ1 is set to 0.5). The resulting comparisons are shown in Table 1. Answer attention alone improves more than question attention alone, while our
proposed triplet attention mechanism improves the performance by 0.7%.
Evaluation on Structured Learning on Triplets: Our structured learning on triplets not only uses the traditional binary classification loss but also encourages a large margin between positive and hard negative answers. It helps to improve performance on Visual7W from 67.3 to 67.5 and on the VQA validation set from 60.2 to 60.3. We explore the parameter λ2, which balances the two losses, and the margin used to mine hard negative samples; they are illustrated in Figure 3(e) and Figure 3(f). In our final model, λ2 is set to 0.5 for Visual7W and 0.05 for the VQA dataset, while the margin is set to 0.2 for both datasets.
State-of-the-art performance: After verifying the effectiveness of all the components above, we compare our full model with the state of the art in Table 2. For Visual7W, our model improves the state-of-the-art performance by 1.1%. We also compare our model on VQA with other state-of-the-art results under the same training scheme (no extra data, no extra language embedding, single-model performance). We obtain the best performance (69.6%) on the VQA test-standard set.
4.4. Visualization and Analysis
We visualize the POS tag guided attention for questions/answers and the triplet attention for images in Fig 4.
The first column is original image. The second column
is the visualization based on question with wrong answer
attention. It should be compared with the fourth column,
Methods compared: Co-attention [19], RAU [22], Attention-LSTM [42], MCB + Att [7], Zero-shot [27], MLP [13], VQS [8], and our full model; not all methods report results on all benchmarks.
Visual7W: 55.6, 62.2, 65.7, 67.1, 68.2 (ours)
VQA Test Standard: 66.1, 67.3, 68.9, 69.6 (ours)
VQA Test Dev, all: 65.8, 67.7, 68.6, 65.2, 68.9, 69.7
VQA Test Dev, Y/N: 79.7, 81.9, 80.8, 80.6, 81.9
VQA Test Dev, Num: 40.0, 41.1, 17.6, 39.4, 44.3
VQA Test Dev, Other: 59.8, 61.5, 62.0, 65.3, 64.7
Table 2. Quantitative results on Visual7W [42], the test2015-standard split of VQA Real Multiple Choice [3], and the test2015-dev split with each division (Y/N, number, and others). For a fair comparison, we only compare single-model performance without using other large datasets (e.g., Visual Genome) for pre-training. Our full model outperforms the state of the art by a large margin on Visual7W and obtains a competitive result on the VQA test-standard set.
Figure 4. Best viewed in color. The first column shows the original images. The second column shows the attention visualization for the question with a wrong answer; it should be compared with the fourth column, which shows the attention visualization for the question with the right answer. The third and fifth columns show the questions with the wrong and right answers, respectively.
which is the visualization based on the question with the right answer attention. The third and fifth columns are the questions with wrong answers and with right answers, respectively. The wrong question-answer pairs from top to bottom
in the third column are: ”Who is in the image, basketball
players”, ”what is the ceiling covered with, ties”, ”what sits
on the police motorcycle, a helmet”, ”how many animals
are in the picture, two”, ”what is the pattern on the floor,
solid color”, ”what color is the train, blue”. And the right
question answer pairs from top to bottom in the fifth column are: ”Who is in the image, women kids”, ”what is the
ceiling covered with, paper umbrellas”, ”what sits on the
police motorcycle, a pair of gloves”, ”how many animals
are in the picture, one”, ”what is the pattern on the floor,
Figure 5. Failure case visualization. The layout is the same as in Fig. 4.
diamonds against a white background”, ”what color is the
train, red”. The first and second rows show that our VQA system is able to link the attention to the relevant objects in the images rather than only to parts of them, while the third row shows our VQA system's ability to distinguish what really represents “a policeman” from what merely relates to it. The fourth to sixth rows show that our VQA system can spot what should be spotted without attending to redundant areas.
We can conclude that both our POS tag guided and triplet attentions help the model focus on the desired parts, and are thus beneficial for the final reasoning.
We also visualize some failure cases in Fig. 5. In these cases, the right question-answer pairs either attend to the same wrong image region as the wrong question-answer pairs, or attend to some unreasonable place.
5. Conclusion
This paper presents a simple yet highly effective model for visual question answering. We attribute this good performance to the novel POS tag guided and triplet attention mechanisms, as well as a series of good practices including structured learning for triplets. The former provides a justification and interpretation of our model, while the latter makes it possible for our model to achieve fast convergence and better discriminative ability during learning.
Acknowledgement
This work was supported in part by NSF grants IIS-1618806 and IIS-1253538 and a hardware donation from NVIDIA. Zhe Wang personally thanks Mr. Jianwei Yang
for the helpful discussion.
IDEL: In-Database Entity Linking with Neural Embeddings∗
Torsten Kilias, Alexander Löser, Felix A. Gers
Beuth University of Applied Sciences
Luxemburger Straße 10, Berlin
[tkilias,aloeser,gers]@beuth-hochschule.de
R. Koopmanschap, Y. Zhang, M. Kersten
MonetDB Solutions
Science Park 123, Amsterdam
[firstletter.lastName]@monetdbsolutions.com
arXiv:1803.04884v1 [cs.DB] 13 Mar 2018
ABSTRACT
We present a novel architecture, In-Database Entity Linking (IDEL),
in which we integrate the analytics-optimized RDBMS MonetDB
with neural text mining abilities. Our system design abstracts core
tasks of most neural entity linking systems for MonetDB. To the
best of our knowledge, this is the first de facto implemented system integrating entity linking in a database. We leverage the ability of
MonetDB to support in-database-analytics with user defined functions (UDFs) implemented in Python. These functions call machine
learning libraries for neural text mining, such as TensorFlow. The
system achieves zero cost for data shipping and transformation
by utilizing MonetDB’s ability to embed Python processes in the
database kernel and exchange data in NumPy arrays. IDEL represents text and relational data in a joint vector space with neural
embeddings and can compensate for errors caused by ambiguous entity representations. For detecting matching entities, we propose a novel
similarity function based on joint neural embeddings which are
learned via minimizing a pairwise contrastive ranking loss. This function utilizes a high-dimensional index structure for fast retrieval of
matching entities. Our first implementation and experiments using
the WebNLG corpus show the effectiveness and the potentials of
IDEL.
1 INTRODUCTION
A particularly exciting source for complementing relational data is
text data. There are endless opportunities for enterprises in various
domains to gain advantages for their offerings or operations, when
the enterprise relational data can be linked to the abundance of text
data on the web. The entities represented in the relational data can
then be complemented, updated and extended with the information
in the text data (which is often more fresh).
Brand- and competitor monitoring. Fashion retailers must closely
observe news on the Web about their own brands and products,
as well as those of their competitors. Typically, information about the brands and products is kept in some relational tables (e.g. product
catalogs). Next to that, a fashion retailer can retrieve relevant texts
(e.g. news articles, blogs and customer reviews about fashion items)
from the Web. With some text analysis tools, the fashion retailer
would be able to enrich her product catalog with information from
the text data to improve search results. In addition, by monitoring the public opinions about fashion items, the fashion retailer
would be able to adjust her products to the trends in a more timely and accurate manner.
Disaster monitoring. For reinsurance companies (i.e. companies
insuring other insurance companies), anticipating potentially huge
claims from their insurance customers after some (e.g. natural) disasters is a core business. Next to collecting extensive historical data,
such as customer information and various statistics, reinsurance companies also closely monitor news messages as an additional data source to predict probabilities of losses. For instance, when a container ship has had an accident, a reinsurance company would want to know if this will affect her customers. Analogously, fires burning down entire buildings, such as important warehouses, cause relevant losses for reinsurance companies. These events can break a supply chain, so the reinsurance company will most probably receive claims from her affected customers for their losses. Reinsurance companies want to map text reports about companies and disasters to their databases, so that their analysts can search for information in both text and tabular data. In addition, those analysts want to combine fresh information from local newspapers about companies in their database. News messages thus serve here as a source of fairly fresh data, in contrast to statistics/financial reports written quarterly or even annually.
Figure 1: An example of joining relational data (i.e. Organization) with text data (i.e. EntityMention). An exact match strategy would have low recall, since it would not be able to match 'Big Blue' with 'IBM' or 'HP Inc.' with 'HP'.
* This manuscript is a preprint for a paper submitted to VLDB2018.
Entity linking between text and tables To realize these kinds
of applications, a basic step is linking entities mentioned in text
data to entities represented in relational data, so that missing data
in the relational tables can be filled in or new data can be added. An
example is shown in Figure 1, where the text data (i.e. Document)
is already preprocessed by some entity recognizers [2], which annotate the text data with entities recognized (i.e. Mention) and their
positions in the text (i.e. Span). However, the further step of linking the recognized entities to the relational entities requires advanced
text join techniques. The main challenge is to compute fuzzy joins
from synonyms, homonyms or even erroneous texts.
Classic text join. An often practiced but limited solution to this problem is classic (i.e. linguistic-feature-based) text join techniques [15], [8]. First, each entity in the relational data is represented as a character sequence, while a text join system is used to extract candidate entity mentions from the text data. Next, the text
join system executes a cross join between the two sets of entities.
The step often produces a large set of matching candidates. So,
finally, users can apply some lexical filter condition, such as an
exact or a containment match, to reduce the result size. However,
this practice often suffers from low recall and precision when faced
with ambiguous entity type families and entity mentions. Typical
error sources are, for instance, matches between homonyms (e.g.
IBM as the IATA airport code or the IT company), hyponyms (e.g.
“SAP SE” vs. “SAP Deutschland SE & Co. KG”), synonyms (e.g. IBM
and “Big Blue”) and misspellings in text (e.g. “SAP Dtld. SE & Co.
KG”).
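As an illustration of how limited such a lexical filter is, the short Python sketch below implements the cross join with a containment match described above; the variable names and data layout are made up for the example and do not correspond to any API used in this paper.

def exact_text_join(entities, mentions):
    # Naive classic text join: cross join of relational entities and extracted
    # entity mentions, filtered by a literal containment match.
    # entities: list of (entity_id, name) tuples from the relational table.
    # mentions: list of (mention_id, mention_text) tuples extracted from documents.
    candidates = []
    for entity_id, name in entities:
        for mention_id, mention in mentions:
            if name.lower() in mention.lower():
                candidates.append((entity_id, mention_id))
    return candidates

Such a baseline cannot match synonyms like "IBM" and "Big Blue" and is easily misled by homonyms, which is exactly the weakness the embedding-based approach discussed next addresses.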
Entity-linking (EL) between text and a more structured representation has been an active research topic for many years in the web
community and among Computational Linguists [27]. EL systems
abstract the entity linking problem as a multi-value multi-class classification task [9]. Given a relational and a textual representation
of entities, an EL system learns a similarity model for a classifier
that can determine if there is a match or not.
The need for Neural Entity Linking in RDBMSs However, existing EL systems come stand-alone or as separate tools, and so far there has been little support inside RDBMSs for such advanced natural language processing (NLP) tasks. Users (e.g. data scientists) often have to use three systems: one for relational data (the RDBMS), one for text data (often Lucene) and one for EL tasks (often homegrown). The first problem comes with the choice of a proper EL system: although there are many research papers and a few prototypes for several domains available, most work comes with domain-specific features, ontologies or dictionaries, and is often not directly applicable for linking data from a particular database, or needs extensive fine tuning for a particular domain. Second, to apply EL on relational data, a user has to move the data from the RDBMS to the EL tool. Not only does this require significant human and technical resources for shipping data; worse, very few data scientists receive proper training in both worlds, and finding data scientists with training in entity linking and with deep knowledge of RDBMSs is difficult. Moreover, transforming textual data into a relational representation requires glue code and development time from data scientists to bind these different system landscapes seamlessly. Finally, domain-specific information extraction is an iterative task: it requires continuously adapting manually tweaked features for recognizing entities. As a consequence, many projects that combine textual data with existing relational data may likely fail and/or be infeasible. Overall, current approaches to EL have many technical drawbacks, e.g. high maintenance, data provenance problems, and bad scalability due to data conversion, transfer and storage costs, as well as non-technical drawbacks, such as the difficulty of hiring people trained in both the RDBMS and the NLP worlds.
Our contributions Ideally, the system would execute EL without any domain-specific feature engineering, triggered only by an SQL query and in a single system without costly data shipping. In this paper, we propose In-Database Entity Linking (IDEL), a single system in which relational data, text data and entity linking tools are integrated. IDEL stores both relational and text data in MonetDB, an open-source RDBMS optimized for in-memory analytics. Entity linking components are tightly integrated into the kernel of MonetDB through SQL user defined functions (UDFs) implemented in Python [26]. In this way, various neural machine learning libraries, e.g. TensorFlow, can be used to facilitate entity linking with neural embeddings. We chose neural embeddings, since the system will learn 'features' from existing signals in relational and text data as hidden layers in a neural network and can therefore reduce the human cost of feature engineering drastically.
In IDEL, we choose the RDBMS as the basis of the architecture, and integrate text data and entity linking into it, for several carefully considered reasons. First, while IDEL is a generally applicable architecture for many text analysis applications, we primarily target enterprise applications, in which enterprises mainly want to enrich their fairly static relational data with information from dynamic text data. Since enterprise data is traditionally already stored in an RDBMS, an RDBMS-based architecture has the biggest potential for seamless adoption. Second, an RDBMS has an extensive and powerful engine for pre- and post-entity-linking query processing and data analysis. Finally, in-database analytics (i.e. bringing the computation as close as possible to the data instead of moving the data to the computation) has long been recognized as the way to go for big data analytics. Following the same philosophy, we propose an in-database entity linking architecture, which can directly benefit from existing in-database analytics features. As a result, the following characteristics are realized in the IDEL architecture:
Best of two worlds Users can seamlessly switch between SQL and Python, so that they can choose the best execution environment for each part of their data analytics.
Flexible and extensible IDEL provides a set of pre-trained neural network models. In addition, it permits users to plug in their own models or third-party models for entity linking.
Simple SQL-based user interface IDEL provides an SQL-based user interface to all parts of the system. The whole workflow of entity linking can be executed by several calls to the implemented SQL UDFs. All intermediate and final results can be stored in the underlying database for further analysis.
Robust to language errors IDEL adopts state-of-the-art neural embeddings for entity linking, which achieve much higher precision under the four typical error sources (i.e. homonyms, hyponyms, synonyms and misspellings). In addition, the system leverages extra information from the relational data, such as attribute values, and integrity constraints on data type and range.
No manual feature engineering IDEL does not require manual feature engineering; instead the system observes data distributions in the text database to best represent entities in relational and text data.
This paper is further structured as follows. Section 2 presents
the architecture of IDEL. Section 3 describes the neural embedding
models for representing text and relational data. Section 4 describes
the implementation of IDEL. Section 5 reports our preliminary evaluation results using the WebNLG data set with ten different entity types and thousands of manually labeled sentences. Finally, we discuss related work in Section 6 and conclude with a future outlook in Section 7.
Figure 2: Architecture of IDEL (white rectangular boxes are data-containing components; colored figures represent system components). The workflow of the process consists of four steps: 1) Vectorization: generate vector representations for both relational and text data; 2) Matching: compute matching candidates from the relational data and the text by an initial query (bootstrapping) or by a previously learned neural network; 3) Linking: rank and select the matching candidates with the highest similarities; and 4) (Re-)training: the system uses matching candidates to (re-)train the neural network for improving models in the next matching iterations.
2 IDEL ARCHITECTURE
Figure 2 depicts the architecture of IDEL. We assume that relational data is already stored in an RDBMS according to its schema. In IDEL, we also store text data and neural embeddings in the same RDBMS. Text data is simply stored as a collection of strings (e.g. a table with a single string column). In this way, users can manage and query relational data together with text data and neural embeddings. Our approach for entity linking addresses both mapping directions, i.e. text to relational data and vice versa. The process of entity linking can be divided into four major steps:
Step 1: Vectorization. First, we compute the respective vector representations (i.e. Tuple_Vector v_R(r) and Text_Vector v_T(t)) for the two data sets. To create these vectors, we choose not to learn a neural network ourselves, but adopt a pre-trained model instead. From the machine learning community, there already exist well-trained networks that are known to be particularly suitable for this kind of work, e.g. SkipThought [17]. Further, we can enrich both vector representations with additional discriminative features derived from their respective data sets. For tuple vectors, we can use additional constraints in the relational data, such as foreign keys. For text vectors, we can use context from surrounding sentences. How to enrich these vectors is discussed in Sections 3.1 and 3.2, respectively.
Step 2: Finding matching candidates. The next step is to find
matching candidates for entities in relational data with mentions in
text data. Assume a user enters an SQL query such as the following
to link relational and text data shown in Figure 1:
SELECT e.*, b.*
FROM EntityMention e, Building b
WHERE LINK_CONTAINS(e.Mention, b.name, $Strategy) = TRUE
This query joins EntityMention and tuples of building and evaluates if a name of a building is contained in the entity mentions.
In addition to entity mentions and building names, the function
LINK_CONTAINS takes a third parameter $Strategy so that different
strategies can be passed. So far, we support an exact match and,
most important for this work, a semantic match strategy. When
computing the matches for the first time, there is generally very
little knowledge about the data distribution. Therefore, we suggest bootstrapping an initial candidate pool. For example, one can
generate exact matches with a join between words in entity mentions and words in relational tuples describing those entities. This
strategy is inspired by Snowball [1]. The initial matches can be
used in later steps, such as linking and retraining. Other sources
for matchings are gold standards with manually labeled data. However, this approach is highly costly and time consuming, because it
requires expensive domain experts for the labeling and hundreds,
or preferably even thousands of matchings.
Step 3: Linking. Now, we create linkings of matching entities. We interpret entity linking as a ranking task and assume that an entity mention in the text is given and we try to find the k most likely entities in the database. On the other hand, we can also assume that an entity in the database is given and we are interested in the k most likely entity mentions in the text for this entity. This step uses the matching candidates found in the previous step and generates a ranking. If the matching function used in step 2 returns similarity values, this step will leverage that information to compute a ranking and select the top N best matchings for further use. In case the matching function, e.g. LINK_CONTAINS, does not produce similarity values (possibly due to the chosen strategy), all pairs of matching candidates are regarded to have equal similarity and hence will be included in the result of the linking.
Step 4: Retraining. An initial matching strategy, such as bootstrapping of candidates from exact matches with a join between words in sentences and words in tuples, is often unable to detect difficult natural language features such as homonyms, hyponyms, synonyms and misspellings. To improve the results of the initial matching step, the system conducts a retraining step for improving the neural models of the semantic similarity function. Thereby, the system updates previous models with retrained neural networks and recomputes the matching step. IDEL permits repeating the training - updating - matching circle, because the database might get changed due to updates, insertions and deletions. If those changes alter the distributions of the data, the system triggers the neural network to be retrained with the new data so as to match the new entities reliably, while using existing matching models for bootstrapping training data and reducing manual labeling efforts. In the next section, we will describe in detail how training is done.
3 EMBEDDING MODELS
Entity linking and deep learning Since very recently, entity linking techniques based on deep learning methods have started to gain more interest. The first reason is a significantly improved performance on most standard data sets reported by TAC-KBP [10] [12]. Secondly, deep learning does not require costly feature engineering for each novel domain by human engineers; rather, a system learns from domain-specific raw data with high variance. Thirdly, deep learning based entity linking with character- and word-based embeddings often can further save language-dependent costs for feature engineering. Finally, deep learning permits entity linking as a joined task of named entity recognition and entity linking [2], with complementary signals from images, tables or even from documents in other languages [12]. These recent findings triggered a move of the entire community to work on entity linking with deep learning.
Figure 3 gives an overview of our methods for representing and matching entities in a joint embedding space. It zooms in on the (re-)training step in the IDEL architecture (Figure 2). In this section, we first provide a formal description of our transformation of relational and text data into their respective vector representations. Next, we formalize a joint embedding space, in which similar pairs of entities in the relational database and their corresponding entity mentions are kept close to each other, while dissimilar pairs are kept further apart. Then, we learn a common joint embedding space with a pairwise contrastive ranking loss function. Finally, in this joint space we compute a similarity between an embedding vector for relational and text data.
3.1 Relational Data Embeddings
Integrate various signals from the relational model in a single entity embedding. The relational model features many rich signals for representing entities, such as relation and attribute names, attribute values, data types, and functional dependencies between values. Moreover, some relations may have further interdependencies via foreign keys. These relation characteristics are important signals for recognizing entities. Our approach is to represent the "relational entity signatures" relevant to the same entity in a single entity embedding.
Vector representation of entity relations. To create embeddings we require a vector space representation. Therefore, we transform relations into vectors as follows. Let R(A_1, ..., A_n, FK_1, ..., FK_m) be a relation with attributes A_1, ..., A_n and foreign keys FK_1, ..., FK_m referring to relations R_{FK_1}, ..., R_{FK_m}. We define the domain of R as dom(R) = dom(A_1) × ... × dom(A_n) × dom(FK_1) × ... × dom(FK_m).
Embed attribute data types. Another important clue is the data type: we transform text data from alpha-numeric attribute values, such as CHAR, VARCHAR and TEXT, into neural embeddings represented by the function text2vec: Text → R^m; we normalize numerical attribute values, such as INTEGER and FLOAT, with their mean and variance with the function norm: R → R; and we represent the remaining attributes from other data types as a one-hot encoding (also known as the 1-of-k scheme) [4]. Formally, for all a_i ∈ A_i we define the vector v_{A_i}(a_i) of a_i as:

v_{A_i}(a_i) = \begin{cases} \text{text2vec}(a_i) & \text{if } dom(A_i) \subseteq \text{Text} \\ \text{norm}(a_i) & \text{if } dom(A_i) \subseteq \text{Numbers} \\ \text{onehot}(a_i, A_i) & \text{otherwise} \end{cases}

Embed foreign key relations. Foreign key relations are another rich source of signals for representing an entity. Analogous to the embeddings of entity relations from above, we encode embeddings for these relations as well. For every fk_j ∈ FK_j we define the vector v_{FK_j}(fk_j) of fk_j as the sum of the vector representations of all foreign key tuples:

v_{FK_j}(fk_j) = \sum_{r_{fk_j} \in R_{FK_j}} v_{R_{FK_j}}(r_{fk_j})

where r_{fk_j} ∈ R_{FK_j} is a foreign tuple from R_{FK_j} with fk_j as primary key.
Concatenating signature embeddings. Finally, we concatenate all individual embeddings, i.e. the entity relation, data type and foreign key embeddings, into a single embedding for each entity. For every tuple r = (a_1, ..., a_n, fk_1, ..., fk_m) ∈ R, the vector v_R(r) of r is defined as:

v_R(r) = v_{A_1}(a_1) \oplus \cdots \oplus v_{A_n}(a_n) \oplus v_{FK_1}(fk_1) \oplus \cdots \oplus v_{FK_m}(fk_m)
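Purely for exposition, the following Python sketch shows how the attribute, foreign key and concatenation steps above compose into v_R(r); it assumes pre-computed statistics (mean, std), a finite attribute domain for one-hot encoding and some text2vec function, and it is not the UDF code shipped with IDEL.

import numpy as np

def one_hot(value, domain):
    # 1-of-k encoding over a finite attribute domain.
    vec = np.zeros(len(domain))
    vec[domain.index(value)] = 1.0
    return vec

def embed_attribute(value, attr_type, domain=None, mean=0.0, std=1.0, text2vec=None):
    # v_Ai(ai): text columns go through text2vec, numeric columns are z-normalized,
    # everything else is one-hot encoded over its (finite) domain.
    if attr_type == "text":
        return text2vec(value)
    if attr_type == "number":
        return np.array([(value - mean) / (std + 1e-9)])
    return one_hot(value, domain)

def embed_tuple(attribute_vectors, foreign_key_vectors):
    # v_R(r): concatenation of all attribute embeddings and, for each foreign key,
    # the sum of the embeddings of the referenced tuples.
    fk_parts = [np.sum(np.stack(vecs), axis=0) for vecs in foreign_key_vectors]
    return np.concatenate(attribute_vectors + fk_parts)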
3.2 Text Embeddings
Representing entities in text as spans. Text databases, such as
INDREX [15] and System-T [8], represent entities in text data as
so-called span data type:
Given a relation T(Span, Text, Text) which contains tuples t = (span_entity, text_entity, text_sentence), where span_entity ∈ Span is the span of the entity, text_entity ∈ Text is the covered text of the entity, and text_sentence ∈ Text is the covered text of the sentence containing the entity.
The above formalization covers the entity name, the context in
the same sentence and long-range dependencies in the entire in-document context of the entity. Thereby it implements the notion
of distributional semantics [11], a well-known concept in computational linguistics.
Vectorizing text spans and their context. Next, we need to vectorize spans and their context from above. We define the vectorization of text attributes of relations as a function text2vec which can
be “anything” from a pre-trained sentence embedding or a trainable recurrent network. In our model, we choose the popular and
well suited approach SkipThought [17] from the machine learning
community. Our rational is the following. First, SkipThought is
based on unsupervised learning of a generic, distributed sentence
encoder, hence there is no extra human effort necessary. Second,
using the continuity of text in a document, SkipThought trains an
encoder-decoder model that tries to reconstruct the surrounding
sentences of an encoded passage. Finally, a SkipThought embedding
introduces a semantic similarity for sentences. This can help with
paraphrasing and synonyms, a core problem in resolving entities
between relational and text data. In our implementation, we use
the pre-trained sentence embeddings from SkipThought.
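A minimal wrapper for this vectorization step is sketched below; it assumes an encoder object exposing encode(list_of_sentences) -> numpy array, which is the interface offered by the SkipThought reference implementation, but any sentence encoder with the same interface could be substituted.

import numpy as np

def text2vec(sentences, encoder):
    # Map raw sentences to fixed-size vectors with a pre-trained encoder.
    # `encoder.encode` is an assumed interface, not part of IDEL itself.
    if isinstance(sentences, str):
        sentences = [sentences]
    vectors = np.asarray(encoder.encode(sentences))
    return vectors  # shape: (len(sentences), embedding_dim)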
Figure 3: Overview of representing and matching entities in a joint embedding space in IDEL (white rectangular boxes are data-containing components; colored figures represent system components). First, we represent entities in relational tuples and in text data as vector representations. Second, the system generates matching candidates between relational and text data. These candidates are embedded by two feed-forward networks into the same vector space. We learn these networks by using a scoring and a pairwise loss function. At prediction time, this scoring function measures the similarity between entities in the joint embedding space.
3.3 Joint Embedding Space
After the vectorization, we compute transformations of the entity-mention and relational data embeddings into a joint embedding space in which pairs of entities in the relational database and their entity mentions are represented together. In this space, similar entity mentions are placed close to each other while dissimilar pairs are placed far apart. Let the first transformation e_R: R → R^m compute an embedding for a tuple r ∈ R, and the second transformation e_T: T → R^m compute an embedding for a text t ∈ T. We define our transformations as follows:

e_R(r) = G_R(v_R(r), W_R)
e_T(t) = G_T(v_T(t), W_T)

where G_R denotes a neural network with weights W_R for the transformation of relational data and G_T a neural network with weights W_T for the transformation of text data. The weights W_R and W_T are learnable parameters and will be trained with Stochastic Gradient Descent.
Depending on the vector representations used, G_R and G_T can be feed-forward, recurrent or convolutional neural networks, or any combination of them. In our implementation, we use feed-forward networks, because we transform the attribute values of the relational data and the text with existing neural network models into a common vector representation.
3.4 Pairwise Contrastive Loss Function
Scoring function. By nature, text and relational embeddings represent different areas in the vector space created from our feed-forward networks. Therefore, we must define a scoring function to determine how similar or dissimilar two representations in this vector space are. We compare these two embeddings with a scoring function s(e_R, e_T): R^m × R^m → R_{≥0}, where small values denote high similarity and larger values denote dissimilar entities. Currently, we use the cosine distance as the scoring function s(e_R, e_T), since our experiments with different distance measures, such as the Euclidean distance, show no notable effect on the accuracy of our results.
Loss function. To train the relational and text embeddings e_R and e_T, we use Stochastic Gradient Descent, which conducts backward propagation of errors. Our loss (i.e. error) function is a variation of the pairwise contrastive loss [16], applied first to the problem of mapping images and their captions into the same vector space.
This loss function has desirable characteristics to solve our problem. First, it can be applied to classification problems with a very large set of classes, hence a very large space of thousands of different entities. Further, it is able to predict classes for which no examples are available at training time, hence it will work well in scenarios with frequent updates without retraining the classifier. Finally, it is discriminative in the sense that it drives the system to make the right decision, but does not cause it to produce probability estimates, which are difficult to understand when debugging the system. These properties make this loss function an ideal choice for our entity linking problem.
Applied to our entity linking task we consider either 1-N or M-N mappings between relations. We define our loss function as follows:

L(R, T) = \sum_{r \in R} L_R(r) + \sum_{t \in T} L_T(t)    (1)

with L_R(r) as the partial loss for r and L_T(t) for t:

L_R(r) = \sum_{t^- \in T_r^-} \max\{0,\, m + s_{T_r^+}(r) - s(e_R(r), e_T(t^-))\}    (2)

L_T(t) = \sum_{r^- \in R_t^-} \max\{0,\, m + s_{R_t^+}(t) - s(e_T(t), e_R(r^-))\}    (3)

where T_r^- denotes the set of contrastive (i.e. not matching) examples and T_r^+ denotes the set of matching examples of T for r, and analogously R_t^- and R_t^+ for t. The hyper-parameter margin m controls how far apart matching (positive) and not matching (negative) examples should be. Furthermore, the functions s_{R_t^+}(t) and s_{T_r^+}(r) calculate the average score of all positive examples for t and r:

s_{R_t^+}(t) = \frac{1}{|R_t^+|} \sum_{r^+ \in R_t^+} s(e_T(t), e_R(r^+))    (4)

s_{T_r^+}(r) = \frac{1}{|T_r^+|} \sum_{t^+ \in T_r^+} s(e_R(r), e_T(t^+))    (5)

Figure 4 shows a learning step of this loss function for a relation r. In equations 2 and 3, the addition of s_{R_t^+}(t) and s_{T_r^+}(r) pulls the embedding vectors of positive examples together during the minimization of the loss function by decreasing their score. Contrarily, the subtraction of s(e_T(t), e_R(r^-)) and s(e_R(r), e_T(t^-)) for a contrastive example pushes embedding vectors further apart, because increasing their score minimizes this subtraction. The margin limits the score for a contrastive example, since beyond it the loss function cannot push the embedding vector of a contrastive example further. This is crucial to learn mappings between two different vector spaces.
Overall, we are not aware of any other work where loss functions for mapping pixels in images to characters are applied to the problem of linking entities from text to a table. IDEL is the first approach that abstracts this problem to entity linking. For our specific problem we therefore modified the loss function of [16] by replacing a single positive example with the average score of all positive examples, s_{R_t^+}(t) or s_{T_r^+}(r). This pulls all positive examples together, enabling our loss function to learn 1-N and M-N mappings between relational data and text.
Figure 4: Overview of a learning step for a relation r with a pairwise contrastive loss function. The left figure shows the state before the learning step, while the right figure shows the state after the training step. The relation r is located in the center of each figure. It is surrounded by matching text examples t_1^+, t_2^+ and not matching (i.e. contrastive) text examples t_1^-, t_2^-, t_3^-. The inner circle represents the average score between r and the matching texts, s_{T_r^+}(r), from equation 5. The outer circle is the margin m. The loss function pulls matching text examples towards the relation r and pushes contrastive examples towards the outer circle.
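For illustration, the following NumPy sketch spells out equations (1)-(5) with the cosine distance as the scoring function s; the actual training in IDEL is implemented with differentiable TensorFlow operations, so this plain-Python version only serves to make the loss computation concrete.

import numpy as np

def cosine_distance(u, v):
    # Scoring function s(e_R, e_T): small values mean similar embeddings.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def loss_for_relation(e_r, pos_texts, neg_texts, margin=0.001):
    # L_R(r) from Eq. (2): average distance to all matching text embeddings
    # (Eq. (5)) plus a hinge term for every contrastive text embedding.
    s_pos = np.mean([cosine_distance(e_r, e_t) for e_t in pos_texts])   # s_{T_r^+}(r)
    return sum(max(0.0, margin + s_pos - cosine_distance(e_r, e_t_neg))
               for e_t_neg in neg_texts)

def loss_for_text(e_t, pos_tuples, neg_tuples, margin=0.001):
    # L_T(t) from Eq. (3), symmetric to loss_for_relation.
    s_pos = np.mean([cosine_distance(e_t, e_r) for e_r in pos_tuples])  # s_{R_t^+}(t)
    return sum(max(0.0, margin + s_pos - cosine_distance(e_t, e_r_neg))
               for e_r_neg in neg_tuples)

def total_loss(relation_examples, text_examples, margin=0.001):
    # L(R, T) from Eq. (1): sum of the partial losses over all tuples and texts.
    return (sum(loss_for_relation(e_r, pos, neg, margin) for e_r, pos, neg in relation_examples)
            + sum(loss_for_text(e_t, pos, neg, margin) for e_t, pos, neg in text_examples))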
3.5 Hyper-parameters and Sampling
Hyper-parameters. We evaluate several configurations for the neural networks G_R (for relational data) and G_T (for text data). Our
representation for relational data contains three layers: the input
layer containing 1023 neurons, the second layer 512 neurons and
the output layer 256 neurons. For representing text embeddings GT
we use two layers: an input layer with 1024 neurons and an output
layer with 256 neurons. We choose fewer layers for G_T, because the dimensionality of its input is smaller than that of the relational data. All layers use the fast and aggressive activation function elu.
We train the model via gradient descent with Adam as optimizer
and apply dropout layers to all layers with a keep probability of
0.75. Since we use dropouts, we can choose a higher learning rate
of 1e−05 with an exponential decay of 0.9 every 1000 batches and
set our margin to 0.001.
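A sketch of the two networks and the optimizer settings described above is given below, using the Keras API for brevity (the paper's implementation uses TensorFlow 1.3); a keep probability of 0.75 corresponds to a dropout rate of 0.25, and reading "the input layer containing 1023 neurons" as the width of the first dense layer is an assumption made only for this sketch.

import tensorflow as tf

def make_G_R():
    # Relational-side network G_R: 1023 -> 512 -> 256, elu activations,
    # dropout with keep probability 0.75 (rate 0.25) after every layer.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(1023, activation="elu"),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Dense(512, activation="elu"),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Dense(256, activation="elu"),
        tf.keras.layers.Dropout(0.25),
    ])

def make_G_T():
    # Text-side network G_T: 1024 -> 256.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(1024, activation="elu"),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Dense(256, activation="elu"),
        tf.keras.layers.Dropout(0.25),
    ])

# Learning rate 1e-5 with an exponential decay of 0.9 every 1000 batches, Adam optimizer.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-5, decay_steps=1000, decay_rate=0.9)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
MARGIN = 0.001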
Sampling batches from training set Our training set consists of
two tables (relational and text data) with a binary mapping between
their tuples. A tuple here denotes an entity represented as span
from text data, including span and context information, and an
entity, including attribute values, from a relational table. We train
our model with batches. Entities in training data, both text and
relational, are often Zipfian-distributed, hence, very few popular
entities appear much more frequently than the torso and long tail
of other entities. Analogous to Word2Vec in [20], we compensate
this unbalanced distribution during sampling batches for our training process. Our sampling strategy learns a distribution of how
often our classifier has seen a particular matching example for an
entity in prior batches. Based on this distribution, we draw less
frequently seen examples in the next batches and omit frequently
seen entity matching examples. Moreover, true positive and true
negative matchings are unbalanced. Because of the nature of the
entity linkage problem, we see many more negative examples than positive examples. To compensate for this additional imbalance, each training batch contains at least one true positive matching example (an entity represented in both text and relational data) together with negative examples.
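The sampling strategy can be pictured with the following sketch, which weights each gold match inversely to how often it has been drawn before; the exact weighting scheme is not specified in the text, so this is only one plausible instantiation.

import numpy as np

def sample_batch(positive_pairs, seen_counts, batch_size, rng=np.random.default_rng()):
    # Draw a batch of (tuple_id, text_id) gold matches, preferring rarely seen pairs,
    # which flattens the Zipfian head of popular entities.
    # positive_pairs: list of (tuple_id, text_id) gold matches.
    # seen_counts:    dict mapping a pair to how often it appeared in earlier batches.
    weights = np.array([1.0 / (1 + seen_counts.get(p, 0)) for p in positive_pairs])
    weights /= weights.sum()
    idx = rng.choice(len(positive_pairs), size=batch_size, replace=False, p=weights)
    batch = [positive_pairs[i] for i in idx]
    for p in batch:
        seen_counts[p] = seen_counts.get(p, 0) + 1
    return batch  # negatives are added per example when the loss is computed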
Figure 5: System architecture of IDEL, which supports an entity linking process by first creating embeddings of relational and text data, and then searching for candidates, which is currently done by computing first the similarities, then the rankings, before selecting the top N candidates. 'Search for candidates' employs the nearest-neighbor-search index Spotify Annoy for fast retrieval of the top-n matching entities.
4 IMPLEMENTATION
Despite the fact that several relevant technologies are investigated by the computational linguistics and machine learning communities, no other database system currently permits the execution of neural text mining in the database. One reason is the choice of the RDBMS, which has a significant impact on the overall system. The ability of MonetDB to support in-database analytics through user defined Python/SQL functions makes it possible to integrate different types of systems to solve the overall entity-linkage task. We briefly describe this integration in this section.
4.1 MonetDB/Python/TensorFlow/Annoy
We have integrated the entity linking process into a single RDBMS, MonetDB, as depicted in Figure 5, and store text, relational data and embeddings in MonetDB. The computation is either implemented in SQL queries or in SQL UDFs in Python. MonetDB is an open-source columnar RDBMS optimized for in-memory processing of analytical workloads [5]. In recent years, MonetDB enriched its support for in-database analytics by, among others, introducing MonetDB/Python integration through SQL UDFs [26]. As a result, MonetDB users can specify Python as the implementation language for their SQL UDFs. Basically, any Python library accessible by the MonetDB server can be imported. In our work we build on the deep learning library TensorFlow (https://github.com/tensorflow) and the nearest neighbor search index for neural embeddings, Spotify Annoy (https://github.com/spotify/annoy). When such an SQL Python UDF is called in an SQL query, MonetDB automatically starts a Python subprocess to execute the called UDF. MonetDB exchanges data of relational tables between the SQL engine and the embedded Python process by means of NumPy arrays.
The MonetDB/Python integration features several important optimizations to allow efficient execution of SQL Python UDFs:
• Zero data conversion cost. Internally, MonetDB stores the data of each column as a C array. This binary structure is the same as that of NumPy arrays, hence no data conversion is needed between SQL and Python.
• Zero data transfer cost. The MonetDB SQL engine and the embedded Python process share the same address space. Passing columnar data between the two systems merely means passing pointers back and forth.
• Parallel execution of SQL Python UDFs. When processing an SQL query, MonetDB can speed up the execution by splitting up columns and executing the query on each part in parallel. The MonetDB/Python integration allows an SQL Python UDF to be declared as parallelizable, so that this UDF can be part of a parallel query execution plan (otherwise the UDF will be treated as a blocking operator).
Figure 5 shows the implementation of IDEL in MonetDB. We store relational data in MonetDB according to their schemas and text data in a table with a single string-typed column. First, we create embedding vectors for both relational and text data by two
SQL Python UDFs, one for each input table. This step leverages TensorFlow’s machine learning features to load the pre-trained neural
network and apply it on the input tables. We return embedding
vectors as NumPy arrays and store them as BLOBs. The second
step finds matching candidates with the highest similarities among
embeddings. We employ nearest neighbor search with Annoy for a
given embedding, compute a ranked list for each entity according
to their similarities and finally return TopN candidates. All steps
are implemented in SQL.
Changes inside this architecture, such as different embedding
models or similarity functions, are transparent to upper layer applications.
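Outside the database, the candidate-search part of this workflow can be mimicked with a few lines of Python using Spotify Annoy; the sketch below uses the 256-dimensional joint embedding space and the angular metric described in Section 4.3, with illustrative function names that do not correspond to the actual UDF names.

import numpy as np
from annoy import AnnoyIndex

DIM = 256  # dimensionality of the joint embedding space (output layer size)

def build_sentence_index(sentence_embeddings, n_trees=200):
    # Index all text embeddings once; the index file can be memory-mapped later.
    index = AnnoyIndex(DIM, "angular")  # angular distance corresponds to cosine
    for i, vec in enumerate(sentence_embeddings):
        index.add_item(i, vec.tolist())
    index.build(n_trees)
    return index

def top_n_sentences(index, tuple_embedding, n=10):
    # Return the ids of the n sentences closest to one relational tuple embedding.
    return index.get_nns_by_vector(tuple_embedding.tolist(), n)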
4.2 Create Embeddings
We abstract core functionalities for entity linking as SQL UDFs. This design principle permits us to exchange functionalities in UDFs for different data sets or domains.
UDF:EmbedSentences This UDF embeds text sentences into the joint vector space. It applies v_T and G_T from a trained model to generate e_T. Because of the learned joint vector space, the function G_T is coupled to the function G_R which computes the tuple embedding for each table to which we want to link. It takes as input a NumPy array of strings, loads a trained neural network model into main memory and applies this model to the NumPy array in parallel. Due to the implementation of our model in TensorFlow, this UDF can leverage GPUs to compute the embedding vectors. Finally, this function transforms an array of embedding vectors into an array of BLOBs and returns it to MonetDB. The following example executes the UDF embed_sentence_for_building to retrieve sentences about buildings and return their embeddings.

CREATE TABLE embedd_sentences_for_building AS
SELECT *, embed_sentence_for_building(sentence) as embedding
FROM sentences;

This UDF returns a BLOB in Python and executes a Python script that encodes each sentence into an embedding.

CREATE FUNCTION embed_sentences_building(sentences STRING)
RETURNS BLOB LANGUAGE PYTHON
{
    from monetdb_wrapper.embed_udf import embed_udf
    return embed_udf().run("path/to/repo", "path/to/model",
                           "sentences", {"sentences": sentences})
};

UDF:EmbedTuples This UDF embeds tuples of a table into the joint vector space. It applies a trained neural network model on v_R and G_R to generate e_R. As input, it assumes arrays of relational columns; it loads and applies a trained model in parallel to the input relations and outputs embedding vectors as an array of BLOBs. The exact signature of this UDF depends on the schema of the table. In the following example, we encode the table building with the attributes name, address and owner in the embedding.

CREATE TABLE building_with_embedding AS
SELECT *, embed_building(name, address, owner) as embedding
FROM building;

4.3 Search for Candidates
UDF:QueryNN The next task is, given a vector (e.g. an entity represented in text), to retrieve a set of similar vectors (e.g. tuples representing this entity in a table). A naive search solution would execute a full table scan and compute for each pair the distance between the two vectors. With a growing number of entities in tables or text, this operation becomes expensive. Therefore, we represent embeddings for entities in a nearest neighbor search index for neural embeddings. Following the benchmarks of [3], we implemented this index with Spotify Annoy for four reasons. First, Annoy is almost as fast as the fastest libraries in the benchmarks. Second, it has the ability to use static files as indexes which we can share across processes. Third, Annoy decouples creating indexes from loading them, so we can create indexes as files and map them into memory quickly. Finally, Annoy has a Python wrapper and a fast C++ kernel, and thus fits nicely into our implementation.
The index is based on random projections to build up a tree. At every intermediate node in the tree, a random hyperplane is chosen, which divides the space into two subspaces. This hyperplane is chosen by sampling two points from the subset and taking the hyperplane equidistant from them. Annoy applies this technique t times to create a forest of trees. Hereby, the parameter t balances precision against performance; see also the work on locality sensitive hashing (LSH) by [7]. During search, Spotify Annoy traverses the trees and collects k candidates per tree. Afterwards, all candidate lists are merged and the TopN are selected. We follow the experiments of [3] for news data sets and choose t = 200 and k = 400000 neighbors for N = 10.
The following example executes a nearest neighbor search for an embedding representing relational tuples of the table building in the space of indexed sentences that represent an entity of type building. The query returns the Top10 matching sentences for this relational entity.

SELECT *
FROM query_index((
    SELECT id, embedding, 10, 400000,
           index_embedd_sentence_for_building
    FROM embedd_building)) knn,
    building_with_embedding r,
    sentences_for_building_with_embedding s
WHERE r.id = knn.query_key AND s.id = knn.result_key;

5 EXPERIMENTAL EVALUATION
5.1 Data Set Description
WEBNLG The authors of [24] provide an overview of data sets where entities have multiple attributes and are matched against text data (i.e. sentences). One of the largest manually labeled data sets is WebNLG [25], with ten different entity types and thousands of manually labeled mappings from entity data in RDF to text. To use WebNLG for our experiments, we have transformed the RDF representation into a relational model and evaluated our work against it. WebNLG contains relevant entity types for our scenarios from the introduction, such as building, comics character, food or written work (which are products) or universities and sport teams (which are brands). Because of these different domains, this "mix" is particularly hard to handle for a single entity linking system, and it realistically models data for our example use case.
The following two examples from WEBNLG show candidate sentences for entities of the type 'building'. For the first entity three attributes are shown in structured and text data, while the second example features five attributes. Note that the latter attribute
Entity Family | Entity type     | Instances | Tuples | Sentences | Sentences/Instance | Columns | Avg. tuple density
Location      | Airport         | 67        | 662    | 2831      | 42.25              | 52      | 0.01
Location      | Building        | 58        | 380    | 2377      | 40                 | 46      | 0.10
Location      | City            | 65        | 243    | 609       | 9.37               | 25      | 0.16
Location      | Monument        | 10        | 111    | 783       | 78.3               | 31      | 0.17
Product       | ComicsCharacter | 40        | 116    | 749       | 18.27              | 20      | 0.21
Product       | Food            | 59        | 641    | 3646      | 61.8               | 34      | 0.14
Product       | WrittenWork     | 52        | 486    | 2466      | 47.42              | 49      | 0.10
Brand         | University      | 17        | 308    | 1112      | 65.41              | 39      | 0.16
Brand         | SportTeam       | 55        | 471    | 1998      | 36.32              | 34      | 0.14
Person        | Astronaut       | 17        | 459    | 1530      | 90                 | 38      | 0.18

Table 1: The WebNLG data set provides a manually labeled ground truth for 10 different entity types, such as products, locations, persons and brands. It features several thousands of manually annotated sentences. Moreover, its structured representation describes entities with at least 20 and up to 52 attributes. However, most attribute values are populated only sparsely: in this table, "Avg. tuple density" denotes the portion of non-null attribute values for all entities of a particular type. Hence, most structured entity tuples have only a few attribute values different from NULL.
is described over multiple sentences. Attribute names are highly ambiguous and do not match words in the texts. Furthermore, the position of attributes in the text varies.

<entry size="3" eid="Id24" category="Building">
  <modifiedtripleset>
    <mtriple>200_Public_Square | floorCount | 45</mtriple>
    <mtriple>
      200_Public_Square | location | "Cleveland, Ohio 44114"
    </mtriple>
    <mtriple>200_Public_Square | completionDate | 1985</mtriple>
  </modifiedtripleset>
  <lex lid="Id3" comment="good">
    200 Public Square, completed in 1985, has 45 floors and is
    located in Cleveland, Ohio 44114.
  </lex>
</entry>

<entry size="5" eid="Id1" category="Building">
  <modifiedtripleset>
    <mtriple>103_Colmore_Row | floorCount | 23</mtriple>
    <mtriple>103_Colmore_Row | completionDate | 1976</mtriple>
    <mtriple>103_Colmore_Row | architect | John_Madin</mtriple>
    <mtriple>
      103_Colmore_Row | location |
      "Colmore Row, Birmingham, England"
    </mtriple>
    <mtriple>John_Madin | birthPlace | Birmingham</mtriple>
  </modifiedtripleset>
  <lex lid="Id1" comment="good">
    103 Colmore Row is located on Colmore Row, Birmingham,
    England. It was designed by the architect, John Madin,
    who was born in Birmingham. It has 23 floors and was
    completed in 1976.
  </lex>
</entry>

5.2 Experimental Setup
System setup We implemented our model using TensorFlow 1.3 and NumPy 1.13 and integrated it into MonetDB (release Jul2017-SP1). We installed these software packages on a machine with two Intel® Xeon® E5-2630 v3 CPUs at 2.40GHz, 64 GB RAM, and SSD discs.
Quality of training data Table 1 gives an overview of the WebNLG data set and some important statistics for each entity type. Instances denotes the number of distinct entities for each type in the relational data. Tuples denotes the number of tuples in the relational data for each distinct instance. Sentences denotes the number of sentences that contain at least one of these instances in the text data. Sentences/Instance counts the average ratio of how often an instance is represented in a sentence. In particular, for the types City and ComicsCharacter we observe relatively few sentences representing each entity. As a result, the system might learn less variance during training and sampling for these data types. Columns denotes the number of distinct attributes in the relational schema for this entity type. Avg. tuple density denotes the proportion of attributes with non-NULL values averaged over all tuples for this type. We observe that all entity types feature only sparsely populated attribute values. Some entity types are described with a rather large number of attribute values (up to 52), while most entity types contain 20-30 attributes. For example, the entity types WrittenWork and Airport are described with roughly 50 sparsely populated attribute values.
Training, test and cold start scenario In realistic situations, new entities are regularly added to the database. Hence, it is important for our system to recognize such cold start entity representations without needing to be re-trained. We therefore need to consider previously seen entities for which we learn new matchings (hot and running system) and entities we have never seen during training (cold start scenario). To simulate these two scenarios we choose the same setup as described in [9] and split the set of relational entity instances into 20% unseen entities for the cold start scenario. We kept the remaining 80% as previously seen entities and split this set again into 80% for training and 20% for testing.
Measurements Our first set of experiments measures the effectiveness of our embeddings and entity linking. Given an entity representation from the relational data, the output of entity linking is an ordered list of sentences where this entity likely appears. A common measure is Precision@1, which counts how often the system returns the correct sentence at rank 1. Analogously, Precision@5 and Precision@10 count how often the correct result returned by the system is among the first five and ten results. Our second set of experiments measures the efficiency. We measure execution times for loading and creating embeddings before query run time, and for generating candidates, executing the similarity measure on candidates and ranking candidates at query time.
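Precision@k as used here can be computed with a few lines of Python; the sketch below assumes a ranked candidate list per query and a gold mapping, both stored as plain dictionaries, which is an assumption about the data layout rather than the paper's actual evaluation script.

def precision_at_k(ranked_candidates, gold, k):
    # Fraction of queries whose correct match appears among the top-k results.
    # ranked_candidates: dict query_id -> list of candidate ids, best first.
    # gold:               dict query_id -> set of correct candidate ids.
    hits = sum(1 for q, ranking in ranked_candidates.items()
               if any(c in gold[q] for c in ranking[:k]))
    return hits / len(ranked_candidates)

# Example: precision_at_k(results, gold, 1), precision_at_k(results, gold, 5), ...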
                |       Prec@1        |       Prec@5        |       Prec@10
Entity type     | test  train  unseen | test  train  unseen | test  train  unseen
Airport         | 0.90  0.98   0.54   | 0.96  0.99   0.70   | 0.99  1      0.79
Astronaut       | 0.91  0.96   0.88   | 0.97  0.99   0.98   | 0.98  1      0.98
Building        | 0.89  0.99   0.77   | 0.94  1      0.90   | 0.97  1      0.95
City            | 0.73  0.98   0.93   | 0.93  1      0.98   | 0.96  1      1
ComicsCharacter | 0.76  0.99   0.29   | 0.97  1      0.80   | 0.98  1      0.97
Food            | 0.85  0.94   0.69   | 0.94  0.98   0.90   | 0.94  0.98   0.91
Monument        | 0.94  1      0.90   | 0.98  1      0.98   | 1     1      1
SportTeam       | 0.90  1      0.66   | 0.97  1      0.83   | 0.99  1      0.92
University      | 0.95  1      0.93   | 0.99  1      1      | 1     1      1
WrittenWork     | 0.88  0.95   0.63   | 0.97  0.99   0.79   | 0.99  0.99   0.86
Table 2: Accuracy in Precision@k of the trained model for each entity type. In test we observe for all entity types, except
City and ComicsCharacter, a very high Precision@1>0.80. Even for a cold start scenario (columns unseen) we can still observe
decent Precision@1>0.60 for all entity types, except Airports and ComicsCharacter. In addition, we included columns Prec@5 and
Prec@10 to illustrate that our system de facto can retrieve the correct matching entities at lower ranks. Recall is not shown
in this diagram, since we measure precision numbers for all entities in our test data set.
5.3 Experimental Results
Entity Linking with very high precision Table 2 shows the accuracy in Precision@k for each entity type. We observe high values (≥ 0.80) during testing at Precision@1 for all entity types, except City (0.73) and ComicsCharacter (0.76). This indicates that our system can return the correct entity linking with very high precision. If we measure the effectiveness of our system at Precision@5, we observe that our system returns the correct result for each entity, independent of the type, and with a high accuracy of ≥ 0.93. We report an accuracy of ≥ 0.95 at Precision@10 for all entity types, with a perfect result for the entity type University. Note that the columns train denote "ideal systems" for the given training data. We observe that even an ideal system fails in rare cases for Astronaut, WrittenWork, City, Food and Airport.
Phase      | Step                       | RT (sec)
Load Time  | Loading Model              | 30.50
Load Time  | UDF:EmbedTuples            | 55.00
Load Time  | UDF:EmbedSentences         | 150.00
Load Time  | Create Index for Tuples    | 0.04
Load Time  | Create Index for Sentences | 3.07
Load Time  | Sum over all steps         | 208.1
Query Time | Cross Join Top10           | 115.9
Query Time | UDF:QueryNN Top10 Sent.    | 9.60
Query Time | UDF:QueryNN Top10 Tuples   | 29.15
Table 3: Runtime (in seconds) of different stages in IDEL for
the entity type Building.
Precision@1 > 0.6 for cold start entities Our cold start scenario
measures how well our system can detect recently inserted entity matches, in particular if it has seen these entities neither in
relational nor in sentence data. This experiment is particularly challenging for IDEL: during training the system could not observe any
contextual clues, neither from positive nor from negative examples. Hence, we test if the system can detect entities in such a challenging
scenario without re-learning embeddings and similarity
functions. Table 2 reports measures for such cold-start entities in
the columns unseen. We observe that even for Precision@1, the system still
achieves decent measures (i.e. >0.6) for all entity types, except Airport
and ComicsCharacter.
Execution Engine In certain practical scenarios it is crucial to be
able to execute entity linking in minutes to hours. Consider the
brand monitoring example that triggers hourly alerts to a business
department about new products or statements in blogs about new
products. Typically, the business demands here to react within a few
hours to place new advertisements or to inform customers. Therefore, it is crucial that IDEL executes entity linking in minutes to
hours. Moreover, we already discussed that IDEL can recognize
potentially unseen entities with a decent Precision@1 > 0.6. Ideally,
IDEL should regularly update its embeddings and similarity functions asynchronously in the background so as to raise Precision@1
to 0.85, as reported in our experiments on test data.
Table 3 reports execution times averaged over all entity types.
We observe for the steps at data loading time, such as embedding sentences
and relational tuples, an average of 208 seconds. For the query
execution time, that is, creating candidate tuples, storing embeddings,
applying the similarity metric, and ranking and pruning Top-K entity
mappings, we observe an average of 116 seconds. Our conclusion
is that once a user has set up in IDEL an initial query mapping
from entity types in relational data to sentences in text data, the
system can asynchronously rebuild embeddings in the background
to achieve very high Precision@1 values even for unseen entities
the next time the user issues the same query.
5.4 Error Analysis and Discussion
Understanding sampling and computing the similarity function
To understand the behavior of IDEL we conducted a closer inspection of results and individual components. Figure 6 shows six snapshots from the joint embedding space in IDEL during the training
of the similarity function. For example, Figure 6(a) visualizes on
the right a cluster of 58 different entities of the type Building in 380
tuples, while the left cluster denotes 2377 sentences mentioning
these entities. Colors indicate distinct entities3. Figures 6(b)-(f) show,
in steps of 100 training batches (see Section 3.5), how the shape of these clusters changes. Finally, Figure 6(f)
shows clusters which combine sentence and relational representations for the same entity. However, we also observe "yellow" and
"red" entities with fewer training examples compared to the "blue"
and "light blue" entities. Our explanation is that the contrastive
pairwise loss function does not yet have sufficient "signals" from
training samples to cluster these entities as well.

3 To keep the colors in the figures somewhat distinguishable, we show here only the
most frequent entities, instead of all 58 of them.

Figure 6: Snapshots (a) Initial, (b) 100 batches, (c) 200 batches, (d) 600 batches, (e) 1200 batches, (f) 2400 batches of the vector
space of text and relational embeddings during the training period. In figure (a) we observe two clusters: one for text data and one
for relational data. Different colors indicate different entities (here limited to 58 different entities of the class "Building").
With each batch sample our method clusters similar entities in the latent space and separates dissimilar entities. Finally, in
figure (f) we observe a system state with rather sharply clustered text and relational representations for the light blue, dark
blue and green entities. At this stage, the red and yellow entities still need to be clustered and separated from the others.

Performance for unseen entities suffers from sparse attribute
density or too few sentences IDEL can recognize unseen data
with a decent Precision@1 > 0.6, except for Airport and ComicsCharacter. This performance is comparable with other state-of-the-art
entity linking systems (see [14]). The low performance for the type
Airport is most probably due to the extremely sparse average tuple
density. As a result, during training the model often retrieves relational tuples with low information gain and many NULL values.
A closer inspection reveals that several errors for this type are
disambiguation errors for potential homonyms and undiscovered
synonyms. The type ComicsCharacter also performs poorly for unseen entities compared to other types. This type has the second
lowest Sentence/Instance ratio; hence, each distinct comic character is represented on average by 18 sentences. The popularity of
text data, such as comic characters, often follows a Zipf distribution.
In fact, we inspected our set of comic characters and observed that
a few characters are described by the majority of sentences, while
most characters are described by only a few sentences. As a result,
the system could not learn enough variance to distinguish among
these seldom mentioned characters.
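The following is a minimal NumPy sketch of a margin-based pairwise contrastive loss of the kind discussed above for the joint embedding space; it is our own illustration, not the exact IDEL formulation, and all names are hypothetical.

import numpy as np

# Illustrative sketch of a margin-based pairwise contrastive loss: matching
# tuple/sentence pairs are pulled together, mismatched pairs are pushed apart
# by at least `margin`. Not the exact loss used by IDEL.
def pairwise_contrastive_loss(tuple_vecs, sent_vecs, same_entity, margin=1.0):
    # tuple_vecs, sent_vecs: (n, d) arrays; same_entity: (n,) boolean array
    dist = np.linalg.norm(tuple_vecs - sent_vecs, axis=1)
    pos = same_entity * dist ** 2                                # pull matches together
    neg = (~same_entity) * np.maximum(margin - dist, 0.0) ** 2   # push mismatches apart
    return np.mean(pos + neg)

rng = np.random.default_rng(0)
t, s = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(pairwise_contrastive_loss(t, s, np.array([True, True, False, False])))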
6 RELATED WORK

Currently, the computational linguistics, the web and the database
communities work independently on important aspects of IDEL.
However, we are not aware of any system that provides the combined functionality of IDEL. Below we discuss existing work in the
areas of text databases, embeddings in databases and entity linking.

Text Databases The authors of DeepDive [29], InstaRead [13] and
System-T [8] propose declarative SQL-based query languages for
integrating relational data with text data. Those RDBMS extensions
leverage built-in query optimization, indexing and security techniques. They rely on explicitly modeled features for representing
syntactic, semantic and lexical properties of an entity in the relational model. In this work we extend the text database system
INDREX [15, 28] with the novel functionality of linking relational
data to text data. Thereby, we introduce neural embeddings in a
main memory database system, effectively eliminating the need
for explicit feature modeling. Our execution system for INDREX
is MonetDB, a well known main memory database system [5]. To
the best of our knowledge, no other database system so far provides this
functionality for text data.

Embeddings in Databases Only recently, the authors of [6] investigated methods for integrating vector space embeddings for
neural networks into relational databases and query processing. They focus on latent information in text data and other types of
data, e.g. numerical values, images and dates. For these data types
they embed the latent information for each row in the same table with word2vec [21] in the same vector space. Finally, they run
queries against this representation for retrieving similar rows. Similar to our work, they suggest computing embeddings for each
row and accessing embeddings via UDFs. Our approach goes much
further, since we embed latent information from at least two tables
in the same vector space, one representing entities and attributes
while the other represents spans of text data. Because of the nature of the problem, we cannot assume that both representations
provide similar characteristics in this vector space. Rather, we need
to adopt complex techniques such as SkipThought and pair-wise
loss functions to compute similarity measures.
Recently, the information retrieval community recognized the
importance of end-to-end information retrieval with neural networks (see [22] for a tutorial). The authors suggest encoding attributes
in an indexed table as embeddings to answer topical queries against
them. Again, our work goes significantly beyond their ideas and
integrates information across text and relational data inside a database.

Entity Linking and knowledge base completion Entity linking
is a well-researched problem in computational linguistics4. Recently,
embeddings have been proposed to jointly represent entities in text
and knowledge graphs [30]. The authors of [19] use an embedding for
relations and entities in the triple format based on the structures
of graphs. However, they do not incorporate additional attributes
for the entities into the embedding; also, they only learn an embedding for binary relations, not for n-ary relations. At a very high
level, we also apply similar techniques for representing entities
in embeddings. However, our approach is based on SkipThought
and a pair-wise loss function which works particularly well with
many classes (each entity represents its own class) and for sparse
data, two data characteristics often found in practical setups for
relational databases. Moreover, our approach is not restricted to
triple-based knowledge bases. We can learn an embedding for arbitrary n-ary relations and incorporate their attributes and related
entities. Finally, we are not aware of any work that incorporates
neural network based knowledge representation methods into the
query processor of an RDBMS.
The authors of [23] assign each entity a set of potentially related
entities and additional words from sentences mentioning the entity.
They weight and prune this signature in a graph, and extract, score
and assign subgraphs as semantic signatures for each entity. In our
work, the idea of a signature is captured by describing an entity via
the relation which includes a primary key for the entity and the
depending foreign key relations. Further, our work is orthogonal
to [23]; we represent entity information in the vector space with
neural embeddings and execute the system in a database.

4 See http://nlp.cs.rpi.edu/kbp/2017/elreading.html

7 SUMMARY AND OUTLOOK

IDEL combines in a single system relational and text representations of entities and capabilities for entity linking. The ability to
define Python routines in MonetDB permits us for the first time to
conduct this task in a single system, with zero data shipping cost
and negligible data transformation costs. To execute this powerful
functionality, we have extended MonetDB with UDFs that execute
neural embeddings to represent such entities in joint embedding
spaces, compute similarities based on the idea of the pair-wise
contrastive loss, and speed up candidate retrieval with nearest
neighbor indexing structures. Therefore, the novelty of our work lies
in the representation of text data and relational data in the same
space, the classification method for entity linking, and a single
integrated architecture.
To the best of our knowledge, this is the first working database system
which permits executing such queries on neural embeddings in an
RDBMS. As a result, organizations will be able to obtain licenses
for a single system only, do not need to hire additional trained
linguists, and avoid costly data integration efforts between multiple
systems, such as the RDBMS, the text data system (e.g. Lucene)
and a homegrown entity linking system. Finally, organizations can
reuse trusted and existing efforts from RDBMS, such as security,
user management and query optimization techniques.
In our future work we plan to investigate potentially more complex neural architectures, because they are likely able to adapt better
to the text and relational data. For example, we are currently limited
by the vocabulary of the pre-trained SkipThought model. We also plan to
use a hybrid model that considers large external linguistic corpora
(as we currently do) and, in addition, very specific, potentially domain-focused corpora from the text database to create improved
text embeddings or even character embeddings. Finally, we will
investigate deeper effects of other distance functions, such as the
word mover's distance [18].
ACKNOWLEDGMENTS
Our work is funded by the German Federal Ministry of Economic
Affairs and Energy (BMWi) under grant agreement 01MD16011E
(Medical Allround-Care Service Solutions), grant agreement
01MD15010B (Smart Data Web) and by the European Union's Horizon 2020 research and innovation program under grant agreement
No 732328 (FashionBrain).
REFERENCES
[1] E. Agichtein and L. Gravano. Snowball: extracting relations from large plain-text
collections. In ACM DL, pages 85–94, 2000.
[2] S. Arnold, R. Dziuba, and A. Löser. TASTY: Interactive Entity Linking As-You-Type. In COLING'16 Demos, pages 111–115, 2016.
[3] M. Aumüller, E. Bernhardsson, and A. Faithfull. Ann-benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. In SISAP 2017.
[4] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and
Statistics). Springer-Verlag New York, Inc., 2006.
[5] P. A. Boncz, M. L. Kersten, and S. Manegold. Breaking the memory wall in
MonetDB. Commun. ACM, 51(12):77–85, 2008.
[6] R. Bordawekar and O. Shmueli. Using word embedding to enable semantic queries
in relational databases. DEEM’17, pages 5:1–5:4. ACM.
[7] M. Charikar. Similarity estimation techniques from rounding algorithms. In
STOC 2002.
[8] L. Chiticariu, R. Krishnamurthy, Y. Li, S. Raghavan, F. R. Reiss, and
S. Vaithyanathan. SystemT: An algebraic approach to declarative information
extraction. ACL ’10.
[9] N. Gupta, S. Singh, and D. Roth. Entity linking via joint encoding of types,
descriptions, and context. In EMNLP 2017.
[10] H. D. S. H. H Ji, J Nothman. Overview of tac-kbp2016 tri-lingual edl and its
impact on end-to-end cold-start kbp. In TAC, 2016.
[11] Z. S. Harris. Distributional Structure. WORD, 10(2-3):146–162, Aug. 1954.
[12] B. Z. J. N. J. M. P. M. Heng Ji, Xiaoman Pan and C. Costello. Overview of tackbp2017 13 languages entity discovery and linking. In TAC, 2017.
[13] R. Hoffmann, L. Zettlemoyer, and D. S. Weld. Extreme Extraction: Only One
Hour per Relation. arXiv:1506.06418, 2015.
[14] H. Ji, J. Nothman, H. T. Dang, and S. I. Hub. Overview of tac-kbp2016 tri-lingual
edl and its impact on end-to-end cold-start kbp. TAC’16.
[15] T. Kilias, A. Löser, and P. Andritsos. INDREX: In-Database Relation Extraction.
Information Systems, 53:124–144, 2015.
[16] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying Visual-Semantic Embeddings
with Multimodal Neural Language Models. arXiv:1411.2539 [cs], Nov. 2014.
[17] R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and
S. Fidler. Skip-thought vectors. In NIPS’15.
[18] M. J. Kusner, Y. Sun, N. I. Kolkin, and K. Q. Weinberger. From word embeddings
to document distances. In ICML’15.
[19] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu. Learning entity and relation embeddings
for knowledge graph completion. In Proceedings of AAAI, 2015.
[20] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed Representations of Words and Phrases and their Compositionality. In NIPS’13.
[21] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS’13.
[22] B. Mitra and N. Craswell. Neural text embeddings for information retrieval. In
WSDM’17.
[23] A. Moro, A. Raganato, and R. Navigli. Entity Linking meets Word Sense Disambiguation: A Unified Approach. In TACL’14.
[24] L. Perez-Beltrachini and C. Gardent. Analysing data-to-text generation benchmarks. CoRR, abs/1705.03802, 2017.
[25] L. Perez-Beltrachini, R. Sayed, and C. Gardent. Building RDF content for data-totext generation. In COLING 2016.
[26] M. Raasveldt and H. Mühleisen. Vectorized UDFs in Column-Stores. SSDBM ’16.
ACM.
[27] X. Ren, A. El-Kishky, H. Ji, and J. Han. Automatic entity recognition and typing
in massive text data. In SIGMOD’16.
[28] R. Schneider, C. Guder, T. Kilias, A. Löser, J. Graupmann, and O. Kozachuk.
Interactive Relation Extraction in Main Memory Database Systems. COLING’16.
[29] J. Shin, S. Wu, C. Zhang, F. Wang, and C. Ré. Incremental Knowledge Base
Construction Using DeepDive. VLDB 2015.
[30] Z. Wang, J. Zhang, J. Feng, and Z. Chen. Knowledge graph embedding by
translating on hyperplanes. In AAAI’14.
A Delayed Promotion Policy for Parity Games
Massimo Benerecetti (Università degli Studi di Napoli Federico II)
Daniele Dell'Erba (Università degli Studi di Napoli Federico II)
Fabio Mogavero (Oxford University)
Parity games are two-player infinite-duration games on graphs that play a crucial role in various
fields of theoretical computer science. Finding efficient algorithms to solve these games in practice is widely acknowledged as a core problem in formal verification, as it leads to efficient solutions of the model-checking and satisfiability problems of expressive temporal logics, e.g., the modal
µ-calculus. Their solution can be reduced to the problem of identifying sets of positions of the
game, called dominions, in each of which a player can force a win by remaining in the set forever.
Recently, a novel technique to compute dominions, called priority promotion, has been proposed,
which is based on the notions of quasi dominion, a relaxed form of dominion, and dominion space.
The underlying framework is general enough to accommodate different instantiations of the solution
procedure, whose correctness is ensured by the nature of the space itself. In this paper we propose
a new such instantiation, called delayed promotion, that tries to reduce the possible exponential behaviours exhibited by the original method in the worst case. The resulting procedure often
outperforms the original priority promotion approach and, so far, no exponential worst case for it is known.
1 Introduction
The abstract concept of game has proved to be a fruitful metaphor in theoretical computer science [1].
Several decision problems can, indeed, be encoded as path-forming games on graphs, where a player
willing to achieve a certain goal, usually the verification of some property on the plays derived from the
original problem, has to face an opponent whose aim is to pursue the exact opposite task. One of the
most prominent instances of this connection is represented by the notion of parity game [18], a simple
two-player turn-based perfect-information game played on directed graphs, whose nodes are labelled
with natural numbers called priorities. The goal of the first (resp., second) player, a.k.a., even (resp.,
odd) player, is to force a play π , whose maximal priority occurring infinitely often along π is of even
(resp., odd) parity. The importance of these games is due to the numerous applications in the area of
system specification, verification, and synthesis, where it is used as algorithmic back-end of satisfiability and model-checking procedures for temporal logics [6, 8, 16], and as a core for several techniques
employed in automata theory [7, 10, 15, 17]. In particular, it has been proved to be linear-time interreducible with the model-checking problem for the modal µ-calculus [8] and it is closely related to other
games of infinite duration, as mean payoff [5, 11], discounted payoff [24], simple stochastic [4], and energy [3] games. Besides the practical importance, parity games are also interesting from a computational
complexity point of view, since their solution problem is one of the few inhabitants of the UPTime ∩
coUPTime class [12]. That result improves the NPTime ∩ coNPTime membership [8], which easily
follows from the property of memoryless determinacy [7, 18]. Still open is the question about the membership in PTime. The literature on the topic is rich in algorithms for solving parity games, which
can be mainly classified into two families. The first one contains the algorithms that, by employing a
divide et impera approach, recursively decompose the problem into subproblems, whose solutions are
then suitably assembled to obtain the desired result. In this category fall, for example, Zielonka’s recursive algorithm [23] and its dominion decomposition [14] and big step [19] improvements. The second
family, instead, groups together those algorithms that try to compute a winning strategy for the two players on the entire game. The principal members of this category are represented by Jurdziński’s progress
measure algorithm [13] and the strategy improvement approaches [20–22].
Recently, a new divide et impera solution algorithm, called priority promotion (PP, for short), has
been proposed in [2], which is fully based on the decomposition of the winning regions into dominions.
The idea is to find a dominion for some of the two players and then remove it from the game, thereby
allowing for a recursive solution. The important difference w.r.t. the other two approaches [14, 19] based
on the same notion is that these procedures only look for dominions of a certain size in order to speed
up classic Zielonka’s algorithm in the worst case. Consequently, they strongly rely on this algorithm for
their completeness. On the contrary, the PP procedure autonomously computes dominions of any size,
by suitably composing quasi dominions, a weaker notion of dominion. Intuitively, a quasi dominion
Q for player α ∈ {0, 1} is a set of vertices from each of which player α can enforce a winning play
that never leaves the region, unless one of the following two conditions holds: (i) the opponent α can
escape from Q (i.e., there is an edge from a vertex of α exiting from Q) or (ii) the only choice for player
α itself is to exit from Q (i.e., no edge from a vertex of α remains in Q). A crucial feature of quasi
dominion is that they can be ordered by assigning to each of them a priority corresponding to an underapproximation of the best value for α the opponent α can be forced to visit along any play exiting from it.
Indeed, under suitable and easy to check assumptions, a higher priority quasi α -dominion Q and a lower
priority one Q , can be merged into a single quasi α -dominion of the higher priority, thus improving the
approximation for Q . This merging operation is called a priority promotion of Q to Q . The PP
solution procedure has been shown to be very effective in practice and to often significantly outperform
all other solvers. Moreover, it also improves on the space complexity of the best known algorithm with an
exponential gain w.r.t. the number of priorities and by a logarithmic factor w.r.t. the number of vertexes.
Indeed, it only needs O(n · log k) space against the O(k · n · log n) required by Jurdziński’s approach [13],
where n and k are, respectively, the numbers of vertexes and priorities of the game. Unfortunately, the PP
algorithm also exhibits exponential behaviours on a simple family of games. This is due to the fact that,
in general, promotions to higher priorities require resetting promotions previously performed at lower
ones.
In this paper, we continue the study of the priority promotion approaches trying to find a remedy to
this problem. We propose a new algorithm, called DP, built on top of a slight variation of PP, called PP+.
The PP+ algorithm simply avoids resetting previous promotions to quasi dominions of the same parity.
In this case, indeed, the relevant properties of those quasi dominions are still preserved. This variation
enables the new DP promotion policy, that delays promotions that require a reset and only performs
those leading to the highest quasi dominions among the available ones. For the resulting algorithm no
exponential worst case has been found. Experiments on randomly generated games also show that the
new approach performs much better than PP in practice, while still preserving the same space complexity.
2 Preliminaries
Let us first briefly recall the notation and basic definitions concerning parity games; expert readers
can simply skip this section. We refer to [1, 23] for a comprehensive presentation of the subject.
Given a partial function f : A ⇀ B, by dom(f) ⊆ A and rng(f) ⊆ B we indicate the domain and
range of f, respectively. In addition, ⊎ denotes the completion operator that, given f and another partial
function g : A ⇀ B, returns the partial function f ⊎ g ≜ (f \ dom(g)) ∪ g : A ⇀ B, which is equal to g on
its domain and assumes the same values as f on the remaining part of A.
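As a quick illustration of the completion operator, here is a minimal sketch of ours with partial functions represented as Python dictionaries; it is not code from the paper.

# Sketch: the completion operator f ⊎ g on partial functions (dicts).
# The result agrees with g on dom(g) and with f elsewhere.
def completion(f, g):
    return {**{a: b for a, b in f.items() if a not in g}, **g}

f = {1: "x", 2: "y"}
g = {2: "z", 3: "w"}
print(completion(f, g))  # {1: 'x', 2: 'z', 3: 'w'}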
A two-player turn-based arena is a tuple A = ⟨Ps₀, Ps₁, Mv⟩, with Ps₀ ∩ Ps₁ = ∅ and Ps ≜ Ps₀ ∪ Ps₁,
such that ⟨Ps, Mv⟩ is a finite directed graph. Ps₀ (resp., Ps₁) is the set of positions of player 0 (resp., 1)
and Mv ⊆ Ps × Ps is a left-total relation describing all possible moves. A path in V ⊆ Ps is a finite or
infinite sequence π ∈ Pth(V) of positions in V compatible with the move relation, i.e., (πᵢ, πᵢ₊₁) ∈ Mv,
for all i ∈ [0, |π| − 1[. For a finite path π, with lst(π) we denote the last position of π. A positional
strategy for player α ∈ {0, 1} on V ⊆ Ps is a partial function σα ∈ Strα(V) ⊆ (V ∩ Psα) ⇀ V, mapping
each α-position v ∈ dom(σα) to a position σα(v) compatible with the move relation, i.e., (v, σα(v)) ∈ Mv.
With Strα(V) we denote the set of all α-strategies on V. A play in V ⊆ Ps from a position v ∈ V w.r.t.
a pair of strategies (σ₀, σ₁) ∈ Str₀(V) × Str₁(V), called ((σ₀, σ₁), v)-play, is a path π ∈ Pth(V) such
that π₀ = v and, for all i ∈ [0, |π| − 1[, if πᵢ ∈ Ps₀ then πᵢ₊₁ = σ₀(πᵢ) else πᵢ₊₁ = σ₁(πᵢ). The play
function play : (Str₀(V) × Str₁(V)) × V → Pth(V) returns, for each position v ∈ V and pair of strategies
(σ₀, σ₁) ∈ Str₀(V) × Str₁(V), the maximal ((σ₀, σ₁), v)-play play((σ₀, σ₁), v).
A parity game is a tuple a = ⟨A, Pr, pr⟩, where A is an arena, Pr ⊂ ℕ is a finite set of priorities,
and pr : Ps → Pr is a priority function assigning a priority to each position. The priority function can
be naturally extended to games and paths as follows: pr(a) ≜ max_{v∈Ps} pr(v); for a path π ∈ Pth, we
set pr(π) ≜ max_{i∈[0,|π|[} pr(πᵢ), if π is finite, and pr(π) ≜ lim sup_{i∈ℕ} pr(πᵢ), otherwise. A set of positions
V ⊆ Ps is an α-dominion, with α ∈ {0, 1}, if there exists an α-strategy σα ∈ Strα(V) such that, for
all ᾱ-strategies σᾱ ∈ Strᾱ(V) and positions v ∈ V, the induced play π = play((σ₀, σ₁), v) is infinite and
pr(π) ≡₂ α. In other words, σα only induces on V infinite plays whose maximal priority visited infinitely
often has parity α. By a\V we denote the maximal subgame of a with set of positions Ps′ contained in
Ps\V and move relation Mv′ equal to the restriction of Mv to Ps′.
The α-predecessor of V, in symbols preα(V) ≜ {v ∈ Psα : Mv(v) ∩ V ≠ ∅} ∪ {v ∈ Psᾱ : Mv(v) ⊆ V},
collects the positions from which player α can force the game to reach some position in V with a single
move. The α-attractor atrα(V) generalises the notion of α-predecessor preα(V) to an arbitrary number
of moves, and corresponds to the least fix-point of that operator. When V = atrα(V), we say that V
is α-maximal. Intuitively, V is α-maximal if player α cannot force any position outside V to enter
the set. For such a V, the set of positions of the subgame a \ V is precisely Ps \ V. Finally, the set
escα(V) ≜ preα(Ps \ V) ∩ V, called the α-escape of V, contains the positions in V from which α can
leave V in one move. The dual notion of α-interior, defined as intα(V) ≜ (V ∩ Psα) \ escα(V), contains,
instead, the α-positions from which α cannot escape with a single move. All the operators and sets
above actually depend on the specific game a they are applied in. In the rest of the paper, we shall only
add a as a subscript of an operator, e.g., escα_a(V), when the game is not clear from the context.
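The α-predecessor and α-attractor admit a direct fixpoint computation. The following is a small Python sketch of ours, under the assumption that the game is given by an ownership map and successor sets; it is not the implementation used by the authors.

# Sketch: α-predecessor and α-attractor.
# owner[v] ∈ {0, 1} gives the player owning v; moves[v] is the set of successors of v.
def pre(alpha, V, owner, moves):
    V = set(V)
    return {v for v in moves
            if (owner[v] == alpha and moves[v] & V)      # α can choose to enter V
            or (owner[v] != alpha and moves[v] <= V)}    # the opponent is forced into V

def attractor(alpha, V, owner, moves):
    A = set(V)
    while True:                                          # least fixpoint of pre containing V
        A_next = A | pre(alpha, A, owner, moves)
        if A_next == A:
            return A
        A = A_next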
3 The Priority Promotion Approach
The priority promotion approach proposed in [2] attacks the problem of solving a parity game a by
computing one of its dominions D, for some player α ∈ {0, 1}, at a time. Indeed, once the α -attractor
D⋆ of D is removed from a, the smaller game a \ D⋆ is obtained, whose positions are winning for one
player iff they are winning for the same player in the original game. This allows for decomposing the
problem of solving a parity game to that of iteratively finding its dominions [14].
In order to solve the dominion problem, the idea is to start from a much weaker notion than that of
dominion, called quasi dominion. Intuitively, a quasi α -dominion is a set of positions on which player α
has a strategy whose induced plays either remain inside the set forever and are winning for α or can exit
from it passing through a specific set of escape positions.
M. Benerecetti, D. Dell’Erba, & F. Mogavero
33
Definition 3.1 (Quasi Dominion [2]). Let a ∈ PG be a game and α ∈ {0, 1} a player. A non-empty
set of positions Q ⊆ Ps is a quasi α-dominion in a if there exists an α-strategy σα ∈ Strα(Q) such
that, for all ᾱ-strategies σᾱ ∈ Strᾱ(Q), with intᾱ(Q) ⊆ dom(σᾱ), and positions v ∈ Q, the induced play
π = play((σ₀, σ₁), v) satisfies pr(π) ≡₂ α, if π is infinite, and lst(π) ∈ escᾱ(Q), otherwise.
Observe that, if all the induced plays remain in the set Q forever, this is actually an α-dominion
and, therefore, a subset of the winning region Wnα of α. In this case, the escape set of Q is empty,
i.e., escᾱ(Q) = ∅, and Q is said to be α-closed. In general, however, a quasi α-dominion Q that is
not an α-dominion, i.e., such that escᾱ(Q) ≠ ∅, need not be a subset of Wnα and it is called α-open.
Indeed, in this case, some induced play may not satisfy the winning condition for that player once it exits
from Q, by visiting a cycle containing a position with maximal priority of parity ᾱ. The set of pairs
(Q, α) ∈ 2^Ps × {0, 1}, where Q is a quasi α-dominion, is denoted by QD, and is partitioned into the sets
QD⁻ and QD⁺ of open and closed quasi α-dominion pairs, respectively.
The priority promotion algorithm explores a partial order, whose elements, called states, record information about the
open quasi dominions computed along the way. The initial state of the search is the top element of the order, where the
quasi dominions are initialised to the sets of positions with the same priority. At each step, a new quasi dominion is extracted
from the current state, by means of a query operator ℜ, and used to compute a successor state, by means of a successor operator ↓, if the quasi dominion is open. If, on the other hand,
it is closed, the search is over. Algorithm 1 implements the dominion search procedure src_D.

Algorithm 1: The Searcher. [2]
  signature src_D : S_D → QD⁺_{a_D}
  function src_D(s)
    1: (Q, α) ← ℜ_D(s)
    2: if (Q, α) ∈ QD⁺_{a_D} then
    3:   return (Q, α)
       else
    4:   return src_D(s ↓_D (Q, α))

A compatibility relation connects the query and the successor operators.
The relation holds between states of the partial order and the quasi dominions that can be extracted by the
query operator. Such a relation defines the domain of the successor operator. The partial order, together
with the query and successor operator and the compatibility relation, forms what is called a dominion
space.
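The abstract search scheme of Algorithm 1 can be written down directly. Below is a small Python sketch of ours, with the query operator, successor operator and closure test passed in as functions; the names are illustrative and this is an iterative rendering of the recursion, not the authors' code.

# Sketch: the generic dominion search of Algorithm 1.
def search_dominion(state, query, succ, is_closed):
    while True:
        region, alpha = query(state)        # extract a quasi dominion pair from the state
        if is_closed(region, alpha):        # closed in the whole game: a dominion is found
            return region, alpha
        state = succ(state, region, alpha)  # otherwise move to a strictly smaller state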
Definition 3.2 (Dominion Space [2]). A dominion space for a game a ∈ PG is a tuple D ≜ ⟨a, S, ≻, ℜ,
↓⟩, where (1) S ≜ ⟨S, ⊤, ≺⟩ is a well-founded partial order w.r.t. ≺ ⊂ S × S with distinguished element
⊤ ∈ S, (2) ≻ ⊆ S × QD⁻ is the compatibility relation, (3) ℜ : S → QD is the query operator mapping
each element s ∈ S to a quasi dominion pair (Q, α) ≜ ℜ(s) ∈ QD such that, if (Q, α) ∈ QD⁻ then
s ≻ (Q, α), and (4) ↓ : ≻ → S is the successor operator mapping each pair (s, (Q, α)) ∈ ≻ to the element
s⋆ ≜ s ↓(Q, α) ∈ S with s⋆ ≺ s.
The notion of dominion space is quite general and can be instantiated in different ways, by providing
specific query and successor operators. In [2], indeed, it is shown that the search procedure srcD is sound
and complete on any dominion space D. In addition, its time complexity is linear in the execution depth
of the dominion space, namely the length of the longest chain in the underlying partial order compatible
with the successor operator, while its space complexity is only logarithmic in the space size, since only
one state at the time needs to be maintained. A specific instantiation of dominion space, called PP
dominion space, is the one proposed and studied in [2]. Here we propose a different one, starting from a
slight optimisation, called PP+, of that original version.
PP+ Dominion Space. In order to instantiate a dominion space, we need to define a suitable query
function to compute quasi dominions and a successor operator to ensure progress in the search for a
closed dominion. The priority promotion algorithm proceeds as follows. The input game is processed in
descending order of priority. At each step, a subgame of the entire game, obtained by removing the quasi
dominions previously computed at higher priorities, is considered. At each priority of parity α, a quasi
α-dominion Q is extracted by the query operator from the current subgame. If Q is closed in the entire
game, the search stops and returns Q as result. Otherwise, a successor state in the underlying partial
order is computed by the successor operator, depending on whether Q is open in the current subgame or
not. In the first case, the quasi α-dominion is removed from the current subgame and the search restarts
on the new subgame, which can only contain positions with lower priorities. In the second case, Q is merged
together with some previously computed quasi α -dominion with higher priority. Being a dominion space
well-ordered, the search is guaranteed to eventually terminate and return a closed quasi dominion. The
procedure requires the solution of two crucial problems: (a) extracting a quasi dominion from a subgame
and (b) merging together two quasi α -dominions to obtain a bigger, possibly closed, quasi α -dominion.
The solution of the first problem relies on the definition of a specific class of quasi dominions, called
regions. An α-region R of a game a is a special form of quasi α-dominion of a with the additional
requirement that all the escape positions in escᾱ(R) have the maximal priority p ≜ pr(a) ≡₂ α in a. In
this case, we say that α-region R has priority p. As a consequence, if the opponent ᾱ can escape from
the α-region R, it must visit a position with the highest priority in it, which is of parity α.
Definition 3.3 (Region [2]). A quasi α-dominion R is an α-region in a if pr(a) ≡₂ α and all the positions
in escᾱ(R) have priority pr(a), i.e., escᾱ(R) ⊆ pr⁻¹(pr(a)).
It is important to observe that, in any parity game, an α -region always exists, for some α ∈ {0, 1}. In
particular, the set of positions of maximal priority in the game always forms an α -region, with α equal to
the parity of that maximal priority. In addition, the α -attractor of an α -region is always an (α -maximal)
α -region. A closed α -region in a game is clearly an α -dominion in that game. These observations give
us an easy and efficient way to extract a quasi dominion from every subgame: collect the α -attractor
of the positions with maximal priority p in the subgame, where p ≡2 α , and assign p as priority of the
resulting region R. This priority, called measure of R, intuitively corresponds to an under-approximation
of the best priority player α can force the opponent ᾱ to visit along any play exiting from R.
Proposition 3.1 (Region Extension [2]). Let a ∈ PG be a game and R ⊆ Ps an α-region in a. Then,
R⋆ ≜ atrα(R) is an α-maximal α-region in a.
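Concretely, the region extraction just described (take the positions of maximal priority and close them under the attractor of Proposition 3.1) can be sketched as follows; this is our own illustration, reusing the attractor() sketch given earlier, not the authors' implementation.

# Sketch: extract an α-maximal α-region from the current subgame.
# priority is a dict from positions to priorities restricted to the subgame.
def extract_region(priority, owner, moves):
    p = max(priority.values())                        # maximal priority in the subgame
    alpha = p % 2                                     # its parity identifies the player
    top = {v for v, q in priority.items() if q == p}
    return attractor(alpha, top, owner, moves), alpha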
A solution to the second problem, the merging operation, is obtained as follows. Given an α -region
R in some game a and an α -dominion D in a subgame of a that does not contain R itself, the two sets
are merged together, if the only moves exiting from ᾱ-positions of D in the entire game lead to higher
priority α-regions and R has the lowest priority among them. The priority of R is called the best escape
priority of D for ᾱ. The correctness of this merging operation is established by the following proposition.
Proposition 3.2 (Region Merging [2]). Let a ∈ PG be a game, R ⊆ Ps an α -region, and D ⊆ Psa\R an
α-dominion in the subgame a \ R. Then, R⋆ ≜ R ∪ D is an α-region in a. Moreover, if both R and D are
α -maximal in a and a \ R, respectively, then R⋆ is α -maximal in a as well.
The merging operation is implemented by promoting all the positions of α -dominion D to the measure of R, thus improving the measure of D. For this reason, it is called a priority promotion. In [2] it is
shown that, after a promotion to some measure p, the regions with measure lower than p might need to be
destroyed, by resetting all the contained positions to their original priority. This necessity derives from
the fact that the new promoted region may attract positions from lower ones, thereby potentially invalidating their status as regions. Indeed, in some cases, the player that wins by remaining in the region may
even change from α to ᾱ. As a consequence, the reset operation is, in general, unavoidable. The original
priority promotion algorithm applies the reset operation to all the lower priority regions. However, the
following property ensures that this can be limited to the regions belonging to the opponent player only.
M. Benerecetti, D. Dell’Erba, & F. Mogavero
35
Proposition 3.3 (Region Splitting). Let a⋆ ∈ PG be a game and R⋆ ⊆ Ps_{a⋆} an α-maximal α-region in
a⋆. For any subgame a of a⋆ and α-region R ⊆ Ps in a, if R♮ ≜ R \ R⋆ ≠ ∅, then R♮ is an α-region in
a \ R⋆.
This proposition, together with the observation that α -regions that can be extracted from the corresponding subgames cannot attract positions contained in any retained α -region, allows for preserving all
the lower α -regions computed so far.
To exemplify the idea, Table 1 shows a simulation of the resulting procedure on the parity game of
Figure 1, where diamond shaped positions belong to player 0 and square shaped ones to its opponent 1.
Player 0 wins the entire game, hence the 0-region containing all the positions is a 0-dominion in this case.
Each cell of the table contains a computed region. A downward arrow denotes a region that is open in
the subgame where it is computed, while an upward arrow means that the region gets to be promoted to
the priority in the subscript. The index of each row corresponds to the measure of the region. Following
the idea sketched above, the first region obtained is the single-position 0-region {a} of measure 6, which
is open because of the two moves leading to d and e. The open 1-region {b, f, h} of measure 5 is, then,
formed by attracting both f and h to b, which is open in the subgame where {a} is removed. Similarly, the
0-region {c, j} of measure 4 and the 1-region {d} of measure 3 are open, once removed {a, b, f, h} and
{a, b, c, f, h}, respectively, from the game. At priority 2, the 0-region {e} is closed in the corresponding
subgame. However, it is not closed in the whole game, because of the move leading to c, i.e., to the
region of measure 4. Proposition 3.2 can now be applied and a promotion of {e} to 4 is performed,
resulting in the new 0-region {c, e, j} that resets 1-region {d}. The search resumes at the corresponding
priority and, after computing the extension of such a region via the attractor, we obtain that it is still open
in the corresponding subgame. Consequently, the 1-region {d} of measure 3 is recomputed and, then,
priority 1 is processed to build the 1-region {g}. The latter is closed in the associated subgame, but not
in the original game, because of a move leading to position d. Hence, another promotion is performed,
leading to the closed region of measure 3 at Column 3, which in turn triggers a promotion to 5. When
the promotion of 0-region {i} to priority 6 is performed, however, 0-region {c, e, j} of measure 4 is not
reset. This leads directly to the configuration in Column 6, after the maximisation of 0-region 6, which
attracts b, d, g, and j. Notice that, as prescribed by Proposition 3.3, the set {c, e}, highlighted by the grey
area, is still a 0-region. On the other hand, the set {f, h}, highlighted by the dashed line and originally
included in 1-region {b, d, f, g, h} of priority 5, needs to be reset, since it is not a 1-region any more. It
is, actually, an open 0-region instead. Now, 0-region 4 is closed in its subgame and it is promoted to 6.
As result of this promotion, we obtain the closed 0-region {a, b, c, d, e, g, i, j}, which is a dominion for
player 0.
Figure 1: Running example (positions with priorities: a/6, b/5, c/4, d/3, e/2, f/2, g/1, h/1, i/0, j/4; diamond-shaped positions belong to player 0, square-shaped ones to player 1).

Table 1: PP+ simulation.

     |  1         |  2         |  3        |  4              |  5         |  6
  6  | a ↓        | ···        | ···       | ···             | ···        | a, b, d, g, i, j ↓
  5  | b, f, h ↓  | ···        | ···       | b, d, f, g, h ↓ | ···        |
  4  | c, j ↓     | c, e, j ↓  | ···       | c, j ↓          | c, e, j ↓  | c, e ↑6
  3  | d ↓        | d ↓        | d, g ↑5   |                 |            |
  2  | e ↑4       |            |           | e ↑4            |            |
  1  |            | g ↑3       |           |                 |            |
  0  |            |            |           |                 | i ↑6       |
We can now provide the formal account of the PP+ dominion space. We shall denote with Rg the set
of region pairs in a and with Rg− and Rg+ the sets of open and closed region pairs, respectively.
Similarly to the priority promotion algorithm, during the search for a dominion, the computed regions,
together with their current measure, are kept track of by means of an auxiliary priority function r ∈ ∆ ≜
Ps → Pr, called region function. Given a priority p ∈ Pr, we denote by r(≥p) (resp., r(>p), r(<p), and
r(≡₂p)) the function obtained by restricting the domain of r to the positions with measure greater than or
equal to p (resp., greater than, lower than, and congruent modulo 2 to p). Formally, r(∼p) ≜ r↾{v ∈ Ps
: r(v) ∼ p}, for ∼ ∈ {≥, >, <, ≡₂}. By a^{≤p}_r ≜ a \ dom(r(>p)), we denote the largest subgame obtained
by removing from a all the positions in the domain of r(>p). The maximisation of a priority function
r ∈ ∆ is the unique priority function m ∈ ∆ such that m⁻¹(q) = atr^α_{a^{≤q}_m}(r⁻¹(q) ∩ Ps_{a^{≤q}_m}), for all priorities
q ∈ rng(r) with α ≜ q mod 2. In addition, we say that r is maximal above p ∈ Pr iff r(>p) = m(>p).
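The restriction r(>p) and the induced subgame are simple filtering operations. The following is a small sketch of ours with the region function stored as a dictionary; it only illustrates the definitions above.

# Sketch: the restriction r(>p) and the positions of the subgame a^{<=p}_r.
def restrict_above(r, p):
    return {v: q for v, q in r.items() if q > p}

def subgame_positions(all_positions, r, p):
    return set(all_positions) - set(restrict_above(r, p))

r = {"a": 6, "b": 5, "c": 4, "d": 3}
print(subgame_positions({"a", "b", "c", "d", "e"}, r, 4))  # {'c', 'd', 'e'}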
As opposed to the PP approach, where a promotion to p ≡₂ α resets all the regions lower than p,
here we need to take into account the fact that the regions of the opponent ᾱ are reset, while the ones of
player α are retained. In particular, we need to ensure that, as the search proceeds from p downward to
any priority q < p, the maximisation of the regions contained at priorities higher than q can never make
the region recorded in r at q invalid. To this end, we consider only priority functions r that satisfy the
requirement that, at all priorities, they contain regions w.r.t. the subgames induced by their maximisations
m. Formally, r ∈ R ⊆ ∆ is a region function iff, for all priorities q ∈ rng(m) with α ≜ q mod 2, it holds
that r⁻¹(q) ∩ Ps_{a^{≤q}_m} is an α-region in the subgame a^{≤q}_m, where m is the maximisation of r.
The status of the search of a dominion is encoded by the notion of state s of the dominion space,
which contains the current region function r and the current priority p reached by the search in a. Initially,
r coincides with the priority function pr of the entire game a, while p is set to the maximal priority pr(a)
available in the game. To each of such states s ≜ (r, p), we then associate the subgame at s, defined as
a_s ≜ a^{≤p}_r, representing the portion of the original game that still has to be processed.
The following state space specifies the configurations in which the PP+ procedure can reside and the
relative order that the successor function must satisfy.
Definition 3.4 (State Space for PP+). A PP+ state space is a tuple S ≜ ⟨S, ⊤, ≺⟩, where:
1. S ⊆ R × Pr is the set of all pairs s ≜ (r, p), called states, composed of a region function r ∈ R and
a priority p ∈ Pr such that (a) r is maximal above p and (b) p ∈ rng(r);
2. ⊤ ≜ (pr, pr(a));
3. two states s₁ ≜ (r₁, p₁), s₂ ≜ (r₂, p₂) ∈ S satisfy s₁ ≺ s₂ iff either (a) r₁(>q) = r₂(>q) and r₂⁻¹(q) ⊂
r₁⁻¹(q), for some priority q ∈ rng(r₁) with q ≥ p₁, or (b) both r₁ = r₂ and p₁ < p₂ hold.
Condition 1 requires that every region r⁻¹(q) with measure q > p be α-maximal, where α = q mod 2.
This implies that r⁻¹(q) ⊆ Ps_{a^{≤q}_r}. Moreover, the current priority p of the state must be one of the measures
recorded in r. In addition, Condition 2 specifies the initial state, while Condition 3 defines the ordering
relation among states, with which the successor operation has to comply. It asserts that a state s₁ is
strictly smaller than another state s₂ if either there is a region recorded in s₁ with some higher measure q
that strictly contains the corresponding one in s₂ and all regions with measure greater than q are equal in
the two states, or state s₁ is currently processing a lower priority than the one of s₂.
A region pair (R, α) is compatible with a state s ≜ (r, p) if it is an α-region in the current subgame
a_s. Moreover, if such a region is α-open in that game, it has to be α-maximal and needs to necessarily
contain the current region r⁻¹(p) of priority p in r.
Definition 3.5 (Compatibility Relation). An open quasi dominion pair (R, α) ∈ QD⁻ is compatible with
a state s ≜ (r, p) ∈ S, in symbols s ≻ (R, α), iff (1) (R, α) ∈ Rg_{a_s} and (2) if R is α-open in a_s then
R = atr^α_{a_s}(r⁻¹(p)).
M. Benerecetti, D. Dell’Erba, & F. Mogavero
37
Algorithm 2 provides the implementation for the query function compatible with the priority-promotion mechanism.
Line 1 simply computes the parity α of the priority to process in the state s ≜ (r, p). Line 2, instead, computes the attractor
w.r.t. player α in subgame a_s of the region contained in r at the current priority p. The resulting set R is, according to Proposition 3.1, an α-maximal α-region of a_s containing r⁻¹(p).

Algorithm 2: Query Function.
  signature ℜ : S → 2^Ps × {0, 1}
  function ℜ(s)
    let (r, p) = s in
    1: α ← p mod 2
    2: R ← atr^α_{a_s}(r⁻¹(p))
    3: return (R, α)

The promotion operation is based on the notion of best escape priority mentioned above, namely the priority of the lowest α-region in r that has an incoming move coming from the α-region, closed in the current subgame,
that needs to be promoted. This concept is formally defined as follows. Let I ≜ Mv ∩ ((R ∩ Ps_ᾱ) ×
(dom(r)\R)) be the interface relation between R and r, i.e., the set of ᾱ-moves exiting from R and reaching some position within a region recorded in r. Then, bep_ᾱ(R, r) is set to the minimal measure of those
regions that contain positions reachable by a move in I. Formally, bep_ᾱ(R, r) ≜ min(rng(r↾rng(I))). Such
a value represents the best priority associated with an α-region contained in r and reachable by ᾱ when
escaping from R. Note that, if R is a closed α-region in a_s, then bep_ᾱ(R, r) is necessarily of parity α and
greater than the measure p of R. This property immediately follows from the maximality of r above p.
Indeed, no move of an ᾱ-position can lead to an ᾱ-maximal ᾱ-region. For instance, for 0-region R = {e}
with measure 2 in Column 1 of Table 1, we have that I = {(e, a), (e, c)} and r↾rng(I) = {(a, 6), (c, 4)}.
Hence, bep₁(R, r) = 4.
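The best escape priority amounts to a scan of the opponent's moves leaving R. The following is a small sketch of ours under the same game representation used in the earlier sketches; it is illustrative only.

# Sketch: best escape priority of a closed α-region R w.r.t. the region function r.
def best_escape_priority(R, r, owner, moves, alpha):
    opp = 1 - alpha
    targets = {w for v in R if owner[v] == opp          # opponent positions in R
                 for w in moves[v] if w not in R and w in r}
    return min(r[w] for w in targets)                   # lowest measure reachable outside R

# For the example above: R = {e}, e owned by player 1, with moves to a (measure 6)
# and c (measure 4); the best escape priority is therefore 4.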
Algorithm 3 reports the pseudo-code of the successor function, which differs from the one proposed in [2] only in Line 5, where Proposition 3.3 is applied. Given the current state s and a compatible
region pair (R, α) open in the whole game as inputs, it produces a successor state s⋆ ≜ (r⋆, p⋆) in the dominion space. It first checks whether R is open also
in the subgame a_s (Line 1). If this is the case, it assigns the measure p to region R and stores it in the
new region function r⋆ (Line 2). The new current priority p⋆ is, then, computed as the highest priority lower than p in r⋆ (Line 3). If, on the other hand,
R is closed in a_s, a promotion, merging R with some other α-region contained in r, is required. The next
priority p⋆ is set to the bep of R for player ᾱ in the entire game a w.r.t. r (Line 4). Region R is, then,
promoted to priority p⋆ and all and only the regions of the opponent ᾱ with lower measure than p⋆ in the
region function r are reset by means of the completion operator defined in Section 2 (Line 5).

Algorithm 3: Successor Function.
  signature ↓ : ≻ → ∆ × Pr
  function s ↓ (R, α)
    let (r, p) = s in
    1: if (R, α) ∈ Rg⁻_{a_s} then
    2:   r⋆ ← r[R ↦ p]
    3:   p⋆ ← max(rng(r⋆(<p)))
       else
    4:   p⋆ ← bep_ᾱ(R, r)
    5:   r⋆ ← pr ⊎ r((≥p⋆)∨(≡₂p⋆))[R ↦ p⋆]
    6: return (r⋆, p⋆)
The following theorem asserts that the PP+ state space, together with the same query function of PP
and the successor function of Algorithm 3 is a dominion space.
Theorem 3.1 (PP+ Dominion Space). For a game a, the PP+ structure D ≜ ⟨a, S, ≻, ℜ, ↓⟩, where S
is given in Definition 3.4, ≻ is the relation of Definition 3.5, and ℜ and ↓ are the functions computed by
Algorithms 2 and 3, is a dominion space.
The PP+ procedure does reduce, w.r.t. PP, the number of resets needed to solve a game, and the
exponential worst-case game presented in [2] does not work any more. However, a worst case, which
is a slight modification of the one for PP, does exist for this procedure as well. Consider the game a_h
containing h chains of length 2 that converge into a single position of priority 0 with a self loop. The
i-th chain has a head of priority 2(h + 1) − i and a body composed of a single position with priority
i and a self loop. An instance of this game with h = 4 is depicted in Figure 2. The labels of the
positions correspond to the associated priorities. Intuitively, the execution depth of the PP+ dominion
space for this game is exponential, since the consecutive promotion operations performed on each chain
can simulate the increments of a partial form of binary counter, some of whose configurations are missing.
As a result, the number of configurations of the counter follows a Fibonacci-like sequence of the form
F(h) = F(h − 1) + F(h − 2) + 1, with F(0) = 1 and F(1) = 2.
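A quick way to see the growth of this recurrence is to tabulate it; the following few lines are our own check, not part of the paper.

# Quick check of F(h) = F(h-1) + F(h-2) + 1 with F(0) = 1, F(1) = 2,
# which counts the counter configurations realised by PP+ on the lower-bound
# family and grows like the Fibonacci numbers.
def F(h):
    a, b = 1, 2
    for _ in range(h):
        a, b = b, a + b + 1
    return a

print([F(h) for h in range(8)])  # [1, 2, 4, 7, 12, 20, 33, 54]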
Figure 2: The game a_h^{PP+} for h = 4 (chain heads with priorities 9, 8, 7, 6; chain bodies with priorities 3, 1, 2, 4; all chains converge into a position of priority 0 with a self loop).

The search procedure on this game starts by building the following four open regions: the 1-region {9}, the 0-region
{8}, the 1-region {7}, and the 0-region {6}. This state represents the configuration of the counter, where all four
digits are set to 0. The closed 0-region {4} is then found and promoted to
6. Now, the counter is set to 0001. After that, the closed 1-region {3} is
computed, which is promoted to 7. Due to the promotion to 7, the positions in
the 0-region with priority 6 are reset to their original priority, as they belong
to the opponent player. This releases the chain with head 6, which corresponds to the reset of the least significant digit of the counter caused by the increment of the second
one, i.e., the counter displays 0010. The search resumes at priority 6 and the 0-regions {6} and {4} are
computed once again. A second promotion of {4} to 6 is performed, resulting in the counter assuming
value 0011. When the closed 0-region {2} is promoted to 8, however, only the 1−region {7, 3} is reset,
leading to configuration 0101 of the counter. Hence, configuration 0100 is skipped. Similarly, when, the
counter reaches configuration 0111 and 1-region {1} is promoted to 9, the 0-regions {8, 2} and {6, 4}
are reset, leaving 1-region {7, 3} intact. This leads directly to configuration 1010 of the counter, skipping
configuration 1000.
An estimate of the depth of the PP+ dominion space on the game a_h is given by the following
theorem.
Theorem 3.2 (Execution-Depth Lower Bound). For all h ∈ ℕ, there exists a PP+ dominion space D_h^{PP+}
with k = 2h + 1 positions and priorities, whose execution depth is Fib(2(h + 4))/Fib(h + 4) − (h + 6) =
Θ(((1 + √5)/2)^{k/2}).
4 Delayed Promotion Policy
At the beginning of the previous section, we have observed that the time complexity of the dominion
search procedure srcD linearly depends on the execution depth of the underlying dominion space D.
This, in turn, depends on the number of promotions performed by the associated successor function and
is tightly connected with the reset mechanism applied to the regions with measure lower than the one
of the target region of the promotion. In fact, it can be proved that, when no resets are performed, the
number of possible promotions is bounded by a polynomial in the number of priorities and positions of
the game under analysis. Consequently, the exponential behaviours exhibited by the PP algorithm and
its enhanced version PP+ are strictly connected with the particular reset mechanism employed to ensure
the soundness of the promotion approach. The correctness of the PP+ method shows that the reset can be
restricted to only the regions of opposite parity w.r.t. the one of the promotion and, as we shall show in the
next section, this enhancement is also relatively effective in practice. However, we have already noticed
that this improvement does not suffice to avoid some pathological cases and we do not have any
finer criteria to avoid the reset of the opponent regions. Therefore, to further reduce such resets, in this
section we propose a finer promotion policy that tries in advance to minimise the necessity of application
M. Benerecetti, D. Dell’Erba, & F. Mogavero
39
of the reset mechanism. The new solution procedure is based on delaying the promotions of regions,
called locked promotions, that require the reset of previously performed promotions of the opponent
parity, until a complete knowledge of the current search phase is reached. Once only locked promotions
are left, the search phase terminates by choosing the highest measure p⋆ among those associated with
the locked promotions and performing all the postponed ones of the same parity as p⋆ altogether. In
order to distinguish between locked and unlocked promotions, the corresponding target priorities of the
performed ones, called instant promotions, are recorded in a supplementary set P. Moreover, to keep
track of the locked promotions, a supplementary partial priority function r̃ is used. In more detail, the
new procedure evolves exactly as the PP+ algorithm, as long as open regions are discovered. When a
closed one with measure p is provided by the query function, two cases may arise. If the corresponding
promotion is not locked, the destination priority q is recorded in the set P and the instant promotion
is performed similarly to the case of PP+. Otherwise, the promotion is not performed. Instead, it is
recorded in the supplementary function r̃, by assigning to R in r̃ the target priority q of that promotion
and in r its current measure p. Then, the positions in R are removed from the subgame and the search
proceeds at the highest remaining priority, as in the case R was open in the subgame. In case the region R
covers the entire subgame, all priorities available in the original game have been processed and, therefore,
there is no further subgame to analyse. At this point, the delayed promotion to the highest priority p⋆
recorded in r̃ is selected and all promotions of the same parity are applied at once. This is done by first
moving all regions from r̃ into r and then removing from the resulting function the regions of opposite
parity w.r.t. p⋆, exactly as done by PP+. The search, then, resumes at priority p⋆. Intuitively, a promotion
is considered as locked if its target priority is either (a) greater than some priority in P of opposite parity,
which would be otherwise reset, or (b) lower than the target of some previously delayed promotion
recorded in r̃, but greater than the corresponding priority set in r. The latter condition is required to
ensure that the union of a region in r together with the corresponding recorded region in r̃ is still a region.
Observe that the whole approach is crucially based on the fact that when a promotion is performed all the
regions having lower measure but same parity are preserved. If this was not the case, we would have no
criteria to determine which promotions need to be locked and which, instead, can be freely performed.
This idea is summarized by Table 2, which contains the execution of the new algorithm on the example in Figure 1.

Table 2: DP simulation.

     |  1         |  2         |  3                 |  4
  6  | a ↓        | ···        | ···                | a, b, d, g, i, j ↓
  5  | b, f, h ↓  | ···        | ···                |
  4  | c, j ↓     | c, e, j ↓  | ···                | c, e ↑6
  3  | d ↓        | d ↓        | d, g ↑5 (delayed)  |
  2  | e ↑4       |            |                    |
  1  |            | g ↑3       |                    |
  0  |            |            | i ↑6 (delayed)     |

The computation proceeds as for PP+, until the promotion of the
1-region {g}, shown in Column 2, occurs. This is an instant promotion to 3, since the only other promotion already computed
and recorded in P has value 4. Hence, it can be performed and
saved in P as well. Starting from priority 3 in Column 3, the
closed 1-region {d, g} could be promoted to 5. However, since
its target is greater than 4 ∈ P, it is delayed and recorded in r̃,
where it is assigned priority 5. At priority 0, a delayed promotion of 0-region {i} to priority 6 is encountered and registered,
since it would overtake priority 3 ∈ P. Now, the resulting subgame is empty. Since the highest delayed
promotion is the one to priority 6 and no other promotion of the same parity was delayed, 0-region {i}
is promoted and both the auxiliary priority function r̃ and the set of performed promotions P are emptied. The previously computed 0-region {c, e, j} has the same parity and, therefore, it is not reset, while the
positions in both 1-regions {b, f, h} and {d, g} are reset to their original priorities. After maximisation
of the newly created 0-region {a, i}, positions b, d, g, and j get attracted as well. This leads to the first
cell of Column 4, where 0-region {a, b, d, g, i, j} is open. The next priority to process is then 4, where
0-region {c, e}, the previous 0-region {c, e, j} purged of position j, is now closed in the corresponding
A Delayed Promotion Policy for Parity Games
40
subgame and gets promoted to 6. This results in a 0-region closed in the entire game, hence, a dominion
for player 0 has been found. Note that postponing the promotion of 1-region {d, g} allowed a reduction
in the number of operations. Indeed, the redundant maximisation of 1-region {b, f, h} is avoided.
It is worth noting that this procedure only requires a linear number of promotions, precisely (h + 1)/2, on the lower bound game a_h^PP+ for PP+. This is due to the fact that all resets are performed on regions
that are not destination of any promotion. Also, the procedure appears to be much more robust, in terms
of preventing resets, than the PP+ technique alone, to the point that it does not seem obvious whether an
exponential lower bound even exists. Further investigation is, however, needed for a definite answer on
its actual time complexity.
DP Dominion Space. As it should be clear from the above informal description, the delayed promotion
mechanism is essentially a refinement of the one employed in PP+. Indeed, the two approaches share
all the requirements on the corresponding components of a state, on the orderings, and on compatibility
relations. However, DP introduces in the state two supplementary elements: a partial priority function
er, which collects the delayed promotions that were not performed on the region function r, and a set of
priorities P, which collects the targets of the instant promotions performed. Hence, in order to formally
define the corresponding dominion space, we need to provide suitable constraints connecting them with
the other components of the search space. The role of function er is to record the delayed promotions
obtained by moving the corresponding positions from their priority p in r to the new measure q in er.
Therefore, as dictated by Proposition 3.2, the union of r⁻¹(q) and er⁻¹(q) must always be a region in the subgame a_r^≤q. In addition, er⁻¹(q) can only contain positions whose measure in r is of the same parity as q and recorded in r at some lower priority greater than the current one p. Formally, we say that er ∈ ∆⇀ ≜ Ps ⇀ Pr is aligned with a region function r w.r.t. p if, for all priorities q ∈ rng(er), it holds that (a) er⁻¹(q) ⊆ dom(r^((>p)∧(<q)∧(≡2 q))) and (b) r⁻¹(q) ∪ er⁻¹(q) is an α-region in a_r^≤q with α ≡2 q. The state space for DP is, therefore, defined as follows.
Definition 4.1 (State Space for DP). A DP state space is a tuple S ≜ ⟨S, ⊤, ≺⟩, where:
1. S ⊆ S_PP+ × ∆⇀ × 2^Pr is the set of all triples s ≜ ((r, p), er, P), called states, composed of a PP+ state (r, p) ∈ S_PP+, a partial priority function er ∈ ∆⇀ aligned with r w.r.t. p, and a set of priorities P ⊆ Pr;
2. ⊤ ≜ (⊤_PP+, ∅, ∅);
3. s ≺ s′ iff ŝ ≺_PP+ ŝ′, for any two states s ≜ (ŝ, _, _), s′ ≜ (ŝ′, _, _) ∈ S.
The second property we need to enforce is expressed in the compatibility relation connecting the
query and successor functions for DP and regards the closed region pairs that are locked w.r.t. the current
state. As stated above, a promotion is considered locked if its target priority is either (a) greater than
some priority in P of opposite parity or (b) lower than the target of some previously delayed promotion
recorded in er, but greater than the corresponding priority set in r. Condition (a) is the one characterising
the delayed promotion approach, as it reduces the number of resets of previously promoted regions. The
two conditions are expressed by the following two formulas, respectively, where q is the target priority
of the blocked promotion.
φa(q, P) ≜ ∃l ∈ P . l ≢2 q ∧ l < q
φb(q, r, er) ≜ ∃v ∈ dom(er) . r(v) < q ≤ er(v)
Hence, an α-region R is called α-locked w.r.t. a state s ≜ ((r, p), er, P) if the predicate φLck(q, s) ≜ φa(q, P) ∨ φb(q, r, er) is satisfied, where q = bepα(R, r ⊎ er). In addition to the compatibility constraints for PP+, the compatibility relation for DP requires that any α-locked region, possibly returned by the query function, be maximal and contain the region r⁻¹(p) associated to the priority p of the current state.
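For readers who prefer an operational view, the two predicates admit a direct transcription. The following Python sketch is ours, not part of the original formalisation, and assumes that P is a set of priorities and that r and er are dictionaries mapping positions to priorities:

    def phi_a(q, P):
        # (a): q exceeds some recorded instant-promotion target of opposite parity
        return any(l % 2 != q % 2 and l < q for l in P)

    def phi_b(q, r, er):
        # (b): some already delayed position v satisfies r(v) < q <= er(v)
        return any(r[v] < q <= er[v] for v in er)

    def locked(q, r, er, P):
        # phi_Lck(q, s) for a state s = ((r, p), er, P)
        return phi_a(q, P) or phi_b(q, r, er)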
Definition 4.2 (Compatibility Relation for DP). An open quasi dominion pair (R, α) ∈ QD⁻ is compatible with a state s ≜ ((r, p), _, _) ∈ S iff (1) (R, α) ∈ Rg_as and (2) if R is α-open in a_s or it is α-locked w.r.t. s then R = atrα_as(r⁻¹(p)).
Algorithm 4 implements the successor function for DP. The pseudo-code on the right-hand side
consists of three macros, namely #Assignment, #InstantPromotion, and #DelayedPromotion, used by
the algorithm. Macro #Assignment(ξ ) performs the insertion of a new region R into the region function
r. In the presence of a blocked promotion, i.e., when the parameter ξ is set to f, the region is also recorded
in er at the target priority of the promotion. Macro #InstantPromotion corresponds to the DP version of
the standard promotion operation of PP+. The only difference is that it must also take care of updating
the supplementary elements er and P. Macro #DelayedPromotion, instead, is responsible for the delayed
promotion operation specific to DP.
Algorithm 4: Successor Function.

    signature ↓ : ((∆ × Pr) × ∆⇀ × 2^Pr) × QD⁻ ⇀ (∆ × Pr) × ∆⇀ × 2^Pr

          function s ↓ (R, α)
              let ((r, p), er, P) = s in
   1              if (R, α) ∈ Rg⁻_as then
   2-5                #Assignment(t)
                  else
   6                  q ← bepα(R, r ⊎ er)
   7                  if φLck(q, s) then
   8                      br ← er[R ↦ q]
   9                      if R ≠ Ps_as then
  10-13                       #Assignment(f)
                          else
  14-17                       #DelayedPromotion
                      else
  18-21                   #InstantPromotion
  22              return ((r⋆, p⋆), er⋆, P⋆)

Assignment & Promotion Macros.

    macro #Assignment(ξ)
   1      r⋆ ← r[R ↦ p]
   2      p⋆ ← max(rng(r⋆^(<p)))
   3      er⋆ ← #if ξ #then er #else br
   4      P⋆ ← P

    macro #DelayedPromotion
   1      p⋆ ← max(rng(br))
   2      r⋆ ← pr ⊎ (r ⊎ br)^((≥p⋆)∨(≡2 p⋆))
   3      er⋆ ← ∅
   4      P⋆ ← ∅

    macro #InstantPromotion
   1      p⋆ ← q
   2      r⋆ ← pr ⊎ r^((≥p⋆)∨(≡2 p⋆))[R ↦ p⋆]
   3      er⋆ ← er^(>p⋆)
   4      P⋆ ← (P ∩ rng(r⋆)) ∪ {p⋆}
If the current region R is open in the subgame a_s, the main algorithm proceeds, similarly to Algorithm 3, by assigning to it the current priority p in r. This is done by calling macro #Assignment with parameter t. Otherwise, the region is closed and a promotion should be performed at priority q, corresponding to the bep of that region w.r.t. the composed region function r ⊎ er. In this case, the algorithm
first checks whether such promotion is locked w.r.t. s at Line 7. If this is not the case, then the promotion
is performed as in PP+, by executing #InstantPromotion, and the target is kept track of in the set P. If,
instead, the promotion to q is locked, but some portion of the game still has to be processed, the region
is assigned its measure p in r and the promotion to q is delayed and stored in er. This is done by executing #Assignment with parameter f. Finally, in case the entire game has been processed, the delayed
promotion to the highest priority recorded in er is selected and applied. Macro #DelayedPromotion is
executed, thus merging r with er. Function er and set P are, then, erased, in order to begin a new round of
the search. Observe that, when a promotion is performed, whether instant or delayed, we always preserve
the underlying regions of the same parity, as done by the PP+ algorithm. This is a crucial step in order
to avoid the pathological exponential worst case for the original PP procedure.
The soundness of the solution procedure relies on the following theorem.
Theorem 4.1 (DP Dominion Space). For a game a, the DP structure D, consisting of the state space S given in Definition 4.1, the compatibility relation of Definition 4.2, and the functions ℜ and ↓ computed by Algorithms 2 and 4, where in the former the assumption "let (r, p) = s" is replaced by "let ((r, p), _, _) = s", is a dominion space.
It is immediate to observe that the following mapping h : ((r, p), _, _) ∈ S_DP ↦ (r, p) ∈ S_PP+, which
takes DP states to PP+ states by simply forgetting the additional elements er and P, is a homomorphism.
This, together with a trivial calculation of the number of possible states, leads to the following theorem.
Theorem 4.2 (DP Size & Depth Upper Bounds). The size of the DP dominion space D_a for a game a ∈ PG with n ∈ N⁺ positions and k ∈ [1, n] priorities is bounded by 2^k · k^(2n). Moreover, its depth is not greater than the one of the PP+ dominion space D_a^PP+ for the same game.
5 Experimental Evaluation
The technique proposed in the paper has been implemented in the tool PGSolver [9], which collects
implementations of several parity game solvers proposed in the literature and provides benchmarking
tools that can be used to evaluate the solver performances.1
Figure 3 compares the running times of the new algorithms, PP+ and DP, against the original version
PP2 and the well-known solvers Rec and Str, implementing the recursive algorithm [23] and the strategy
improvement technique [22], respectively. This first pool of benchmarks is taken from [2] and involves
2000 random games of size ranging from 1000 to 20000 positions and 2 outgoing moves per position.
Interestingly, random games with very few moves prove to be much more challenging for the priority
promotion based approaches than those with a higher number of moves per position, and often require
a much higher number of promotions. Since the behaviour of the solvers is typically highly variable,
even on games of the same size and priorities, to summarise the results we took the average running
time on clusters of games. Therefore, each point in the graph shows the average time over a cluster
of 100 different games of the same size: for each size value n, we chose the numbers k = n · i/10 of
priorities, with i ∈ [1, 10], and 10 random games were generated for each pair n and k. We set a time-out
to 180 seconds (3 minutes). Solver PP+ performs slightly better than PP, while DP shows a much more
convincing improvement on the average time. All the other solvers provided in PGSolver, including
the Dominion Decomposition [14] and the Big Step [19] algorithms, perform quite poorly on those
games, hitting the time-out already for very small instances. Figure 3 shows only the best performing
ones on the considered games, namely Rec and Str. Similar experiments were also conducted on random
games with a higher number of moves per position and up to 100000 positions. The resulting games
turn out to be very easy to solve by all the priority promotion based approaches. The reason seems to
be that the higher number of moves significantly increases the dimension of the computed regions and,
consequently, also the chances to find a closed one. Indeed, the number of promotions required by PP+
and DP on all those games is typically zero, and the whole solution time is due exclusively to a very
limited number of attractors needed to compute the few regions contained in the games. We reserve the
presentation of the results for the extended version.
To further stress the DP technique in comparison with PP and PP+, we also generated a second
pool of much harder benchmarks, containing more than 500 games, each with 50000 positions, 12000
priorities and 2 moves per position. We selected as benchmarks only random games whose solution
1 All the experiments were carried out on a 64-bit 3.1GHz Intel® quad-core machine, with i5-2400 processor and 8GB of RAM, running Ubuntu 12.04 with Linux kernel version 3.2.0. PGSolver was compiled with OCaml version 2.12.1.
2 The version of PP used in the experiments is actually an improved implementation of the one described in [2].
requires PP+ between 30 and 6000 seconds.

Figure 3: Solution times on random games from [2].

Figure 4: Comparison between PP+ and DP on random games with 50000 positions.

The results comparing PP+ and DP are reported in Figure 4
on a logarithmic scale. The figure shows that, in few cases, PP+ actually performs better than DP. This
is due to the fact that the two algorithms follow different solution paths within the dominion space and
that delaying promotions may also defer the discovery of a closed dominion. Nonetheless, the DP policy
does pay off significantly on the vast majority of the benchmarks, often solving a game between two to
eight times faster than PP+, as witnessed by the points below the dash-dotted line labeled 2× in Figure 4.
In [2] it is shown that PP solves all the known exponential worst cases for the other solvers without
promotions and, clearly, the same holds of DP as well. As a consequence, DP only requires polynomial
time on those games and the experimental results coincide with the ones for PP.
6 Discussion
Devising efficient algorithms that can solve parity games well in practice is a crucial endeavour towards
enabling formal verification techniques, such as model checking of expressive temporal logics and automatic synthesis, in practical contexts. To this end, a promising new solution technique, called priority
promotion, was recently proposed in [2]. While the technique seems very effective in practice, the approach still admits exponential behaviours. This is due to the fact that, to ensure correctness, it needs to
forget previously computed partial results after each promotion. In this work we presented a new promotion policy that delays promotions as much as possible, in the attempt to reduce the need to partially
reset the state of the search. Not only does the new technique, like the original one, solve in polynomial time all the exponential worst cases known for other solvers, but it also requires polynomial time on the worst cases of the priority promotion approach. The actual complexity of the algorithm is, however,
currently unknown. Experiments on randomly generated games also show that the new technique often
outperforms the original priority promotion technique, as well as the state-of-the-art solvers proposed in
the literature.
References
[1] K. Apt & E. Grädel (2011): Lectures in Game Theory for Computer Scientists. Cambridge University Press,
doi:10.1017/CBO9780511973468.
[2] M. Benerecetti, D. Dell’Erba & F. Mogavero (2016): Solving Parity Games via Priority Promotion. In:
CAV’16, LNCS 9780 (Part II), Springer, pp. 270–290, doi:10.1007/978-3-319-41540-6_15.
[3] K. Chatterjee, L. Doyen, T.A. Henzinger & J.-F. Raskin (2010): Generalized Mean-Payoff and
Energy Games.
In: FSTTCS’10, LIPIcs 8, Leibniz-Zentrum fuer Informatik, pp. 505–516,
doi:10.4230/LIPIcs.FSTTCS.2010.505.
[4] A. Condon (1992): The Complexity of Stochastic Games. IC 96(2), pp. 203–224, doi:10.4230/LIPIcs.FSTTCS.2010.505.
[5] A. Ehrenfeucht & J. Mycielski (1979): Positional Strategies for Mean Payoff Games. IJGT 8(2), doi:10.1007/BF01768705.
[6] E.A. Emerson & C.S. Jutla (1988): The Complexity of Tree Automata and Logics of Programs (Extended
Abstract). In: FOCS’88, IEEE Computer Society, pp. 328–337, doi:10.1109/SFCS.1988.21949.
[7] E.A. Emerson & C.S. Jutla (1991): Tree Automata, muCalculus, and Determinacy. In: FOCS’91, IEEE
Computer Society, pp. 368–377, doi:10.1109/SFCS.1988.21949.
[8] E.A. Emerson, C.S. Jutla & A.P. Sistla (1993): On Model Checking for the muCalculus and its Fragments.
In: CAV’93, LNCS 697, Springer, pp. 385–396, doi:10.1016/S0304-3975(00)00034-7.
[9] O. Friedmann & M. Lange (2009): Solving Parity Games in Practice. In: ATVA’09, LNCS 5799, Springer,
pp. 182–196, doi:10.1007/978-3-642-04761-9_15.
[10] E. Grädel, W. Thomas & T. Wilke (2002): Automata, Logics, and Infinite Games: A Guide to Current
Research. LNCS 2500, Springer, doi:10.1007/3-540-36387-4.
[11] V.A. Gurevich, A.V. Karzanov & L.G. Khachivan (1990): Cyclic Games and an Algorithm to Find Minimax
Cycle Means in Directed Graphs. USSRCMMP 28(5), pp. 85–91, doi:10.1016/0041-5553(88)90012-2.
[12] M. Jurdziński (1998): Deciding the Winner in Parity Games is in UP ∩ co-UP. IPL 68(3), pp. 119–124,
doi:10.1016/S0020-0190(98)00150-1.
[13] M. Jurdziński (2000): Small Progress Measures for Solving Parity Games. In: STACS’00, LNCS 1770,
Springer, pp. 290–301, doi:10.1007/3-540-46541-3_24.
[14] M. Jurdziński, M. Paterson & U. Zwick (2008): A Deterministic Subexponential Algorithm for Solving Parity
Games. SJM 38(4), pp. 1519–1532, doi:10.1137/070686652.
[15] O. Kupferman & M.Y. Vardi (1998): Weak Alternating Automata and Tree Automata Emptiness. In: STOC'98, Association for Computing Machinery, pp. 224–233, doi:10.1145/276698.276748.
[16] O. Kupferman, M.Y. Vardi & P. Wolper (2000): An Automata Theoretic Approach to Branching-Time Model
Checking. JACM 47(2), pp. 312–360, doi:10.1145/333979.333987.
[17] A.W. Mostowski (1984): Regular Expressions for Infinite Trees and a Standard Form of Automata. In:
SCT’84, LNCS 208, Springer, pp. 157–168, doi:10.1007/3-540-16066-3_15.
[18] A.W. Mostowski (1991): Games with Forbidden Positions. Technical Report, University of Gdańsk, Gdańsk,
Poland.
[19] S. Schewe (2007): Solving Parity Games in Big Steps. In: FSTTCS’07, LNCS 4855, Springer, pp. 449–460,
doi:10.1007/978-3-540-77050-3_37.
[20] S. Schewe (2008): An Optimal Strategy Improvement Algorithm for Solving Parity and Payoff Games. In:
CSL’08, LNCS 5213, Springer, pp. 369–384, doi:10.1007/978-3-540-87531-4_27.
[21] S. Schewe, A. Trivedi & T. Varghese (2015): Symmetric Strategy Improvement. In: ICALP’15, LNCS 9135,
Springer, pp. 388–400, doi:10.1007/978-3-662-47666-6_31.
[22] J. Vöge & M. Jurdziński (2000): A Discrete Strategy Improvement Algorithm for Solving Parity Games. In:
CAV’00, LNCS 1855, Springer, pp. 202–215, doi:10.1007/10722167_18.
[23] W. Zielonka (1998): Infinite Games on Finitely Coloured Graphs with Applications to Automata on Infinite
Trees. TCS 200(1-2), pp. 135–183, doi:10.1016/S0304-3975(98)00009-7.
[24] U. Zwick & M. Paterson (1996): The Complexity of Mean Payoff Games on Graphs. TCS 158(1-2), pp.
343–359, doi:10.1016/0304-3975(95)00188-3.
| 8 |
COMPARING DECISION SUPPORT TOOLS FOR CARGO SCREENING PROCESSES
Peer-Olaf Siebers(a), Galina Sherman(b), Uwe Aickelin(c) , David Menachof(d)
(a, c) School of Computer Science, Nottingham University, Nottingham NG8 1BB, UK.
(b, d) Business School, Hull University, Hull HU6 7RX, UK.
(a) [email protected], (b) [email protected], (c) [email protected], (d) [email protected]
ABSTRACT
When planning to change operations at ports there
are two key stakeholders with very different interests
involved in the decision making processes. Port
operators are attentive to their standards, a smooth
service flow and economic viability while border
agencies are concerned about national security. The
time taken for security checks often interferes with the
compliance to service standards that port operators
would like to achieve.
Decision support tools such as Cost-Benefit Analysis or Multi Criteria Analysis are useful aids to better understand the impact of changes to a system. They allow investigating future scenarios and help to find solutions that are acceptable to all parties involved
in port operations.
In this paper we evaluate two different modelling
methods, namely scenario analysis and discrete event
simulation. These are useful for driving the decision
support tools (i.e. they provide the inputs the decision
support tools require). Our aims are, on the one hand, to
guide the reader through the modelling processes and,
on the other hand, to demonstrate what kind of decision
support information one can obtain from the different
modelling methods presented.
Keywords: port operation, service standards, cargo
screening, scenario analysis, simulation, cost benefit
analysis, multi criteria analysis
1. INTRODUCTION
Businesses are interested in the trade-off between
the cost of risk mitigation and the expected losses of
disruptions (Kleindorfer and Saad 2005). Airports and
seaports face an additional complexity when conducting
such risk analysis. In these cases there are two key stakeholders with different interests involved in the decision
processes concerning the port operation or expansion
(Bichou 2004).
On the one hand we have port operators which are
service providers and as such interested in a smooth
flow of port operations as they have to provide certain
service standards (e.g. service times) and on the other
hand we have the border agency which represent
national security interests that need to be considered.
Checks have to be conducted to detect threats such as
weapons, smuggling and sometimes even stowaways. If
the security checks take too long they can compromise
the service standard targets to be achieved by the port
operators. Besides these two conflicting interests there is
also the cost factor for security that needs to be kept in
mind. Security checks require expensive equipment and
well trained staff. However, the consequences for the
public of undetected threats passing the border can be
severe. It is therefore in the interest of all involved
parties to find the right balance between service,
security, and costs.
But how can we decide the level of security
required to guarantee a certain threshold of detection of
threats while still being economically viable and not
severely disrupting the process flow? A tool frequently
used by business and government officials to support
the decision making is Cost-Benefit Analysis (CBA)
(Hanley and Spash 1993). While CBA is useful in our
case to find the right balance between security and costs
it struggles to provide decision support for the
consideration of service quality. This is due to the fact
that service quality is difficult to express in
monetary terms. Multi Criteria Analysis (MCA) is a
tool that allows taking a mixture of monetary and non
monetary inputs into account. It can use the results of a
CBA as monetary input and service quality estimators
as non monetary input and produce some tables and
graphs to show the relation between cost/benefits of
different options (DCLG 2009).
Different modelling methods can be used to
conduct CBA. Damodaran (2007) lists Scenario
Analysis (SA), Decision Trees (DT) and Monte Carlo
Simulation (MCS) as being the most common ones in
the field of Risk Analysis. Less frequently used in Risk
Analysis but often used in Operational Research to
investigate different operational practices is Discrete
Event Simulation (DES) (Turner and Williams 2005;
Wilson 2005). Depending on the world view the
modeller adopts DES models can either be process
oriented or object oriented (Burns and Morgeson 1988).
Besides being useful for estimating factors required for
CBA, DES models also allow investigating how well service standards are reached in the system under
investigation as it considers delays that one experiences
while moving through the system (Laughery et al 1998).
It is therefore well suited, in conjunction with CBA, to
provide all the inputs required for a MCA. However,
sometimes it is even possible to find a solution that does
not require any investment. It might be feasible to
achieve the goal simply by changing certain working
routines. In such cases DES might be able to provide
the information required for making a better informed
decision without the need to conduct a full MCA.
In previous work (Sherman et al 2010) we
compared the efficiency of different methods for
conducting CBA by using the same case study. This
strategy allowed us to contrast all modelling methods
with a focus on the methods themselves, avoiding any
distortions caused by differences in the chosen case
studies. In this paper we continue our investigation but
this time we focus on the two competing DES
approaches and what information they can provide to
assist our analysis.
The remainder of the paper is structured as follows.
In Section 2 we introduce our case study system, the
Port of Calais. In Section 3 we show in detail how to
conduct a CBA using SA for our case study system. In
Section 4 we discuss the additional features DES has to
offer and we explain how to implement a model of the
case study system using different world views. Once we
discussed the specific features of the different
implementations, we demonstrate by an experiment
how to use DES to provide decision support. In Section
5 we summarise the findings from Sherman et al (2010)
and this paper in the form of a table that shows real world
phenomena that can be modelled with the different
modelling methods, the data requirements, and the
decision support information that is provided. Finally
we suggest some future research activities.
2. CASE STUDY
Our case study involves the cargo screening
facilities of the ferry Port of Calais (France). In this area
of the port there are two security zones, one operated by
French authorities and one operated by the UK Border
Agency (UKBA). The diagram in Figure 1 shows the
process flow in this area.
According to the UKBA, between April 2007 and
April 2008 about 900,000 lorries passed the border and
approximately 0.4% of the lorries had additional human
freight (UKBA 2008). These clandestines, as they are called by the UKBA, are people who are trying to enter
the UK illegally - i.e. without having proper papers and
documents.
The search for clandestines is organised in three
major steps, one by France and two by the UKBA. On
the French site all arriving lorries are screened, using
passive millimetre wave scanners for soft sided lorries
and heartbeat detectors for hard sided lorries. If lorries
are classified as suspicious after the screening further
investigations are undertaken. For soft sided lorries
there is a second test with CO2 probes and if the result
is positive the respective lorry is opened. For hard sided
lorries there is no second test and they are opened
immediately.
Although 100% of lorries are screened at the
French site, not all clandestines are found. This shows
that the sensor efficiency in the field is less than 100%.
Unfortunately it is not known for any of the sensors
how much lower their efficiency is and field tests cannot
be undertaken as it would be unethical to deliberately
lock someone in the back of a lorry. Another problem
with estimating the sensor efficiency is that the sensor
data has to be interpreted by human operators, who
might misinterpret them. Again, no data exist about the
efficiency of operators.
On the British site only a certain percentage of
lorries (currently 33%) is searched at the British sheds.
Here a mixture of measures is used for the inspection,
e.g. CO2 probes, dogs, and opening lorries. Once the
lorries passed the British sheds they will park in the
berth to wait for the ferry. In the berth there are mobile
units operating that search as many of the parked lorries
as possible before the ferry arrives, using the same
mixture of measures than in the sheds. As shown in
Table 1 only about 50% of the clandestines detected
were found by the French, about 30% in the sheds and
20% by the mobile units in the berth. The overall
number of clandestines that are not found by the
authorities is of course unknown.
Table 1: Statistics from Calais

Statistic                                                    Value
Total number of lorries entering Calais harbour            900,000
Total number of positive lorries found                        3,474
Total number of positive lorries found on French site        1,800
Total number of positive lorries found on UK site            1,674
   … In UK Sheds                                                890
   … In UK Berth                                                784
Figure 1: Border control operations at Calais

The key question is: What measures should we
employ to reduce the overall number of clandestines
that make it through to the UK? One way to improve
detection rates could be to intensify the search
activities. As we can see, clandestines are found at all
stages of the cargo screening process and we can be
sure that not all clandestines will be found in the end.
However, when increasing search activities we also
have to consider the disruptions this might cause to
traffic flow. As mentioned in Section 1 we have
different stakeholders with different interests involved
in the decision making process. The two key objectives
on which to base the decision are as follows: minimise
costs (for the tax payer) and maximise service quality
(by minimising waiting times and disruptions).
3. USING CBA & SA FOR DECISION SUPPORT
CBA seeks to value the expected impacts of an
option in monetary terms (DCLG 2009). It involves
comparing the Total Expected Costs (TEC) of each
option against the total expected benefits, to see
whether the benefits outweigh the costs, and by how
much (Nas 1996). The aim is to determine the
efficiency of the interventions relative to each other and
the status quo. In our case total expected costs comprise
the investments we have to make to increase the search
activities. This might include things like increasing staff
level, staff training, getting new sensors with better
technology or building new search facilities. The total
expected benefits will be the money that is saved for
each clandestine that does not make it into the UK.
Clandestines that made it to the UK are very likely to
work illegally and therefore cause some income tax
losses. Furthermore, they will not pay their
contributions to health insurance and pensions.
Therefore, the government will have to look after them
once they are ill or old.
Uncertainty in the CBA parameters is often
evaluated using a sensitivity analysis, which shows how
the results are affected by changes in the parameters. In
our case we have one parameter for which it is
impossible to collect any data: the number of positive
lorries that make it into the UK. A positive lorry is a
lorry that has at least one clandestine on board. Usually
clandestines attempt to cross the border in small groups.
Conducting a sensitivity analysis cannot solve this
problem but it can show the impact of changing the key
parameter on the decision variable and can
consequently provide some additional information to
aid the decision making process and improve
confidence in the decision made in the end.
SA is the process of analysing possible future
events by considering alternative possible outcomes
(Refsgaarda et al 2007). To define our scenarios we
consider the following two factors: Traffic Growth (TG)
and Positive Lorry Growth (PLG). For each of the
factors we define three scenarios. Table 2 shows the
factors and scenarios investigated and an estimate of the
likelihood for each of the scenarios to occur in the
future. A justification for the scenario definitions can be
found in the following two paragraphs.
Table 2: Two factors with three scenarios each and their probability of occurrence

Factor 1: TG          p(TG)
  Scenario 1:   0%     0.25
  Scenario 2:  10%     0.50
  Scenario 3:  20%     0.25

Factor 2: PLG         p(PLG)
  Scenario 1: -50%     0.33
  Scenario 2:   0%     0.33
  Scenario 3:  25%     0.33
Our TG scenarios are based on estimates by the
port authorities who are planning to build a new
terminal in Calais in 2020 to cope with all the additional
traffic expected. According to DHB (2008) between
2010 and 2020 the traffic in the Port of Dover is
expected to double. We assume that this is also
applicable to the Port of Calais and have therefore
chosen the 10% per annum growth scenario as the most
likely one, while the other two are equally likely.
Our PLG scenarios reflect possible political
changes. The number of people trying to enter the UK
illegally depends very much on the economic and
political conditions in their home countries. This is
difficult to predict. If the situation stabilises then we
expect no changes in the number of attempts to illegally
enter the UK. However, as our worst case scenario we
assume a 25% growth. Another factor that needs to be
considered is that trafficking attempts very much
depend on the tolerance of the French government to let
clandestines stay near the ferry port while waiting for
the opportunity to get on one of the lorries. However, it
is currently under discussion that the French authority
might close the camps where clandestines stay which
would reduce the number of potential illegal immigrants
drastically. Therefore we added a scenario where the
number of attempts is drastically reduced by 50%. As
there is currently no indication of which of these
scenarios is most likely to occur we have assigned them
equal probabilities of occurrence. We will assume
that any changes in clandestine numbers will
proportionally affect successful and unsuccessful
clandestines.
The question that needs to be answered here is how
the UKBA should respond to these scenarios. We
assume that there are three possible responses: not
changing the search activities, increasing the search
activities by 10% or increasing the search activities by
20%. For the CBA Search Growth (SG) is our primary
decision variable.
The cost for increasing the search activities in
Calais is difficult to estimate, as there is a mixture of
fixed and variable costs and operations are often jointly
performed by French, British and private contractors.
However, if we concentrate on UKBA’s costs, we can
arrive at some reasonable estimates, if we assume that
any increase in searches would result in a percentage
increase in staff and infrastructure cost. Thus we
estimate that a 10% increase in search activity would
cost £5M and a 20% increase £10M.
Now we need to estimate the benefits we would
expect when increasing the search activities. First we
need to derive a figure for the number of Positive
Lorries Missed (PLM) and how much each of these
lorries cost the tax payer. A best guess of “successful”
clandestines is approximately 50 per month (600 per
year). With an average of four clandestines on each
positive lorry an estimated 150 positive lorries are
missed each year. It is estimated that each clandestine
reaching the UK costs the government approx. £20,000
per year. Moreover, UKBA estimates that the average
duration of a stay of a clandestine in the UK is five
years, so the total cost of each clandestine slipping
through the search in Calais is £100,000, resulting in
£400,000 per PLM.
It is probably a fair assumption that an increase in
searches will yield a decrease in the number of positive
lorries and an increase in traffic will yield an increase in
PLM. In the absence of further information we assume linear relationships between these parameters. Table 3 shows the number of PLM assuming there is no PLG.
Equation 1 has been used to produce the table.
PLM(TG, SG) = PLM · (1 + TG) / (1 + SG)    (1)
Table 3: PLM (for PLG = 0%)

            SG 0%    SG +10%   SG +20%
TG 0%      150.00    136.36    125.00
TG 10%     165.00    150.00    137.50
TG 20%     180.00    163.64    150.00
With this information we can now calculate the Economic Cost (EC) for all the different scenarios (see Table 4) using Equation 2, where £400,000 is the cost per PLM derived above:

EC(TG, SG, PLG) = PLM(TG, SG) · (1 + PLG) · £400,000    (2)
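For example, with TG = 10%, SG = 20% and PLG = 0%, Equation 2 gives EC = 137.50 · 1.00 · £400,000 = £55,000,000, the corresponding entry in Table 4.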
Table 4: EC for different SG options

SG 0%        PLG -50%       PLG 0%         PLG 25%
TG 0%       £30,000,000    £60,000,000    £75,000,000
TG 10%      £33,000,000    £66,000,000    £82,500,000
TG 20%      £36,000,000    £72,000,000    £90,000,000

SG 10%       PLG -50%       PLG 0%         PLG 25%
TG 0%       £27,272,727    £54,545,455    £68,181,818
TG 10%      £30,000,000    £60,000,000    £75,000,000
TG 20%      £32,727,273    £65,454,545    £81,818,182

SG 20%       PLG -50%       PLG 0%         PLG 25%
TG 0%       £25,000,000    £50,000,000    £62,500,000
TG 10%      £27,500,000    £55,000,000    £68,750,000
TG 20%      £30,000,000    £60,000,000    £75,000,000
To be able to calculate the benefit we need to know
the combined probabilities of each scenario’s likelihood
to occur (listed in Table 2). We get this by multiplying
the probabilities of the individual scenarios as shown in
Equation 3. The results of these calculations can be
found in Table 5.
p(TG, PLG) = p(TG) · p(PLG)    (3)
Table 5: Combined probabilities

            PLG -50%   PLG 0%   PLG 25%
TG 0%        0.0833    0.0833    0.0833
TG 10%       0.1667    0.1667    0.1667
TG 20%       0.0833    0.0833    0.0833
Now we multiply the EC from Table 4 with the probabilities from Table 5 to obtain the TEC for each SG option, using Equation 4. The results are shown in Table 6. The final step is to calculate the Net Benefit (NB) by using SG=0 as the base case. The NB can be calculated using Equation 5 (where C = cost for SG).

TEC(SG) = ∑ (EC(SG, TG, PLG) · p(TG, PLG))    (4)

NB(SG) = TEC(SG=0) − TEC(SG) − C(SG)    (5)
Table 6: CBA for different SG options

Option   SG     TEC            C             NB
1         0%    £60,500,000    £0            £0
2        10%    £55,000,000    £5,000,000    £500,000
3        20%    £50,416,667    £10,000,000   £83,333
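To make the chain of Equations 1–5 concrete, the following Python sketch (ours, not part of the original study) reproduces the TEC and NB figures of Table 6 and, by varying the base PLM, the sensitivity analysis discussed next. It assumes the scenario probabilities of Table 2 are exactly 1/4, 1/2, 1/4 and 1/3 each, and a cost of £400,000 per PLM:

    COST_PER_PLM = 400_000
    SG_COST = {0.0: 0, 0.10: 5_000_000, 0.20: 10_000_000}   # cost of each SG option
    P_TG = {0.0: 0.25, 0.10: 0.50, 0.20: 0.25}               # Table 2, factor 1
    P_PLG = {-0.50: 1 / 3, 0.0: 1 / 3, 0.25: 1 / 3}          # Table 2, factor 2

    def plm(base_plm, tg, sg):                   # Equation 1
        return base_plm * (1 + tg) / (1 + sg)

    def ec(base_plm, tg, sg, plg):               # Equation 2, in monetary terms
        return plm(base_plm, tg, sg) * (1 + plg) * COST_PER_PLM

    def tec(base_plm, sg):                       # Equations 3 and 4
        return sum(ec(base_plm, tg, sg, plg) * p_tg * p_plg
                   for tg, p_tg in P_TG.items()
                   for plg, p_plg in P_PLG.items())

    def nb(base_plm, sg):                        # Equation 5, with SG = 0 as base case
        return tec(base_plm, 0.0) - tec(base_plm, sg) - SG_COST[sg]

    # Reproduces Table 6 (base_plm = 150) and the sensitivity analysis of Figure 2.
    for base_plm in (100, 150, 200, 250):
        print(base_plm, [round(nb(base_plm, sg)) for sg in (0.0, 0.10, 0.20)])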
The results of the CBA suggest that there is a small
benefit in choosing option 2. However, we need to keep
in mind that the calculations are based on a lot of
assumptions. Therefore, a small difference in the NB
might just be random noise. In particular we have one
factor that we do not know anything about - PLM. In
order to learn more about the impact of this factor we
can conduct a sensitivity analysis. Running CBA for
several different PLMs reveals that for a higher PLM
option 3 would give us the most benefits but for a lower
PLM option 1 would be the best choice. The sensitivity
analysis shows how important it is to get a good
estimate of the PLM.
Figure 2: Sensitivity analysis results (net benefit of the three SG options for PLM = 100, 150, 200 and 250)
If nothing else, the process of thinking through
scenarios is a very useful exercise. In our case SA
helped us to define the scenarios of interest. The
sensitivity analysis later helped us to understand the
importance of PLM when choosing an option. Overall
SA only allows investigating a small number of factors
but it seems to be a useful tool for structuring the
overall investigation and getting a first estimate regarding the benefits that one could gain from the different options.
4. USING DES FOR DECISION SUPPORT
The main benefit of DES models is that time and
space can be taken into account which allows us for the
first time to assess service quality (in terms of waiting
time) and consider real world boundaries (e.g. space
limitations for queues). As we said before our goal is to
find a balance between service quality and security.
CBA on its own or in conjunction with any of the
methods introduced before does not provide us with
information about the impact of changes on service
quality. Another benefit we get by using DES modelling
is that it simplifies adding more operational details
about the real system and better supports the utilisation
of real world data, both of which make the models themselves and their results more credible.
In the following two subsections we describe two
different DES modelling approaches: Process Oriented
DES (PO DES) and Object Oriented DES (OO DES).
The differences between these methods will be
described later in the subsequent sections. Here we look
at commonalities. For both modelling approaches we
need additional data regarding arrival rates, cycle times
(time lorries spend in a shed for screening), space
availability between stations for queuing, and resources
for conducting the searches.
In order to be able to emulate the real arrival
process of lorries in Calais we created hourly arrival
rate distributions for every day of the week from a year's worth of hourly arrival records that we received from UKBA. These distributions allow us to
represent the real arrival process, including quiet and
peak times. In cases where this level of detail is not
relevant we use an exponential distribution for
modelling the arrival process and the average arrival
time calculated from the data we collected as a
parameter for this distribution.
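As a rough illustration of the two options, the following Python sketch (ours; the case study model implements this inside a DES package using the empirical UKBA profiles) samples lorry inter-arrival times either from a single exponential distribution or hour by hour. The mean of 0.6 minutes is an assumed value implied by roughly 900,000 lorries per year, and the hourly profile is a placeholder structure:

    import random

    MEAN_INTERARRIVAL_MIN = 0.6          # assumed overall mean, ~900,000 lorries/year

    def next_interarrival_simple():
        # simplified case: exponential inter-arrival times (Poisson arrivals)
        return random.expovariate(1.0 / MEAN_INTERARRIVAL_MIN)

    def next_interarrival_detailed(day, hour, hourly_rate):
        # hourly_rate[day][hour] holds the empirical arrivals-per-hour profile
        rate = hourly_rate[day][hour]
        return float("inf") if rate == 0 else random.expovariate(rate / 60.0)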
The cycle times are based on data that we collected
through observations and from interviews with security
staff. In order to represent the variability that occurs in
the real system we use a different triangular distribution for each sensor type. Triangular distributions are
continuous distributions bounded on both sides. In
absence of a large sample of empirical data a triangular
distribution is commonly used as a first approximation
for the real distribution (XJTEK 2005). Every time a
lorry arrives at a shed a value is drawn from a
distribution (depending on the sensor that will be used
for screening) that determines the time the lorry will
spend in the shed.
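A minimal sketch of this sampling step is shown below; the (min, mode, max) minutes per sensor type are illustrative placeholders, not the values obtained from observations and interviews:

    import random

    CYCLE_TIME_MIN = {                       # (min, mode, max) in minutes, assumed
        "passive_mm_wave": (2.0, 4.0, 8.0),  # soft-sided lorries
        "heartbeat": (3.0, 5.0, 10.0),       # hard-sided lorries
    }

    def shed_cycle_time(sensor_type):
        low, mode, high = CYCLE_TIME_MIN[sensor_type]
        return random.triangular(low, high, mode)   # note: mode is the third argument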
We did not put any queue size limits into our case
study simulation model. However, we have a run time
display variable for each individual queue that displays
the maximum queue length of that queue. In this way
we can observe which queue is over its limit without
interrupting the overall simulation run. If necessary,
queue length restrictions could easily be added. In fact,
in one of our experiments we restrict the capacity of the UK shed queue and let lorries pass without searching them
if the queue in front of the shed is getting too long. This
strategy improves the service quality but has a negative
impact on security. Simulation allows us to see how big
the impact of this strategy is in the different scenarios.
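The queue-capping rule itself is simple to express; the threshold and the queue objects in the sketch below are illustrative assumptions rather than values from the case study:

    MAX_SHED_QUEUE = 25        # assumed threshold, not a calibrated value

    def route_to_uk_shed(lorry, shed_queue, boarding_queue):
        # Cap the shed queue: when it is too long, the lorry skips the search and
        # goes straight on towards the berth (better service, weaker security).
        if len(shed_queue) >= MAX_SHED_QUEUE:
            boarding_queue.append(lorry)
            return False       # not searched in the shed
        shed_queue.append(lorry)
        return True            # will be searched in the shed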
We also added some more details to the UK berth
operation representation. We now consider the (hourly)
arrival of the ferry boat. When the ferry arrives all
search activities in the berth area are interrupted and all
lorries currently in the berth area are allowed to board
the ferry (as long as there is enough space on the ferry),
regardless if they have been checked or not. Again, this
strategy improves the service quality but has a negative
impact on security. This is an optional feature of the
DES model that can either be switched on or off.
Finally, in DES we can consider resources in a
more detailed way. While previously resources have
only been playing a role as a fixed cost factor (cost for
SG), we can now take into account how different
resource quantities influence the process flow and
subsequently the service quality. These quantities can
vary between different scenarios, but also between
different times of day (e.g. peak and quiet times).
Clearly, as we can see from the above, DES helps
to improve the decision making process. Besides the monetary estimates, it allows us to obtain information on service quality and to gain further insight into system
operations. This additional insight can be used for
decision making but also for optimising system
operations. DES also allows you to conduct a sensitivity
analysis to find high impact factors that need to be
estimated more carefully. Of course this does not come
without costs. DES models usually take much longer to
build and need additional empirical data.
4.1. DES using a process oriented world view
As before, we use the standard procedure for calculating
the TEC for the different SG options that enables us to
conduct a basic CBA. For this the PO DES delivers the
number of Positive Lorries Found (PLF) which allows
us to calculate the number of PLM under the
assumption that there is a linear relationship between
these two measures. This number can then be used as an
input for the CBA.
At first sight the PO DES model looks very
similar to the MCS model presented in Sherman et al
(2010) and in fact it is an extended version of this
model. In addition to the routing information defined in
the MCS model we have added arrival rates, cycle times
for screening the lorries and available resources. The
data required for implementing these additions into the
model has been provided by UKBA.
We now have a stochastic and dynamic simulation
model that allows us to observe the evolution of the
system over time. This is a big benefit compared to the
static models we used previously as it allows us for the
first time to consider our second objective in the
analysis - to provide high quality service. One of the
key factors for providing high quality service is to keep
average waiting times below a specified threshold
(service standard). By adding arrival rates, cycle times
and available resources the model is able to produce
realistic waiting time distributions, which gives us an
indication of the service quality we are achieving with
different parameter settings.
Another useful output that PO DES provides is
resource utilisation. This information allows us to fine-tune our CBA as we are able to better estimate SG costs. So far we have assumed that SG has a fixed cost which is linearly correlated with TG. In reality however, the cost for SG might depend on the utilisation of the existing resources. If in the current situation facility and staff utilisation is low then SG can perhaps be implemented without any additional costs. PO DES allows us to test at what level of TG additional resources are required (throughput analysis).
The PO DES model also allows us to analyse queue
dynamics. A useful statistic to collect in this context is
the “maximum queue size”, which enables us to find
bottlenecks in our system. With this information we can
then pinpoint where in the system we should add
additional resources to achieve maximum impact.
Removing bottlenecks improves the system flow and
consequently service quality.
Another interesting feature of our simulation model
is that it allows us to represent temporally limited
interventions and see what their effect is on system flow
and detection rates of positive lorries. These procedures
could be officers speeding up when queues are getting
longer (missing more positive lorries) or changing
search rates over time (less search at peak times) or
stopping search completely once a certain queue length
has been reached. PO DES can help us to find out,
which of these strategies is best.
However, the process oriented modelling approach
as described above has some limitations with regards to
flexibility. We are using proportions for defining the
vehicle routing (and detection rates), based on historic
data. This assumes that even if we have changes in the
system these proportions would always remain the
same. While this is acceptable for many situations (in
particular for smaller changes) there are occasions
where this assumption does not hold. For example, if
we change the proportion of vehicles searched in the
berth from 5% to 50% we cannot simply assume that
the detection rate grows proportionally. While it might
be easy to spot some positive lorries as they might have
some signs of tampering (so the detection rate for these
is very high and these where the ones reported in the
historic data), it will get more difficult once these have
been searched and a growth in search rate does not yield
an equivalent growth in detection rate any more. This
needs to be kept in mind when using a PO DES
approach.
Furthermore, the assumption of a linear relationship
between number of PLF and number of PLM which is
one of our key assumptions for the CBA is quite weak
in connection with PO DES as this relationship can be
influenced by some of the interventions. For example,
the temporally limited intervention of stopping search
completely once a certain queue length has been
reached influences the overall number of clandestines in
the system (when calculating this value by adding up
the number of PLF and number of PLM), although this
number should not be influenced by any strategic
interventions. This needs to be considered in the
analysis of the results.
4.2. DES using an object oriented world view
OO DES has a very different world view compared to PO
DES. Here we model the system by focusing on the
representation of the individual objects in the system
(e.g. lorries, clandestines, equipment, or staff) rather
than using predefined proportions for the routing of
entities based on historic data. In fact we only use
historic data as inputs (arrival rates, search rates,
staffing) and for validation purposes (numbers of PLF
at the different stages). The system definition is now
based on layout information and assumptions on sensor
capabilities. Taking this new perspective means that we
transfer all the “intelligence” from the process
definition into the object definition and therefore
change our modelling perspective from top down to
bottom up.
Unlike in DT, MCS, and PO DES we do not use a
probabilistic framework for directing lorries (and
deciding which of the lorries are positive). Instead we
aim to recreate processes as they appear in the real
system. At the entrance of the port we mark a number of
lorry entities as positive (to simulate clandestines
getting on board lorries). This gives us complete
control over the number of positive lorries entering the
system. As the lorries go through the system they will
be checked at the different stages (French side, UK
sheds, UK berth). For these checks we use sensors that
have a specific probability of detecting true and false
positives. Only lorries that have been marked positive
earlier can be detected by the sensors as true positives.
The marked lorries that are not detected at the end (port
exit) are the ones that will make it through to the UK.
Officers can decide how to use the detectors (e.g. speed
up search time if queues are getting too long, change
search rates, etc.) depending on environmental
conditions.
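The following Python sketch (ours, not the AnyLogic implementation) illustrates the object-oriented logic just described: lorries are marked positive on entry and sensors flag them with a per-sensor detection probability. The 0.4% marking probability echoes the UKBA figure quoted in Section 2; all other numbers are placeholders:

    import random

    class Lorry:
        def __init__(self, positive_probability=0.004):       # ~0.4% of lorries
            self.positive = random.random() < positive_probability
            self.detected = False

    class Sensor:
        def __init__(self, true_positive_rate, false_positive_rate=0.0):
            self.tp = true_positive_rate                       # assumed efficiency
            self.fp = false_positive_rate

        def check(self, lorry):
            p = self.tp if lorry.positive else self.fp
            return random.random() < p                         # True = flag for deep search

    def run_stage(lorries, sensor, search_rate):
        # One screening stage (French side, UK sheds or UK berth).
        for lorry in lorries:
            if not lorry.detected and random.random() < search_rate:
                if lorry.positive and sensor.check(lorry):
                    lorry.detected = True                      # true positive found
    # Lorries that are positive but never detected constitute the PLM output.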
One of the advantages of this simulation method is
that it is much easier to manipulate object and system
parameters, like for example the sensor detection rates
and search rates. When we change these parameters we
do not have to worry about linear dependencies any
more. In fact, we can find out relationships between
variables by looking at the evolution of the system over
time. However, the most interesting thing here is that
due to the fact that we model sensors as objects with
each having individual detection rates the number of
PLM becomes an output of the simulation. Yet, we have
to keep in mind that this output is still a function of
unknowns: PLM = f(lorries marked positive, sensor
detection rates ...). But overall, this is very useful
information as we can now do some what-if analysis
and see the trends of how changes in the system setup
impact on number of PLM. We do not rely on the
implicit assumption of a linear relationship between
PLM and SG any more. While this is not directly
solving our problem of estimating how many positive
lorries we miss it gives us some additional information
about system behaviour that might help us to decide in
one way or another.
4.3. Experimentation with the OO DES model
In this subsection we want to show how we can use our DES model to test alternative scenarios. We have implemented our model in AnyLogic™ 6.6, a Java™ based multi-paradigm simulation software. Figure 3 shows a screenshot of a section of the simulation model (berth area) during execution.
To set up our OO DES we tried to reproduce the
base scenario (as defined in Table 1) as closely as
possible by calibrating our model to produce the correct
values for the number of PLF at the different stages of
the search process. We can do this by varying the
number of positive lorries entering the port, the sensor
detection rates, and the berth search rate. The results of
the calibration exercise are presented in Table 7
(Scenario 1). To get somewhere close to the real PLF
values at the different stages we had to increase the
number of positive lorries entering the port. Hence, also
the PLM value is much higher than the best guess we
used in our CBA. The detection rates for the UK sheds
and the UK berth had to be much higher than the ones
on the French side, in order to match the true rates. We
assume that in the real system this is due to the fact that
UKBA uses some intelligence when choosing the
lorries to be screened. Therefore their success rate is
much higher. In particular in the berth officers can drive
around and pick suspicious lorries without any time
pressure.
Our scenarios are defined by TG and SG. All other
parameters (grey) depend on these values (with the
exception of the queue size restriction). All
dependencies are explained in Section 3. For this
experiment we assume that there is no change in the
number of people trying to get into the UK (PLG=0). In
Table 7 we only show the changes in the scenario setup.
Empty fields mean that there is no change in the set-up
For this experiment we have defined a service
standard that needs to be achieved. We have a threshold
time in system that should not be exceeded by more
than 5% of the lorries that go through the system. We
have one intervention that allows us to influence the
process flow. We can define a threshold for the
maximum queue size in front of the UK sheds (queue
size restriction). If this threshold is exceeded lorries are
let pass without screening. While this intervention
improves the flow there is a risk that more positive lorries are missed as fewer lorries are inspected.
The first three scenarios deal with TG. There is no
problem with regard to compliance with service
standards. Resource utilisation does not change as the
number of searches does not change. However, the
number of PLF is decreasing while the number of PLM
is going up.

Figure 3: OO DES running in AnyLogic™ (screenshot of berth area only)

Table 7: OO DES simulation experiment set-ups and results (10 replications); per scenario (1–7) the table lists the set-up (TG, SG, queue size restriction, arrivals, search and detection rates) and the results (waiting times, time in system, service problems, resource utilisation, and positive lorries found in France, the UK sheds, the UK berth, and missed).

The next two scenarios deal with SG. The increase in search activities causes some delays and at a
SG of 20% the system does not comply with service
standards any more. On the other hand, as expected, an
increase in search activities improves the number of
PLF and reduces the number of PLM. Scenario 5
indicates that service standards cannot be achieved with
the current staff/facilities and that an investment has to
be made in order to reduce the number of PLM and
comply with service standards. However, the last
scenario shows that there is also a strategic solution
possible that does not require any investment. By
managing the queues in front of the UK sheds it is
possible to reduce the number of service problems to
fewer than 5% (compliant with service standards) while
still keeping the number of PLM at a very low level.
Therefore, when applying this intervention no
investment is required.
As we have found a solution for our problem that
does not require balancing costs and benefits there is no
need to conduct a MCA. However, for other scenarios
where we are not so lucky we might want to consider
using MCA. A good guide to MCA with a worked
example is provided by DCLG (2009).
5. CONCLUSIONS
To help the managers and analysts to decide which
method best to use for supporting their decision
processes concerning port operation and expansion it is
important to be clear about the real world phenomena
that can be considered with the different methods and
the decision support information that can be obtained by
applying them. Table 8 lists these for all the methods
discussed in Sherman et al (2010) and in this paper.
The methods can be divided in two categories:
static and dynamic. For static methods the passage of
time is not important (Rubinstein and Kroese 2008).
SA, DT, and MCS belong to this category. The lack of a
concept of time in these methods makes it impossible to
analyse service quality of a system as all performance
measures for such analysis rely on the passage of time.
On the contrary, dynamic methods consider the passage
of time. DES belongs to this category and allows
evaluate a system’s compliance with service standards.
Besides, it provides other useful information about the
dynamics of the system that can be used for optimising
processes and improving flows.
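To make the contrast concrete, the following sketch shows the core of a dynamic model in plain Python: a single search shed with exponential arrivals and service times, from which time-based measures such as waiting times can be read off. The rates used are illustrative assumptions, not the parameters of our model.

import random

# Minimal discrete-event sketch of one search shed (FIFO, single server).
# Arrival and service rates are illustrative assumptions, not model parameters.
random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, N_LORRIES = 1 / 6.0, 1 / 5.0, 10000  # per minute

t, server_free_at, waits = 0.0, 0.0, []
for _ in range(N_LORRIES):
    t += random.expovariate(ARRIVAL_RATE)          # next lorry arrives
    start = max(t, server_free_at)                 # waits if the shed is busy
    waits.append(start - t)
    server_free_at = start + random.expovariate(SERVICE_RATE)

print("average wait (min):", sum(waits) / len(waits))
print("share waiting > 25 min:", sum(w > 25 for w in waits) / len(waits))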
DES can be implemented in different ways, either as PO DES or OO DES. In our experience PO DES seems to be easier to implement but less flexible, whereas OO DES seems to be more flexible (easier to manipulate) but also more difficult to implement. The added flexibility has its advantages: in our case OO DES was the only tool that also provided an estimate of the number of PLM, which helps us to better evaluate the effect of different interventions.
For evaluating the trade-off between security and cost, CBA is a valuable tool. If we want to balance security, cost and service, then MCA is a better choice. While CBA relies on monetary inputs, MCA allows using monetary as well as non-monetary inputs. This is useful, as service quality is difficult to capture by monetary measures. However, sometimes none of these tools is required for the analysis, as the answer might come directly from the model, as we have shown in Section 4.3.
A natural extension of the OO DES modelling
approach would be to add “intelligent” objects that have
a memory, can adapt, learn, move around, and respond
to specific situations. These could be used to model
officers that dynamically adapt their search strategies
based on their experiences, but also clandestines
Table 8: Comparison of modelling methods regarding real world phenomena that can be considered in the model and decision support information that can be obtained from the model
  SA:     Scenarios (factors and the decision variable); Linear relationships; TEC; PLM, PLF
  DT:     Scenarios (factors and the decision variable); Linear relationships; TEC; PLM, PLF; System structure; Existing resources
  MCS:    Scenarios (factors and the decision variable); Linear relationships; TEC; PLM, PLF; System layout; Existing resources; System variability
  PO DES: Scenarios (factors and the decision variable); Linear relationships; TEC; PLM, PLF; System layout; Existing resources; System variability; Service time distributions; Resource utilisation; Dynamic system constraints (e.g. peak times); System throughput; Waiting times (service quality); Time in system; Bottleneck analysis; Dynamic decisions by system
  OO DES: Scenarios (factors and the decision variable); TEC; PLM, PLF; System layout; Existing resources; System variability; Service time distributions; Resource utilisation; Dynamic system constraints (e.g. peak times); System throughput; Waiting times (service quality); Time in system; Bottleneck analysis; Dynamic decisions by objects; Sensor detection rates; Number of positive lorries entering the system
learning from failed attempts and improving their
strategies when trying again. We are currently working
on implementing such “intelligent” objects.
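As a toy illustration of such an "intelligent" object, the sketch below shows an officer object that gradually shifts its search effort towards the lane where it has found the most positives so far; all numbers are hypothetical.

import random

# Toy sketch of an adaptive "intelligent" officer object (hypothetical numbers).
class AdaptiveOfficer:
    def __init__(self, lanes):
        self.hits = {lane: 1.0 for lane in lanes}    # pseudo-counts of past finds

    def choose_lane(self):
        total = sum(self.hits.values())              # search where success was seen
        r, acc = random.uniform(0, total), 0.0
        for lane, h in self.hits.items():
            acc += h
            if r <= acc:
                return lane

    def record(self, lane, found_clandestine):
        if found_clandestine:
            self.hits[lane] += 1.0

officer = AdaptiveOfficer(["soft-sided", "hard-sided"])
for _ in range(1000):
    lane = officer.choose_lane()
    officer.record(lane, random.random() < (0.05 if lane == "soft-sided" else 0.01))
print(officer.hits)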
REFERENCES
Bichou, K., 2004. The ISPS code and the cost of port
compliance: An initial logistics and supply chain
framework for port security assessment and
management. Maritime Economics and Logistics,
6(4), 322-348.
Burns, J.R. and Morgeson, J.D., 1988. An object-oriented world-view for intelligent, discrete, next-event simulation. Management Science, 34(12), 1425-1440.
Damodaran, A., 2007. Strategic Risk Taking: A
Framework for Risk Management. New Jersey:
Wharton School Publishing.
Department for Communities and Local Government,
2009. Multi-Criteria Analysis: A Manual. London:
DCLG. Available from: http://eprints.lse.ac.uk/12761/1/Multi-criteria_Analysis.pdf [accessed 19 July 2011].
Dover Harbour Board, 2008. Planning for the next
generation - Third round consultation document.
Dover: DHB. Available from http://www.doverport.co.uk/_assets/client/images/collateral/dover_consultation_web.pdf [accessed 20 July 2011].
Hanley, N. and Spash, C.L., 1993. Cost-Benefit
Analysis and the Environment. Hampshire:
Edward Elgar Publishing Ltd.
Kleindorfer, P.R. and Saad, G.H., 2005. Managing
disruption risks in supply chains. Production and
Operations Management, 14 (1), 53-98.
Laughery, R., Plott, B. and Scott-Nash, S., 1998.
Simulation of Service Systems. In: J Banks, ed.
Handbook of Simulation. Hoboken, NJ: John
Wiley & Sons.
Nas, T.F., 1996. Cost-Benefit Analysis: Theory and
Application. Thousand Oaks,CA: Sage Publishing.
Refsgaarda, J.C., Sluijsb, J.P., Højberga, A.L. and
Vanrolleghem, P.A., 2007. Uncertainty in the
environmental modelling process - A framework
and guidance. Environmental Modelling &
Software, 22(11), 1543-1556.
Rubinstein R.Y. and Kroese, D.P., 2008. Simulation
and the Monte Carlo Method (2nd Edition). New
York: John Wiley & Sons.
Sherman, G., Siebers, P.O., Aickelin, U. and Menachof, D., 2010. Scenario Analysis, Decision Trees and Simulation for Cost Benefit Analysis of the Cargo Screening Process. Proceedings of the International Workshop of Applied Modelling and Simulation (WAMS). May 5-7, Buzios, Brazil.
Turner, K. and Williams, G., 2005. Modelling
complexity in the automotive industry supply
chain. Journal of Manufacturing Technology
Management, 16(4), 447-458.
UK Border Agency, 2008. Freight Search Figures
provided by the UK Border Agency [unpublished]
Wilson, D.L., 2005. Use of modeling and simulation to
support airport security. IEEE Aerospace and
Electronic Systems Magazine, 20(8), 3-8.
XJTEK, 2005. AnyLogic User's Guide. St. Petersburg,
Russian Federation: XJ Technologies.
| 5 |
arXiv:1508.07119v1 [] 28 Aug 2015
BI-COHEN-MACAULAY GRAPHS
JÜRGEN HERZOG AND AHAD RAHIMI
Abstract. In this paper we consider bi-Cohen-Macaulay graphs, and give a complete classification of such graphs in the case they are bipartite or chordal. General bi-Cohen-Macaulay graphs are classified up to separation. The inseparable
bi-Cohen-Macaulay graphs are determined. We establish a bijection between the
set of all trees and the set of inseparable bi-Cohen-Macaulay graphs.
Introduction
A simplicial complex ∆ is called bi-Cohen-Macaulay (bi-CM), if ∆ and its Alexander dual ∆∨ are Cohen-Macaulay. This concept was introduced by Fløystad and
Vatne in [8]. In that paper the authors associated to each simplicial complex ∆ in
a natural way a complex of coherent sheaves and showed that this complex reduces
to a coherent sheaf if and only if ∆ is bi-CM.
The present paper is an attempt to classify all bi-CM graphs. Given a field K and
a simple graph on the vertex set [n] = {1, 2, . . . , n}, one associates with G the edge
ideal IG of G, whose generators are the monomials xi xj with {i, j} an edge of G. We
say that G is bi-CM if the simplicial complex whose Stanley-Reisner ideal coincides
with IG is bi-CM. Actually, this simplicial complex is the so-called independence
complex of G. Its faces are the independent sets of G, that is, subsets D of [n] with
{i, j} ⊄ D for all edges {i, j} of G.
By its very definition, any bi-CM graph is also a Cohen-Macaulay graph (CM
graph). A complete classification of all CM graphs is hopeless if not impossible.
However, such a classification is given for bipartite graphs [10, Theorem 3.4] and for
chordal graphs [11]. We refer the reader to the books [9] and [14] for a good survey
on edge ideals and its algebraic and homological properties.
Based on the classification of bipartite and chordal CM graphs, we provide in
Section 2 a classification of bipartite and chordal bi-CM graphs, see Theorem 2.1
and Theorem 2.2. In Section 1 we first present various characterizations of bi-CM
graphs. By using the Eagon-Reiner theorem [5], one notices that the graph G is
bi-CM if and only if it is CM and IG has a linear resolution. Cohen-Macaulay ideals
generated in degree 2 with linear resolution are of very special nature. They all
arise as deformations of the square of the maximal ideal of a suitable polynomial
2010 Mathematics Subject Classification. 05E40, 13C14.
Key words and phrases. Bi-Cohen–Macaulay, Bipartite and chordal graphs, Generic graphs,
Inseparability.
This paper was written during the visit of the second author at Universität Duisburg-Essen,
Campus Essen. He is grateful for its hospitality.
ring. From this fact arise constraints on the number of edges of the graph and on
the Betti numbers of IG .
Though a complete classification of all bi-CM graphs seems to be again impossible,
a classification of all bi-CM graphs up to separation can be given, and this is the
subject of the remaining sections.
A separation of the graph G with respect to the vertex i is a graph G′ whose vertex
set is [n] ∪ {i′ } having the property that G is obtained from G′ by identifying i with
i′ and such that xi − xi′ is a non-zerodivisor modulo IG′ . The algebraic condition
on separation makes sure that the essential algebraic and homological invariants of
IG and IG′ are the same. In particular, G is bi-CM if and only if G′ is bi-CM. A
graph which does not allow any separation is called inseparable, and an inseparable graph which is obtained by a finite number of separation steps from G is called an inseparable model of G. Any graph admits inseparable models, and the number of inseparable models of a graph is finite. Separable and inseparable graphs from the
view point of deformation theory have been studied in [1].
In Section 4 we determine all inseparable bi-CM graphs on [n] vertices. Indeed,
in Theorem 4.4 it is shown that for any tree T on the vertex set [n] there exists a
unique inseparable bi-CM graph GT determined by T , and any inseparable bi-CM
graph is of this form. Furthermore, if G is an arbitrary bi-CM graph and T is the
relation tree of the Alexander dual of IG , then GT is an inseparable model of G.
For a bi-CM graph G, the Alexander dual J = (IG )∨ of IG is a Cohen-Macaulay
ideal of codimension 2 with linear resolution. As described in [3], one attaches to
any relation matrix of J a relation tree T . Replacing the entries in this matrix
by distinct variables with the same sign, one obtains the so-called generic relation
matrix whose ideals of 2-minors JT and its Alexander has been computed in [13].
This theory is described in Section 3. The Alexander dual of JT is the edge ideal
of graph, which actually is the graph GT mentioned before and which serves as a
separable model of G.
1. Preliminaries and various characterizations of
Bi-Cohen-Macaulay graphs
In this section we recall some of the standard notions of graph theory which are
relevant for this paper, introduce the bi-CM graphs and present various equivalent
conditions of a graph to be bi-CM.
The graphs considered here will all be finite, simple graphs, that is, they will
have no double edges and no loops. Furthermore we assume that G has no isolated
vertices. The vertex set of G will be denoted V (G) and will be the set [n] =
{1, 2, . . . , n}, unless otherwise stated. The set of edges of G we denote by E(G).
A subset F ⊂ [n] is called a clique of G, if {i, j} ∈ E(G) for all i, j ∈ F with
i ≠ j. The set of all cliques of G is a simplicial complex, denoted ∆(G).
A subset C ⊂ [n] is called a vertex cover of G if C ∩ {i, j} ≠ ∅ for all edges {i, j} of G. The graph G is called unmixed if all minimal vertex covers of G have the
same cardinality. This concept has an algebraic counterpart. We fix a field K and
consider the ideal IG ⊂ S = K[x1 , . . . , xn ] which is generated by all monomials xi xj
with {i, j} ∈ E(G). The ideal IG is called the edge ideal of G. Let C ⊂ [n]. Then the
monomial prime ideal PC = ({xi : i ∈ C}) is a minimal prime ideal of IG if and only
if C is a minimal vertex cover of G. Thus G is unmixed if and only if IG is unmixed in
the algebraic sense. A subset D ⊂ [n] is called an independent set of G if D contains
no set {i, j} which is an edge of G. Note that D is an independent set of G if and
only if [n] \ D is a vertex cover. Thus the minimal vertex covers of G correspond to
the maximal independent sets of G. The cardinality of a maximal independent set is
called the independence number of G. It follows that the Krull dimension of S/IG
is equal to c, where c is the independence number of G.
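These notions are easy to experiment with computationally. The following brute-force sketch enumerates the minimal vertex covers and the independence number of a small example graph; it is only an illustration of the definitions and is feasible only for small vertex sets.

from itertools import combinations

# Brute-force sketch: minimal vertex covers and independence number of a graph.
edges = [(1, 2), (2, 3), (3, 1), (2, 4), (3, 4), (4, 5)]   # a small example graph
vertices = sorted({v for e in edges for v in e})

def is_cover(C):
    return all(i in C or j in C for i, j in edges)

covers = [set(C) for r in range(len(vertices) + 1)
          for C in combinations(vertices, r) if is_cover(C)]
minimal = [C for C in covers if not any(D < C for D in covers)]
c = len(vertices) - min(len(C) for C in covers)            # independence number

print("minimal vertex covers:", minimal)
print("independence number c =", c)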
The graph G is called bipartite if V (G) is the disjoint union of V1 and V2 such that
V1 and V2 are independent sets, and G is called disconnected if V (G) is the disjoint
union of W1 and W2 and there is no edge {i, j} of G with i ∈ W1 and j ∈ W2 . The
graph G is called connected if it is not disconnected.
A cycle C (of length r) in G is a sequence of edges {ik , jk } with k = 1, 2, . . . , r
such that jk = ik+1 for k = 1, . . . , r − 1 and jr = i1 . A chord of C is an edge {i, j} of G with i, j ∈ {i1 , . . . , ir } such that {i, j} is not an edge of C. The graph G is called
chordal if each cycle of G of length ≥ 4 has a chord. A graph which has no cycle
and which is connected is called a tree.
Now we recall the main concept we are dealing with in this paper. Let I ⊂ S be a squarefree monomial ideal. Then I = P1 ∩ · · · ∩ Pm , where each of the Pj is a monomial prime ideal of I. The ideal I ∨ which is minimally generated by the monomials uj = ∏_{xi ∈ Pj} xi is called the Alexander dual of I. One has (I ∨ )∨ = I. In the case that I = IG , each Pj is generated by the variables corresponding to a minimal vertex cover of G. Therefore, (IG )∨ is also called the vertex cover ideal of G.
According to [8] a squarefree monomial ideal I ⊂ S is called bi-Cohen-Macaulay
(or simply bi-CM) if I as well as the Alexander dual I ∨ of I is a Cohen-Macaulay
ideal. A graph G is called Cohen-Macaulay or bi-Cohen-Macaulay (over K) (CM or
bi-CM for short), if IG is CM or bi-CM. One important result regarding the Alexander dual that will be used frequently in this paper is the Eagon-Reiner theorem
which says that I is a Cohen-Macaulay ideal if and only if I ∨ has a linear resolution. Thus the Eagon-Reiner theorem implies that I is bi-CM if and only if I is a
Cohen-Macaulay ideal with linear resolution. From this description it follows that
a bi-CM graph is connected. Indeed, if this is not the case, then there are induced
subgraphs G1 , G2 ⊂ G such that V (G) is the disjoint union of V (G1 ) and V (G2 ). It
follows that IG = IG1 + IG2 , and the ideals IG1 and IG2 are ideals in a different set
of variables. Therefore, the free resolution of S/IG is obtained as the tensor product
of the resolutions of S/IG1 and S/IG2 . This implies that IG has relations of degree
4, so that IG does not have a linear resolution.
From now on we will always assume that G is connected, without further mentioning it.
Proposition 1.1. Let K be an infinite field and G a graph on the vertex set [n]
with independence number c. The following conditions are equivalent:
(a) G is a bi-CM graph over K;
(b) G is a CM graph over K, and S/IG modulo a maximal regular sequence of
linear forms is isomorphic to T /m2T where T is the polynomial ring over K
in n − c variables and mT is the graded maximal ideal of T .
Proof. We only need to show that IG has a linear resolution if and only if condition
(b) holds. Since K is infinite and since S/IG is Cohen-Macaulay of dimension c, there
exists a regular sequence x of linear forms on S/IG of length c. Let T = S/(x). Then
T is isomorphic to a polynomial ring in n − c variables. Let J be the image of IG
in T . Then J is generated in degree 2 and has a linear resolution if and only if IG
has linear resolution. Moreover, J is mT -primary. The only mT -primary ideals with
linear resolution are the powers of mT . Thus, IG has a linear resolution if and only
if J = m_T^2 .
Corollary 1.2. Let G be a graph on the vertex set [n] with independence number c.
The following conditions are equivalent:
(a) G is a bi-CM graph over K;
(b) G is a CM graph over K and |E(G)| = \binom{n−c+1}{2};
(c) G is a CM graph over K and the number of minimal vertex covers of G is equal to n − c + 1;
(d) βi (IG ) = (i + 1) \binom{n−c+1}{i+2} for i = 0, . . . , n − c − 1.
Proof. For the proof of the equivalent conditions we may assume that K is infinite and hence we may use Proposition 1.1.
(a) ⇐⇒ (b): With the notation of Proposition 1.1 we have J = m_T^2 if and only if the number of generators of J is equal to \binom{n−c+1}{2}. Since IG and J have the same number of generators and since the number of generators of IG is equal to |E(G)|, the assertion follows.
(b) ⇐⇒ (c): Since S/IG is Cohen-Macaulay, the multiplicity of S/IG is equal to the length ℓ(T /J) of T /J. On the other hand, the multiplicity is also the number of minimal prime ideals of IG which coincides with the number of minimal vertex covers of G. Thus the length of T /J is equal to the number of minimal vertex covers of G. Since J = m_T^2 if and only if ℓ(T /J) = n − c + 1, the assertion follows.
(a) ⇒ (d): Note that βi (IG ) = βi (J) for all i. Since J is isomorphic to the ideal of 2-minors of the matrix
( y1   y2   · · ·   y_{n−c}      0       )
( 0    y1   · · ·   y_{n−c−1}   y_{n−c}  )
in the variables y1 , . . . , y_{n−c} , the Eagon-Northcott complex ([4], [6]) provides a free resolution of J and the desired result follows.
(d) ⇒ (a): It follows from the description of the Betti numbers of IG that proj dim S/IG = n − c. Thus, depth S/IG = c. Since dim S/IG = c, it follows that IG is a Cohen-Macaulay ideal. Since |E(G)| = β0 (IG ) = \binom{n−c+1}{2}, condition (b) is satisfied, and hence G is bi-CM, as desired.
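Condition (b) of Corollary 1.2 gives a convenient numerical test once a graph is known to be CM. The following brute-force sketch checks only the edge count against \binom{n−c+1}{2}; the Cohen-Macaulay hypothesis itself is assumed and not verified.

from itertools import combinations
from math import comb

# Sketch: check the edge count condition |E(G)| == C(n-c+1, 2) of Corollary 1.2(b).
# The Cohen-Macaulay hypothesis on G is assumed and not verified here.
def edge_count_condition(n, edges):
    def is_cover(C):
        return all(i in C or j in C for i, j in edges)
    cover_sizes = [r for r in range(n + 1)
                   for C in combinations(range(1, n + 1), r) if is_cover(C)]
    c = n - min(cover_sizes)                       # independence number
    return len(edges) == comb(n - c + 1, 2)

edges = [(1, 2), (2, 3), (3, 1), (2, 4), (3, 4), (4, 5)]
print(edge_count_condition(5, edges))              # expected: True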
Finally we note that G is a bi-CM graph over K if and only if the vertex cover ideal
of G is a codimension 2 Cohen-Macaulay ideal with linear relations. Indeed, let JG
be the vertex cover ideal of G. Since JG = (IG )∨ , it follows from the Eagon-Reiner
theorem JG is bi-CM if and only if IG is bi-CM.
2. The classification of bipartite and chordal bi-CM graphs
In this section we give a full classification of the bipartite and chordal bi-CM
graphs.
Theorem 2.1. Let G be a bipartite graph on the vertex set V with bipartition
V = V1 ∪ V2 where V1 = {v1 , . . . , vn } and V2 = {w1 , . . . , wm }. Then the following conditions are equivalent:
(a) G is a bi-CM graph;
(b) n = m and E(G) = {{vi , wj } : 1 ≤ i ≤ j ≤ n}.
Proof. (a) ⇒ (b): Since G is a bi-CM graph, it is in particular a CM-graph, and
so n = m, and by [9, Theorem 9.1.13] there exists a poset P = {p1 , . . . , pn } such
that G = G(P ). Here G(P ) is the bipartite graph on V = {v1 , . . . , vn , w1, . . . , wn }
whose edges are those 2-element subset {vi , wj } of V such that pi ≤ pj . Thus
IG = IG(P ) = HP∨ , where
HP = ⋂_{pi ≤ pj} (xi , yj )
is an ideal of S = K[{xi , yi }pi ∈P ], the polynomial ring in 2n variables over K. Since
G is bi-CM, it follows that HP is Cohen–Macaulay, and hence
proj dim S/HP = 2n − depth S/HP = 2n − dim S/HP = height HP = 2.
Thus proj dim HP = 1, and hence, by [10, Corollary 2.2], the Sperner number of P ,
i.e., the maximum of the cardinalities of antichains of P equals 1. This implies that
P is a chain, and this yields (b).
(b) ⇒ (a): The graph G described in (b) is of the form G = G(P ) where P is a
chain. By what is said in (a) ⇒ (b), it follows that G is bi-CM.
The following picture shows a bi-CM bipartite graph for n = 4.
Figure 1. A bi-CM bipartite graph.
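In other words, the bipartite bi-CM graphs are exactly the graphs G(P) of a chain. As a small computational illustration (a sketch, not part of the proof), the edge set of Theorem 2.1(b) can be generated directly; for n = 4 it gives the graph of Figure 1.

# Sketch: the bipartite bi-CM graph of Theorem 2.1(b) for given n,
# with parts {v_1,...,v_n} and {w_1,...,w_n} and edges {v_i, w_j} for i <= j.
def bipartite_bicm_edges(n):
    return [(f"v{i}", f"w{j}") for i in range(1, n + 1) for j in range(i, n + 1)]

edges = bipartite_bicm_edges(4)
print(len(edges), "edges:", edges)   # n(n+1)/2 = 10 edges for n = 4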
Theorem 2.2. Let G be a chordal graph on the vertex set [n]. The following conditions are equivalent:
(a) G is a bi-CM graph;
(b) Let F1 , . . . , Fm be the facets of the clique complex of G. Then m = 1,
or m > 1 and
(i) V (G) = V (F1 ) ∪ V (F2 ) ∪ . . . ∪ V (Fm ), and this union is disjoint;
(ii) each Fi has exactly one free vertex ji ;
(iii) the restriction of G to [n] \ {j1 , . . . , jm } is a clique.
Proof. Let In,d be the ideal generated by all squarefree monomials of degree d in S = K[x1 , . . . , xn ]. It is known (and easy to prove) that (In,d )∨ = In,n−d+1 , and that
all these ideals are Cohen-Macaulay, and hence all bi-CM. If m = 1, then IG = In,2
and the result follows.
Now let m > 1. A bi-CM graph is a CM graph. The CM chordal graphs have
been classified in [11]: they are the chordal graphs satisfying (b)(i). Thus for the
proof of the theorem we may assume that (b)(i) holds and simply have to show that
(b)(ii) and (b)(iii) are satisfied if and only if IG has a linear resolution.
Let Pi be the monomial prime ideal generated by the variables xk with k ∈ V (Fi ) \ {ji }, and let G′ be the subgraph of G whose edges do not belong to any Fi . It is shown in the proof of [11, Corollary 2.1] that there exists a regular sequence on S/IG such that after reduction modulo this sequence one obtains the ideal J ⊂ T , where T is the polynomial ring on the variables xk with k ≠ ji for i = 1, . . . , m and where
(1)   J = (P1^2 , . . . , Pm^2 , IG′ ).
By Proposition 1.1, it follows that IG has a linear resolution if and only if J = m_T^2 , where mT denotes the graded maximal ideal of T .
So, now suppose first that IG has a linear resolution, and hence J = m_T^2 . Suppose that some Fi has more than one free vertex, say Fi has the free vertex k with k ≠ ji . Choose any Ft different from Fi and let l ∈ Ft with l ≠ jt . Then xk and xl belong to T but xk xl ∉ J, as can be seen from (1). This is a contradiction. Thus (b)(ii) follows.
Suppose next that the graph G′′ which is the restriction of G to [n] \ {j1 , . . . , jm } is not a clique. Then there exist i, j ∈ V (G′′ ) such that {i, j} ∉ E(G′′ ). However, since all xk with k ∈ V (G′′ ) belong to T and since J = m_T^2 , it follows that xi xj ∈ J. Thus, by (1), xi xj ∈ Pk^2 for some k or xi xj ∈ IG′ . Since (b)(ii) holds, this implies in both cases that {i, j} ∈ E(G′′ ), a contradiction. Thus (b)(iii) follows.
Conversely, suppose (b)(ii) and (b)(iii) hold. We want to show that J = m_T^2 . Let xi , xj ∈ T . We have to show that xi xj ∈ J. It follows from the description of J that xk^2 ∈ J for all xk ∈ T . Thus we may assume that i ≠ j. If {i, j} is not an edge of any Fk , then by definition it is an edge of G′ , and hence xi xj ∈ IG′ ⊂ J. On the other hand, if {i, j} is an edge of Fk for some k, then i, j ≠ jk , and hence xi xj ∈ Pk^2 ⊂ J. Thus the desired conclusion follows.
Let G be a chordal bi-CM graph as in Theorem 2.2(b) with m > 1. We call the
complete graph G′′ which is the restriction of G to [n] \ {j1 , . . . , jm } the center of G.
The following picture shows, up to isomorphism, all bi-CM chordal graphs whose
center is the complete graph K4 on 4 vertices:
Figure 2.
3. Generic Bi-CM graphs
As we have already seen in the first section, the Alexander dual J = IG∨ of the edge
ideal of a bi-CM graph G is a Cohen–Macaulay ideal of codimension 2 with linear
resolution. The ideal J may have several distinct relation matrices with respect to
the unique minimal monomial set of generators of J. As shown in [3], one may
attach to each of the relation matrices of J a tree as follows: let u1 , . . . , um be the
unique minimal set of generators of J. Let A be one of the relation matrices of J.
Because J has a linear resolution, the generating relations of J may be chosen all of
the form xk ui − xl uj = 0. This implies that in each row of the (m − 1) × m-relation
matrix A there are exactly two non-zero entries (which are variables with different
signs). We call such relations, relations of binomial type.
Example 3.1. Consider the bi-CM graph G on the vertex set [5] and edges {1, 2}, {2, 3}, {3, 1}, {2, 4}, {3, 4}, {4, 5}, as displayed in Figure 3.
Figure 3.
The ideal J = IG∨ is generated by u1 = x2 x3 x4 , u2 = x1 x3 x4 , u3 = x2 x3 x5 and u4 = x1 x2 x4 . The relation matrices with respect to u1 , u2 , u3 and u4 are the matrices
A1 = ( x1   −x2    0     0   )
     ( x5    0    −x4    0   )
     ( x1    0     0    −x3  )
and
A2 = ( x1   −x2    0     0   )
     ( x5    0    −x4    0   )
     ( 0     x2    0    −x3  ).
Coming back to the general case, one assigns to the relation matrix A the following
graph Γ: the vertex set of Γ is the set V (Γ) = {1, 2, . . . , m}, and {i, j} is said to
be an edge of Γ if and only if some row of A has non-zero entries for the ith- and
jth-component. It is remarked in [3] and easy to see that Γ is a tree. This tree is in
general not uniquely determined by G.
In our Example 3.1 the relation tree of A1 is
Figure 4.
while the relation tree of A2 is
Figure 5.
Now let J be any codimension 2 Cohen-Macaulay monomial ideal with linear
resolution. Then, as observed in Section 1, J ∨ = IG where G is a bi-CM graph.
Now we follow Naeem [13] and define for any given tree T on the vertex set [m] =
{1, . . . , m} with edges e1 , . . . , em−1 the (m − 1) × m-matrix AT whose entries akl are
defined as follows: we assign to the kth edge ek = {i, j} of T with i < j the kth row
of AT by setting
(2)   akl = xij if l = i,   akl = −xji if l = j,   and akl = 0 otherwise.
The matrix AT is called the generic matrix attached to the tree T .
By the Hilbert-Burch theorem [2], the matrix AT is the relation matrix of the ideal
JT of maximal minors of AT , and JT is a Cohen-Macaulay ideal of codimension 2
with linear resolution.
We let GT be the graph such that IGT = J ∨ , and call GT the generic bi-CM graph
attached to T .
Our discussion so far yields
Proposition 3.2. For any tree T , the graph GT is bi-CM.
8
In order to describe the vertices and edges of GT , let i and j be any two vertices
of the tree T . There exists a unique path P : i = i0 , i1 , . . . , ir = j from i to j. We
set b(i, j) = i1 and call b(i, j) the begin of P , and set e(i, j) = ir−1 and call e(i, j)
the end of P .
It follows from [13, Proposition 1.4] that IGT is generated by the monomials
xib(i,j) xje(i,j). Thus the vertex set of the graph GT is given as
V (GT ) = {(i, j), (j, i) : {i, j} is an edge of T }.
In particular, {(i, k), (j, l)} is an edge of GT if and only if there exists a path P from
i to j such that k = b(i, j) and l = e(i, j).
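This description can be turned directly into a computation. The following sketch (assuming the networkx library is available) builds G_T from a tree T by reading b(i, j) and e(i, j) off the unique paths of T; the tree used below is the star that appears as the relation tree T1 of Example 3.1.

import networkx as nx
from itertools import combinations

# Sketch: build the generic bi-CM graph G_T from a tree T on [m],
# using b(i,j) (second vertex) and e(i,j) (second-to-last vertex) of the i-j path.
def generic_bicm_graph(tree_edges):
    T = nx.Graph(tree_edges)
    G = nx.Graph()
    for i, j in T.edges:
        G.add_node((i, j)); G.add_node((j, i))
    for i, j in combinations(T.nodes, 2):
        p = nx.shortest_path(T, i, j)              # unique path in a tree
        G.add_edge((i, p[1]), (j, p[-2]))          # {(i, b(i,j)), (j, e(i,j))}
    return G

T1 = [(1, 2), (1, 3), (1, 4)]                      # star on 4 vertices
G = generic_bicm_graph(T1)
print(len(G.nodes), "vertices,", len(G.edges), "edges")   # 2(m-1) and C(m,2)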
In Example 3.1, let T1 and T2 be the relation trees of A1 and A2 , respectively.
Then the generic matrices corresponding to these trees are
B1 = ( x12   −x21    0      0    )
     ( x13    0     −x31    0    )
     ( x14    0      0     −x41  )
and
B2 = ( x12   −x21    0      0    )
     ( x13    0     −x31    0    )
     ( 0      x24    0     −x42  ).
The generic graphs corresponding to the trees T1 and T2 are displayed in Figure 6.
Figure 6.
It follows from this description that GT has 2(m − 1) vertices. Since GT is bi-CM, the number of edges of GT is \binom{n−c+1}{2}, see Corollary 1.2. Here n − c is the degree of the generators of IG∨ , which is m − 1. Hence GT has \binom{m}{2} edges. Among the edges of GT are in particular the m − 1 edges {(i, j), (j, i)} where {i, j} is an edge of T .
Proposition 3.3. Let A be the relation matrix of a codimension 2 Cohen-Macaulay
monomial ideal J with linear resolution, and assume that all the variables appearing
in A are pairwise distinct. Let T be the relation tree of A. Then J is isomorphic to
JT and J admits a unique relation tree, namely T .
Proof. Since all variables appearing in A are pairwise distinct, we may rename the variables appearing in a binomial type relation and call them as in the generic matrix xij and xji . Then A becomes AT and this shows that J ≅ JT .
To prove the uniqueness of the relation tree, we first notice that the shifts in the multigraded free resolution of J are uniquely determined and independent of the particular choice of the relation matrix A. A possibly different relation matrix A′ can arise from A only by row operations with rows of the same multidegree. Let r1 , . . . , rl be rows of A with the same multidegree corresponding to binomial type relations, and fix a column j. Then the non-zero jth entries of each of the ri must be the same, up to sign. Since we assume that the variables appearing in A are pairwise distinct, it follows that l = 1. In particular, there is, up to the order of the rows, only one relation matrix with rows corresponding to binomial type relations. This shows that T is uniquely determined.
4. Inseparable models of Bi-CM graphs
In order to state the main result of this paper we recall the concept of inseparability introduced by Fløystad et al in [7], see also [12].
Let S = K[x1 , . . . , xn ] be the polynomial ring over the field K and I ⊂ S a
squarefree monomial ideal minimally generated by the monomials u1 , . . . , um . Let y
be an indeterminate over S. A monomial ideal J ⊂ S[y] is called a separation of I
for the variable xi if the following holds:
(i) the ideal I is the image of J under the K-algebra homomorphism S[y] → S
with y 7→ xi and xj 7→ xj for all j;
(ii) xi as well as y divide some minimal generator of J;
(iii) y − xi is a non-zero divisor of S[y]/J.
The ideal I is called separable if it admits a separation, otherwise inseparable. If J is
an ideal which is obtained from I by a finite number of separation steps, then we say
that J specializes to I. If moreover, J is inseparable, then J is called an inseparable
model of I. Each monomial ideal admits an inseparable model, but in general not
only one. For example, the separable models of the powers of the graded maximal
ideal of S have been considered by Lohne [12].
Forming the Alexander dual behaves well with respect to specialization and separation.
Proposition 4.1. Let I ⊂ S be a squarefree monomial ideal. Then the following
holds:
(a) If J specializes to I, then J ∨ specializes to I ∨ .
(b) The ideal I is separable if and only I ∨ is separable.
Proof. (a) It follows from [7, Proposition 7.2] that if L ⊂ S[y] is a monomial ideal such that y − xi is a regular element on S[y]/L with (S[y]/L)/(y − xi )(S[y]/L) ≅ S/I, then y − xi is a regular element on S[y]/L∨ and (S[y]/L∨ )/(y − xi )(S[y]/L∨ ) ≅ S/I ∨ . Repeated applications of this fact yield the desired result.
(b) We may assume that the ideal L as in (a) is a separation of I with respect to
xi . Since (a) holds, it remains to show that y as well as xi divides some generator
of L∨ . By assumption this is the case for L. Suppose that y does not divide any
generator of L∨ . Then it follows from the definition of the Alexander dual that y also
does not divide any generator of (L∨ )∨ . This is a contradiction, since L = (L∨ )∨ .
Similarly it follows that xi divides some generator of L∨ .
We now apply these concepts to edge ideals. Let G be a graph on the vertex
set [n]. We call G separable if IG is separable, and otherwise inseparable. Let J
be a separation of IG for the variable xi . Then by the definition of separation,
J is again an edge ideal, say J = IG′ where G′ is a graph with one more vertex
than G. The graph G is obtained from G′ by identifying this new vertex with
the vertex i of G. Algebraically, this identification amounts to saying that S/IG ≅ (S ′ /IG′ )/(y − xi )(S ′ /IG′ ), where S ′ = S[y] and y − xi is a non-zerodivisor of S ′ /IG′ . In
particular, it follows that IG and IG′ have the same graded Betti-numbers. In other
words, all important homological invariants of IG and IG′ are the same. It is therefore
of interest to classify all inseparable graphs. An attempt for this classification is
given in [1].
Example 4.2. Let G be the triangle and G′ be the line graph displayed in Figure 7.
Figure 7. A triangle and its inseparable model
Then IG′ = (x1 x2 , x1 x3 , x2 x4 ). Since Ass(IG′ ) = {(x1 , x2 ), (x1 , x4 ), (x2 , x3 )}, it
follows that x3 − x4 is a non-zero divisor on S ′ /IG′ where S ′ = K[x1 , x2 , x3 , x4 ].
Moreover, (S ′ /IG′ )/(x3 − x4 )(S ′ /IG′ ) ≅ S/IG . Therefore, the triangle in Figure 7
is obtained as a specialization from the line graph in Figure 7 by identifying the
vertices x3 and x4 .
We denote by G(i) the complementary graph of the restriction GN (i) of G to
N(i) where N(i) = {j : {j, i} ∈ E(G)} is the neighborhood of i. In other words,
V (G(i) ) = N(i) and E(G(i) ) = {{j, k} : j, k ∈ N(i) and {j, k} 6∈ E(G)}. Note that
G(i) is disconnected if and only if N(i) = A ∪ B, where A, B ≠ ∅, A ∩ B = ∅ and all
vertices of A are adjacent to those of B.
Here we will need the following result of [1, Theorem 3.1].
Theorem 4.3. The following conditions are equivalent:
(a) The graph G is inseparable;
(b) G(i) is connected for all i.
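The criterion of Theorem 4.3 is easy to check by computer; the following sketch (assuming the networkx library is available) tests condition (b) on the two graphs of Example 4.2.

import networkx as nx

# Sketch: check inseparability via Theorem 4.3, i.e. the complementary graph G^(i)
# of the neighbourhood-induced subgraph is connected for every vertex i.
def is_inseparable(G):
    for i in G.nodes:
        N = list(G.neighbors(i))
        Gi = nx.complement(G.subgraph(N))          # the complementary graph G^(i)
        if len(N) > 1 and not nx.is_connected(Gi):
            return False
    return True

triangle = nx.Graph([(1, 2), (2, 3), (3, 1)])
line = nx.Graph([(1, 2), (1, 3), (2, 4)])          # its inseparable model (Example 4.2)
print(is_inseparable(triangle), is_inseparable(line))   # expected: False, True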
Now we are ready to state our main result.
Theorem 4.4. (a) Let T be a tree. Then GT is an inseparable bi-CM graph.
(b) For any inseparable bi-CM graph G, there exists a unique tree T such that G ≅ GT .
(c) Let G be any bi-CM graph. Then there exists a tree T such that GT is an
inseparable model of G.
Proof. (a) By Corollary 3.2, GT is a bi-CM graph. In order to see that GT is
inseparable we apply the criterion given in Theorem 4.3, and thus we have to prove
that for each vertex (i, j) of GT and for each disjoint union N((i, j)) = A ∪ B of the
neighborhood of (i, j) for which A 6= ∅ =
6 B, not all vertices of A are adjacent to
those of B.
As follows from the discussion in Section 3,
N((i, j)) = {(k, l) : there exists a path from i to l, and j = b(i, l) and k = e(i, l)}.
In particular, (j, i) ∈ N((i, j)). Let N((i, j)) = A ∪ B, as above. We may assume
that (j, i) ∈ A. Since T is a tree, there is no path from j to any l with (k, l) ∈ N((i, j)), because otherwise we would have a cycle in T . This shows that
(j, i) is connected to no vertex in B, as desired.
(b) Let A be a relation matrix of J = IG∨ and T the relation tree of A. The nonzero entries of A are variables with sign ±1. Say the kth row of A has the non-zero
entries akik and akjk with ik < jk . We may assume that the variable representing
akik has a positive sign while that akjk has a negative sign, and that this is so for
each row. We claim that the variables appearing in the non-zero entries of A are
pairwise distinct. By Proposition 3.3 this then implies that T is the only relation
tree of J and that G ≅ GT .
In order to prove the claim, we consider the generic matrix AT corresponding to T .
Let S ′ be the polynomial ring over S in the variables xij and xji with {i, j} ∈ E(T ).
For each k we consider the linear forms ℓk1 = xik jk − akik and ℓk2 = xjk ik − akjk .
For example, for the matrix A2 in Example 3.1 the linear forms are ℓ11 = x12 − x1 ,
ℓ12 = x21 − x2 , ℓ21 = x13 − x5 , ℓ22 = x31 − x4 , ℓ31 = x24 − x2 and ℓ32 = x42 − x3 .
We let ℓ be the sequence of linear forms ℓ11 , ℓ12 , . . . , ℓm−1,1 , ℓm−1,2 in S ′ . Then (S ′ /JT S ′ )/(ℓ)(S ′ /JT S ′ ) ≅ S/J. Since both ideals, J as well as JT , are Cohen-Macaulay ideals of codimension 2, it follows that ℓ is a regular sequence on S ′ /JT S ′ .
Thus, assuming the variables appearing in the non-zero entries of A are not all
pairwise distinct, we see that J is separable. Indeed, suppose that the variable xk
appears at least twice in the matrix. Then we replace only one of the xk by the
corresponding generic variable xij to obtain the matrix A′ . Let J ′ be the ideal of
maximal minors of A′ . It follows from the above discussions that xij −xk is a regular
element of S[xij ]/J ′ . In order to see that J ′ is a separation of J it remains to be
shown that xij as well as xk appear as factors of generators of J ′ . Note that J ′ is
a specialization of JT . The minors of AT which are the generators of JT are the monomials ∏_{i=1, i≠j}^{m+1} x_{i,b(i,j)} for j = 1, . . . , m + 1, see [13, Proposition 1.2]. From this
description of the generators of JT it follows that all entries of AT appear as factors
of generators of JT . Since J ′ is a specialization of JT , the same holds true for J ′ ,
and since xij as well as xk are entries of A′ , the desired conclusion follows.
Now since we know that J is separable, Proposition 4.1(b) implies that G is
separable as well. This is a contradiction.
(c) Let A be a relation matrix of J = IG∨ and T the corresponding relation tree.
As shown in the proof of part (b), JT specializes to J, and hence IGT specializes
to IG , by Proposition 4.1(a). By part (a), the graph GT is inseparable. Thus we
conclude that GT is an inseparable model of G, as desired.
References
[1] K. Altmann, M. Bigdeli, J. Herzog, D. Lu, Algebraically rigid simplicial complexes and graphs,
arXiv: 1503.08080[].
[2] W. Bruns and J. Herzog, "Cohen–Macaulay rings" (Revised edition), Cambridge Studies in
Advanced Mathematics 39, Cambridge University Press, 1998.
[3] W. Bruns and J. Herzog, On multigraded resolutions, Math. Proc. Camb. Phil. Soc. 118
(1995), 245–257.
[4] W. Bruns and U. Vetter, Determinantal rings, Springer Verlag, Graduate texts in Mathematics
150 (1995).
[5] J. A. Eagon, V. Reiner, Resolutions of Stanley-Reisner rings and Alexander duality, J. Pure
Appl. Algebra 130 (1998), 265-275.
[6] D. Eisenbud, Commutative Algebra with a view to Algebraic geometry, Springer Verlag, 1995.
[7] G. Fløystad, B. M. Greve, J. Herzog, Letterplace and co-letterplace ideals of posets,
arXiv:1501.04523[].
[8] G. Fløystad, J. E. Vatne, (Bi-)Cohen–Macaulay simplicial complexes and their associated
coherent sheaves, Comm. Algebra 33 (2005), 3121–3136.
[9] J. Herzog and T. Hibi, Monomial Ideals. GTM 260. Springer 2010.
[10] J. Herzog and T. Hibi, Distributive lattices, Bipartite graphs and Alexander duality, J. Algebraic Combin., 22(2005), 289–302.
[11] J. Herzog and T. Hibi, X. Zheng, Cohen–Macaulay chordal graphs, J. Combin. Theory Ser.
A, 113(2006), 911–916.
[12] H. Lohne, The many polarizations of powers of maximal ideals, arXiv:1303.5780 (2013).
[13] M. Naeem, Cohen–Macaulay monomial ideals of codimension 2, Manuscripta math,
127(2008), 533–545.
[14] R. H. Villarreal, Monomial Algebras, Second Edition, Monographs and Research Notes in
Mathematics, CRC Press, 2015.
Jürgen Herzog, Fachbereich Mathematik, Universität Duisburg-Essen, Fakultät
für Mathematik, 45117 Essen, Germany
E-mail address: [email protected]
Ahad Rahimi, Department of Mathematics, Razi University, Kermanshah, Iran
E-mail address: [email protected]
| 0 |
GANs for Biological Image Synthesis
arXiv:1708.04692v2 [] 12 Sep 2017
Anton Osokin
INRIA/ENS∗, France
HSE†, Russia
Anatole Chessel
École Polytechnique‡,
France
Rafael E. Carazo Salas
University of Bristol, UK
Federico Vaggi
ENS∗, France
Amazon, USA
Abstract
In this paper, we propose a novel application of Generative Adversarial Networks (GAN) to the synthesis of cells
imaged by fluorescence microscopy. Compared to natural
images, cells tend to have a simpler and more geometric
global structure that facilitates image generation. However,
the correlation between the spatial pattern of different fluorescent proteins reflects important biological functions, and
synthesized images have to capture these relationships to be
relevant for biological applications. We adapt GANs to the
task at hand and propose new models with casual dependencies between image channels that can generate multichannel images, which would be impossible to obtain experimentally. We evaluate our approach using two independent
techniques and compare it against sensible baselines. Finally, we demonstrate that by interpolating across the latent
space we can mimic the known changes in protein localization that occur through time during the cell cycle, allowing
us to predict temporal evolution from static images.
Figure 1. Real (left) and generated (right) images of fission yeast cells, with the protein Bgs4 depicted in the red channel and 6 other proteins (Alp14, Arp3, Cki2, Mkh1, Sid2, Tea1) depicted in the green channel. The synthetic images were generated with our star-shaped GAN. The star-shaped model can generate multiple green channels aligned with the same red channel, whereas the training images have only one green channel.
1. Introduction
In the life sciences, the last 20 years saw the rise of light fluorescence microscopy as a powerful way to probe biological events in living cells and organisms with unprecedented resolution. The need to analyze quantitatively this deluge of data has given rise to the field of bioimage informatics [31] and is the source of numerous interesting and novel data analysis problems, which current machine learning developments could, in principle, help solve.
Generative models of natural images are among the most long-standing and challenging goals in computer vision. Recently, the community has made significant progress in this task by adopting neural network machinery. Examples of recent models include denoising autoencoders [2], variational autoencoders [22], PixelCNNs [46] and Generative Adversarial Networks (GANs) [14].
GANs [14] are a family of successful models, which have recently received widespread attention. Unlike most other generative models, GANs do not rely on training objectives connected to the log likelihood. Instead, GAN training can be seen as a minimax game between two models: the generator aims to output images similar to the training set given random noise, while the discriminator aims to distinguish the output of the generator from the training set.
Originally, GANs were applied to the MNIST dataset of handwritten digits [23, 14]. The subsequent DCGAN model [40] was applied to the CelebA dataset [26] of human faces and the LSUN [54, 40] and ImageNet [7] datasets of natural images. We are not aware of any works applying GANs to biological images.
We work with a recently created bioimage dataset used to extract functional relationships between proteins, called the LIN dataset [9], comprising 170,000 fluorescence microscopy images of cells. In the LIN dataset, each image corresponds to a cell and is composed of signals from two independent fluorescence imaging channels (“red” and “green”), corresponding to the two different proteins tagged with red or green-emitting fluorophores, respectively.
∗ DI École normale supérieure, CNRS, PSL Research University, Paris
† National Research University Higher School of Economics, Moscow
‡ LOB, École Polytechnique, CNRS, INSERM, Université Paris-Saclay
In the LIN dataset, the red channel signal always corresponds to a protein named Bgs4, which localizes to the
areas of active growth of cells. The green channel signal instead corresponds to any of 41 different “polarity factors”,
that is proteins that mark specific areas of the cells’ cortex
that help define a cell’s geometry. Polarity factors include
proteins like Alp14, Arp3, Cki2, Mkh1, Sid2 or Tea1
(see Figure 1 for image examples), each of which controls
the same biological process “cellular polarity” albeit each
in a slightly different way. Each of the green-labeled polarity factors was imaged independently of the others. The
biological aim of the LIN study is to investigate how those
polarity factors (or proteins) interact with one another.
In this paper, we present a novel application of GANs to
generate biological images. Specifically, we want to tackle
two concrete limitations of large scale fluorescent imaging
screens: we want to use the common information contained
in the red channel to learn how to generate a cell with several of the green-labeled proteins together. This would allow us to artificially predict how the localizations of those
(independently imaged) proteins might co-exist in cells if
they had been imaged together and circumvent the current
technical limitations of being able to only image a limited
number of signal channels at the same time. Second, taking
advantage of the relationship between Bgs4 and the cell
cycle stage, we want to study the dynamical changes in cellular localization that proteins undergo through time as cells
grow and divide.
To accomplish this, we make several contributions. We
modify the standard DCGAN [40] architecture by substituting the interdependence of the channels with the causal
dependence of the green on the red, allowing us to observe
multiple modes of green signal for a single red setting. Observing the mode collapse effect of GANs [32, 44] for our
separable architecture, we incorporate the recent Wasserstein GAN (WGAN-GP) objective [1, 15]. We propose
two approaches to generate multi-channel images: regular
WGAN-GP trained on multi-channel images, where extra
channels for training are mined by nearest neighbor search
in the training set, and a novel star-shaped generator trained
directly on the two-channel images. We carefully evaluate our models using two quantitative techniques: the neural network two-sample test (combining ideas from [28]
and [15]) and by reconstructing samples in a held out test set
with the optimization approach of [32]. For reproducibility,
we make the source code and data available online.1
This paper is organized as follows. In Section 2, we discuss related works. Section 3 reviews the relevant biological background for our application. In Section 4, we review
GANs and present our modeling contributions. We present
the experimental evaluation in Section 5 and conclude in
Section 6.
1 https://github.com/aosokin/biogans
2. Related Work
Generative Adversarial networks (GANs). Since the
seminal paper by Goodfellow et al. [14] of 2014 (see
also [13] for a detailed review), GANs are becoming an increasingly popular model for learning to generate with the
loss functions learned jointly with the model itself. Models with adversarial losses have been used in a wide range
of applications, such as image generation [8, 40], domain
adaptation [12], text-to-image synthesis [41], synthesis of
3D shapes [51] and texture [25], image-to-image translation [19], image super resolution [24] and even generating
radiation patterns in particle physics [6]. However, these
models suffer from issues such as mode collapse and oscillations during training, making them challenging to use in
practice. The community is currently tackling these problems from multiple angles. Extensive effort has been placed
on carefully optimizing the architecture of the network
[40, 42] and developing best practices to optimize the training procedure2. Another active area of research is improving
the training objective function [35, 4, 39, 55, 1, 32, 44, 15].
In this paper, we build on the DCGAN architecture [40]
combined with the Wasserstein loss [1, 15], where the latter is used to help with the mode collapse issue, appearing
especially in our separable setting.
Conditioning for GANs. Starting from conditioning on
the class labels [33, 8, 36, 11], researchers have extended
conditioning to user scribbles [56] and images [48, 19, 57].
While the quality of images generated by [48, 19, 57] is
high, their models suffer from conditional mode collapse,
i.e., given the first (source) image there is very little or no
variety in the second (target image). This effect might be
related to the fact that the dataset contained only one target image available for each source image, so the model
has only indirect supervision for generating multiple conditioned images. We have applied the pix2pix method of [19]
to the LIN dataset and it learned to produce high-quality
green images given the red input. However, it was unable to
generate multiple realistic green images for one red input.
Given the difficulty in learning robust latent spaces when
conditioning on an image, we opted for an alternate approach. We propose a new architecture for the generator,
where the red channel and green channels are given independent random noise, and only the red channel is allowed
to influence the green channel, see Figure 2 (right).
Factors of variation. Chen et al. [4] and Mathieu et
al. [30] used unsupervised methods that encourage disentangling factors of variation in the learned latent spaces,
e.g., separating the numerical value of a handwritten digit
from its writing style. In contrast to these works, we do not
rely on unsupervised training to discover factors of variations, but explicitly embed the separation into the model.
2 https://github.com/soumith/ganhacks
Analysis and synthesis of biological images. With large
scale imaging studies becoming more common in biology,
the automated analysis of images is now crucial in many
studies to prove the existence of an effect, process large
datasets or link with models and simulation [31, 5]. Although the field has only recently embraced deep learning,
neural networks are now starting to make a splash, mainly
in classical discriminative settings [47].
While, to our knowledge, this work is the first reported
use of GANs on samples from fluorescent microscopy, generative models have been widely used in biology [34]. For
example, Johnson et al [20] learned to generate punctuate
patterns in cells (conditional on microtubule localization)
showing the potential of those methods in studying the relative sub-cellular positions of several proteins of interest.
Recently, sharing of large biological datasets has greatly
improved [27]. Further, EBI has made a large investment
to develop the IDR (Image Data Resource) [50], a database
built on top of open source tools to facilitate the sharing of
terabyte sized datasets with complex metadata.
3.2. Fission Yeast Cells
Fission yeast (Schizosaccharomyces pombe) cells are rod
shaped unicellular eukaryotes with spherical hemisphere
caps. They are born 7 µm long and 4 µm wide, and grow
in length to 14 µm while maintaining their width constant.
Newly born fission yeast cells start by growing only at the
pre-existing end until they reach a critical size, and then
switch to bipolar (from the two sides) growth. Bipolar
growth continues until cells reach their final length, when
they stop growing and start to form a cytokinetic ring in the
middle, which is responsible for cleaving the mother cells
into two daughters [38]. Interestingly, for most of the cell
cycle the length of the cell is a good proxy for its “age”, i.e.
the time it has spent growing since its “birth”.
Bgs4, the protein tagged in the red channel, is responsible for cell wall remodeling, and localizes to areas of active
growth (see Figure 1 for examples of images). Thus, by observing Bgs4, one can accurately infer growth cycle stage,
and predict where cell growth is occurring.
3.3. The LIN Dataset
3. Biological Background
3.1. Fluorescent Imaging
Fluorescence microscopy is based on fluorescent compounds, i.e., compounds which can absorb light at given
wavelength (the absorption spectrum) and re-emit it almost
immediately at a slightly different wavelength (the emission spectrum). In the case of fluorescent proteins (FPs),
of which the Green Fluorescent Protein (GFP) [3, 45] is the
first and most widely used one, the fluorescing compound
is attached to the protein of interest via genetic engineering. Many FPs of various absorption and emission spectra
exist, e.g., Red Fluorescent Protein (RFP) [43]. By genetically tagging different proteins of interest with FPs of different color, one can image them in the same cell at the
same time and thus investigate their co-localization. However, the number of proteins that can be tagged and imaged
at the same time is limited to 3-4 due to the limited number
of FPs with non-overlapping absorption spectra.
Multi-channel fluorescent images are very different from
natural images. In natural images, color is determined by
the illumination and the properties of a particular material
in the scene. In order to generate realistic natural samples,
a GAN must capture the relationship between the materials
that make up a particular object and its hues. In contrast, in
fluorescent images, the intensity of light in a given channel
corresponds to the local concentration of the tagged protein, and the correlation between signals in different channels represents important information about the relationship
between proteins, but the color does not reflect any intrinsic
property about the protein itself.
All experiments in this paper make use of a recent dataset
of images of fission yeast cells, which was originally produced to study polarity networks [9]. The LIN dataset consists of around 170,000 images, with each image being centered on one cell; cell segmentation was performed separately (see [9] for details) and the corresponding outline is also available. Each image is a 3D stack of 2D images where each pixel corresponds to a physical size of
100nm; each z-plane is distant by 300nm. Every image is
composed of two channels, informally called the “red” and
the “green”, where light emitted at a precise wavelength is
recorded. In this dataset two types of fluorescent-tagged
proteins are used: Bgs4 in the red channel, and one of 41
different polarity regulating proteins in the green channel.
A full description of all tagged proteins is beyond the scope
of this paper: we refer interested readers to [29, 9].
In this paper, we concentrate on a subset of 6 different
polarity factors, spanning a large set of different cellular localizations. This gives us 26,909 images of cells, which
we, for simplicity, center crop and resize to resolution of
48 × 80.
4. GANs for Image Generation
4.1. Preliminaries
GAN. The framework of generative adversarial networks [14, 13] is formulated as a minimax two-player game
between two neural networks: generator and discriminator. The generator constructs images given random noise
whereas the discriminator tries to classify if its input image
is real (from the training set) or fake (from the generator).
The goal of the generator is to trick the discriminator, such
that it cannot easily classify. The discriminator is often referred to as the adversarial loss for training the generator.
More formally, consider a data-generating distribution IPd and a training set of images x ∈ X coming from
it. The generator G(z; θG ) is a neural network parameterized by θG that takes random noise z from distribution IPz
as input and produces an image xfake ∈ X . The discriminator D(x; θD ) is a neural network parameterized by θD that
takes either a training image x or a generated image xfake
and outputs a number in the segment [0, 1], where zero is
associated with fake images and one – with the real images.
As introduced in [14], the key quantity is the negative cross-entropy loss on the discriminator output:
L(θD , θG ) = IEx∼IPdata log D(x; θD ) + IEz∼IPz log(1 − D(G(z; θG ); θD )).   (1)
The discriminator maximizes (1) w.r.t. θD and the generator, at the same time, minimizes (1) w.r.t. θG . In practice,
both optimization tasks are attacked simultaneously by alternating between the steps of the two optimizers.
As noted by [14], the objective log(1 −
D(G(z; θG ); θD )) often leads to saturated gradients
at the initial stages of the training process when the
generator is ineffective, i.e., its samples are easy to
discriminate from the real data. One practical trick to
avoid saturated gradients is to train the generator by
maximizing log D(G(z; θG ); θD ) instead.
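The following minimal, self-contained sketch shows what this trick looks like in a typical alternating training loop (PyTorch, with toy data and tiny networks purely for illustration; this is not the architecture used in our experiments).

import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal, self-contained GAN training sketch with the non-saturating generator
# loss (toy 2-D data and tiny MLPs, purely for illustration of the updates).
z_dim, x_dim = 8, 2
G = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))

for step in range(200):
    x_real = torch.randn(64, x_dim) + 3.0              # toy "real" data
    x_fake = G(torch.randn(64, z_dim))

    # Discriminator step: maximize log D(x) + log(1 - D(G(z))).
    d_real, d_fake = D(x_real), D(x_fake.detach())
    loss_D = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: maximize log D(G(z)) rather than minimizing log(1 - D(G(z))).
    d_fake = D(x_fake)
    loss_G = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()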
Goodfellow et al. [14] showed that the minimax formulation (1) can be reformulated via minimization of the Jensen-Shannon (JS) divergence3 between the data-generating distribution IPd and the distribution IPG induced by IPz and G.
For the architectures of both the generator and the discriminator, we largely reuse a successful version of Radford
et al. [40] called DCGAN. The generator of DCGAN (see
Figure 2, left) is based on up-convolutions [10] interleaved
with ReLu non-linearity and batch-normalization [17]. We
refer to [40] for additional details.
Wasserstein GAN. Recently, Arjovsky et al. [1] have
demonstrated that in some cases the JS divergence behaves
badly and cannot provide any useful direction for training,
e.g., when it is discontinuous. To overcome these degeneracies, they consider the earth mover’s distance (equivalent to
the 1-st Wasserstein distance) between the distributions
W (IPd , IPG ) = inf_{IP ∈ Π(IPd ,IPG )} IE_{(x,x′ )∼IP} ‖x − x′ ‖,   (2)
where Π(IPd , IPG ) is the set of all joint distributions IP on x and x′ whose marginals are IPd and IPG , respectively.
Intuitively, the distance (2) indicates the cost of the optimal
3 The Jensen-Shannon divergence between the two distributions is a symmetrized version of the Kullback-Leibler divergence, i.e., JS(IPd , IPG ) = (1/2) KL(IPd ‖ IPM ) + (1/2) KL(IPG ‖ IPM ), where IPM = (1/2)(IPG + IPd ) is the averaged distribution.
Figure 2. Architectures of the DCGAN generator (left) and our separable generator (right); both are stacks of up-convolutions with batch normalization and ReLU non-linearities applied to Gaussian noise.
movement of the probability mass from IPd to IPG . According to [1] by using duality, one can rewrite (2) as
W (IPd , IPG ) = sup_{D∈C^1} ( IEx∼IPd D(x) − IEx′ ∼IPG D(x′ ) ),   (3)
where C^1 is the set of all 1-Lipschitz functions D : X → R.
Optimizing w.r.t. the set C 1 is complicated. As a practical approximation to the set of all 1-Lipschitz functions,
Arjovsky et al. [1] suggest to use neural networks D(x; θD )
with all parameters θD clipped to a fixed segment. Very recently, Gulrajani et al. [15] proposed a surrogate objective
to (3), which is based on the L2 -distance between the norm
of the discriminator gradient at specific points and one. In
all, we arrive at the minimax game
W (θD , θG ) = IEz∼IPz D(G(z; θG ); θD )
− IEx∼IPdata D(x; θD ) + R(θD ), (4)
where R is the regularizer (see [15] for details). The objective (4) is very similar to the original game of GANs (1),
but has better convergence properties. In what follows, we
refer to the method of [15] as WGAN-GP.
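A sketch of the gradient penalty term R(θD) as it is commonly implemented is given below (PyTorch; the penalty weight λ = 10 and the interpolation scheme follow [15], but the code is an illustrative sketch rather than a verbatim reproduction).

import torch

# Sketch of the WGAN-GP penalty R(theta_D): penalize deviations of the critic's
# gradient norm from 1 at points interpolated between real and generated samples.
def gradient_penalty(D, x_real, x_fake, lam=10.0):
    eps = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)))
    x_hat = (eps * x_real + (1.0 - eps) * x_fake).requires_grad_(True)
    d_hat = D(x_hat)
    grads, = torch.autograd.grad(d_hat.sum(), x_hat, create_graph=True)
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

During training, this term is added to the critic part of the objective (4).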
4.2. Model Extensions
In this section, we present our modeling contributions.
First, we describe our approach to separate the red and green
channels of the generator. Second, we discuss a way to train
a multi-channel generator using the two-channel data in the
LIN dataset. Finally, we propose a new star-shaped architecture that uses the red-green channel separation to obtain
multiple channels in the output.
Channel separation. The key idea of channel separation is to split the filters of all the up-convolutional layers, and the corresponding features, into two halves. The first half of the filters is responsible for generating the red channel, while the second half generates the green channel. To make sure the green channel matches the red one, we use one-way connections from the red convolutional filters towards the green ones. Figure 2 (right) depicts our modification in comparison to DCGAN (left).
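A minimal sketch of such a block is given below: the red and green filter banks are kept separate, and the green branch is additionally fed the red features through a one-way concatenation. The layer sizes and the use of ConvTranspose2d are illustrative assumptions and do not reproduce the exact DCGAN configuration.

import torch
import torch.nn as nn

class SeparableUpBlock(nn.Module):
    # Up-convolutional block with separated red/green filters: the red branch
    # depends only on red features, while the green branch is conditioned on
    # the red features through a one-way concatenation.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up_red = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # green branch receives its own features concatenated with the red ones
        self.up_green = nn.Sequential(
            nn.ConvTranspose2d(2 * in_ch, out_ch, 4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, red_feat, green_feat):
        red_out = self.up_red(red_feat)
        green_out = self.up_green(torch.cat([green_feat, red_feat], dim=1))
        return red_out, green_out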
Multi-channel models. The LIN dataset [9] contains
only two-channel images: the red channel and one type of green
at a time. Obtaining up to 4 channels simultaneously from
a set of 40 proteins (a fixed red and 3 greens) would require
the creation of nearly 60,000 yeast strains. Scaling even
higher is currently impossible with this imaging technique
due to the limited number of FPs with non-overlapping absorption spectra. Because of these constraints, training the
generator only on a subset of channels is a task of practical importance. The first approach we present consists in
training a multi-channel GAN using an artificial training set
of multi-channel images created from the real two-channel
images. We proceed as follows: for each two-channel image, we search in every other class for its nearest-neighbors
(using L2 -distance) in the red channel. Then, we create a
new sample by combining the original image with the green
channels of its nearest neighbors in other classes.
We can then use this dataset to train a multi-output DCGAN. The only difference in the architecture is that the generator outputs c+1 channels, where c is the number of green
channels used in the experiment, and the discriminator takes
(c + 1)-channel images as input.
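A sketch of the nearest-neighbor pairing used to build this artificial training set is given below (NumPy); the array layout, with the red channel at index 0 and the green channel at index 1, is an assumption about how the two-channel LIN images are stored, and normalization details are omitted.

import numpy as np

def build_multichannel_sample(image, other_class_images):
    # image: array of shape (2, H, W) -- channel 0 is red, channel 1 is green.
    # other_class_images: dict mapping class name -> array of shape (N, 2, H, W).
    # Returns (2 + number of other classes, H, W): the shared red channel, the
    # original green channel, and one green channel per other class taken from
    # that class's red-channel nearest neighbor.
    red = image[0]
    channels = [red, image[1]]
    for cls, imgs in other_class_images.items():
        # L2 distance between red channels, flattened per image
        dists = np.linalg.norm(imgs[:, 0].reshape(len(imgs), -1) - red.ravel(), axis=1)
        nearest = imgs[int(np.argmin(dists))]
        channels.append(nearest[1])          # take the neighbor's green channel
    return np.stack(channels, axis=0)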
Star-shaped model. In our experiments, the multi-channel approach did not perform well because, even using the nearest neighbors, the extra green channels were not exactly consistent with the original red signal, emphasizing the importance of correlations between channels.
To overcome this effect, we propose a star-shaped architecture for the generator, consisting of a single red tower
(a stack of up-convolutional layers with non-linearities in between) that feeds into c green towers (see Figure 2, right).
Unlike the multi-channel model described above, the green
outputs are independent conditioned on the red. Thus, the
model can be trained using the existing two-channel images.
In our experiments, we found it important to apply batch normalization [18] in the red tower only once, rather than the more naive option of applying it c times. The latter leads to interference between several normalizations of the same features and prevents convergence of the training scheme.
After the forward pass, we use c discriminators attached
to different versions of the greens, all paired with the same
generated red. For the WGAN-GP version of this model,
we apply the original procedure of [15] with the modifica-
tion that during the discriminator update we simultaneously
update all c discriminators, and the generator receives back
the accumulated gradient.
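A sketch of the resulting generator update is shown below; G is assumed to return the shared red image together with a list of c green images, and the per-critic loss is the WGAN-style term of (4) without the regularizer, which only enters the discriminator update.

import torch

def star_generator_step(G, discriminators, z, opt_G):
    # Star-shaped generator update: every critic D_k scores its own (red, green_k)
    # pair, and the generator accumulates the gradients from all c critics.
    red, greens = G(z)                               # greens: list of c tensors
    loss_G = torch.zeros((), device=z.device)
    for D_k, green_k in zip(discriminators, greens):
        pair = torch.cat([red, green_k], dim=1)      # two-channel fake image
        loss_G = loss_G - D_k(pair).mean()           # WGAN-style generator loss
    opt_G.zero_grad()
    loss_G.backward()                                # accumulated gradient flows into G
    opt_G.step()
    return loss_G.item()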
5. Experiments
Evaluating generative models is in general non-trivial. In
the context of GANs and other likelihood-free approaches,
evaluation is even harder, because the models do not provide a way to compute the log-likelihood on the test set,
which is the most common evaluation technique. Recently,
a number of techniques applicable to evaluating GANs have
been proposed [28, 32, 52]. Among those, we chose the following two: the neural-network two-sample test discussed
by [28] combined with the surrogates of the earth mover’s
distance [1, 15] and an optimization-based approach of [32]
to check if the test samples can be well reconstructed. We
modify these techniques to match our needs and check their
performance using sensible baselines (Sections 5.1 and 5.2).
Finally, in Section 5.3, we show the cell growth cycle generated with our star-shaped model.
5.1. Neural-network Two-sample Test
Lopez-Paz and Oquab [28] have recently applied the
classifier two-sample test (C2ST) to evaluate the quality of
GAN models. A trained generator is evaluated on a held-out test set. This test set is split again into test-train and test-test subsets. The test-train set is then used to train a
fresh discriminator, which tries to distinguish fake images
(from the generator) from the real images. Afterwards, the
final measure of the quality of the generator is computed as
the performance of the new discriminator on the test-test set
and the freshly generated images.
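Schematically, the evaluation loop looks as follows; train_fresh_critic and score are hypothetical stand-ins for training the new discriminator on the test-train split and evaluating it on the test-test split, and do not correspond to named functions in the authors' code.

import numpy as np

def classifier_two_sample_test(real_test_images, generator_samples,
                               train_fresh_critic, score, n_splits=10):
    # C2ST: train a fresh discriminator on one half of the held-out real data
    # vs. generated samples, then report its score on the other half.
    results = []
    for _ in range(n_splits):
        perm = np.random.permutation(len(real_test_images))
        half = len(perm) // 2
        test_train = real_test_images[perm[:half]]
        test_test = real_test_images[perm[half:]]
        critic = train_fresh_critic(test_train, generator_samples)
        results.append(score(critic, test_test, generator_samples))
    return float(np.median(results))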
When C2ST is applied to images, the discriminator is
usually a ConvNet, but even very small ConvNets can discriminate between fake and real images almost perfectly. To
obtain a useful measure, Lopez-Paz and Oquab [28] deliberately weaken the ConvNet by fixing some of its parameters
to the values obtained by pre-training on ImageNet.
ImageNet-based features are clearly not suitable for LIN
cell images, so we weaken the discriminator in another way.
We use the negation of the WGAN-GP [15] discriminator
objective as a surrogate to the earth mover’s distance. Similar to [28], we train this discriminator on the test-train subset and compute the final estimates on the test-test subset.
For all the runs, we repeat the experiment on 10 different
random splits of the test set and train the discriminator for
5000 steps with the optimizer used by [15]. For the experiments involving multi-channel generators, we train a separate discriminator for each green channel paired with the
red channel.
In our experiments, the training procedure occasionally
failed and produced large outliers. To be more robust, we
always report a median over 10 random splits together with
the median absolute deviation to represent the variance. In Appendix A, we additionally quantitatively and qualitatively compare the WGAN-GP [15], WGAN [1], and cross-entropy discriminators used in C2ST.
Figure 3. Scores of the classifier two-sample test (C2ST) between the generators and the hold-out test sets of images. We report the scores of separable GAN and WGAN-GP at different stages of training. For each line, we show the samples from the corresponding models to demonstrate that lower C2ST scores correspond to better-looking (sharper, fewer artifacts, etc.) images. Best viewed in color and on a screen. An extended version of this figure is given in Appendix A.
Sanity checks of the two-sample test. We evaluate C2ST
in two baseline settings. First, we compare the separable GAN [40] and the WGAN-GP [15] models (based on
the same DCGAN architecture, trained on the same set
of images of 6 proteins) at different stages of the training
process. For each of these models, we also show the qualitative difference between the generated images. Figure 3 shows that over the course of training the quality of both GAN and WGAN-GP improves, i.e., generated images become sharper and contain fewer artifacts, consistent with the C2ST
score. To better visualize the difference between the trained
GAN and WGAN-GP models, in Figure 4, we show multiple samples of the green channel corresponding to the same
red channel. We see that the C2ST evaluation captures several aspects of the visual quality (such as sharpness, correct
shape, absence of artifacts, diversity of samples) and provides a meaningful score.
From Figures 3 and 4, we also conclude that the quality
of GAN samples is worse than the quality of WGAN-GP
according to visual inspection. C2ST (based on WGAN-GP) confirms this observation, which is not surprising given
that WGAN-GP was trained using the same methodology. Surprisingly, when evaluated with the cross-entropy C2ST, WGAN-GP also performs better than GAN (see Appendix A for details).
Figure 4. Samples generated by separable GAN (top) and WGAN-GP (bottom) models trained on the 6 selected proteins shown in
Figure 1. Each row has samples with identical red channel, but
different green ones. We observe that WGAN-GP provides much
larger variability of the green channel conditioned on the red. In
particular, in the three bottom rows, even the type of the protein
changes, which we have never observed for the samples of GAN
(this effect should be present, because the model is trained without
any distinction between the classes, but is surprisingly rare). This
difference is captured by the C2ST evaluation: the GAN model
has a score of 3.2 ± 0.1 compared to 1.6 ± 0.1 of WGAN-GP.
As the second baseline evaluation, we use C2ST to compare real images of different classes. Table 1 shows that
when evaluated w.r.t. the test set of the same class the estimates are significantly smaller (but with non-zero variance)
compared to when evaluated w.r.t. different classes. Note
that the C2ST score is not a metric. In particular, Table 1 is
not symmetric, reflecting biases between the train/test splits.
Specifically to WGAN-GP, the score can also be negative,
because the quadratic regularization term is the dominant
part of the objective (4) when the two image sources are
very similar.
As an additional test, we include two extra proteins
Fim1 and Tea4 that are known to have similar localization
to Arp3 and Tea1, respectively. We observe that C2ST reflects this similarity by giving the pairs of similar proteins
a much smaller score compared to most of other pairs (but
still significantly higher than comparing a protein to itself).
train \ test      Alp14         Arp3          Cki2          Mkh1          Sid2          Tea1          Fim1          Tea4
Alp14          0.1 ± 0.2    14.4 ± 0.2     8.6 ± 0.2    12.3 ± 0.4     9.0 ± 0.3    11.3 ± 0.3    16.3 ± 0.2     9.7 ± 0.6
Arp3          12.5 ± 0.3     0.8 ± 0.4    15.9 ± 0.3    12.2 ± 0.6    19.5 ± 0.4    11.5 ± 0.5     2.8 ± 0.3    15.8 ± 0.7
Cki2           8.1 ± 0.3    16.2 ± 0.2    -0.2 ± 0.3    13.6 ± 0.3    11.8 ± 0.5    15.9 ± 0.3    18.4 ± 0.2    14.0 ± 0.9
Mkh1          12.5 ± 0.5    11.5 ± 0.4    13.7 ± 0.4    -0.2 ± 0.4    13.4 ± 0.9    14.4 ± 0.6    14.5 ± 0.3    13.9 ± 0.9
Sid2           9.5 ± 0.2    20.5 ± 0.3    12.0 ± 0.3    12.4 ± 0.6    -0.6 ± 0.3    13.1 ± 0.1    23.4 ± 0.3     6.2 ± 0.4
Tea1          10.9 ± 0.3    13.2 ± 0.2    15.8 ± 0.3    13.3 ± 0.6    12.6 ± 0.3    -0.1 ± 0.4    15.1 ± 0.2     5.9 ± 0.3
Fim1          15.6 ± 0.3     3.7 ± 0.2    18.5 ± 0.4    15.1 ± 0.5    23.9 ± 0.4    14.5 ± 0.5    -0.2 ± 0.3    19.5 ± 0.7
Tea4          11.4 ± 0.3    18.3 ± 0.3    16.0 ± 0.5    14.9 ± 0.8     7.7 ± 0.6     6.9 ± 0.5    20.8 ± 0.5    -0.5 ± 0.7
Table 1. Results of C2ST with WGAN-GP when comparing real images of different proteins. For each run, the training images of one
class are evaluated w.r.t. the test images of another class. The reported values are comparable with Table 2, but not with Figure 3.
model                           separable red/green   class conditioned   Alp14        Arp3         Cki2         Mkh1         Sid2         Tea1         6 proteins
real images                     –                     –                   0.1 ± 0.2    0.8 ± 0.4    -0.2 ± 0.3   -0.2 ± 0.4   -0.6 ± 0.3   -0.1 ± 0.4   -0.1 ± 0.2
one-class non-separable         ✗                     ✗                   0.6 ± 0.3    1.2 ± 0.3     0.3 ± 0.5    0.8 ± 0.6    0.8 ± 0.4    0.8 ± 0.5    0.8 ± 0.2
one-class separable             ✓                     ✗                   1.2 ± 0.2    2.4 ± 0.4     1.0 ± 0.3    0.5 ± 0.4    1.0 ± 0.5    0.8 ± 0.5    1.1 ± 0.2
multi-channel non-separable     ✗                     ✓                   3.2 ± 0.4    3.2 ± 0.4     2.5 ± 0.3    4.6 ± 0.5    4.5 ± 0.5    4.4 ± 0.3    3.7 ± 0.1
multi-channel separable         ✓                     ✓                   2.3 ± 0.5    4.2 ± 0.4     3.6 ± 0.5    6.6 ± 0.5    3.2 ± 0.6    2.8 ± 0.5    3.8 ± 0.2
star-shaped                     ✓                     ✓                   0.6 ± 0.3    2.1 ± 0.5     1.2 ± 0.3    2.4 ± 0.6    1.1 ± 0.6    1.1 ± 0.4    1.4 ± 0.1
Table 2. Results of C2ST with the WGAN-GP objective comparing several multi-channel models w.r.t. the real images. All the models
were trained with WGAN-GP. The values in this table are directly comparable to the ones in Table 1.
Results. Table 2 shows the results of C2ST applied to several models with multiple output channels (see Section 4.2):
the multi-channel model and its separable version, the star-shaped model and the two baselines, which do not align
green channels of different classes with the same red channel: one-class generators trained individually for each class
and their separable versions. All the models were trained
with WGAN-GP with the same ratio of the width of the
generator tower to the number of output channels.
We observe that the individual one-class WGAN-GP
models lead to higher quality compared to all the models outputting synchronized channels for all the classes.
Among the models that synchronize channels, the star-shaped model performs best, but for some proteins there is
a significant drop in quality w.r.t. the one-class models.
5.2. Optimization to Reconstruct the Test Set
One of the common failures of GANs is the loss of
modes from the distribution, usually referred to as mode
collapse. There is evidence [39] that image quality can be
inversely correlated with mode coverage. To test for the
mode collapse, we perform an experiment proposed in [32],
where for a fixed trained generator G we examine how well
it can reconstruct images from a held out test set. For each
image in the test set, we minimize the L2 -distance (normalized by the number of pixels) between the generated and test
images w.r.t. the noise vector z. We call this task regular reconstruction. We use 50 iterations of L-BFGS and run it 5
times to select the best reconstruction. We also performed
an additional task, separable reconstruction, which examines the ability of separable networks to reproduce modes
of the green channel conditioned on the red. In this task, we
use a two-step procedure: first, we minimize the L2 -error
between the red channels holding the green noise fixed, and
then we minimize the L2 -error in the green channel while
keeping the red noise fixed at its optimized value. To complete the study, we also report the negative log likelihood
(NLL) w.r.t. the prior IPz of the noise vectors z obtained
with a reconstruction procedure. As a baseline for the reconstruction error, we show the nearest neighbor cell (in
both red and green channels) from the training set and the
average L2 -distance to the nearest neighbors. As a baseline
for NLL, we show the averaged NLL for the random point
generated from IPz .
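A sketch of the regular reconstruction step is given below; the 50 L-BFGS iterations and 5 restarts follow the text, while initialization and stopping details are simplified and the generator is assumed to be a differentiable PyTorch module producing images of the same shape as the target.

import torch

def reconstruct(G, x_target, noise_dim=100, n_restarts=5, n_iters=50):
    # Regular reconstruction: find z minimizing the per-pixel L2 distance
    # between G(z) and a test image, keeping the best of several restarts.
    best_err, best_z = float('inf'), None
    n_pixels = x_target.numel()
    for _ in range(n_restarts):
        z = torch.randn(1, noise_dim, requires_grad=True)
        optimizer = torch.optim.LBFGS([z], max_iter=n_iters)

        def closure():
            optimizer.zero_grad()
            loss = ((G(z) - x_target) ** 2).sum() / n_pixels
            loss.backward()
            return loss

        optimizer.step(closure)
        with torch.no_grad():
            err = (((G(z) - x_target) ** 2).sum() / n_pixels).item()
        if err < best_err:
            best_err, best_z = err, z.detach().clone()
    return best_z, best_err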
We apply the reconstruction procedure to evaluate four
models: separable one-class and star-shaped models trained
with both GAN and WGAN-GP algorithms. Figure 5 and
Table 3 present qualitative and quantitative results, respectively. For all the measurements, we report the median values and the median absolute deviation. In Figure 6, we plot
reconstruction errors vs. NLL values for the protein Mkh1, which
was the hardest protein in the separable reconstruction task.
Analyzing the results, we observe, first, that separable reconstruction is a harder task than the single-step procedure. Second, WGAN-GP models can reconstruct better, probably because they suffer less from mode collapse. And
Figure 5. Examples of cell reconstructions. (a) – a test image; (b) – the L2 nearest neighbor; (c) – regular reconstruction by one-class separable WGAN-GP; (d) – regular reconstruction by star-shaped WGAN-GP; (e) – separable reconstruction by star-shaped WGAN-GP; (f) – separable reconstruction by star-shaped GAN. An extended version of this figure is given in Appendix B.
model                      L2-error           NLL
Nearest neighbors          0.079 ± 0.009      –
Gaussian noise             –                  142 ± 5
regular reconstruction
GAN-sep                    0.053 ± 0.007      166 ± 17
WGAN-GP-sep                0.043 ± 0.006      149 ± 8
GAN-star                   0.061 ± 0.008      139 ± 12
WGAN-GP-star               0.041 ± 0.005      150 ± 8
separable reconstruction
GAN-sep                    0.069 ± 0.011      158 ± 13
WGAN-GP-sep                0.062 ± 0.009      143 ± 6
GAN-star                   0.074 ± 0.011      142 ± 7
WGAN-GP-star               0.058 ± 0.010      143 ± 7
Table 3. Reconstruction experiment. For the four trained models (GAN/WGAN-GP and separable one-class/star-shaped), we
report L2 -errors of the reconstructions and the negative log likelihoods (NLL) of the latent vectors found by the reconstruction.
finally, the star-shaped models do not degrade the performance in terms of reconstruction, except for some hard proteins (see more details in Appendix B).
5.3. Progression Through the Cell Cycle
As described in Section 3.2, the localization of Bgs4 can
be used to accurately pinpoint the cell cycle stage. However,
much less is known about how the localization
of the other proteins changes within the cell cycle [29].
Using our separable GAN architecture, we can interpolate between points in the latent space [49] to move across
the different stages of growth and division. Due to the architecture of our network, the output of the green channel
will always remain consistent with the red output. We show
an example of the reconstructed cell cycle in Figure 7 and
several animated examples in the Suppl. Mat. [37]. As a
validation of our approach, Arp3 is seen gradually moving from a dot-like pattern at the tips of the cell towards the middle
of the cell during mitosis, as has been previously described
in the literature [53].
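Latent-space interpolation can be implemented, for example, with the spherical scheme sketched below; whether linear or spherical interpolation was used for the animations is not specified here, so this is only one reasonable choice.

import numpy as np

def slerp(z0, z1, t):
    # Spherical interpolation between two latent vectors, often preferred over
    # linear interpolation for Gaussian latent spaces [49].
    z0_n = z0 / np.linalg.norm(z0)
    z1_n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))
    if omega < 1e-8:                      # nearly identical directions
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# e.g., frames = [G(slerp(z_start, z_end, t)) for t in np.linspace(0, 1, 16)]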
Figure 6. Reconstruction errors against negative log likelihood (NLL) of the latent vectors found by reconstruction: (a) regular reconstruction, (b) separable reconstruction. We show all the cells corresponding to protein Mkh1, which appears to be the hardest for the star-shaped models. The vertical gray line shows the median L2-error of the nearest neighbor. Horizontal gray lines show the mean NLL (± 3 std) of the noise sampled from the Gaussian prior. In the separable (red-first) setting, the star-shaped model trained with GAN provides very bad reconstructions, whereas the same model trained with WGAN-GP results in high NLL values. An extended version of this figure is given in Appendix B.
Figure 7. Cell cycle of a star-shaped WGAN-GP model (proteins: Bgs4, Alp14, Arp3, Cki2, Mkh1, Sid2, Tea1).
It is important to highlight that the LIN dataset lacks true
multi-channel (3+) images, and as such, we are unable to assess how closely our generated multi-channel images match real fluorescent images. We hope that, as more datasets in biology become open, we will have a better baseline against which to compare our model.
6. Conclusion
Although generative modeling has seen an explosion in
popularity in the last couple of years, so far it has mostly
been applied to the synthesis of real world images. Our results in this paper suggest that modern generative models
can be fruitfully applied to images obtained by fluorescent
microscopy. By leveraging correlation between different
image channels, we were able to simulate the localization
of multiple proteins throughout the cell cycle. In the future, this could enable the exploration of uninvestigated, inaccessible, or unaffordable biological and biomedical experiments, catalyzing new discoveries and potentially enabling new diagnostic and prognostic bioimaging applications.
Acknowledgements. A. Osokin was supported by the ERC
grant Activia (no. 307574). F. Vaggi was supported by a
grant from the chair Havas-Dauphine.
References
[1] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In ICML, 2017. 2, 4, 5, 6, 11
[2] Y. Bengio, L. Yao, G. Alain, and P. Vincent. Generalized denoising auto-encoders as generative models. In NIPS, 2013.
1
[3] M. Chalfie, Y. Tu, G. Euskirchen, W. W. Ward, and D. C.
Prasher. Green fluorescent protein as a marker for gene expression. Science, pages 802–805, 1994. 3
[4] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever,
and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets.
In NIPS, 2016. 2
[5] A. Chessel. An overview of data science uses in bioimage
informatics. Methods, 2017. 3
[6] L. de Oliveira, M. Paganini, and B. Nachman. Learning particle physics by example: Location-aware generative adversarial networks for physics synthesis. arXiv:1701.05927v1,
2017. 2
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database.
In CVPR, 2009. 1
[8] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep
generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015. 2
[9] J. Dodgson, A. Chessel, F. Vaggi, M. Giordan, M. Yamamoto, K. Arai, M. Madrid, M. Geymonat, J. F. Abenza,
J. Cansado, M. Sato, A. Csikasz-Nagy, and R. E. Carazo
Salas. Reconstructing regulatory pathways by systematically mapping protein localization interdependency networks. bioRxiv:11674, 2017. 1, 3, 5
[10] A. Dosovitskiy, J. T. Springenberg, M. Tatarchenko, and
T. Brox. Learning to generate chairs, tables and cars with
convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 39:692–705, 2017.
4
[11] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro,
A. Lamb, M. Arjovsky, and A. Courville. Adversarially
learned inference. In ICLR, arXiv:1606.00704v3, 2017. 2
[12] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle,
F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine
Learning Research, 17(59):1–35, 2016. 2
[13] I. Goodfellow. NIPS 2016 tutorial: Generative adversarial
networks. arXiv:1701.00160v3, 2017. 2, 3
[14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu,
D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014. 1, 2, 3, 4, 11
[15] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and
A. Courville. Improved training of Wasserstein GANs.
arXiv:1704.00028v2, 2017. 2, 4, 5, 6, 11
[16] G. Hinton, N. Srivastava, and K. Swersky. Lecture 6e of the
course “Neural Networks for Machine Learning”. rmsprop:
Divide the gradient by a running average of its recent magnitude. http://www.cs.toronto.edu/˜tijmen/
csc321/slides/lecture_slides_lec6.pdf,
2012. 11
[17] S. Ioffe and C. Szegedy. Batch normalization: Accelerating
deep network training by reducing internal covariate shift. In
ICML, 2015. 4
[18] S. Ioffe and C. Szegedy. Batch normalization: Accelerating
deep network training by reducing internal covariate shift. In
ICML, 2015. 5
[19] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image
translation with conditional adversarial networks. In CVPR,
2017. 2
[20] G. R. Johnson, J. Li, A. Shariff, G. K. Rohde, and R. F. Murphy. Automated Learning of Subcellular Variation among
Punctate Protein Patterns and a Generative Model of Their
Relation to Microtubules. PLoS computational biology,
11(12):e1004614, 2015. 3
[21] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, arXiv:1412.6980v9, 2015. 11
[22] D. P. Kingma and M. Welling. Auto-encoding variational
Bayes. In ICLR, arXiv:1312.6114v10, 2014. 1
[23] Y. LeCun, C. Cortes, and C. J. C. Burges. The MNIST
database of handwritten digits. http://yann.lecun.
com/exdb/mnist/, 1998. 1
[24] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and
W. Shi. Photo-realistic single image super-resolution using a
generative adversarial network. In CVPR, 2017. 2
[25] C. Li and M. Wand. Precomputed real-time texture synthesis
with markovian generative adversarial networks. In ECCV,
2016. 2
[26] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face
attributes in the wild. In ICCV, 2015. 1
[27] V. Ljosa, K. L. Sokolnicki, and A. E. Carpenter. Annotated
high-throughput microscopy image sets for validation. Nature Methods, 9(7):637, 2012. 3
[28] D. Lopez-Paz and M. Oquab. Revisiting classifier twosample tests. In ICLR, arXiv:1610.06545v3, 2017. 2, 5
[29] S. G. Martin and R. A. Arkowitz. Cell polarization in
budding and fission yeasts. FEMS microbiology reviews,
38(2):228–253, 2014. 3, 8
[30] M. Mathieu, J. Zhao, P. Sprechmann, A. Ramesh, and Y. LeCun. Disentangling factors of variation in deep representation using adversarial training. In NIPS, 2016. 2
[31] E. Meijering, A. E. Carpenter, H. Peng, F. A. Hamprecht, and
J.-C. Olivo-Marin. Imagining the future of bioimage analysis. Nature Biotechnology, 34(12):1250–1255, 2016. 1, 3
[32] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein.
Unrolled generative adversarial networks.
In ICLR,
arXiv:1611.02163v3, 2017. 2, 5, 7
[33] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv:1411.1784v1, 2014. 2
[34] R. F. Murphy. Building cell models and simulations from
microscope images. Methods, 96:33–39, 2016. 3
[35] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training
generative neural samplers using variational divergence minimization. In NIPS, 2016. 2
[36] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. In ICML, 2017. 2
[37] A. Osokin, A. Chessel, R. E. Carazo Salas, and F. Vaggi.
GANs for biological image synthesis. Code. https://
github.com/aosokin/biogans, 2017. 8, 11
[38] T. D. Pollard and J.-Q. Wu. Understanding cytokinesis:
lessons from fission yeast. Nature reviews Molecular cell
biology, 11(2):149–155, 2010. 3
[39] B. Poole, A. A. Alemi, J. Sohl-Dickstein, and A. Angelova.
Improved generator objectives for GANs. In NIPS Workshop
on Adversarial Training, arXiv:1612.02780v1, 2016. 2, 7
[40] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, arXiv:1511.06434v2, 2016. 1, 2,
4, 6, 11
[41] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and
H. Lee. Generative adversarial text to image synthesis. In
ICML, 2016. 2
[42] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs.
In NIPS, 2016. 2
[43] N. C. Shaner, R. E. Campbell, P. A. Steinbach, B. N. Giepmans, A. E. Palmer, and R. Y. Tsien. Improved monomeric
red, orange and yellow fluorescent proteins derived from
Discosoma sp. red fluorescent protein. Nature biotechnology, 22(12):1567, 2004. 3
[44] I. Tolstikhin, S. Gelly, O. Bousquet, C.-J. Simon-Gabriel,
and B. Schölkopf. AdaGAN: Boosting generative models.
arXiv:1701.02386v1, 2017. 2
[45] R. Y. Tsien. The green fluorescent protein. Annual review of
biochemistry, 67:509–544, 1998. 3
[46] A. van den Oord, N. Kalchbrenner, L. Espeholt,
K. Kavukcuoglu, O. Vinyals, and A. Graves. Conditional image generation with PixelCNN decoders. In NIPS,
2016. 1
[47] D. A. Van Valen, T. Kudo, K. M. Lane, D. N. Macklin, N. T.
Quach, M. M. DeFelice, I. Maayan, Y. Tanouchi, E. A. Ashley, and M. W. Covert. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments. PLOS Computational Biology, 12(11):e1005177,
2016. 3
[48] X. Wang and A. Gupta. Generative image modeling using
style and structure adversarial networks. In ECCV, 2016. 2
[49] T. White.
Sampling generative networks.
arXiv:1609.04468v3, 2016. 8
[50] E. Williams, J. Moore, S. W. Li, G. Rustici, A. Tarkowska,
A. Chessel, S. Leo, B. Antal, R. K. Ferguson, U. Sarkans,
A. Brazma, R. E. Carazo Salas, and J. Swedlow. The Image Data Resource: A scalable platform for biological image
data access, integration, and dissemination. Nature Methods,
14:775–781, 2017. 3
[51] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum.
Learning a probabilistic latent space of object shapes via 3D
generative-adversarial modeling. In NIPS, 2016. 2
[52] Y. Wu, Y. Burda, R. Salakhutdinov, and R. Grosse. On the
quantitative analysis of decoder-based generative models. In
ICLR, arXiv:1611.04273v1, 2017. 5
[53] H. Yan and M. K. Balasubramanian. Meiotic actin rings are
essential for proper sporulation in fission yeast. Journal of
Cell Science, 125(6):1429–1439, 2012. 8
[54] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao.
LSUN: Construction of a large-scale image dataset using
deep learning with humans in the loop. arXiv:1506.03365v3,
2016. 1
[55] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative
adversarial networks. In ICLR, arXiv:1609.03126v4, 2017.
2
[56] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros.
Generative visual manipulation on the natural image manifold. In ECCV, 2016. 2
[57] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired imageto-image translation using cycle-consistent adversarial networks. In ICCV, 2017. 2
Supplementary materials for “GANs for Biological Image Synthesis”
A. Comparison of C2ST variants
In this section, we report an experiment comparing the
behavior of several variants of the classifier two-sample test
(C2ST) based on different ways of training the classifier.
We consider three approaches to train the classifier used in
C2ST, which come from GAN [14, 40], WGAN [1] and
WGAN-GP [15]. In the case of GAN, we simply train a
classifier using the negative cross-entropy loss and report
the negative loss on the test-test set and freshly generated
images as the C2ST score (note that such score is always
negative). In the case of WGAN, we train a classifier with
all the weights clipped to the segment [−0.01, 0.01] and
use the negation of (3) as the C2ST score (note that such
scores are always non-negative). In the case of WGAN-GP, we train a classifier with the regularizer based on
the L2 -penalty on the gradient norm (4) (with the regularizer weight equal to 10) and use the negation of (4) as the
C2ST score (note that these scores can be negative when the
two collections of samples are similar).
For C2ST with GAN or WGAN-GP, we use the Adam
optimizer [21]. For C2ST with WGAN, we use the RMSprop optimizer [16]. We run all optimizers for 5000 iterations with the parameters coming from the corresponding
GAN methods (see our implementation [37] for the details).
In Figure 8, we apply all three versions of C2ST after different numbers of training iterations of GAN, WGAN
and WGAN-GP for both regular and separable (green-onred) generators. We repeat each measurement on 10 splits
of the test set and report the median and 0.1, 0.9 quantiles. In Figure 9, we show samples generated by separable models trained with GAN, WGAN and WGAN-GP.
We observe that all the variants of C2ST (including the
GAN version) show that the WGAN and WGAN-GP models are significantly better than the GAN ones. This effect is
likely happening due to the conditional mode collapse of the
GAN models (see the first column of Figure 9). Comparing
the C2ST scores themselves, we conclude that the versions based on WGAN and WGAN-GP have less variance and are more stable. The latter two versions perform similarly
and do not show a clear winner.
Figure 8. Comparison of C2ST based on different ways to train the discriminator, for non-separable and separable models. For all C2ST scores, the lower the better. For real images, GAN gives −3.7 ± 0.7, WGAN gives 0.0 ± 0.0, and WGAN-GP gives −0.7 ± 0.6.
Figure 9. Scores of the classifier two-sample test (C2ST) between the generators and the hold-out test sets of images (an extension of
Figure 3). We report the scores of separable GAN, WGAN and WGAN-GP at different stages of training. For each line, we show the
samples from the corresponding models to demonstrate that the lower C2ST scores correspond to better-looking (sharper, less artifacts,
etc.) images. Best viewed in color and on a screen.
B. Reconstructing images of different proteins
Figures 10 and 11 show extended results of the reconstruction experiment described in Section 5.2. We show scatter plots of NLL vs. reconstruction error for the six selected proteins and provide reconstruction examples for all of them.
Figure 10. Reconstruction errors against negative log likelihood (NLL) of the latent vectors found by reconstruction (complete version of Figure 6). We show all the cells corresponding to all selected proteins. The vertical gray line shows the median L2-error of the nearest neighbor. Horizontal gray lines show the mean NLL (± 3 std) of the noise sampled from the Gaussian prior. We observe that the images of Mkh1 are the hardest to reconstruct for the star-shaped models. In the separable (red-first) setting, the star-shaped model trained with GAN provides very bad reconstructions, whereas the same model trained with WGAN-GP results in high NLL values. For all the other proteins, the star-shaped models reconstruct as well as the simpler separable models. We also conclude that the models trained with WGAN-GP reconstruct consistently better (both smaller NLL and smaller L2-error) compared to the models trained with the GAN objective.
Figure 11. Examples of cell reconstructions (an extension of Figure 5). (a) – a test image; (b) – the L2 nearest neighbor; (c) – regular
reconstruction by one-class separable WGAN-GP; (d) – regular reconstruction by star-shaped WGAN-GP; (e) – separable reconstruction
by star-shaped WGAN-GP; (f) – separable reconstruction by star-shaped GAN.
WaterGAN: Unsupervised Generative Network to Enable Real-time
Color Correction of Monocular Underwater Images
arXiv:1702.07392v3 [] 26 Oct 2017
Jie Li1∗ , Katherine A. Skinner2∗ , Ryan M. Eustice3 and Matthew Johnson-Roberson3
*These authors contributed equally to this work.
Abstract— This paper reports on WaterGAN, a generative
adversarial network (GAN) for generating realistic underwater
images from in-air image and depth pairings in an unsupervised
pipeline used for color correction of monocular underwater
images. Cameras onboard autonomous and remotely operated
vehicles can capture high resolution images to map the seafloor;
however, underwater image formation is subject to the complex
process of light propagation through the water column. The raw
images retrieved are characteristically different than images
taken in air due to effects such as absorption and scattering,
which cause attenuation of light at different rates for different
wavelengths. While this physical process is well described
theoretically, the model depends on many parameters intrinsic
to the water column as well as the structure of the scene. These
factors make recovery of these parameters difficult without
simplifying assumptions or field calibration; hence, restoration
of underwater images is a non-trivial problem. Deep learning
has demonstrated great success in modeling complex nonlinear
systems but requires a large amount of training data, which is
difficult to compile in deep sea environments. Using WaterGAN,
we generate a large training dataset of corresponding depth,
in-air color images, and realistic underwater images. This data
serves as input to a two-stage network for color correction
of monocular underwater images. Our proposed pipeline is
validated with testing on real data collected from both a pure
water test tank and from underwater surveys collected in the
field. Source code, sample datasets, and pretrained models are
made publicly available.
Index Terms— Underwater vision, monocular vision, generative adversarial network, image restoration
I. I NTRODUCTION
Many fields rely on underwater robotic platforms equipped
with imaging sensors to provide high resolution views of
the seafloor. For instance, marine archaeologists use photomosaic maps to study submerged objects and cities [1],
and marine scientists use surveys of coral reef systems to
track bleaching events over time [2]. While recent decades
have seen great advancements in vision capabilities of underwater platforms, the subsea environment presents unique
challenges to perception that are not present on land. Range-dependent lighting effects such as attenuation cause exponential decay of light between the imaged scene and the camera.
This attenuation acts at different rates across wavelengths and
is strongest for the red channel. As a result, raw underwater
1 J. Li is with the Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA
[email protected]
2 K. Skinner is with the Robotics Program, University of Michigan, Ann
Arbor, MI 48109 USA [email protected]
3 R. Eustice and M. Johnson-Roberson are with the Department of Naval
Architecture and Marine Engineering, University of Michigan, Ann Arbor,
MI 48109 USA [email protected]
Fig. 1: Flowchart displaying both the WaterGAN and color
correction networks. WaterGAN takes input in-air RGB-D
and a sample set of underwater images and outputs synthetic
underwater images aligned with the in-air RGB-D. The color
correction network uses this aligned data for training. For
testing, a real monocular underwater image is input and a
corrected image and relative depth map are output.
images appear relatively blue or green compared to the true
color of the scene as it would be imaged in air. Simultaneously, light is added back to the sensor through scattering
effects, causing a haze effect across the scene that reduces
the effective resolution. In recent decades, stereo cameras
have been at the forefront in solving these challenges. With
calibrated stereo pairs, high resolution images can be aligned
with depth information to compute large-scale photomosaic
maps or metrically accurate 3D reconstructions [3]. However,
degradation of images due to range-dependent underwater
lighting effects still hinders these approaches, and restoration
of underwater images involves reversing effects of a complex
physical process with prior knowledge of water column
characteristics for a specific survey site.
Alternatively, neural networks can achieve end-to-end
modeling of complex nonlinear systems. Yet deep learning
has not become as commonplace subsea as it has for terrestrial applications. One challenge is that many deep learning
structures require large amounts of training data, typically
paired with labels or corresponding ground truth sensor
measurements. Gathering large sets of underwater data with
depth information is challenging in deep sea environments;
obtaining ground truth of the true color of a natural subsea
scene is also an open problem.
Rather than gathering training data, we propose a novel
approach, WaterGAN, a generative adversarial network
(GAN) [4] that uses real unlabeled underwater images to
learn a realistic representation of water column properties of
a particular survey site. WaterGAN takes in-air images and
depth maps as input and generates corresponding synthetic
underwater images as output. This dataset with corresponding depth data, in-air color, and synthetic underwater color
can then supplant the need for real ground truth depth and
color in the training of a color correction network. We
propose a color correction network that takes as input raw
unlabeled underwater images and outputs restored images
that appear as if they were taken in air.
This paper is organized as follows: §II presents relevant
prior work; §III gives a detailed description of our technical
approach; §IV presents our experimental setup to validate
our proposed approach; §V provides results and a discussion
of these results; lastly, §VI concludes the paper.
II. BACKGROUND
Prior work on compensating for effects of underwater
image formation has focused on explicitly modeling this
physical process to restore underwater images to their true
color. Jordt et al. used a modified Jaffe-McGlamery model
with parameters obtained through prior experiments [5] [6].
However, attenuation parameters vary for each survey site
depending on water composition and quality. Bryson et al.
used an optimization approach to estimate water column and
lighting parameters of an underwater survey to restore the
true color of underwater scenes [7]. However, this method
requires detailed knowledge of vehicle configuration and the
camera pose relative to the scene. In this paper, we propose to
learn to model these effects using a deep learning framework
without explicitly encoding vehicle configuration parameters.
Approaches that make use of the gray world assumption [1] or histogram equalization are common preprocessing
steps for underwater images and may result in improved
image quality and appearance. However, as such methods
have no knowledge of range-dependent effects, resulting
images of the same object viewed from different viewpoints
may appear with different colors. Work has been done to
enforce the consistency of restored images across a scene [8],
but these methods require dense depth maps. In prior work,
Skinner et al. worked to relax this requirement using an
underwater bundle adjustment formulation to estimate the
parameters of a fixed attenuation model and the 3D structure
simultaneously [9], but such approaches require a fixed
image formation model and handle unmodeled effects poorly.
Our proposed approach can perform restoration with individual monocular images as input, and learns the relative
structure of the scene as it corrects for the effects of range-dependent attenuation.
Several methods have addressed range-dependent image
dehazing by estimating depth through developed or statistical
priors on attenuation effects [10]–[12]. More recent work
has focused on leveraging the success of deep learning techniques to estimate parameters of the complex physical model.
Shin et al. [13] developed a deep learning pipeline that
achieves state-of-the-art performance in underwater image
dehazing using simulated data with a regression network
structure to estimate parameters for a fixed restoration model.
Our method incorporates real field data in a generative
network to learn a realistic representation of environmental
conditions for raw underwater images of a specific survey
site.
We structure our training data generator, WaterGAN, as a
generative adversarial network (GAN). GANs have shown
success in generating realistic images in an unsupervised
pipeline that only relies on an unlabeled set of images of
a desired representation [4]. A standard GAN generator receives a noise vector as input and generates a synthetic image
from this noise through a series of convolutional and deconvolutional layers [14]. Recent work has shown improved
results by providing an input image to the generator network,
rather than just a noise vector. Shrivastava et al. provided
a simulated image as input to their network, SimGAN, and
then used a refiner network to generate a more realistic image
from this simulated input [15]. To extend this idea to the
domain of underwater image restoration, we also incorporate
easy-to-gather in-air RGB-D data into the generator network
since underwater image formation is range-dependent. Sixt et
al. proposed a related approach in RenderGAN, a framework
for generating training data for the task of tag recognition
in cluttered images [16]. RenderGAN uses an augmented
generator structure with augment functions modeling known
characteristics of their desired images, including blur and
lighting effects. RenderGAN focuses on a finite set of tags
and classification as opposed to a generalizable transmission
function and image-to-image mapping.
III. T ECHNICAL A PPROACH
This paper presents a two-part technical approach to produce a pipeline for image restoration of monocular underwater images. Figure 1 shows an overview of our full pipeline.
WaterGAN is the first component of this pipeline, taking as
input in-air RGB-D images and a sample set of underwater
images to train a generative network adversarially. This
training procedure uses unlabeled raw underwater images of
a specific survey site, assuming that water column effects are
mostly uniform within a local area. This process produces
rendered underwater images from in-air RGB-D images that
conform to the characteristics of the real underwater data at
that site. These synthetic underwater images can then be used
to train the second component of our system, a novel color
correction network that can compensate for water column
effects in a specific location in real-time.
A. Generating Realistic Underwater Images
We structure WaterGAN as a generative adversarial network, which has two networks training simultaneously: a
generator, G, and a discriminator, D (Fig. 2). In a standard
GAN [4] [14] the generator input is a noise vector z,
which is projected, reshaped, and propagated through a series
of convolution and deconvolution layers. The output is a
synthetic image, G(z). The discriminator receives as input
the synthetic images and a separate dataset of real images,
x, and classifies each sample as real (1) or synthetic (0). The
goal of the generator is to output synthetic images that the
discriminator classifies as real. Thus in optimizing G, we
seek to maximize
log(D(G(z))).   (1)
Fig. 2: WaterGAN: The GAN for generating realistic underwater images with similar image formation properties to those of unlabeled underwater data taken in the field.
The goal of the discriminator is to achieve high accuracy in
classification, minimizing the above function, and maximizing D(x) for a total value function of
log(D(x)) + log(1 − D(G(z))).   (2)
The generator of WaterGAN features three main stages,
each modeled after a component of underwater image formation: attenuation (G-I), backscattering (G-II), and the camera
model (G-III). The purpose of this structure is to ensure
that generated images align with the RGB-D input, such
that each stage does not alter the underlying structure of the
scene itself, only its relative color and intensity. Additionally,
our formulation ensures that the network is using depth
information in a realistic manner. This is necessary as the
discriminator does not have direct knowledge of the depth
of the scene. The remainder of this section describes each
stage in detail.
G-I: Attenuation
The first stage of the generator, G-I, accounts for
range-dependent attenuation of light. The attenuation
model is a simplified formulation of the Jaffe-McGlamery
model [6] [17],
G_1 = I_air · e^(−η(λ) r_c),   (3)
where Iair is the input in-air image, or the initial irradiance
before propagation through the water column, rc is the range
from the camera to the scene, and η is the wavelength-dependent attenuation coefficient estimated by the network.
We discretize the wavelength, λ, into three color channels.
G1 is the final output of G-I, the final irradiance subject to
attenuation in the water column. Note that the attenuation
coefficient is dependent on water composition and quality,
and varies across survey sites. To ensure that this stage only
attenuates light, as opposed to adding light, and that the
coefficient stays within physical bounds, we constrain η to
be greater than 0. All input depth maps and images have
dimensions of 48 × 64 for training model parameters. This
training resolution is sufficient for the size of our parameter
space and preserves the aspect ratio of the full-size images.
Note that we can still achieve full resolution output for final
data generation, as explained below. Depth maps for in-air
training data are normalized to the maximum underwater
survey altitude expected. Given the limitation of optical
sensors underwater, it is reasonable to assume that this value
is available.
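A sketch of this stage as a PyTorch module is given below; parameterizing η through a softplus to keep it positive per color channel is an implementation choice made for illustration, not necessarily how the constraint is enforced in the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Attenuation(nn.Module):
    # Stage G-I: wavelength-dependent attenuation  G1 = I_air * exp(-eta(lambda) * r_c).
    def __init__(self):
        super().__init__()
        self.eta_raw = nn.Parameter(torch.zeros(3))   # one coefficient per RGB channel

    def forward(self, img_air, depth):
        # img_air: (B, 3, H, W) in-air image; depth: (B, 1, H, W) range r_c
        eta = F.softplus(self.eta_raw).view(1, 3, 1, 1)   # keep eta > 0
        return img_air * torch.exp(-eta * depth)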
G-II: Scattering
As a photon of light travels through the water column, it
is also subjected to scattering back towards the image sensor.
This creates a characteristic haze effect in underwater images
and is modeled by
B = β(λ) (1 − e^(−η(λ) r_c)),   (4)
where β is a scalar parameter dependent on wavelength.
Stage G-II accounts for scattering through a shallow convolutional network. To capture range-dependency, we input
the 48 × 64 depth map and a 100-length noise vector. The
noise vector is projected, reshaped, and concatenated to the
depth map as a single channel 48 × 64 mask. To capture
wavelength-dependent effects, we copy this input for three
independent convolution layers with kernel size 5 × 5. This
output is batch normalized and put through a final leaky
rectified linear unit (LReLU) with a leak rate of 0.2. Each
of the three outputs of the distinct convolution layers are
concatenated together to create a 48 × 64 × 3 dimension
mask. Since backscattering adds light back to the image, and
to ensure that the underlying structure of the imaged scene
is not distorted from the RGB-D input, we add this mask,
M2 , to the output of G-I:
G_2 = G_1 + M_2.   (5)
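The scattering stage can be sketched as follows; the projected noise map, the three independent 5×5 convolutions, batch normalization, and the LReLU with leak rate 0.2 follow the description above, while the projection layer and exact feature sizes are illustrative assumptions.

import torch
import torch.nn as nn

class Scattering(nn.Module):
    # Stage G-II: produces an additive haze mask M2 from depth and noise.
    def __init__(self, noise_dim=100, height=48, width=64):
        super().__init__()
        self.project = nn.Linear(noise_dim, height * width)
        self.height, self.width = height, width
        # one independent convolution per wavelength (color channel)
        self.convs = nn.ModuleList([nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=5, padding=2),
            nn.BatchNorm2d(1),
            nn.LeakyReLU(0.2)) for _ in range(3)])

    def forward(self, g1, depth, z):
        noise_map = self.project(z).view(-1, 1, self.height, self.width)
        x = torch.cat([depth, noise_map], dim=1)                     # (B, 2, H, W)
        mask = torch.cat([conv(x) for conv in self.convs], dim=1)    # (B, 3, H, W)
        return g1 + mask                                             # G2 = G1 + M2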
G-III: Camera Model
Lastly we account for the camera model. First we model
vignetting, which produces a shading pattern around the
borders of an image due to effects from the lens. We adopt
the vignetting model from [18],
V = 1 + a r^2 + b r^4 + c r^6,   (6)
where r is the normalized radius per pixel from the center
of the image, such that r = 0 in the center of the image and
r = 1 at the boundaries. The constants a, b, and c are model
parameters estimated by the network. The output mask has
dimensions of the input images, and G2 is multiplied by
M_3 = 1/V to produce a vignetted image G_3,
G_3 = M_3 · G_2.   (7)
As described in [18], we constrain this model by
(c ≥ 0) ∧ (4b^2 − 12ac < 0).   (8)
Finally we assume a linear sensor response function,
which has a single scaling parameter k [7], with the final
output given by
G_out = k · G_3.   (9)
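Putting the camera stage together, a sketch is given below; a, b, c, and k are learned scalars, the radius normalization is simplified so that r = 1 at the image corners, and the constraint (8) is not enforced here (it would require an additional penalty or projection).

import torch
import torch.nn as nn

class CameraModel(nn.Module):
    # Stage G-III: radial vignetting V = 1 + a r^2 + b r^4 + c r^6 followed by
    # a linear sensor response with gain k.
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(1))
        self.b = nn.Parameter(torch.zeros(1))
        self.c = nn.Parameter(torch.zeros(1))
        self.k = nn.Parameter(torch.ones(1))

    def forward(self, g2):
        h, w = g2.shape[-2:]
        ys = torch.linspace(-1.0, 1.0, h, device=g2.device)
        xs = torch.linspace(-1.0, 1.0, w, device=g2.device)
        yy, xx = torch.meshgrid(ys, xs, indexing='ij')
        r2 = xx ** 2 + yy ** 2
        r2 = r2 / r2.max()                     # simplified: r = 1 at the image corners
        v = 1.0 + self.a * r2 + self.b * r2 ** 2 + self.c * r2 ** 3
        g3 = g2 / v                            # multiply by the mask M3 = 1/V
        return self.k * g3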
Discriminator
For the discriminator of WaterGAN, we adopt the convolutional network structure used in [14]. The discriminator takes
an input image of size 48 × 64 × 3, real or synthetic. This image is
propagated through four convolutional layers with kernel size
5 × 5 with the image dimension downsampled by a factor of
two, and the channel dimension doubled. Each convolutional
layer is followed by LReLUs with a leak rate of 0.2. The final
layer is a sigmoid function and the discriminator returns a
classification label of (0) for synthetic or (1) for a real image.
Generating Image Samples
After training is complete, we use the learned model
to generate image samples. For image generation, we input in-air RGB-D data at a resolution of 480 × 640 and
output synthetic underwater images at the same resolution.
To maintain resolution and preserve the aspect ratio, the
vignetting mask and scattering image are upsampled using
bicubic interpolation before applying them to the image. The
attenuation model is not specific to the resolution.
B. Underwater Image Restoration Network
To achieve real-time monocular image color restoration,
we propose a two-stage algorithm using two fully convolutional networks that train on the in-air RGB-D data
and corresponding rendered underwater images generated
by WaterGAN. The architecture of the model is depicted
in Fig. 3. A depth estimation network first reconstructs a
coarse relative depth map from the downsampled synthetic
underwater image. Then a color restoration network conducts
restoration from the input of both the underwater image and
its estimated relative depth map.
We propose the basic architecture of both network modules based on a state-of-the-art fully convolutional encoderdecoder architecture for pixel-wise dense learning, SegNet [19]. A new type of non-parametric upsampling layer is
proposed in SegNet that directly uses the index information
from corresponding max-pooling layers in the encoder. The
resulting encoder-decoder network structure has been shown
to be more efficient in terms of training time and memory
compared to benchmark architectures that achieve similar
performance. SegNet was designed for scene segmentation,
so preserving high frequency information of the input image
is not a required property. In our application of image
restoration, however, it is important to preserve the texture
level information for the output so that the corrected image
can still be processed or utilized in other applications such
as 3D reconstruction or object detection. Inspired by recent
work on image restoration and denoising using neural networks [20][21], we incorporate skipping layers on the basic
encoder-decoder structure to compensate for the loss in high
frequency components through the network. The skipping
layers are able to increase the convergence speed in network
training and to improve the fine scale quality of the restored
image, as shown in Fig. 6. More discussion will be given in
§V.
As shown in Fig. 3, in the depth estimation network, the
encoder consists of 10 convolution layers and three levels of
downsampling. The decoder is symmetric to the encoder,
using non-parametric upsampling layers. Before the final
convolution layer, we concatenate the input layer with the
feature layers to provide high resolution information to the
last convolution layer. The network takes a downsampled
underwater image of 56 × 56 × 3 as input and outputs a
relative depth map of 56×56×1. This map is then upsampled
to 480 × 480 and serves as part of the input to the second
stage for color correction.
The color correction network module is similar to the
depth estimation network. It takes an input RGB-D image at
the resolution of 480×480, padded to 512×512 to avoid edge
effects. Although the network module is a fully convolutional
network and changing the input resolution does not affect the
model size itself, increasing input resolution demands larger
computational memory to process the intermediate forward
and backward propagation between layers. A resolution of
256 × 256 would reach the upper bound of such an encoder-decoder network trained on a 12GB GPU. To increase the
output resolution of our proposed network, we keep the basic
network architecture used in the depth estimation stage as
the core processing component of our color restoration net,
as depicted in Fig. 3. Then we wrap the core component
with an extra downsampling and upsampling stage. The input
image is downsampled using an averaging pooling layer to a
resolution of 128 × 128 and passed through the core process
component. At the end of the core component, the output is
then upsampled to 512 × 512 using a deconvolution layer
initialized by a bilinear interpolation filter. Two skipping
layers are concatenated to preserve high resolution features.
In this way, the main intermediate computation is still done
in relatively low resolution. We were able to use a batch
size of 15 to train the network on a 12GB GPU with this
resolution. For both the depth estimation and color correction
networks, a Euclidean loss function is used. The pixel values
in the images are normalized between 0 to 1.
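A rough sketch of how this wrapping could look is given below (again in PyTorch; the core_net placeholder, channel counts, and the exact skip paths are assumptions, while the 512 → 128 → 512 shapes, the averaging-pool downsample, the bilinearly initialized deconvolution, and the Euclidean loss follow the text):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ColorCorrectionNet(nn.Module):
    """Wraps a low-resolution core encoder-decoder with an extra down/upsampling stage."""

    def __init__(self, core_net, core_out_ch=32):
        super().__init__()
        self.core = core_net                    # assumed: maps (B,4,128,128) -> (B,core_out_ch,128,128)
        self.down = nn.AvgPool2d(4)             # 512 -> 128 averaging-pool downsample
        self.up = nn.ConvTranspose2d(core_out_ch, core_out_ch, 8, stride=4, padding=2)
        self._init_bilinear(self.up)            # deconvolution initialized as a bilinear filter
        self.skip = nn.Conv2d(4, 16, 3, padding=1)   # shallow full-resolution skip path (assumption)
        self.final = nn.Conv2d(core_out_ch + 16 + 4, 3, 3, padding=1)

    @staticmethod
    def _init_bilinear(deconv):
        # standard bilinear-interpolation kernel, one per matching channel pair
        k = deconv.kernel_size[0]
        f = (k + 1) // 2
        c = f - 1 if k % 2 == 1 else f - 0.5
        og = torch.arange(k, dtype=torch.float32)
        filt = 1 - torch.abs(og - c) / f
        kernel = filt[:, None] * filt[None, :]
        deconv.weight.data.zero_()
        for i in range(deconv.in_channels):
            deconv.weight.data[i, i] = kernel
        deconv.bias.data.zero_()

    def forward(self, rgbd):                    # rgbd: (B, 4, 512, 512) padded RGB-D input
        low = self.core(self.down(rgbd))        # main computation stays at 128 x 128
        up = self.up(low)                       # back to 512 x 512
        x = torch.cat([up, self.skip(rgbd), rgbd], dim=1)   # skip concatenations
        return torch.sigmoid(self.final(x))     # restored RGB in [0, 1]

# Euclidean (MSE) loss on pixel values normalized to [0, 1]:
# loss = F.mse_loss(net(rgbd_batch), in_air_rgb_batch)
```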
IV. EXPERIMENTAL SETUP
We evaluate our proposed method using datasets gathered
in both a controlled pure water test tank and from real
scientific surveys in the field. As input in-air RGB-D for
all experiments, we compile four indoor Kinect datasets
(B3DO [22], UW RGB-D Object [23], NYU Depth [24] and
Microsoft 7-scenes [25]) for a total of 15000 RGB-D images.
Fig. 3: Network architecture for color estimation. The first stage of the network takes a synthetic (training) or real (testing)
underwater image and learns a relative depth map. The image and depth map are then used as input for the second stage to
output a restored color image as it would appear in air.
Fig. 4: (a) An artificial rock platform and (b) a diving color board are used to provide ground truth for controlled imaging tests. (c) The rock platform is submerged in a pure water test tank for gathering the MHL dataset.
A. Artificial Testbed
The first survey is done using a 4 ft × 7 ft man-made rock
platform submerged in a pure water test tank at University
of Michigan’s Marine Hydrodynamics Laboratory (MHL). A
color board is attached to the platform for reference (Fig. 4).
A total of over 7000 underwater images are compiled from
this survey.
B. Field Tests
One field dataset was collected in Port Royal, Jamaica, at
the site of a submerged city containing both natural and man-made structure. These images were collected with a hand-held diver rig. For our experiments, we compile a dataset
consisting of 6500 images from a single dive. The maximum
depth from the seafloor is approximately 1.5m. Another field
dataset was collected at a coral reef system near Lizard
Island, Australia [26]. The data was gathered with the same
diver rig and we assumed a maximum depth of 2.0m from
the seafloor. We compile a total number of 6083 images from
the multi-dive survey within a local area.
C. Network Training
For each dataset, we train the WaterGAN network to
model a realistic representation of raw underwater images
from a specific survey site. The real samples are input to
WaterGAN’s discriminator network during training, with an
equal number of in-air RGB-D pairings input to the generator
network. We train WaterGAN on a Titan X (Pascal) with
a batch size of 64 images and a learning rate of 0.0002.
Through experiments, we found 10 epochs to be sufficient
to render realistic images for input to the color correction
network for the Port Royal and Lizard Island datasets. We
trained for 25 epochs for the MHL dataset. Once a model is
trained, we can generate an arbitrary amount of synthetic
data. For our experiments, we generate a total of 15000
rendered underwater images for each model (MHL, Port
Royal, and Lizard Island), which corresponds to the total
size of our compiled RGB-D dataset.
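For reference, the WaterGAN training settings reported above can be collected into a configuration block (a hypothetical summary of our own; field names are not from the released code):

```python
# Hypothetical summary of the WaterGAN training settings above (our own field names).
watergan_training = {
    "gpu": "Titan X (Pascal)",
    "batch_size": 64,
    "learning_rate": 2e-4,
    "epochs": {"MHL": 25, "Port Royal": 10, "Lizard Island": 10},
    "synthetic_images_per_model": 15000,   # matches the size of the compiled in-air RGB-D set
}
```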
Next, we train our proposed color correction network
with our generated images and corresponding in-air RGB-D
images. We split this set into a training set with 12000 images
and a validation set with 3000 images. We train the networks
from scratch for both the depth estimation network and image
restoration network on a Titan X (Pascal) GPU. For the depth
estimation network, we train for 20 epochs with a batch size
of 50, a base learning rate of 1e−6 , and a momentum of 0.9.
For the color correction network, we conduct a two-level
training strategy. For the first level, the core component is
trained with an input resolution of 128 × 128, a batch size
of 20, and a base learning rate of 1e−6 for 20 epochs. Then
we train the whole network at a full resolution of 512 × 512,
with the parameters in core components initialized from the
first training step. We train the full resolution model for 10
epochs with a batch size of 15 and a base learning rate of
1e−7 . Results are discussed in §V for all three datasets.
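The training schedules can be restated as follows (a hypothetical summary of the values above; names are our own):

```python
# Hypothetical restatement of the training schedules above (our own field names).
depth_net_schedule = {"epochs": 20, "batch_size": 50, "base_lr": 1e-6, "momentum": 0.9}
color_net_schedule = [
    {"stage": "core component only", "resolution": 128, "batch_size": 20,
     "base_lr": 1e-6, "epochs": 20},
    {"stage": "full network", "resolution": 512, "batch_size": 15,
     "base_lr": 1e-7, "epochs": 10, "init": "core weights from the first stage"},
]
```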
V. RESULTS AND DISCUSSION
To evaluate the image restoration performance in real
underwater data, we present both qualitative and quantitative
analysis for each dataset. We compare our proposed method
to image processing approaches that are not range-dependent,
including histogram equalization and normalization with the
gray world assumption. We also compare our results to
a range-dependent approach based on a physical model,
the modified Jaffe-McGlamery model (Eqn. 3) with ideal
attenuation coefficients [5]. Lastly, we compare our proposed
method to Shin et al.’s deep learning approach [13], which
implicitly models range-dependent information in estimating
a transmission map.
Qualitative results are given in Figure 5. Histogram equalization looks visually appealing, but it has no knowledge of range-dependent effects, so the corrected color of the same object appears different when viewed from different viewpoints. Our proposed method shows more consistent color
across varying views, with reduced effects of vignetting and
attenuation compared to the other methods. We demonstrate
these findings across the full datasets in our following
quantitative evaluation.
We present two quantitative metrics for evaluating the
performance of our color correction: color accuracy and
color consistency. For accuracy, we refer to the color board
attached to the submerged rock platform in the MHL dataset.
Table I shows the Euclidean distance of intensity-normalized
color in RGB-space for each color patch on the color board
compared to an image of the color board in air. These results
show that our method has the lowest error for blue, red,
and magenta. Histogram equalization has the lowest error
for cyan, yellow and green recovery, but our method still
outperforms the remaining methods for cyan and yellow.
TABLE I: Color correction accuracy based on Euclidean
distance of intensity-normalized color in RGB-space for each
method compared to the ground truth in-air color board.
          Raw      Hist. Eq.  Gray World  Mod. J-M  Shin [13]  Prop. Meth.
Blue      0.3349   0.2247     0.2678      0.2748    0.1933     0.1431
Red       0.2812   0.0695     0.1657      0.2249    0.1946     0.0484
Mag.      0.3475   0.1140     0.2020      0.298     0.1579     0.0580
Green     0.3332   0.1158     0.1836      0.2209    0.2013     0.2132
Cyan      0.3808   0.0096     0.1488      0.3340    0.2216     0.0743
Yellow    0.3599   0.0431     0.1102      0.2265    0.2323     0.1033
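A minimal sketch of the accuracy metric used in Table I, assuming color patches are given as float arrays (the patch extraction and averaging details are our assumptions):

```python
import numpy as np

def intensity_normalize(rgb):
    """Divide an RGB triple by its total intensity (hedged guess at the normalization)."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb / max(rgb.sum(), 1e-8)

def color_patch_error(corrected_patch, in_air_patch):
    """Euclidean distance in RGB space between intensity-normalized mean patch colors."""
    a = intensity_normalize(corrected_patch.reshape(-1, 3).mean(axis=0))
    b = intensity_normalize(in_air_patch.reshape(-1, 3).mean(axis=0))
    return float(np.linalg.norm(a - b))
```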
To analyze color consistency quantitatively, we compute
the variance of intensity-normalized pixel color for each
scene point that is viewed across multiple images. Table II
shows the mean variance of these points. Our proposed
method shows the lowest variance across each color channel.
This consistency can also be seen qualitatively in Fig. 5.
TABLE II: Variance of intensity-normalized color of single
scene points imaged from different viewpoints.
         Raw      Hist. Eq.  Gray World  Mod. J-M  Shin [13]  Prop. Meth.
Red      0.0073   0.0029     0.0039      0.0014    0.0019     0.0005
Green    0.0011   0.0021     0.0053      0.0019    0.0170     0.0007
Blue     0.0093   0.0051     0.0042      0.0027    0.0038     0.0006
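A similar sketch for the consistency metric of Table II, assuming per-scene-point pixel correspondences are available from the 3D reconstruction:

```python
import numpy as np

def consistency_variance(observations):
    """observations: one (N_views, 3) array of RGB samples per scene point."""
    per_point = []
    for obs in observations:
        obs = np.asarray(obs, dtype=float)
        norm = obs / np.clip(obs.sum(axis=1, keepdims=True), 1e-8, None)  # intensity-normalize
        per_point.append(norm.var(axis=0))                                # per-channel variance
    return np.mean(per_point, axis=0)                                     # mean variance (R, G, B)
```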
We also validate the trained network on the testing set of
synthetic data and the validation results are given in Table III.
We use RMSE as the error metric for both color and depth.
These results show that the trained network is able to invert
the model encoded by the generator.
TABLE III: Validation error in pixel value is given in RMSE
in RGB-space. Validation error in depth is given in RMSE
(m).
Dataset              Red     Green   Blue    Depth RMSE
Synth. MHL           0.052   0.033   0.055   0.127
Synth. Port Royal    0.060   0.041   0.031   0.122
Synth. Lizard        0.068   0.045   0.035   0.103
In terms of the computational efficiency, the forward
propagation for depth estimation takes 0.007s on average and
the color correction module takes 0.06s on average, which
is efficient for real-time applications.
It is important to note that our depth estimation network
recovers accurate relative depth, not necessarily absolute
depth. This is due to the scale ambiguity inherent to the
monocular depth estimation problem. To evaluate the depth
estimation in real underwater images, we compare our estimated depth with depth reconstructed from stereo images
available for the MHL dataset in a normalized manner,
ignoring the pixels where no depth is recovered from stereo
reconstruction due to lack of overlap or feature sparsity. The
RMSE of normalized estimated depth and the normalized
stereo reconstructed depth is 0.11m.
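A sketch of this normalized comparison (the min-max normalization and the validity mask are our assumptions; the text only specifies that the comparison is done in a normalized manner and that pixels without stereo depth are ignored):

```python
import numpy as np

def normalized_depth_rmse(est_depth, stereo_depth):
    """RMSE between normalized depth maps, ignoring pixels with no stereo depth."""
    valid = np.isfinite(stereo_depth) & (stereo_depth > 0)
    def norm(d):
        d = d[valid].astype(float)
        return (d - d.min()) / max(d.max() - d.min(), 1e-8)   # min-max normalization (assumption)
    diff = norm(est_depth) - norm(stereo_depth)
    return float(np.sqrt(np.mean(diff ** 2)))
```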
To evaluate the improvement in image quality due to
skipping layers in the color correction network, we train the
network at the same resolution with and without skipping
layers. For the first pass of core component training, the
network without skipping layers takes around 30 epochs to
reach a stable loss, while the proposed network with skipping
layers takes around 15 epochs. The same trend holds for full
model training, taking 10 and 5 epochs, respectively. Figure 6 shows a comparison of image patches recovered from
both versions of the network. This demonstrates that using
skipping layers helps to preserve high frequency information
from the input image.
One limitation of our model is in the parameterization of
the vignetting model, which assumes a centered vignetting
pattern. This is not a valid assumption for the MHL dataset,
so our restored images still show some vignetting though
it is partially corrected. These results could be improved
by adding a parameter that adjusts the center position of
the vignetting pattern over the image. This demonstrates a
limitation of augmented generators, more generally. Since
they are limited by the choice of augmentation functions,
augmented generators may not fully capture all aspects of a
complex nonlinear model [16]. We introduce a convolutional
layer into our augmented generator that is meant to capture
scattering, but we would like to experiment with adding
additional layers to this stage for capturing more complex
effects, such as lighting patterns from sunlight in shallow
water surveys. To further increase the network robustness
and enable the generalization to more application scenarios,
we would also like to train our network across more datasets
covering a larger variety of environmental conditions including differing illumination and turbidity.
Source code, sample datasets, and pretrained models are available at https://github.com/kskin/WaterGAN.
VI. CONCLUSIONS
This paper proposed WaterGAN, a generative network
for modeling underwater images from RGB-D in air. We
showed a novel generator network structure that incorporates
the process of underwater image formation to generate high
resolution output images. We then adapted a dense pixel-wise
model learning pipeline for the task of color correction of
monocular underwater images trained on RGB-D pairs and
corresponding generated images. We evaluated our method
on both controlled and field data to show qualitatively and
quantitatively that our output is accurate and consistent
across varying viewpoints. There are several promising directions for future work to extend this network.
Fig. 5: Results showing color correction on the MHL, Lizard Island, and Port Royal datasets (from top to bottom). Each column shows (a) raw underwater images, and corrected images using (b) histogram equalization, (c) normalization with the gray world assumption, (d) a modified Jaffe-McGlamery model (Eqn. 3) with ideal attenuation coefficients, (e) Shin et al.'s deep learning approach, and (f) our proposed method.
Fig. 6: Zoomed-in comparison of color correction results for an image: (a) raw image patch, (b) restored image without skipping layers, (c) proposed output.
Here we train WaterGAN and the color correction network separately
to simplify initial development of our methods. Combining
these networks into a single network to allow joint training
would be a more elegant approach. Additionally, this would
allow the output of the color correction network to directly
influence the WaterGAN network, perhaps enabling development of a more descriptive loss function for the specific
application of image restoration.
ACKNOWLEDGMENTS
The authors would like to thank the Australian Centre
for Field Robotics for providing the Lizard Island dataset,
the Marine Hydrodynamics Laboratory at the University of
Michigan for providing access to testing facilities, and Y.S.
Shin for sharing source code. This work was supported
in part by the National Science Foundation under Award
Number: 1452793, the Office of Naval Research under award
N00014-16-1-2102, and by the National Oceanic and Atmospheric Administration under award NA14OAR0110265.
REFERENCES
[1] M. Johnson-Roberson, M. Bryson, A. Friedman, O. Pizarro, G. Troni, P. Ozog, and J. C. Henderson, "High-resolution underwater robotic vision-based mapping and 3d reconstruction for archaeology," J. Field Robotics, pp. 625–643, 2016.
[2] M. Bryson, M. Johnson-Roberson, O. Pizarro, and S. Williams, "Automated registration for multi-year robotic surveys of marine benthic habitats," in Proc. IEEE/RSJ Int. Conf. Intell. Robots and Syst., 2013, pp. 3344–3349.
[3] M. Johnson-Roberson, M. Bryson, B. Douillard, O. Pizarro, and S. B. Williams, "Out-of-core efficient blending for underwater georeferenced textured 3d maps," in IEEE Int. Conf. Comp. for Geospat. Res. and App., 2013, pp. 8–15.
[4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Adv. in Neural Info. Proc. Syst., Curran Associates, Inc., 2014, pp. 2672–2680.
[5] A. Jordt, "Underwater 3d reconstruction based on physical models for refraction and underwater light propagation," PhD thesis, Kiel University, 2013.
[6] J. S. Jaffe, "Computer modeling and the design of optimal underwater imaging systems," IEEE J. Oceanic Engin., vol. 15, no. 2, pp. 101–111, 1990.
[7] M. Bryson, M. Johnson-Roberson, O. Pizarro, and S. B. Williams, "True color correction of autonomous underwater vehicle imagery," J. Field Robotics, pp. 853–874, 2015.
[8] M. Bryson, M. Johnson-Roberson, O. Pizarro, and S. Williams, "Colour-consistent structure-from-motion models using underwater imagery," in Rob.: Sci. and Syst., 2012, pp. 33–40.
[9] K. A. Skinner, E. I. Ruland, and M. Johnson-Roberson, "Automatic color correction for 3d reconstruction of underwater scenes," in Proc. IEEE Int. Conf. Robot. and Automation, 2017.
[10] N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, "Initial results in underwater single image dehazing," in Proc. IEEE/MTS OCEANS, Seattle, WA, USA, 2010, pp. 1–8.
[11] P. Drews Jr., E. do Nascimento, F. Moraes, S. Botelho, and M. Campos, "Transmission estimation in underwater single images," in Proc. IEEE Int. Conf. on Comp. Vision Workshops, 2013, pp. 825–830.
[12] P. Drews Jr., E. R. Nascimento, S. S. C. Botelho, and M. F. M. Campos, "Underwater depth estimation and image restoration based on single images," IEEE CG&A, vol. 36, no. 2, pp. 24–35, 2016.
[13] Y.-S. Shin, Y. Cho, G. Pandey, and A. Kim, "Estimation of ambient light and transmission map with common convolutional architecture," in Proc. IEEE/MTS OCEANS, Monterey, CA, 2016, pp. 1–7.
[14] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
[15] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, "Learning from simulated and unsupervised images through adversarial training," CoRR, vol. abs/1612.07828, 2016.
[16] L. Sixt, B. Wild, and T. Landgraf, "RenderGAN: Generating realistic labeled data," CoRR, vol. abs/1611.01331, 2016.
[17] B. L. McGlamery, "Computer analysis and simulation of underwater camera system performance," UC San Diego, Tech. Rep., 1975.
[18] L. Lopez-Fuentes, G. Oliver, and S. Massanet, "Revisiting image vignetting correction by constrained minimization of log-intensity entropy," in Proc. Adv. in Comp. Intell., Springer International Publishing, June 2015, pp. 450–463.
[19] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A deep convolutional encoder-decoder architecture for scene segmentation," IEEE Trans. on Pattern Analysis and Machine Intell., vol. PP, no. 99, 2017.
[20] X.-J. Mao, C. Shen, and Y.-B. Yang, "Image restoration using convolutional auto-encoders with symmetric skip connections," arXiv preprint arXiv:1606.08921, 2016.
[21] V. Jain, J. F. Murray, F. Roth, S. Turaga, V. Zhigulin, K. L. Briggman, M. N. Helmstaedter, W. Denk, and H. S. Seung, "Supervised learning of image restoration with convolutional networks," in Proc. IEEE Int. Conf. Comp. Vision, IEEE, 2007, pp. 1–8.
[22] A. Janoch, S. Karayev, Y. Jia, J. T. Barron, M. Fritz, K. Saenko, and T. Darrell, "A category-level 3-d object dataset: Putting the kinect to work," in Proc. IEEE Int. Conf. on Comp. Vision Workshops, 2011, pp. 1168–1174.
[23] K. Lai, L. Bo, and D. Fox, "Unsupervised feature learning for 3d scene labeling," in Proc. IEEE Int. Conf. on Robot. and Automation, 2014, pp. 3050–3057.
[24] N. Silberman and R. Fergus, "Indoor scene segmentation using a structured light sensor," in Proc. IEEE Int. Conf. Comp. Vision Workshops, 2011, pp. 601–608.
[25] J. Shotton, B. Glocker, C. Zach, S. Izadi, A. Criminisi, and A. Fitzgibbon, "Scene coordinate regression forests for camera relocalization in rgb-d images," in Proc. IEEE Conf. on Comp. Vision and Pattern Recognition, 2013, pp. 2930–2937.
[26] O. Pizarro, A. Friedman, M. Bryson, S. B. Williams, and J. Madin, "A simple, fast, and repeatable survey method for underwater visual 3d benthic mapping and monitoring," Ecology and Evolution, vol. 7, no. 6, pp. 1770–1782, 2017.
MAXIMAL MODIFICATIONS AND AUSLANDER–REITEN DUALITY
FOR NON-ISOLATED SINGULARITIES.
OSAMU IYAMA AND MICHAEL WEMYSS
arXiv:1007.1296v4 [math.AG] 14 Nov 2013
In memory of Kentaro Nagao
Abstract. We first generalize classical Auslander–Reiten duality for isolated singularities to cover singularities with a one-dimensional singular locus. We then define the
notion of CT modules for non-isolated singularities and we show that these are intimately related to noncommutative crepant resolutions (NCCRs). When R has isolated
singularities, CT modules recover the classical notion of cluster tilting modules but
in general the two concepts differ. Then, wanting to generalize the notion of NCCRs
to cover partial resolutions of Spec R, in the main body of this paper we introduce a
theory of modifying and maximal modifying modules. Under mild assumptions all the
corresponding endomorphism algebras of the maximal modifying modules for three-dimensional Gorenstein rings are shown to be derived equivalent. We then develop a
theory of mutation for modifying modules which is similar but different to mutations
arising in cluster tilting theory. Our mutation works in arbitrary dimension, and in
dimension three the behavior of our mutation strongly depends on whether a certain
factor algebra is artinian.
Contents
1. Introduction
1.1. Motivation and History
1.2. Auslander–Reiten Duality for Non-Isolated Singularities
1.3. Maximal Modifications and NCCRs
1.4. Derived Equivalences
1.5. Mutation of Modifications
1.6. Conventions
2. Preliminaries
2.1. Depth and CM Modules
2.2. Reflexive Equivalence and Symmetric Algebras
2.3. Non-Singular and Gorenstein Orders
2.4. d-sCY Algebras
2.5. Global–local properties
3. Auslander–Reiten Duality for Non-Isolated Singularities
4. Modifying and Maximal Modifying Modules
4.1. Modifications in Dimension d
4.2. Derived Equivalence in Dimension 3
5. Relationship Between CT modules, NCCRs and MM modules
6. Mutations of Modifying Modules
6.1. Mutations and Derived Equivalences in Dimension d
6.2. Mutations and Derived Equivalences in Dimension 3
6.3. Complete Local Case
References
The first author was partially supported by JSPS Grant-in-Aid for Scientific Research 24340004,
23540045, 20244001 and 22224001. The second author was partially supported by a JSPS Postdoctoral
Fellowship and by the EPSRC.
1. Introduction
1.1. Motivation and History. One of the basic results in representation theory of commutative algebras is Auslander–Reiten (=AR) duality [Aus78, AR, Y90] for isolated singularities, which gives us many important consequences, e.g. the existence of almost split
sequences and the Calabi-Yau property of the stable categories of Cohen–Macaulay (=CM)
modules over Gorenstein isolated singularities. One of the aims of this paper is to establish a version of AR duality for singularities with one dimensional singular loci. As an
application, the stable categories of CM modules over Gorenstein singularities with one
dimensional singular loci enjoy a generalized Calabi-Yau property. This is a starting point
of our research to apply the methods of cluster tilting in representation theory to study
singularities.
One of the highlights of representation theory of commutative algebras is AR theory
of simple surface singularities [Aus86]. They have only finitely many indecomposable CM
modules, and the Auslander algebras (i.e. the endomorphism algebras of the direct sums of
all indecomposable CM modules) enjoy many nice properties. If we consider singularities
of dimension greater than two, then there are very few representation-finite singularities,
and their Auslander algebras do not satisfy such nice properties. The reason is that the
categories of CM modules do not behave nicely in the sense that the homological properties
of simple functors corresponding to free modules are very hard to control. Motivated to
obtain the correct category on which higher AR theory should be performed, in [Iya07] the
first author introduced the notion of a maximal n-orthogonal subcategory and maximal
n-orthogonal module for the category mod Λ, later renamed cluster tilting subcategories
and cluster tilting modules respectively. Just as classical AR theory on mod Λ was moved
to AR theory on CM Λ following the work of Auslander on the McKay correspondence for
surfaces Λ [Aus86], this suggests that in the case of a higher dimensional CM singularity
R we should apply the definition of a maximal n-orthogonal subcategory/modules to
CM R and hope that this provides a suitable framework for tackling higher-dimensional
geometric problems. Strong evidence for this is provided when R is a three dimensional
normal isolated Gorenstein singularity, since in this case it is known [IR08, 8.13] that such
objects have an intimate relationship with Van den Bergh’s noncommutative crepant
resolutions (NCCRs) [V04b]. Requiring R to be isolated is absolutely crucial to this
relationship (by normality the singularities are automatically isolated in the surfaces case);
from an algebraic perspective this perhaps should not be surprising since AR theory only
works well for isolated singularities. It turns out that the study of maximal n-orthogonal
modules in CM R is not well-suited to non-isolated singularities since the Ext vanishing
condition is far too strong; the question arises as to what subcategories of CM R should
play the role of the maximal n-orthogonal subcategories above.
Although in this paper we answer this question, in fact we say much more. Philosophically, the point is that we are asking ourselves the wrong question. The restriction
to studying maximal orthogonal modules is unsatisfactory since crepant resolutions need
not exist (even for 3-folds) and so we develop a theory which can deal with singularities
in the crepant partial resolutions. Since the endomorphism rings of maximal orthogonal
modules have finite global dimension, these will not do the job for us.
We introduce the notion of maximal modifying modules (see 1.12 below) which intuitively we think of as corresponding to shadows of maximal crepant partial resolutions.
Geometrically this level always exists, but only sometimes will it be smooth. With regards
to this viewpoint maximal modifying modules are a more natural class of objects to work
with compared to noncommutative crepant resolutions; we should thus always work in
this level of generality and simply view the case when the geometry is smooth as being
a happy coincidence. Pushing this philosophy further, everything that we are currently
able to do with NCCRs we should be able to do with maximal modifying modules, and
this motivates much of the work in this paper.
In fact in many regards restricting our attention to only studying maximal crepant
partial resolutions misses much of the picture and so we should (and do) work even more
generally. When one wants to flop curves between varieties with canonical singularities
which are not terminal this does not take place on the maximal level but we should still
be able to understand this homologically. This motivates our definition and the study of
modifying modules (see 1.12 below). Viewing our modifying modules M as conjectural
shadows of partial resolutions we should thus be able to track the birational transformations between the geometrical spaces by using some kind of homological transformation
between the corresponding modifying modules. This leads us to develop a theory of
mutation for modifying modules, which we do in Section 6.
We note that some parts of the theory of (maximal) modifying modules developed
in this paper are analogues of cluster tilting theory [GLS, IR08], especially when the ring
has Krull dimension three. One main difference is that we do not assume that the ring is
an isolated singularity, so we need to introduce (maximal) modifying modules which are
much more general than (maximal) rigid modules in cluster tilting theory. Some of the
main properties in cluster tilting theory are still true in our setting, for example, mutation
is an involution in dimension three (see 1.25 below) and gives a derived equivalence in any
dimension (see 1.23 below). On the other hand, new features also appear in our setting.
For example, mutation sometimes does not change the given modifying modules (see 1.25
below). This feature is necessary, since it exactly reflects the geometry of partial crepant
resolutions.
Although in this paper we are principally interested in the geometrical and commutative algebraic statements, the proofs of our theorems require a slightly more general
noncommutative setting. For this reason throughout this paper we use the language of
singular Calabi–Yau algebras:
Definition 1.1. Let Λ be a module finite R-algebra, then for d ∈ Z we call Λ d-Calabi–Yau
(=d-CY) if there is a functorial isomorphism
HomD(Mod Λ) (X, Y [d]) ≅ D0 HomD(Mod Λ) (Y, X)
for all X ∈ Db (fl Λ), Y ∈ Db (mod Λ), where D0 is the Matlis dual (see §2.4 for more
details). Similarly we call Λ singular d-Calabi–Yau (=d-sCY) if the above functorial
isomorphism holds for all X ∈ Db (fl Λ) and Y ∈ Kb (proj Λ).
Clearly d-sCY (respectively d-CY) algebras are closed under derived equivalence
[IR08, 3.1(1)]. When Λ = R, it is known (see 2.20) that R is d-sCY if and only if R
is Gorenstein and equi-codimensional with dim R = d. Thus throughout this paper, we
use the phrase ‘R is d-sCY’ as a convenient shorthand for this important property.
We remark that by passing to mildly noncommutative d-sCY algebras we increase the
technical difficulty, but we emphasize that we are forced to do this since we are unable to
prove the purely commutative statements without passing to the noncommutative setting.
We now describe our results rigorously, and in more detail.
1.2. Auslander–Reiten Duality for Non-Isolated Singularities. Throughout this
subsection let R be an equi-codimensional (see 1.3 below) CM ring of dimension d with a
canonical module ωR . Recall that for a non-local CM ring R, a finitely generated R-module
ωR is called a canonical module if (ωR )m is a canonical Rm -module for all m ∈ Max R [BH,
3.3.16]. In this case (ωR )p is a canonical Rp -module for all p ∈ Spec R since canonical
modules localize for local CM rings [BH, 3.3.5].
We denote CM R to be the category of CM R-modules (see §2.1), CMR to be the
stable category and CMR to be the costable category. The AR translation is defined to be
τ := HomR (Ωd Tr(−), ωR ) : CMR → CMR.
When R is an isolated singularity one of the fundamental properties of the category CM R
is the existence of Auslander–Reiten duality [Aus78, I.8.8] [AR, 1.1(b)], namely
HomR (X, Y ) ≅ D0 Ext1R (Y, τ X)
for all X, Y ∈ CM R where D0 is the Matlis dual (see §3). Denoting D1 := Ext^{d−1}_R (−, ωR )
to be the duality on the category of Cohen–Macaulay modules of dimension 1, we show
that AR duality generalizes to mildly non-isolated singularities as follows:
Theorem 1.2 (=3.1). Let R be a d-dimensional, equi-codimensional CM ring with a
canonical module ωR and singular locus of Krull dimension less than or equal to one.
Then there exist functorial isomorphisms
fl HomR (X, Y ) ≅ D0 (fl Ext1R (Y, τ X)),
HomR (X, ΩY ) / fl HomR (X, ΩY ) ≅ D1 ( Ext1R (Y, τ X) / fl Ext1R (Y, τ X) )
for all X, Y ∈ CM R, where for an R-module M we denote fl M to be the largest finite
length R-submodule of M .
In fact we prove 3.1 in the setting of noncommutative R-orders (see §3 for precise
details). In the above and throughout this paper, for many of the global-local arguments
to work we often have to add the following mild technical assumption.
Definition 1.3. A commutative ring R is equi-codimensional if all its maximal ideals
have the same height.
Although technical, such rings are very common; for example all domains finitely
generated over a field are equi-codimensional [Ei95, 13.4]. Since our main applications
are three-dimensional normal domains finitely generated over C, in practice this adds no
restrictions to what we want to do. We will use the following well-known property [Mat,
17.3(i), 17.4(i)(ii)]:
Lemma 1.4. Let R be an equi-codimensional CM ring, and let p ∈ Spec R. Then
ht p + dim R/p = dim R.
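For instance (an illustration added here for concreteness, not taken from the original): let R = k[x1 , . . . , xd ] over a field k, which is CM and equi-codimensional, and let p = (x1 , . . . , xi ). Then
\[
\operatorname{ht}\mathfrak{p}=i,\qquad \dim R/\mathfrak{p}=\dim k[x_{i+1},\dots ,x_d]=d-i,\qquad \operatorname{ht}\mathfrak{p}+\dim R/\mathfrak{p}=d=\dim R .
\]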
The above generalized Auslander–Reiten duality implies the following generalized
(d − 1)-Calabi-Yau property of the triangulated category CMR.
Corollary 1.5 (=3.7). Let R be a d-sCY ring with dim Sing R ≤ 1. Then
(1) There exist functorial isomorphisms
fl HomR (X, Y ) ≅ D0 (fl HomR (Y, X[d − 1])),
HomR (X, Y ) / fl HomR (X, Y ) ≅ D1 ( HomR (Y, X[d − 2]) / fl HomR (Y, X[d − 2]) )
for all X, Y ∈ CM R.
(2) (=4.4) For all X, Y ∈ CM R, HomR (X, Y ) ∈ CM R if and only if HomR (Y, X) ∈
CM R.
Note that 1.5(2) also holds (with no assumptions on the singular locus) provided that
R is normal (see 2.9). This symmetry in the Hom groups gives us the technical tool we
require to move successfully from the cluster tilting level to the maximal modification
level below, and is entirely analogous to the symmetry given by [CB, Lemma 1] as used
in cluster theory (e.g. [GLS]).
1.3. Maximal Modifications and NCCRs. Here we introduce our main definitions,
namely modifying, maximal modifying and CT modules, and then survey our main results.
Throughout, an R-algebra is called module finite if it is a finitely generated R-module.
As usual, we denote (−)p := − ⊗R Rp to be the localization functor. For an R-algebra
Λ, clearly Λp is an Rp -algebra and we have a functor (−)p : mod Λ → mod Λp . Recall
[Aus78, Aus84, CR90]:
Definition 1.6. Let R be a CM ring and let Λ be a module finite R-algebra. We say
(1) Λ is an R-order if Λ ∈ CM R.
(2) An R-order Λ is non-singular if gl.dim Λp = dim Rp for all primes p of R.
(3) An R-order Λ has isolated singularities if Λp is a non-singular Rp -order for all nonmaximal primes p of R.
In the definition of non-singular R-order above, gl.dim Λp = dim Rp means that
gl.dim Λp takes the smallest possible value. In fact for an R-order Λ we always have
that gl.dim Λp ≥ dim Rp := tp for all primes p of R since proj.dimΛp Λp /(x1 , ..., xtp )Λp =
dim Rp for a Λp -regular sequence x1 , . . . , xtp . We also remark that since the localization
functor is exact and dense, we always have gl.dim Λp ≤ gl.dim Λ for all p ∈ Spec R.
Throughout this paper we denote
(−)∗ := HomR (−, R) : mod R → mod R
and we say that X ∈ mod R is reflexive if the natural map X → X ∗∗ is an isomorphism.
We denote ref R to be the category of reflexive R-modules. By using Serre’s (S2 )-condition
(see for example [EG85, 3.6], [BH, 1.4.1(b)]), when R is a normal domain the category
ref R is closed under both kernels and extensions.
Definition 1.7. Let R be CM, then by a noncommutative crepant resolution (NCCR) of
R we mean Γ := EndR (M ) where M ∈ ref R is non-zero such that Γ is a non-singular
R-order.
We show in 2.17 that under very mild assumptions the condition in 1.6(2) can in
fact be checked at only maximal ideals, and we show in 2.23 that 1.7 is equivalent to the
definition of NCCR due to Van den Bergh [V04b] when R is a Gorenstein normal domain.
Recall the following:
Definition 1.8. Let A be a ring. We say that an A-module M is a generator if A ∈
add M . A projective A-module M which is a generator is called a progenerator.
Motivated by wanting a characterization of the reflexive generators which give NCCRs, we define:
Definition 1.9. Let R be a d-dimensional CM ring with a canonical module ωR . We call
M ∈ CM R a CT module if
add M = {X ∈ CM R : HomR (M, X) ∈ CM R} = {X ∈ CM R : HomR (X, M ) ∈ CM R}.
Clearly a CT module M is always a generator and cogenerator (i.e. add M contains
both R and ωR ). We show in 5.12 that this recovers the established notion of maximal
(d − 2)-orthogonal modules when R is d-dimensional and has isolated singularities. The
following result in the complete case is shown in [Iya07, 2.5] under the assumption that
G is a small subgroup of GL(d, k), and S G is an isolated singularity. We can drop all
assumptions under our definition of CT modules:
Theorem 1.10 (=5.7). Let k be a field of characteristic zero, and let S be the polynomial ring k[x1 , . . . , xd ] (respectively formal power series ring k[[x1 , . . . , xd ]]). For a finite
subgroup G of GL(d, k), let R = S G . Then S is a CT R-module.
One of our main results involving CT modules is the following, where part (2) answers
a question of Van den Bergh posed in [V04b, 4.4].
Theorem 1.11 (=5.9). Let R be a normal 3-sCY ring. Then
(1) CT modules are precisely those reflexive generators which give NCCRs.
(2) R has a NCCR ⇐⇒ R has a NCCR given by a CM generator M ⇐⇒ R has a CT
module.
However in many cases R need not have a NCCR so we must weaken the notion of
CT module and allow for our endomorphism rings to have infinite global dimension. We
do so as follows:
Definition 1.12. Let R be a d-dimensional CM ring. We call N ∈ ref R a modifying
module if EndR (N ) ∈ CM R, whereas we call N a maximal modifying (MM) module if
N is modifying and furthermore it is maximal with respect to this property, that is to say
if there exists X ∈ ref R with N ⊕ X modifying, necessarily X ∈ add N . Equivalently, we
say N is maximal modifying if
add N = {X ∈ ref R : HomR (N ⊕ X, N ⊕ X) ∈ CM R}.
If N is an MM module (respectively modifying module), we call EndR (N ) a maximal
modification algebra (=MMA) (respectively modification algebra).
In this paper we will mainly be interested in the theoretical aspects of MMAs, but
there are many natural examples. In fact NCCRs are always MMAs (see 4.5) and so this
gives one rich source of examples. However MMAs need not be NCCRs, and for examples
of this type of behaviour, together with the links to the geometry, we refer the reader to
[IW11].
When R is d-dimensional with isolated singularities we show in 5.12 that modifying
modules recover the established notion of (d − 2)-rigid modules, whereas MM modules
recover the notion of maximal (d − 2)-rigid modules. However, other than pointing out
this relationship, throughout we never assume that R has isolated singularities.
When an NCCR exists, we show that MMAs are exactly the same as NCCRs:
Proposition 1.13 (=5.11). Let R be a normal 3-sCY ring, and assume that R has a
NCCR (equivalently, by 1.11, a CT module). Then
(1) MM modules are precisely the reflexive modules which give NCCRs.
(2) MM modules which are CM (equivalently, by 4.2, the MM generators) are precisely
the CT modules.
(3) CT modules are precisely those CM modules which give NCCRs.
The point is that R need not have a NCCR, and our definition of maximal modification algebra is strictly more general.
1.4. Derived Equivalences. We now explain some of our results involving derived equivalences of modifying modules. We say that two module finite R-algebras A and B are
derived equivalent if D(Mod A) ≃ D(Mod B) as triangulated categories, or equivalently
Db (mod A) ≃ Db (mod B) by adapting [R89, 8.1, 8.2]. First, we show that any algebra
derived equivalent to a modifying algebra also has the form EndR (M ).
Theorem 1.14 (=4.6). Let R be a normal d-sCY ring, then
(1) Modifying algebras of R are closed under derived equivalences, i.e. any ring derived
equivalent to a modifying algebra is isomorphic to a modifying algebra.
(2) NCCRs of R are closed under derived equivalences.
The corresponding statement for MMAs is slightly more subtle, but we show it is
true in dimension three (1.17), and also slightly more generally in 4.8(2).
Throughout this paper we freely use the notion of a tilting module which we always
assume has projective dimension less than or equal to one:
Definition 1.15. Let Λ be a ring. Then T ∈ mod Λ is called a partial tilting module if
proj.dimΛ T ≤ 1 and Ext1Λ (T, T ) = 0. If further there exists an exact sequence
0 → Λ → T0 → T1 → 0
with each Ti ∈ add T , we say that T is a tilting module.
Our next result details the relationship between modifying and maximal modifying
modules on the level of derived categories.
Theorem 1.16 (=4.15). Let R be a normal 3-sCY ring with MM module M . Then
(1) If N is any modifying module, then T := HomR (M, N ) is a partial tilting EndR (M )-module that induces a recollement [BBD, §1.4]
K −→ D(Mod EndR (M )) −F→ D(Mod EndR (N ))
where F = RHom(T, −) and K is a certain triangulated subcategory of D(Mod EndR (M )).
(2) If further N is maximal modifying then the above functor F is an equivalence.
Theorems 1.14 and 1.16 now give the following, which we view as the noncommutative
analogue of a result of Chen [C02, 1.1].
Corollary 1.17 (=4.16). Let R be a normal 3-sCY ring with an MM module. Then all
MMAs are derived equivalent, and further any algebra derived equivalent to an MMA is
also an MMA.
In our study of modifying modules, reflexive modules over noncommutative R-algebras
play a crucial role.
Definition 1.18. Let R be any commutative ring. If A is any R-algebra then we say that
M ∈ mod A is a reflexive A-module if it is reflexive as an R-module.
Note that we do not require that the natural map M → HomAop (HomA (M, A), A) is
an isomorphism. However when A is 3-sCY and M is a reflexive A-module in the sense of
1.18, automatically it is. Our main theorem regarding maximal modifying modules is the
following remarkable relationship between modifying modules and tilting modules. Note
that (3) below says that R has a maximal modification algebra if and only if it has a
maximal modification algebra EndR (N ) where N is a CM generator, a generalization of
1.11(2).
Theorem 1.19 (=4.17, 4.18). Let R be a normal 3-sCY ring with an MM module M .
Then
(1) The functor HomR (M, −) : mod R → mod EndR (M ) induces bijections
{maximal modifying R-modules} ←1:1→ {reflexive tilting EndR (M )-modules},
{modifying R-modules} ←1:1→ {reflexive partial tilting EndR (M )-modules}.
(2) N is modifying ⇐⇒ N is a direct summand of a maximal modifying module.
(3) R has an MM module which is a CM generator.
1.5. Mutation of Modifications. Recall:
Definition 1.20. Let Λ be a ring. For Λ-modules M and N , we say that a morphism
f : N0 → M is a right (add N )-approximation if N0 ∈ add N and further
HomΛ (N, N0 ) −·f→ HomΛ (N, M )
is surjective. Dually we define a left (add N )-approximation.
Now let R be a normal d-sCY ring. We introduce categorical mutations as a method
of producing modifying modules (together with a derived equivalence) from a given one.
For a given modifying R-module M , and N such that 0 ≠ N ∈ add M we consider
(1) a right (add N )-approximation of M , denoted a : N0 → M .
(2) a right (add N ∗ )-approximation of M ∗ , denoted b : N1∗ → M ∗ .
Note that the above a and b are surjective if N is a generator. In what follows we denote
the kernels by
0 → K0 −c→ N0 −a→ M    and    0 → K1 −d→ N1∗ −b→ M ∗
and call these exchange sequences.
Definition 1.21. With notation as above, we define the right mutation of M at N to be µ⁺_N(M) := N ⊕ K0 and we define the left mutation of M at N to be µ⁻_N(M) := N ⊕ K1∗.
Note that by definition µ⁻_N(M) = (µ⁺_{N∗}(M ∗))∗.
Theorem 1.22 (=6.10, 6.5). (1) Both µ⁺_N(M) and µ⁻_N(M) are modifying R-modules.
(2) µ⁺_N and µ⁻_N are mutually inverse operations, i.e. we have that µ⁻_N(µ⁺_N(M)) = M and µ⁺_N(µ⁻_N(M)) = M , up to additive closure.
What is remarkable is that this process always produces derived equivalences, even
in dimension d:
Theorem 1.23 (=6.8, 6.10). Let R be a normal d-sCY ring with modifying module M .
Suppose that 0 ≠ N ∈ add M . Then
(1) EndR (M ), EndR (µ⁻_N(M)) and EndR (µ⁺_N(M)) are all derived equivalent.
(2) If M gives an NCCR, so do µ⁺_N(M) and µ⁻_N(M).
(3) Whenever N is a generator, if M is a CT module so are µ⁺_N(M) and µ⁻_N(M).
(4) Whenever dim Sing R ≤ 1 (e.g. if d = 3), if M is a MM module so are µ⁺_N(M) and µ⁻_N(M).
In particular the above allows us to mutate any NCCR, in any dimension, at any
direct summand, and will give another NCCR together with a derived equivalence. In
particular, we can do this when the ring R is not complete local, and also we can do
this when the NCCR may be given by a quiver with relations where the quiver has both
loops and 2-cycles, in contrast to cluster theory. This situation happens very frequently in
the study of one-dimensional fibres, where this form of mutation seems to have geometric
consequences.
One further corollary in full generality is the following surprising result on syzygies
Ω and cosyzygies Ω−1 , since they are a special case of left and right mutation. Note that
we have to be careful when defining our syzygies and cosyzygies so that they have free
summands; see §6.1 for more details.
Corollary 1.24 (=6.11). Suppose that R is a normal d-sCY ring and M ∈ ref R is a
modifying generator. Then
(1) Ωi M ∈ CM R is a modifying generator for all i ∈ Z, and further all EndR (Ωi M ) are
derived equivalent.
(2) If M is CT (i.e. gives an NCCR), then all Ωi M ∈ CM R are CT, and further all
EndR (Ωi M ) are derived equivalent.
We remark that when dim R = 3, in nice situations we can calculate the mutated algebra from the original algebra by using various combinatorial procedures (see e.g. [BIRS,
5.1], [KY11, 3.2] and [V09, 3.5]), but we note that our mutation is categorical and much
more general, and expecting a combinatorial rule is unfortunately too optimistic a hope.
We also remark that since we are dealing with algebras that have infinite global dimension,
there is no reason to expect that they possess superpotentials and so explicitly describing
their relations is in general a very difficult problem.
When dim R = 3 and R is complete local, we can improve the above results. In this
setting, under fairly weak assumptions it turns out that left mutation is the same as right
mutation, as in the case of cluster theory [IY08]. If 0 ≠ N ∈ add M then we define [N ]
to be the two-sided ideal of EndR (M ) consisting of morphisms M → M which factor
through a member of add N , and denote ΛN := Λ/[N ]. The behaviour of mutation at N
is controlled by ΛN , in particular whether or not ΛN is artinian. Note that when R is
finitely generated over a field k, ΛN is artinian if and only if it is finite dimensional over
k (see 6.15).
For maximal modifying modules the mutation picture is remarkably clear, provided
that we mutate at only one indecomposable summand at a time:
Theorem 1.25 (=6.25). Suppose R is complete normal 3-sCY with MM module M .
Denote Λ = EndR (M ), let Mi be an indecomposable summand of M and consider Λi :=
Λ/Λ(1 − ei )Λ where ei is the idempotent in Λ corresponding to Mi . To ease notation
denote µ⁺_i = µ⁺_{Mi} and µ⁻_i = µ⁻_{Mi}. Then
(1) If Λi is not artinian then µ⁺_i(M) = M = µ⁻_i(M).
(2) If Λi is artinian then µ⁺_i(M) = µ⁻_i(M) and this is not equal to M .
In either case denote µi := µ⁺_i = µ⁻_i; then it is also true that
(3) µi µi (M ) = M .
(4) µi (M ) is a MM module.
(5) EndR (M ) and EndR (µi (M )) are derived equivalent, via the tilting EndR (M )-module
HomR (M, µi (M )).
Some of the above proof works in greater generality, but we suppress the details here.
1.6. Conventions. We now state our conventions. All modules will be left modules, so
for a ring A we denote mod A to be the category of finitely generated left A-modules.
Throughout when composing maps f g will mean f then g, similarly for quivers ab will
mean a then b. Note that with this convention HomR (M, X) is a EndR (M )-module and
HomR (X, M ) is a EndR (M )op -module. For M ∈ mod A we denote add M to be the full
subcategory consisting of summands of finite direct sums of copies of M and we denote
proj A := add A to be the category of finitely generated projective A-modules. Throughout
we will always use the letter R to denote some kind of commutative noetherian ring. We
always strive to work in the global setting, so we write (R, m) if R is local. We use the
notation R̂p to denote the completion of the localization Rp at its unique maximal ideal.
2. Preliminaries
2.1. Depth and CM Modules. Here we record the preliminaries we shall need in subsequent sections, especially some global-local arguments that will be used extensively. For
a commutative noetherian local ring (R, m) and M ∈ mod R recall that the depth of M
is defined to be
depthR M := inf{i ≥ 0 : ExtiR (R/m, M ) ≠ 0},
which coincides with the maximal length of an M -regular sequence. Keeping the assumption that (R, m) is local we say that M ∈ mod R is maximal Cohen-Macaulay (or simply,
CM ) if depthR M = dim R. This definition generalizes to the non-local case as follows: if
R is an arbitrary commutative noetherian ring we say that M ∈ mod R is CM if Mp is
CM for all prime ideals p in R, and we say that R is a CM ring if R is a CM R-module.
It is often convenient to lift the CM property to noncommutative rings, which we do
as follows:
Definition 2.1. Let Λ be a module finite R-algebra, then we call M ∈ mod Λ a CM Λ-module if it is CM when viewed as an R-module. We denote the category of CM Λ-modules
by CM Λ.
To enable us to bring the concept of positive depth to non-local rings, the following
is convenient:
Definition 2.2. Let R be a commutative noetherian ring and M ∈ mod R. We denote
fl M to be the unique maximal finite length R-submodule of M .
It is clear that fl M exists because of the noetherian property of M ; when (R, m) is
local fl M = {x ∈ M : ∃ r ∈ N with mr x = 0}. The following is well-known.
Lemma 2.3. Let (R, m) be a local ring of dimension d ≥ 2 and let Λ be a module finite R-algebra. Then for all M, N ∈ mod Λ with depthR N ≥ 2 we have depthR HomΛ (M, N ) ≥ 2.
Proof. A free presentation Λa → Λb → M → 0 gives 0 → HomΛ (M, N ) → N b → N a so
the result follows from the depth lemma.
In particular if depth R ≥ 2 then reflexive R-modules always have depth at least two.
Lemma 2.4. Suppose R is a d-dimensional CM ring with d ≥ 2 and let Λ be a module
finite R-algebra. For any X ∈ ref Λ we have Xp ∈ CM Λp for all p ∈ Spec R with ht p ≤ 2.
Proof. Since X is reflexive as an R-module we can find an exact sequence 0 → X → P → Q
with P, Q ∈ add R and so on localizing we see that Xp is a second syzygy for all primes p.
Consequently if p has height ≤ 2 then Xp is a second syzygy for the CM ring Rp which
has dim Rp ≤ 2 and so Xp ∈ CM Rp .
2.2. Reflexive Equivalence and Symmetric Algebras. Here we introduce and fix
notation for reflexive modules and symmetric algebras. All the material in this subsection
can be found in [IR08]. Recall from the introduction (1.18) our convention on the definition
of reflexive modules. Recall also that if Λ is a module finite R-algebra, we say M ∈
ref Λ is called a height one progenerator (respectively, height one projective) if Mp is a
progenerator (respectively, projective) over Λp for all p ∈ Spec R with ht p ≤ 1.
In this paper, when the underlying commutative ring R is a normal domain the
following reflexive equivalence is crucial:
Lemma 2.5. If Λ is a module finite R-algebra, then
(1) If M ∈ mod Λ is a generator then
HomΛ (M, −) : mod Λ → mod EndΛ (M )
≃
is fully faithful, restricting to an equivalence add M → proj EndΛ (M ).
If further R is a normal domain, then the following assertions hold.
(2) HomΛ (X, Y ) ∈ ref R for any X ∈ mod Λ and any Y ∈ ref Λ.
(3) Every non-zero M ∈ ref R is a height one progenerator.
(4) Suppose Λ is a reflexive R-module and let M ∈ ref Λ be a height one progenerator.
Then
HomΛ (M, −) : ref Λ → ref EndΛ (M )
is an equivalence. In particular HomR (N, −) : ref R → ref EndR (N ) is an equivalence for
all non-zero N ∈ ref R.
Proof. (1) is standard.
(2) follows easily from the fact that reflexives are closed under kernels; see [IR08, 2.4(1)].
(3) If p is a height one prime then by 2.4 Mp ∈ CM Rp . But R is normal so Rp is regular;
thus Mp is free.
(4) follows by (3) and [RV89, 1.2] (see also [IR08, 2.4(2)(i)]).
Throughout this paper, we will often use the following observation.
Lemma 2.6. Let R be a CM ring, X, Y ∈ CM R. Then SuppR ExtiR (X, Y ) ⊆ Sing R for
all i > 0. In particular, if R has isolated singularities then ExtiR (X, Y ) is a finite length
R-module for all X, Y ∈ CM R and i > 0.
Proof. This is well-known [Aus78], [Y90, 3.3].
The following lemma is convenient and will be used extensively.
Lemma 2.7. Let R be a 3-dimensional, equi-codimensional CM ring and let Λ be a module
finite R-algebra. If X ∈ mod Λ and Y ∈ ref Λ then
HomΛ (X, Y ) ∈ CM R ⇒ fl Ext1Λ (X, Y ) = 0.
If further Y ∈ CM Λ then the converse holds.
Proof. (⇒) For each m ∈ Max R there is an exact sequence
0 → HomΛ (X, Y )m −fm→ HomΛ (P, Y )m → HomΛ (ΩX, Y )m → Ext1Λ (X, Y )m → 0    (2.A)
obtained by localizing the exact sequence obtained from 0 → ΩX → P → X → 0 with P ∈
add Λ. Now depthRm HomΛm (Xm , Ym ) = 3 and further by 2.3 depthRm HomΛm (Pm , Ym ) ≥
2, thus depthRm Cok fm ≥ 2. Since again by 2.3 depthRm HomΛm (ΩXm , Ym ) ≥ 2, we
conclude that depthRm Ext1Λm (Xm , Ym ) > 0 for all m ∈ Max R and so fl Ext1Λ (X, Y ) = 0.
(⇐) Suppose now that Y ∈ CM Λ. Then in (2.A) depthRm HomΛm (Pm , Ym ) = 3 and so by
a similar argument depthRm Ext1Λm (Xm , Ym ) > 0 implies that depthRm HomΛm (Xm , Ym ) =
3.
Definition 2.8. Let Λ be a module finite R-algebra where R is an arbitrary commutative
ring. We call Λ a symmetric R-algebra if HomR (Λ, R) ≅ Λ as Λ-bimodules. We call Λ a
locally symmetric R-algebra if Λp is a symmetric Rp -algebra for all p ∈ Spec R.
Note that if Λ is a symmetric R-algebra then as functors mod Λ → mod Λop
HomΛ (−, Λ) ≅ HomΛ (−, HomR (Λ, R)) ≅ HomR (Λ ⊗Λ −, R) = HomR (−, R).
We have the following well-known observation. Recall that throughout our paper, (−)∗ :=
HomR (−, R).
Lemma 2.9. Let R be a normal domain and Λ be a symmetric R-algebra. Then there is
a functorial isomorphism HomΛ (X, Y ) ≅ HomΛ (Y, X)∗ for all X, Y ∈ ref Λ such that Y
is height one projective.
Proof. For the convenience of the reader, we give a detailed proof here. We have a natural
map f : HomΛ (Y, Λ) ⊗Λ X → HomΛ (Y, X) sending a ⊗ x to (y 7→ a(y)x). Consider the
map f ∗ : HomΛ (Y, X)∗ → (HomΛ (Y, Λ) ⊗Λ X)∗ between reflexive R-modules. Since Y is
height one projective, fp and (f ∗ )p are isomorphisms for any prime p of height at most
one. Thus f ∗ is an isomorphism since R is normal. Thus we have
HomΛ (Y, X)∗ ≅ (via f ∗) (HomΛ (Y, Λ) ⊗Λ X)∗ ≅ (Y ∗ ⊗Λ X)∗ ≅ HomΛ (X, Y ∗∗ ) ≅ HomΛ (X, Y )
as required.
This immediately gives the following result, which implies that symmetric algebras
are closed under reflexive equivalence.
Lemma 2.10. [IR08, 2.4(3)] If Λ is a symmetric R-algebra then so is EndΛ (M ) for any
height one projective M ∈ ref Λ. In particular if R is a normal domain and N ∈ ref R
then EndR (N ) is a symmetric R-algebra.
When discussing derived equivalence of modifying algebras later, we will require the
following result due to Auslander–Goldman.
Proposition 2.11 (cf. [AG60, 4.2]). Let R be a normal domain with dim R ≥ 1, and let
Λ be a module finite R-algebra. Then the following conditions are equivalent:
(1) There exists M ∈ ref R such that Λ ≅ EndR (M ) as R-algebras.
(2) Λ ∈ ref R and further Λp is Morita equivalent to Rp for all p ∈ Spec R with ht p = 1.
Proof. (1)⇒(2) is trivial.
(2)⇒(1) For the convenience of the reader, we give a simple direct proof. Let K be the
quotient field of R. As we have assumed that Λp is Morita equivalent to Rp , K ⊗R Λ ≅ Mn (K) as K-algebras for some n > 0. Thus for any M ∈ mod Λ, we can regard K ⊗R M
Mn (K) as K-algebras for some n > 0. Thus for any M ∈ mod Λ, we can regard K ⊗R M
as an Mn (K)-module. We denote by V the simple Mn (K)-module.
First, we show that there exists M ∈ ref Λ such that K ⊗R M ≃ V as Mn (K)-modules. For example, take a split epimorphism f : Mn (K) → V of Mn (K)-modules and
let M := (f (Λ))∗∗ . Then clearly M satisfies the desired properties.
Next, for M above, we show that the natural map g : Λ → EndR (M ) given by the
action of Λ on M is an isomorphism. Since both Λ and EndR (M ) are reflexive R-modules
by our assumption, it is enough to show that gp is an isomorphism for all p ∈ Spec R with
ht p = 1. Since K ⊗R EndR (M ) = EndK (K ⊗R M ) = EndK (V ) = Mn (K), we have that
K ⊗g : K ⊗R Λ → K ⊗R EndR (M ) is an isomorphism. In particular, gp : Λp → EndR (M )p
is injective. Since Rp is local and Λp is Morita equivalent to Rp , we have that Λp is a full
matrix ring over Rp , which is well-known to be a maximal order over Rp (see e.g. [AG60,
3.6], [CR90, §37]). Thus we have that gp is an isomorphism.
2.3. Non-Singular and Gorenstein Orders. Recall from 1.6 the definition of a nonsingular R-order. By definition the localization of a non-singular R-order is again a
non-singular R-order — we shall see in 2.17 that in most situations we may check whether
an algebra is a non-singular R-order by checking only at the maximal ideals.
For some examples of non-singular R-orders, recall that for a ring Λ and a finite group
G together with a group homomorphism G → Autk−alg (Λ), we define the skew group ring
Λ#G as follows [Aus86, Y90]: As a set, it is a free Λ-module with the basis G. The
multiplication is given by
(sg)(s′ g ′ ) := (s · g(s′ ))(gg ′ )
for any s, s′ ∈ Λ and g, g ′ ∈ G.
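For instance (a small illustration added here, not in the original): if Λ = k[x] and G = Z/2 = {e, g} acts by g(x) = −x, then in Λ#G
\[
(xg)\cdot(xg)=\bigl(x\cdot g(x)\bigr)(gg)=-x^{2}\,e .
\]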
Lemma 2.12. Let R be a CM ring containing a field k. Let Λ be a non-singular R-order,
let G be a finite group together with a group homomorphism G → AutR-alg (Λ) and suppose
|G| 6= 0 in k. Then Λ#G is a non-singular R-order.
Proof. Since Λ#G is a direct sum of copies of Λ as an R-module and Λ ∈ CM R, we have
Λ#G ∈ CM R. Now if X, Y ∈ mod Λ#G then G acts on HomΛ (X, Y ) by
(gf )(x) := g · f (g −1 x)
for all g ∈ G, f ∈ HomΛ (X, Y ) and x ∈ X. Clearly we have a functorial isomorphism
HomΛ#G (X, Y ) = HomΛ (X, Y )G
for all X, Y ∈ mod Λ#G. Taking G-invariants (−)G is an exact functor since kG is
semisimple. Thus we have a functorial isomorphism
ExtiΛ#G (X, Y ) = ExtiΛ (X, Y )G
for all X, Y ∈ mod Λ#G and i ≥ 0. In particular, gl.dim Λ#G ≤ gl.dim Λ holds, and we
have the assertion.
In the remainder of this subsection we give basic properties of non-singular orders.
Lemma 2.13. Non-singular R-orders are closed under Morita equivalence.
Proof. Suppose that Λ is a non-singular R-order and Γ is Morita equivalent to Λ. Then
Λp is Morita equivalent to Γp for all primes p. Thus since global dimension is an invariant
of the abelian category, we have gl.dim Γp = gl.dim Λp = dim Rp for all primes p. To see
why the CM property passes across the Morita equivalence let P denote the progenerator
in mod Λ such that Γ ≅ EndΛ (P ). Since P is a summand of Λn for some n we know that
Γ is a summand of EndΛ (Λn ) = Mn (Λ) as an R-module. Since Λ is CM, so is Γ.
Recall from the introduction (§1.2) the definition of a canonical module ωR for a
non-local CM ring R. If Λ is an R-order we have an exact duality
HomR (−, ωR ) : CM Λ ↔ CM Λop
and so the Λ-module ωΛ := HomR (Λ, ωR ) is an injective cogenerator in the category
CM Λ.
Definition 2.14. [CR90, GN02] Assume R has a canonical module ωR . An R-order Λ
is called Gorenstein if ωΛ is a projective Λ-module.
It is clear that if Λ is a Gorenstein R-order then Λp is a Gorenstein Rp -order for all
p ∈ Spec R. If R is Gorenstein and Λ is a symmetric R-order (i.e. an R-order which is a
symmetric R-algebra), then Λ is clearly a Gorenstein R-order. Moreover if both R and Λ
are d-sCY (see §2.4), we shall see in 2.21 that Λ is a Gorenstein order. Also, we have the
following.
Lemma 2.15. Let Λ be an R-order, then the following are equivalent.
(1) Λ is Gorenstein.
(2) addΛ (Λ) = addΛ (ωΛ ).
(3) addΛop (Λ) = addΛop (ωΛ ).
(4) Λop is Gorenstein.
Proof. Due to 2.26, we can assume that R is complete local, and so mod Λ is Krull–
Schmidt [CR90, 6.12].
(1)⇒(2) If Λ is Gorenstein then by definition addΛ (ωΛ ) ⊆ addΛ (Λ). The number of
non-isomorphic indecomposable projective Λ-modules is equal to that of Λop -modules.
Moreover, the latter is equal to the number of non-isomorphic indecomposable summands
of ωΛ by the duality HomR (−, ωR ). By the Krull–Schmidt property, addΛ (Λ) ⊆ addΛ (ωΛ ).
(2)⇒(1) is trivial.
(2)⇔(3) follows by applying the duality HomR (−, ωR ).
(3)⇔(4) is identical to (1)⇔(2).
When R is local, Gorenstein R-orders Λ are especially important since we have the following Auslander–Buchsbaum type equality, which in particular says that the Λ-modules
which have finite projective dimension and are CM as R-modules are precisely the projective Λ-modules.
Lemma 2.16. Let (R, m) be a local CM ring with a canonical module ωR and let Λ be a
Gorenstein R-order. Then for any X ∈ mod Λ with proj.dimΛ X < ∞,
depthR X + proj.dimΛ X = dim R.
Proof. Let X be a Λ-module with proj.dimΛ X < ∞.
(i) We will show that if X ∈ CM Λ then X is projective. We know that ExtiΛ (−, ωΛ ) = 0 on CM Λ for all i > 0 since ωΛ is injective in CM Λ. Since Λ is Gorenstein, add Λ = add ωΛ by 2.15 and so we have ExtiΛ (−, Λ) = 0 on CM Λ for all i > 0. Since ExtnΛ (X, Λ) ≠ 0 for n = proj.dim X whenever n > 0, this forces proj.dim X = 0, that is, X is projective.
(ii) Let n = proj.dim X and t = depth X. Take a minimal projective resolution
0 → Pn → . . . → P0 → X → 0.
By the depth lemma necessarily t ≥ d−n. On the other hand by the depth lemma we have
Ωd−t X ∈ CM Λ. By (i) we know Ωd−t X is projective so n ≤ d − t. Thus d = n + t.
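As a quick illustration of 2.16 in the simplest commutative case (an example only): take Λ = R = k[[x, y]], a Gorenstein R-order with ωΛ = R, and X = R/(x). From 0 → R --x--> R → X → 0 we get proj.dimΛ X = 1, while depthR X = 1 since X ≅ k[[y]], and indeed 1 + 1 = 2 = dim R.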
The following result is well-known to experts (e.g. [Aus84, 1.5]).
Proposition 2.17. Let Λ be an R-order where R is a CM ring with a canonical module
ωR . Then the following are equivalent:
(1) Λ is non-singular.
(2) gl.dim Λm = dim Rm for all m ∈ Max R.
(3) CM Λ = proj Λ.
(4) Λ is Gorenstein and gl.dim Λ < ∞.
Proof. (1)⇒(2) This is immediate.
(2)⇒(3) Let X ∈ CM Λ. Then Xm ∈ CM Λm . Let x1 , . . . , xd be a Xm -regular sequence
with d = dim Rm . Since we have an exact sequence
0 → Xm --x1--> Xm → Xm /x1 Xm → 0
which induces an exact sequence
ExtiΛm (Xm , −) --x1--> ExtiΛm (Xm , −) → Exti+1Λm (Xm /x1 Xm , −) → Exti+1Λm (Xm , −),
we have proj.dimΛm (Xm /x1 Xm ) = proj.dimΛm Xm + 1 by Nakayama’s Lemma. Using
this repeatedly, we have proj.dimΛm (Xm /(x1 , . . . , xd )Xm ) = proj.dimΛm Xm + d. Since
gl.dim Λm = d, this implies that Xm is a projective Λm -module. Since this holds for all
m ∈ Max R, X is a projective Λ-module (see e.g. 2.26).
(3)⇒(4) We have ωΛ ∈ CM Λ = proj Λ. Since Ωdim R X ∈ CM Λ = proj Λ for any X ∈
mod Λ, we have gl.dim Λ ≤ dim R.
(4)⇒(1) Pick p ∈ Spec R and suppose Y ∈ mod Λp . Since gl.dim Λp < ∞, by Auslander–
Buchsbaum 2.16 proj.dimΛp Y ≤ dim Rp and so gl.dim Λp ≤ dim Rp . Since Λp is an
Rp -order, the Λp -regular sequence x1 , . . . , xd with d = dim Rp gives an Λp -module X :=
Λp /(x1 , . . . , xd )Λp with proj.dimΛp X = d. Thus we have gl.dim Λp ≥ dim Rp .
2.4. d-sCY Algebras. Throughout this paper we shall freely use the notion of d-CY and
d-sCY as in [IR08, §3]: let R be a commutative noetherian ring with dim R = d and let
Λ be a module finite R-algebra. For any X ∈ mod Λ denote by E(X) the injective hull of X, and set E := ⊕m∈Max R E(R/m). This gives rise to Matlis duality D0 := HomR (−, E)
(see for example [O76, §1]). Matlis duality always gives a duality from the category of
finite length R-modules to itself. This is true without assuming that R is (semi-)local
because any finite length R-module is the finite direct sum of finite length Rm -modules
for maximal ideals m, so the statement follows from that for the local setting [BH, 3.2.12].
Recall from the introduction:
Definition 2.18. For n ∈ Z we call Λ n-CY if there is a functorial isomorphism
HomD(Mod Λ) (X, Y [n]) ≅ D0 HomD(Mod Λ) (Y, X)
for all X ∈ Db (fl Λ), Y ∈ Db (mod Λ). Similarly we call Λ n-sCY if the above functorial
isomorphism holds for all X ∈ Db (fl Λ) and Y ∈ Kb (proj Λ).
The next three results can be found in [IR08]; we include them here since we will use
and refer to them extensively.
Proposition 2.19. (1) Λ is d-CY if and only if it is d-sCY and gl.dim Λ < ∞.
(2) d-sCY (respectively d-CY) algebras are closed under derived equivalences.
Proof. (1) is [IR08, 3.1(7)] whilst (2) is [IR08, 3.1(1)].
Proposition 2.20. [IR08, 3.10] Let R be a commutative noetherian ring, d ∈ N. Then
R is d-sCY if and only if R is Gorenstein and equi-codimensional with dim R = d.
From this, for brevity we often say ‘R is d-sCY’ instead of saying ‘R is Gorenstein
and equi-codimensional with dim R = d’.
Proposition 2.21. Let R be d-sCY and let Λ be a module finite R-algebra. Then
Λ is d-sCY ⇐⇒ Λ is an R-order which is a locally symmetric R-algebra.
Thus if Λ is d-sCY then Λ is a Gorenstein R-order.
Proof. The first statement is [IR08, 3.3(1)]. For the second, suppose Λ is d-sCY then since
it is locally symmetric we have that Λm ≅ HomRm (Λm , Rm ) = HomR (Λ, R)m is a projective Λm-module for all m ∈ Max R. Hence HomR (Λ, R) is a projective Λ-module, as required.
The following picture for d-sCY rings R may help the reader navigate the terminology
introduced above.
[Diagram: the terms d-CY R-algebra, d-sCY R-algebra, non-singular R-order, locally symmetric R-order, Gorenstein R-order and symmetric R-order, with the identifications labelled 2.21 and 2.17 and the upward passages labelled "if gl.dim < ∞".]
The following non-local result is also useful.
Lemma 2.22. Suppose that R is a d-sCY normal domain.
(1) If Λ is a module finite R-algebra which is d-sCY and M ∈ ref Λ is a height one
projective, then EndΛ (M ) is d-sCY ⇐⇒ EndΛ (M ) ∈ CM R.
(2) If N ∈ ref R then EndR (N ) is d-sCY ⇐⇒ EndR (N ) ∈ CM R.
Proof. (1) Let Γ := EndΛ (M ). By 2.21 Λm is a symmetric Rm -algebra for all m ∈ Max R,
thus Γm is a symmetric Rm -algebra by 2.10. By 2.21, Γ is d-sCY if and only if Γm ∈ CM Rm
for all m ∈ Max R if and only if Γ ∈ CM R. Thus the assertion follows.
(2) This follows immediately from (1) since any N ∈ ref R is a height one progenerator
by 2.5(3).
Throughout we shall use the definition of NCCR in the introduction (1.7) due to its
suitability for global-local arguments. However, we have the following:
Lemma 2.23. Let R be a d-sCY normal domain, then M ∈ ref R gives a NCCR if and
only if gl.dim EndR (M ) < ∞ and EndR (M ) ∈ CM R.
Proof. (⇒) obvious.
(⇐) Set Λ := EndR (M ), d := dim R. By 2.22(2) Λ is d-sCY hence by 2.21 Λ is a
Gorenstein order, with gl.dim Λ < ∞. By 2.17 Λ is non-singular.
2.5. Global–local properties. In this paper we work in the global setting of non-local
rings so that we can apply our work to algebraic geometry [IW11]. To do this requires
the following technical lemmas.
Lemma 2.24. Derived equivalences between module finite R-algebras are preserved under
localization and completion.
Proof. Let A and B be module finite R-algebras with A derived equivalent to B via a tilting complex T [R89, 6.5]. Since Ext groups localize (respectively, complete), Tp and T̂p both have no self-extensions. Further A can be constructed from T using cones, shifts and summands of T , so using the localizations (respectively, completions) of these triangles we conclude that Ap can be constructed from Tp and also Âp can be reached from T̂p . Thus Tp is a tilting Ap complex and T̂p is a tilting Âp complex.
The following results ensure that membership of add M can be shown locally, or even complete locally, and we will use this often.
Lemma 2.25. Let Λ be a module finite R-algebra, where R is a commutative noetherian ring, and let M, N ∈ mod Λ. Denote by g : N0 → M a right (add N )-approximation of M . Then add M ⊆ add N if and only if the induced map (·g) : HomΛ (M, N0 ) → EndΛ (M ) is surjective.
Proof. (⇐) If (·g) : HomΛ (M, N0 ) → EndΛ (M ) is surjective we may lift idM to obtain a splitting for g and hence M is a summand of N0 .
(⇒) If M ∈ add N then there exist maps a : M → N n and b : N n → M with ab = idM . Since g is an
approximation, for every ϕ ∈ EndΛ (M ) there is a commutative diagram in which some ψ : N n → N0 satisfies bϕ = ψg. Consequently ϕ = abϕ = aψg and so ϕ is the image of aψ under the map (·g).
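As a small sanity check of 2.25 (an illustrative example only): take Λ = R = k[[x]], M = R/(x) and N = R. The canonical surjection g : R → R/(x) is a right (add R)-approximation of M , but HomΛ (M, N0 ) = HomR (R/(x), R) = 0 while EndΛ (M ) ≅ k ≠ 0, so (·g) is not surjective; accordingly add M ⊄ add N , as expected since R/(x) is not a summand of a free module.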
Proposition 2.26. Let Λ be a module finite R-algebra, where R is a commutative noetherian ring, and let M, N ∈ mod Λ. Then the following are equivalent:
1. add M ⊆ add N .
2. add Mp ⊆ add Np for all p ∈ Spec R.
3. add Mm ⊆ add Nm for all m ∈ Max R.
4. add M̂p ⊆ add N̂p for all p ∈ Spec R.
5. add M̂m ⊆ add N̂m for all m ∈ Max R.
Furthermore we can replace ⊆ by equality throughout and the result is still true.
Proof. Let g be as in 2.25. Then gp : (N0 )p → Mp is a right (add Np )-approximation and ĝp : (N̂0 )p → M̂p is a right (add N̂p )-approximation for any p ∈ Spec R. Since the vanishing of Cok((·g) : HomΛ (M, N0 ) → EndΛ (M )) can be checked locally or complete locally, all conditions are equivalent. The last statement holds by symmetry.
3. Auslander–Reiten Duality for Non-Isolated Singularities
Let R be a d-dimensional, equi-codimensional CM ring with a canonical module
ωR , and let Λ be an R-order. If R is d-sCY, we always choose ωR := R. We denote by CMΛ the stable category of maximal CM Λ-modules and by CMΛ the costable category. By definition these have the same objects as CM Λ, but the morphism spaces in the stable (respectively costable) category are HomΛ (X, Y )/P(X, Y ) (respectively HomΛ (X, Y )/I(X, Y )), where P(X, Y ) (respectively I(X, Y )) is the subspace of morphisms factoring through add Λ (respectively add ωΛ ).
We denote Tr := TrΛ : modΛ → modΛop the Auslander–Bridger transpose duality,
and ΩΛop : modΛop → modΛop the syzygy functor. Then we have AR translation
τ := HomR (ΩdΛop TrΛ (−), ωR ) : CMΛ → CMΛ.
We denote by Di := Extd−iR (−, ωR ) the duality of the category of Cohen–Macaulay modules of dimension i, so D0 is the Matlis dual (as in §2.4).
If Λ is an R-order as above we define SingR Λ := {p ∈ Spec R : gl.dim Λp > dim Rp }
to be the singular locus of Λ (see 1.6(2)). Our main theorem is the following:
Theorem 3.1. Let R be a d-dimensional, equi-codimensional CM ring with a canonical
module ωR . Let Λ be an R-order with dim SingR Λ ≤ 1. Then there exist functorial
isomorphisms
fl HomΛ (X, Y ) ≅ D0 (fl Ext1Λ (Y, τ X)),
HomΛ (X, ΩY ) / fl HomΛ (X, ΩY ) ≅ D1 ( Ext1Λ (Y, τ X) / fl Ext1Λ (Y, τ X) )
for all X, Y ∈ CM Λ.
In fact 3.1 immediately follows from the more general 3.2 below. Recall for X ∈ mod Λ
that NP(X) := {p ∈ Spec R : Xp ∉ proj Λp } and CM1 Λ := {X ∈ CM Λ : dim NP(X) ≤ 1}.
Theorem 3.2. Let R be a d-dimensional, equi-codimensional CM ring with a canonical
module ωR . Let Λ be an R-order. Then there exist functorial isomorphisms
fl HomΛ (X, Y ) ≅ D0 (fl Ext1Λ (Y, τ X)),
HomΛ (X, ΩY ) / fl HomΛ (X, ΩY ) ≅ D1 ( Ext1Λ (Y, τ X) / fl Ext1Λ (Y, τ X) )
for all X ∈ CM1 Λ and Y ∈ CM Λ.
The proof of 3.2 requires the next three easy lemmas. For a finitely generated R-module M , denote by ER (M ) the injective hull of M .
Lemma 3.3. If X ∈ mod R and Y ∈ Mod R satisfies Supp X ∩ Ass Y = ∅, then
HomR (X, Y ) = 0.
Proof. Let f : X → Y be any map and X ′ := Im f . Then X ′ ⊂ Y is a finitely generated
submodule such that Ass X ′ ⊂ Supp X ∩ Ass Y . Thus Ass X ′ = ∅ and so since X ′ is
finitely generated, X ′ = 0.
Lemma 3.4. [BH, 3.2.7(a)] We have Ass ER (R/p) = {p}.
Now recall that if R is a d-dimensional equi-codimensional CM ring with canonical
ωR then the minimal R-injective resolution of ωR , denoted
0 → ωR → I0 → I1 → . . . → Id−1 → Id → 0,        (3.A)
satisfies
Ii = ⊕p:ht p=i E(R/p) = ⊕p:dim R/p=d−i E(R/p)        (3.B)
by [BH, 3.2.9, 3.3.10(b)] and 1.4.
In particular the Matlis dual is D0 = HomR (−, Id ).
Lemma 3.5. Let R be a d-dimensional equi-codimensional CM ring with canonical module
ωR . If N ∈ mod R with dimR N ≤ 1, then
(1) Extd−1R (N, ωR ) ≅ Extd−1R (N/fl N, ωR ).
(2) ExtdR (N, ωR ) ≅ ExtdR (fl N, ωR ).
(3) There is an exact sequence
0 → Extd−1R (N/fl N, ωR ) → HomR (N, Id−1 ) → HomR (N, Id ) → ExtdR (fl N, ωR ) → 0.
Proof. There is an exact sequence 0 → fl N → N → N/fl N → 0, from which applying HomR (−, ωR ) gives
Extd−2R (fl N, ωR ) → Extd−1R (N/fl N, ωR ) → Extd−1R (N, ωR ) → Extd−1R (fl N, ωR ).
Since dimR (fl N ) = 0, it is well-known that ExtiR (fl N, ωR ) = 0 for all i ≠ d [BH, 3.5.11], hence the outer two ext groups vanish, establishing (1). But we also have an exact sequence
ExtdR (N/fl N, ωR ) → ExtdR (N, ωR ) → ExtdR (fl N, ωR ) → Extd+1R (N/fl N, ωR )
and so since N/fl N has positive depth (or is zero) at all maximal ideals, ExtiR (N/fl N, ωR ) = 0 for all i > d − 1, again by [BH, 3.5.11]. This establishes (2). For (3), note first that HomR (N, Id−2 ) = 0 by 3.3, since by 3.4 and the assumption that dim N ≤ 1 we have that Supp N ∩ Ass Id−2 = ∅. Consequently simply applying HomR (N, −) to (3.A) gives an exact sequence
0 → Extd−1R (N, ωR ) → HomR (N, Id−1 ) → HomR (N, Id ) → ExtdR (N, ωR ) → 0,
and so (3) follows from (1) and (2).
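To illustrate 3.5 in the smallest interesting case (an example only, with d = 2): take R = k[[x, y]], ωR = R and N = R/(x) ⊕ k. Then fl N = k and N/fl N ≅ R/(x), and one computes Ext1R (N, R) ≅ R/(x) ≅ Ext1R (N/fl N, R) and Ext2R (N, R) ≅ k ≅ Ext2R (fl N, R), in accordance with (1) and (2).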
We are now ready to prove 3.2. To ease notation, we often drop Hom, Tor and Ext, and for example write R (X, Y ) for HomR (X, Y ), 1R (X, Y ) for Ext1R (X, Y ), and R1 (X, Y ) for TorR1 (X, Y ).
Proof. Denote T := Tr X. Now since Y ∈ CM Λ we have ExtiR (Y, ωR ) = 0 for all i > 0
and so applying HomR (Y, −) to (3.A) gives an exact sequence
0 → R (Y, ωR ) → R (Y, I0 ) → R (Y, I1 ) → . . . → R (Y, Id−1 ) → R (Y, Id ) → 0
of left Λop -modules, which we split into short exact sequences
0 → R (Y, ωR ) → R (Y, I0 ) → C1 → 0,
0 → C1 → R (Y, I1 ) → C2 → 0,
...
0 → Cd−2 → R (Y, Id−2 ) → Cd−1 → 0,
0 → Cd−1 → R (Y, Id−1 ) → R (Y, Id ) → 0.
Applying HomΛop (T, −) gives exact sequences
Ext1Λop (T, R (Y, Id−1 )) → Ext1Λop (T, R (Y, Id )) → Ext2Λop (T, Cd−1 ) → Ext2Λop (T, R (Y, Id−1 )) → Ext2Λop (T, R (Y, Id ))
Ext2Λop (T, R (Y, Id−2 )) → Ext2Λop (T, Cd−1 ) → Ext3Λop (T, Cd−2 ) → Ext3Λop (T, R (Y, Id−2 ))
...
Extd−1Λop (T, R (Y, I1 )) → Extd−1Λop (T, C2 ) → ExtdΛop (T, C1 ) → ExtdΛop (T, R (Y, I1 ))
ExtdΛop (T, R (Y, I0 )) → ExtdΛop (T, C1 ) → Extd+1Λop (T, R (Y, ωR )) → Extd+1Λop (T, R (Y, I0 )).
By the functorial isomorphism [CE99, VI.5.1]
ExtjΛ (A, R (B, I)) ≅ HomR (TorΛj (A, B), I)
where I is an injective R-module, we have exact sequences
R (TorΛ1 (T, Y ), Id−1 ) → R (TorΛ1 (T, Y ), Id ) → Ext2Λop (T, Cd−1 ) → R (TorΛ2 (T, Y ), Id−1 ) → R (TorΛ2 (T, Y ), Id )    (3.C)
R (TorΛ2 (T, Y ), Id−2 ) → Ext2Λop (T, Cd−1 ) → Ext3Λop (T, Cd−2 ) → R (TorΛ3 (T, Y ), Id−2 )
...
R (TorΛd−1 (T, Y ), I1 ) → Extd−1Λop (T, C2 ) → ExtdΛop (T, C1 ) → R (TorΛd (T, Y ), I1 )
R (TorΛd (T, Y ), I0 ) → ExtdΛop (T, C1 ) → Extd+1Λop (T, R (Y, ωR )) → R (TorΛd+1 (T, Y ), I0 ).    (3.D)
By the assumption that X ∈ CM1 Λ, for all primes p such that dim R/p > 1 we have Xp ∈ proj Λp and so Tp ∈ proj (Λop )p . Thus for all such primes and any j > 0, we have TorΛj (T, Y )p ≅ TorΛpj (Tp , Yp ) = 0. Hence for all i = 0, 1, . . . , d − 2 and all j > 0, by 3.4 and (3.B) it follows that Supp TorΛj (T, Y ) ∩ Ass Ii = ∅ and so consequently HomR (TorΛj (T, Y ), Ii ) = 0 for all j > 0 and all i = 0, 1, . . . , d − 2 by 3.3. Thus (3.D) reduces to
Ext2Λop (T, Cd−1 ) ≅ Ext3Λop (T, Cd−2 ) ≅ . . . ≅ ExtdΛop (T, C1 ) ≅ Extd+1Λop (T, R (Y, ωR ))
and so it follows that
Ext2Λop (T, Cd−1 ) ≅ Ext1Λop (ΩdΛop Tr X, R (Y, ωR )) ≅ Ext1Λ (Y, R (ΩdΛop Tr X, ωR )) = Ext1Λ (Y, τ X).    (3.E)
Using the well-known functorial isomorphism [Aus78, 3.2],[Y90, 3.9]
TorΛ1 (Tr X, Y ) ≅ HomΛ (X, Y ),    (3.F)
(3.C), (3.E) and (3.F) combine to give a commutative diagram of exact sequences whose top row is (3.C), whose bottom row is
R (HomΛ (X, Y ), Id−1 ) → R (HomΛ (X, Y ), Id ) --ψ--> Ext1Λ (Y, τ X) → R (HomΛ (X, ΩY ), Id−1 ) → R (HomΛ (X, ΩY ), Id ),
and whose vertical maps are the isomorphisms given by (3.F), (3.F), (3.E), (3.F) and (3.F),
which we splice as
R (HomΛ (X, Y ), Id−1 ) → R (HomΛ (X, Y ), Id ) → Im ψ → 0    (3.G)
0 → Im ψ → Ext1Λ (Y, τ X) → Cok ψ → 0
(3.H)
0 → Cok ψ → R (HomΛ (X, ΩY ), Id−1 ) → R (HomΛ (X, ΩY ), Id ).
(3.I)
By applying 3.5(3) to N := HomΛ (X, Y ) and comparing to (3.G) we see that
Im ψ ≅ ExtdR (fl HomΛ (X, Y ), ωR ) = D0 (fl HomΛ (X, Y )).    (3.J)
Similarly, applying 3.5(3) to N := HomΛ (X, ΩY ) and comparing to (3.I) we see that
Cok ψ ≅ Extd−1R ( HomΛ (X, ΩY )/fl HomΛ (X, ΩY ), ωR ) = D1 ( HomΛ (X, ΩY )/fl HomΛ (X, ΩY ) ).    (3.K)
Now (3.J) and (3.K) show that Im ψ = fl Ext1Λ (Y, τ X), and together with (3.J) this establishes the first required isomorphism, and together with (3.H) and (3.K) this establishes the second required isomorphism.
When R has only isolated singularities the above reduces to classical Auslander–
Reiten duality. If moreover R is a d-sCY ring with isolated singularities (i.e. R is a
Gorenstein d-dimensional equi-codimensional ring with isolated singularities), AR duality
implies that the category CMR is (d − 1)-CY. We now apply 3.1 to possibly non-isolated
d-sCY rings to obtain some analogue of this (d − 1)-CY property (see 3.7(1) below). The
following lemma is well-known [Aus78, III.1.3].
Lemma 3.6. Suppose R is d-sCY and let Λ be a symmetric R-order. Then τ ≅ Ω2−dΛ .
Proof. We have Ω2 Tr(−) ≅ HomΛ (−, Λ). Since R is d-sCY (and so ωR := R), and Λ is symmetric, we have Ω2 Tr(−) ≅ HomΛ (−, Λ) ≅ HomR (−, R). Thus
τ = HomR (ΩdΛop Tr(−), R) ≅ HomR (Ωd−2Λop HomR (−, R), R) ≅ Ω2−dΛ HomR (HomR (−, R), R) ≅ Ω2−dΛ .
Corollary 3.7. Let R be a d-sCY ring and let Λ be a symmetric R-order with dim SingR Λ ≤
1. Then
(1) There exist functorial isomorphisms
fl HomΛ (X, Y ) ≅ D0 (fl HomΛ (Y, X[d − 1])),
HomΛ (X, Y ) / fl HomΛ (X, Y ) ≅ D1 ( HomΛ (Y, X[d − 2]) / fl HomΛ (Y, X[d − 2]) )
for all X, Y ∈ CM Λ.
(2) If d = 3 then for all X, Y ∈ CM Λ, HomΛ (X, Y ) ∈ CM R if and only if HomΛ (Y, X) ∈
CM R.
Proof. (1) It is well-known that in CMΛ the shift functor [1] = Ω−1 so by 3.6 τ = [d − 2].
Thus the result follows directly from 3.1, using the fact that HomΛ (A, B[1]) ≅ Ext1Λ (A, B) for all A, B ∈ CM Λ.
(2) Immediate from (1) and 2.7.
Note that by 2.9, 3.7(2) also holds for arbitrary d (with no assumptions on the singular
locus) provided that R is normal. When R is not necessarily normal, we improve 3.7(2)
in 4.4 below.
4. Modifying and Maximal Modifying Modules
Motivated by the fact that Spec R need not have a crepant resolution, we want to
be able to control algebras of infinite global dimension and hence partial resolutions of
singularities.
4.1. Modifications in Dimension d. We begin with our main definition.
Definition 4.1. Let R be a d-dimensional CM ring, Λ a module finite R-algebra. We
call N ∈ ref Λ a modifying module if EndΛ (N ) ∈ CM R, whereas we call N a maximal
modifying (MM) module if N is modifying and furthermore it is maximal with respect to
this property, that is to say, if there exists X ∈ ref Λ with N ⊕ X modifying, then necessarily X ∈ add N .
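For orientation (a trivial example): taking Λ = R, the module R itself is always modifying, since EndR (R) ≅ R ∈ CM R; whether it is maximal modifying is a much more delicate question (cf. 4.20).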
The following is immediate from the definition.
Lemma 4.2. Suppose R is a d-dimensional CM ring, Λ a module finite R-algebra. Then
(1) The modifying Λ-modules which are generators are always CM.
(2) If further Λ is a Gorenstein R-order then the MM generators are precisely the MM
modules which are CM.
Proof. (1) Since M is modifying, EndΛ (M ) ∈ CM R, and since M is a generator, Λ ∈ add M . Hence M ⊕n ≅ HomΛ (Λ, M ⊕n ) is, for some n ∈ N, a summand of HomΛ (M ⊕n , M ⊕n ), which is a finite direct sum of copies of EndΛ (M ) ∈ CM R; thus M ⊕n , and consequently M itself, is CM.
(2) Conversely suppose that M is an MM module which is CM. Then certainly we have
HomΛ (Λ, M ) ≅ M ∈ CM R and also HomΛ (M, ωΛ ) ≅ HomR (M, ωR ) ∈ CM R. Since Λ is
a Gorenstein R-order, add Λ = add ωΛ by 2.15, thus EndΛ (M ⊕ Λ) ∈ CM R. Since M is
maximal necessarily Λ ∈ add M .
Under assumptions on the singular locus, we can check whether a CM module is
modifying by examining Ext groups. The following is a generalization of 2.7 for d = 3,
and [Iya07, 2.5.1] for isolated singularities.
Theorem 4.3. Suppose that R is d-sCY with d = dim R ≥ 2 and dim Sing R ≤ 1,
let Λ be an R-order and let X, Y ∈ CM Λ. Then HomΛ (X, Y ) ∈ CM R if and only if
ExtiΛ (X, Y ) = 0 for all i = 1, . . . , d − 3 and fl Extd−2Λ (X, Y ) = 0.
Proof. Without loss of generality, we can assume that R is local. Consider a projective
resolution . . . → P1 → P0 → X → 0. Applying HomΛ (−, Y ), we have a complex
0 → HomΛ (X, Y ) → HomΛ (P0 , Y ) → . . .
. . . → HomΛ (Pd−3 , Y ) → HomΛ (Ωd−2 X, Y ) → Extd−2Λ (X, Y ) → 0 (4.A)
with homologies ExtiΛ (X, Y ) at HomΛ (Pi , Y ) for i = 1, . . . , d − 3.
(⇐) By assumption the sequence (4.A) is exact. Since depth Extd−2Λ (X, Y ) ≥ 1,
depth HomΛ (Ωd−2 X, Y ) ≥ 2 by 2.3, and HomΛ (Pi , Y ) ∈ CM R, we have HomΛ (X, Y ) ∈
CM R by the depth lemma.
(⇒) By 2.6 and our assumption dim Sing R ≤ 1, we have dim ExtiΛ (X, Y ) ≤ 1 for any i > 0. Assume ExtiΛ (X, Y ) ≠ 0 for some i = 1, . . . , d − 3, and take i minimal such that ExtiΛ (X, Y ) ≠ 0. We have an exact sequence
0 → HomΛ (X, Y ) → HomΛ (P0 , Y ) → . . .
. . . → HomΛ (Pi−1 , Y ) → HomΛ (Ωi X, Y ) → ExtiΛ (X, Y ) → 0.
Localizing at a prime ideal p of R of height at least d − 1 and using depthRp HomΛp (Ωi Xp , Yp ) ≥ 2 (by 2.3), HomΛp ((Pi )p , Yp ) ∈ CM Rp and HomΛp (Xp , Yp ) ∈ CM Rp (by our assumption), we have depthRp ExtiΛ (X, Y )p ≥ 1 by the depth lemma. If p has height exactly d − 1, then dimRp ExtiΛ (X, Y )p = 0 and hence ExtiΛ (X, Y )p = 0. Thus dim ExtiΛ (X, Y ) = 0 holds, and since the depth at every maximal ideal is at least one we have ExtiΛ (X, Y ) = 0, a contradiction.
Thus we have ExtiΛ (X, Y ) = 0 for all i = 1, . . . , d−3 and so the sequence (4.A) is exact.
Since depth HomΛ (Ωd−2 X, Y ) ≥ 2, HomΛ (Pi , Y ) ∈ CM R and HomΛ (X, Y ) ∈ CM R, we
have depth Extd−2Λ (X, Y ) ≥ 1 by the depth lemma.
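Note the reading of 4.3 when d = 3 (where the range i = 1, . . . , d − 3 is empty): for X, Y ∈ CM Λ one simply has HomΛ (X, Y ) ∈ CM R if and only if fl Ext1Λ (X, Y ) = 0, which is the statement of 2.7 generalized above.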
The following improves 3.7(2).
Corollary 4.4. Let R be a d-sCY ring and let Λ be a symmetric R-order with dim SingR Λ ≤
1. Then for all X, Y ∈ CM Λ, HomΛ (X, Y ) ∈ CM R if and only if HomΛ (Y, X) ∈ CM R.
Proof. By the statement and proof of 2.3, when d ≤ 2, if M and N are CM then so are
both HomΛ (M, N ) and HomΛ (N, M ). Thus we can assume that d ≥ 3. By symmetry, we
need only show (⇒). Assume that HomΛ (X, Y ) ∈ CM R. Then by 4.3 ExtiΛ (X, Y ) = 0
for any i = 1, . . . , d − 3 and fl Extd−2Λ (X, Y ) = 0. Since ExtiΛ (X, Y )/fl ExtiΛ (X, Y ) = 0 for any i = 1, . . . , d − 3, the D1 duality in 3.7(1) implies ExtiΛ (Y, X)/fl ExtiΛ (Y, X) = 0 for any i = 1, . . . , d − 3. Thus ExtiΛ (Y, X) has finite length for any i = 1, . . . , d − 3. Since fl ExtiΛ (X, Y ) = 0 for any i = 1, . . . , d − 2, the D0 duality in 3.7(1) implies fl ExtiΛ (Y, X) = 0 for any i = 1, . . . , d − 2. Consequently we have ExtiΛ (Y, X) = 0 for any i = 1, . . . , d − 3 and fl Extd−2Λ (Y, X) = 0.
Again by 4.3 we have HomΛ (Y, X) ∈ CM R.
Recall from 1.7 the definition of an NCCR. The following asserts that, in arbitrary
dimension, NCCRs are a special case of MMAs:
Proposition 4.5. Let R be a d-dimensional, normal, equi-codimensional CM ring with
a canonical module ωR (e.g. if R is a normal d-sCY ring). Then reflexive R-modules M
giving NCCRs are MM modules.
Proof. Assume that X ∈ ref R satisfies EndR (M ⊕ X) ∈ CM R. Then HomR (M, X) ∈
CM Γ for Γ := EndR (M ). By 2.17 we have HomR (M, X) ∈ proj Γ. By 2.5(4) X ∈ add M
as required.
We now investigate the derived equivalence classes of modifying algebras, maximal
modifying algebras, and NCCRs.
Theorem 4.6. Let R be a normal d-sCY ring. Then
(1) Modifying algebras of R are closed under derived equivalences.
(2) NCCRs of R are closed under derived equivalences.
Proof. (1) Let Λ = EndR (M ) be a modifying algebra of R, and let Γ be a ring that is
derived equivalent to Λ. Then Γ is a module finite R-algebra since it is the endomorphism
ring of a tilting complex of Λ. Since Λ is a modifying algebra of R, it is d-sCY by 2.22(2).
But d-sCY algebras are closed under derived equivalences, hence Γ is also d-sCY and so
Γ ∈ CM R by 2.21. In particular, Γ is reflexive as an R-module.
Now we fix a height one prime ideal p of R. Since Mp is a free Rp -module of finite
rank, Λp = EndRp (Mp ) is a full matrix algebra over Rp . Since Rp is local, the Morita
equivalence class of Rp coincides with the derived equivalence class of Rp [RZ03, 2.12],
so we have that Γp is Morita equivalent to Rp . Thus Γ satisfies the conditions in 2.11,
so there exists a reflexive R-module N such that Γ ∼
= EndR (N ) as R-algebras. We have
already observed that Γ ∈ CM R, hence it is a modifying algebra of R.
(2) Since R is normal d-sCY, by 2.23 NCCRs of R are nothing but modifying algebras of
R which have finite global dimension. We know modifying algebras are d-sCY by 2.22, so
the result follows by combining (1) and 2.19.
Question 4.7. Let R be a normal d-sCY ring. Are the maximal modifying algebras of
R closed under derived equivalences?
We now show that the question has a positive answer when d ≥ 2 provided that
dim Sing R ≤ 1. In particular, this means that 4.7 is true when d ≤ 3.
Theorem 4.8. Suppose R is a normal d-sCY ring with dim R = d ≥ 2 and dim Sing R ≤
1. Let N be a modifying module and set Γ := EndR (N ). Then
(1) N is MM if and only if CMΓ has no non-zero objects Y satisfying HomΓ (Y, Y [i]) =
0 for all i = 1, . . . , d − 3 and fl HomΓ (Y, Y [d − 2]) = 0.
(2) MMAs are closed under derived equivalences.
Proof. (1) By reflexive equivalence 2.5(4) it is easy to show that there exists X ∈ ref R
with X ∉ add N such that EndR (N ⊕ X) ∈ CM R if and only if there exists Y ∈ CM Γ with Y ∉ add Γ such that EndΓ (Y ) ∈ CM R. Since for Y ∈ CM Γ we have ExtiΓ (Y, Y ) =
HomΓ (Y, Y [i]), by 4.3 we have the assertion.
(2) Suppose that Λ is derived equivalent to Γ = EndR (N ) where Γ is an MMA. By 4.6(1)
we know that Λ ≅ EndR (M ) for some modifying M . Since the equivalence D(Mod Λ) ≃
D(Mod Γ) induces equivalences Db (mod Λ) ≃ Db (mod Γ) and Kb (proj Λ) ≃ Kb (proj Γ)
by [R89, 8.1, 8.2], we have CMΛ ≃ CMΓ by [Bu86, 4.4.1]. By (1), the property of being
an MMA can be characterized on the level of this stable category, hence Λ is also an
MMA.
4.2. Derived Equivalence in Dimension 3. We now restrict to dimension three. In
this case, we can strengthen 4.6 to obtain one of our main results (4.16). Leading up to our
next proposition (4.12) we require three technical lemmas. Recall from the introduction
(1.20) the notion of an approximation.
Lemma 4.9. Let R be a normal 3-sCY ring and let Λ be a module finite R-algebra which
is 3-sCY. Let B ∈ ref Λ be a modifying height one progenerator and let C ∈ ref Λ. If
0 → A --f--> B0 --g--> C → 0 is an exact sequence where g is a right (add B)-approximation, then the cokernel of the natural map
(f ·) : HomΛ (B0 , B) → HomΛ (A, B)
has finite length.
Proof. Set Γ := EndΛ (B). Since B is a height one progenerator we have a reflexive
equivalence F := HomΛ (B, −) : ref Λ → ref Γ by 2.5(4). Moreover Γ is 3-sCY by 2.22.
Since g is a right (add B)-approximation, we have an exact sequence
0 → FA → FB0 → FC → 0
of Γ-modules. Then since FB0 ∈ proj Γ, applying HomΓ (−, Γ) = HomΓ (−, FB) gives an
exact sequence
HomΓ (FB0 , FB) → HomΓ (FA, FB) → Ext1Γ (FC, Γ) → 0
in which the vertical isomorphisms HomΓ (FB0 , FB) ≅ HomΛ (B0 , B) and HomΓ (FA, FB) ≅ HomΛ (A, B) identify the first map with (f ·),
and thus Cok(f ·) = Ext1Γ (FC, Γ). Hence we only have to show that Ext1Γ (FC, Γ)p = 0
for any non-maximal prime ideal p of R. By 2.3 and 2.4 we have (FC)p ∈ CM Γp .
Since Γ is 3-sCY, Γp is a Gorenstein Rp -order by 2.21. Consequently Ext1Γ (FC, Γ)p =
Ext1Γp ((FC)p , Γp ) = 0, as required.
Lemma 4.10. Let R be a normal 3-sCY ring and let Λ be a module finite R-algebra which
is 3-sCY. Suppose N ∈ ref Λ and M ∈ CM Λ with both M and N modifying such that M is a height one progenerator. If 0 → L → M0 --h--> N → 0 is an exact sequence where h is
a right (add M )-approximation, then EndΛ (L ⊕ M ) ∈ CM R.
Proof. Note first that since N is reflexive and M ∈ CM Λ we have L ∈ CM Λ by the depth
lemma. From the exact sequence
0 → HomΛ (M, L) → HomΛ (M, M0 ) → HomΛ (M, N ) → 0
with HomΛ (M, M0 ) ∈ CM R we see, using 2.3 and the depth lemma, that HomΛ (M, L) ∈
CM R. By 2.9 HomΛ (L, M ) ∈ CM R. Since EndΛ (M ) ∈ CM R by assumption, it suffices
to show that EndΛ (L) ∈ CM R. By 2.7 we only need to show that fl Ext1Λ (L, L) = 0.
Consider now the following exact commutative diagram
in which b : HomΛ (M0 , M0 ) → HomΛ (L, M0 ) and c : HomΛ (M0 , N ) → HomΛ (L, N ) are the restriction maps, t : HomΛ (M0 , M0 ) → HomΛ (M0 , N ) and f : HomΛ (L, M0 ) → HomΛ (L, N ) are the maps induced by h (so bf = tc), and applying HomΛ (L, −) to the exact sequence gives 0 → Cok f → Ext1Λ (L, L) → K → 0 with K ⊆ Ext1Λ (L, M0 ).
Since HomΛ (L, M ) ∈ CM R we know by 2.7 that fl Ext1Λ (L, M0 ) = 0 and so fl K = 0.
Hence to show that fl Ext1Λ (L, L) = 0 we just need to show that fl Cok f = 0. To do this
consider the exact sequence
Cok b → Cok bf → Cok f → 0.
(4.B)
By 4.9 applied with B = M0 , Cok b has finite length and thus the image of the first map
in (4.B) has finite length. Second, note that Cok bf = Cok tc = Cok c and fl Cok c = 0
since Cok c embeds inside Ext1Λ (N, N ) and furthermore fl Ext1Λ (N, N ) = 0 by 2.7. This
means that the image of the first map is zero, hence Cok f ≅ Cok c and so in particular
fl Cok f = 0.
In fact, using reflexive equivalence we have the following improvement of 4.10, which does not assume that M is CM; it is the analogue of [GLS, 5.1].
Lemma 4.11. Let R be a normal 3-sCY ring and let M and N be modifying modules. If
0 → L → M0 --h--> N is an exact sequence where h is a right (add M )-approximation, then
L ⊕ M is modifying.
Proof. Note that L is reflexive since R is normal. Denote Λ := EndR (M ) and F :=
HomR (M, −) : ref R → ref Λ the reflexive equivalence in 2.5(4). Then Λ is 3-sCY by 2.22,
FN ∈ ref Λ, FM ∈ CM Λ and both FN and FM are modifying Λ-modules. Further
0 → FL → FM0 --Fh--> FN → 0
is exact and FM = Λ so trivially Fh is a right (add FM )-approximation. It is also clear
that FM = Λ is a height one progenerator. By 4.10 we see that EndΛ (FL ⊕ FM ) ∈ CM R,
hence EndR (L ⊕ M ) ≅ EndΛ (FL ⊕ FM ) ∈ CM R as required.
Now we are ready to prove the following crucial result (c.f. 5.10 later), which is the
analogue of [GLS, 5.2].
Theorem 4.12. Let R be a normal 3-sCY ring and let M be a non-zero modifying module.
Then the following are equivalent
(1) M is an MM module.
(2) If N is any modifying module then there exists an exact sequence 0 → M1 → M0 --f--> N with each Mi ∈ add M such that f is a right (add M )-approximation.
Proof. Set Λ := EndR (M ). Since M is a height one progenerator, we have a reflexive
equivalence F := HomR (M, −) : ref R → ref Λ by 2.5(4). Moreover Λ is 3-sCY by 2.22
and so a Gorenstein R-order by 2.21.
(1)⇒(2) We have an exact sequence 0 → L → M0 --f--> N where f is a right (add M )-approximation of N . By 4.11 EndR (L ⊕ M ) ∈ CM R, thus since M is an MM module,
L ∈ add M .
(2)⇒(1) Suppose N is reflexive with EndR (M ⊕ N ) ∈ CM R. Then FN ∈ CM R. We
have proj.dimΛ FN ≤ 1: since N is a modifying module, by assumption there is an exact sequence 0 → FM1 → FM0 → FN → 0. Since Λ is a Gorenstein R-order it follows
that FN is a projective Λ-module by using localization and Auslander–Buchsbaum 2.16.
Hence N ∈ add M .
The following version of the Bongartz completion [B80][ASS, VI.2.4] is convenient for
us. Recall from the introduction that throughout this paper when we say tilting module
we mean a tilting module of projective dimension ≤ 1 (see 1.15).
Lemma 4.13. Suppose R is normal, M ∈ ref R and denote Λ := EndR (M ). If N ∈ ref R
is such that HomR (M, N ) is a partial tilting Λ-module then there exists L ∈ ref R such
that HomR (M, N ⊕ L) is a tilting Λ-module.
Proof. By 2.5 T := HomR (M, N ) and Λ are both reflexive. Thus since R is normal we
can invoke [IR08, 2.8] to deduce that there exists an X ∈ ref Λ such that T ⊕ X is tilting.
Again by 2.5 X = HomR (M, L) for some L ∈ ref R.
We have the following analogue of [IR08, 8.7].
Proposition 4.14. Let R be a normal 3-sCY ring and assume M is an MM module.
Then
(1) HomR (M, −) sends modifying R-modules to partial tilting EndR (M )-modules.
(2) HomR (M, −) sends MM R-modules to tilting EndR (M )-modules.
Proof. (1) Denote Λ := EndR (M ), let N be a modifying module and denote T :=
HomR (M, N ). Note first that proj.dimΛ T ≤ 1 by 4.12 and also Λ is a Gorenstein R-order by 2.21 and 2.22.
Since projective dimension localizes proj.dimΛp Tp ≤ 1 for all primes p, and further
if ht p = 2 then Tp ∈ CM Rp by 2.3. Since Λp is a Gorenstein Rp -order, Auslander–
Buchsbaum (2.16) implies that Tp is a projective Λp -module and so Ext1Λp (Tp , Tp ) = 0
for all primes p with ht p = 2. Consequently Ext1Λm (Tm , Tm ) has finite length for all
m ∈ Max R. On the other hand Λ is 3-sCY by 2.22 and EndΛ (T ) ≅ EndR (N ) ∈ CM R
by 2.5. Thus fl Ext1Λm (Tm , Tm ) = 0 for all m ∈ Max R by 2.7 and so Ext1Λ (T, T ) = 0, as
required.
(2) Now suppose that N is also MM. By Bongartz completion 4.13 we may find L ∈ ref R
such that HomR (M, N ⊕L) is a tilting EndR (M )-module, thus EndR (M ) and EndR (N ⊕L)
are derived equivalent. Since EndR (M ) is 3-sCY so is EndR (N ⊕ L) by 2.19 and thus by
2.22 EndR (N ⊕ L) ∈ CM R. Consequently L ∈ add N and so HomR (M, N ) is a tilting
module.
Now for the convenience of the reader we give a second proof, which shows us more
explicitly how our tilting module generates the derived category. If N is also MM then
since (−)∗ : ref R → ref R is a duality, certainly N ∗ (and M ∗ ) is MM. By 4.12 we can find
0 → N1∗ → N0∗ → M ∗
such that
0 → FN1∗ → FN0∗ → FM ∗ → 0    (4.C)
is exact, where F = HomR (N ∗ , −). Denote Γ := EndR (N ∗ ); then Ext1Γ (FM ∗ , FM ∗ ) = 0 by (1). Thus applying HomΓ (−, FM ∗ ) to (4.C) gives us the following commutative diagram
0 → HomΓ (FM ∗ , FM ∗ ) → HomΓ (FN0∗ , FM ∗ ) → HomΓ (FN1∗ , FM ∗ ) → 0
0 → HomR (M ∗ , M ∗ ) → HomR (N0∗ , M ∗ ) → HomR (N1∗ , M ∗ ) → 0
where the top row is exact and the vertical maps are isomorphisms by 2.5. Hence the
bottom row is exact. Since (−)∗ : ref R → ref R is a duality, this means that
0 → HomR (M, M ) → HomR (M, N0 ) → HomR (M, N1 ) → 0
is exact. But denoting Λ := EndR (M ) and T := HomR (M, N ), this means we have an
exact sequence
0 → Λ → T0 → T1 → 0
with each Ti ∈ add T . Hence T is a tilting Λ-module.
The following is now immediate:
Corollary 4.15. Let R be a normal 3-sCY ring and assume M is an MM module. Then
(1) If N is any modifying module then the partial tilting EndR (M )-module T := HomR (M, N )
induces a recollement
K → D(Mod EndR (M )) --F--> D(Mod EndR (N ))
where F = RHom(T, −) and K is a certain triangulated subcategory of D(Mod EndR (M )).
(2) If further N is maximal modifying then the above functor F is an equivalence.
Proof. (1) Set Λ := EndR (M ) then T := HomR (M, N ) is a partial tilting Λ-module
by 4.14. The fact that EndΛ (T ) ≅ EndR (N ) follows since HomR (M, −) is a reflexive
equivalence by 2.5(4). By Bongartz completion T is a summand of a tilting Λ-module U .
We have a derived equivalence D(Mod EndR (M )) ≃ D(Mod EndΛ (U )). Moreover there
exists an idempotent e of EndΛ (U ) such that e EndΛ (U )e ≅ EndΛ (T ) ≅ EndR (N ). Thus
we have the desired recollement (e.g. [K10, 4.16], see also [M03]).
(2) is an immediate consequence by taking U := T in the argument above.
We can now improve 4.8.
Theorem 4.16. Let R be a normal 3-sCY ring. Then MMAs of R form a complete
derived equivalence class.
Proof. By 4.8(2), MMAs of R are closed under derived equivalence. On the other hand,
all MMAs are derived equivalent by 4.15(2).
Moreover, we have the following bijections (cf. [IR08, 8.9]).
Theorem 4.17. Let R be a normal 3-sCY ring and assume M is an MM module. Then
the functor HomR (M, −) : mod R → mod EndR (M ) induces bijections
(1) {maximal modifying R-modules} ←1:1→ {reflexive tilting EndR (M )-modules}.
(2) {modifying R-modules} ←1:1→ {reflexive partial tilting EndR (M )-modules}.
Proof. (1) In light of 4.14(2) we only need to show that every reflexive tilting module is
the image of some MM module. Thus let X be a reflexive tilting EndR (M )-module; by
reflexive equivalence 2.5(4) there exists some N ∈ ref R such that HomR (M, N ) ≅ X. We
claim that N is MM. Since HomR (M, N ) is a tilting EndR (M )-module certainly EndR (M )
and EndR (N ) are derived equivalent; the fact that N is MM follows from 4.8(2) above.
(2) By 4.14(1) we only need to show that every reflexive partial tilting EndR (M )-module
is the image of some modifying module. Suppose X is a reflexive partial tilting EndR (M )-module, say X ≅ HomR (M, N ). Then by Bongartz completion 4.13 there exists N1 ∈
ref R such that HomR (M, N ⊕ N1 ) is a tilting EndR (M )-module. By (1) N ⊕ N1 is MM,
thus EndR (N ) is a summand of the CM R-module EndR (N ⊕ N1 ) and so is itself CM.
Corollary 4.18. Let R be a normal 3-sCY ring and assume M is an MM module. Then
(1) N is a modifying module ⇐⇒ N is the direct summand of an MM module.
(2) R has an MM module which is a CM generator.
Proof. (1) ‘if’ is clear. For the ‘only if’ let N be a modifying module, then by 4.14(1)
HomR (M, N ) is a partial tilting EndR (M )-module, so the proof of 4.17(2) shows that
there exists N1 ∈ ref R such that N ⊕ N1 is MM.
(2) Apply (1) to R. The corresponding MM module is necessarily CM by 4.2.
Recall from 1.6(3) that we say an R-order Λ has isolated singularities if gl.dim Λp = dim Rp for all non-maximal primes p of R.
Remark 4.19. It is unclear in what generality every maximal modification algebra
EndR (M ) has isolated singularities. In many cases this is true — for example if R is
itself an isolated singularity this holds, as it does whenever M is CT by 5.4. Also, if
R is Gorenstein, X → Spec R is projective birational with M ∈ ref R modifying such that Db (coh X) ≅ Db (mod EndR (M )), then provided X has at worst isolated singularities (e.g. if X is a 3-fold with terminal singularities), EndR (M ) has isolated singularities too.
This is a direct consequence of the fact that in this case the singular derived category has
finite dimensional Hom-spaces. Also note that if R is normal 3-sCY then the existence of
an MM algebra EndR (M ) with isolated singularities implies that Rp has finite CM type
for all primes p of height 2 by a result of Auslander (see [IW08, 2.13]). Finally note that
it follows immediately from 4.15(2) (after combining 2.19 and 2.24) that if R is normal
3-sCY and there is one MM algebra EndR (M ) with isolated singularities then necessarily all MM algebras EndR (N ) have isolated singularities.
The above remark suggests the following conjecture.
Conjecture 4.20. Let R be a normal 3-sCY ring with rational singularities. Then
(1) R always has an MM module M (which may be R).
(2) For all such M , EndR (M ) has isolated singularities.
This is closely related to a conjecture of Van den Bergh regarding the equivalence
of the existence of crepant and noncommutative crepant resolutions when R is a rational
normal Gorenstein 3-fold [V04b, 4.6]. We remark that given the assumption on rational
singularities, any proof is likely to be geometric. We also remark that the restriction
to rational singularities is strictly necessary, since we can consider any normal surface
singularity S of infinite CM type. Since S is a surface EndS (M ) ∈ CM S for all M ∈ CM S,
so since S has infinite CM type it cannot admit an MMA. Now consider R := S ⊗C C[t],
then R is a 3-fold that does not admit an MMA. A concrete example is given by R =
C[x, y, z, t]/(x3 + y3 + z3 ).
5. Relationship Between CT modules, NCCRs and MM modules
In this section we define CT modules for singularities that are not necessarily isolated,
and we show that they are a special case of the MM modules introduced in §4. We also
show (in 5.12) that all these notions recover the established ones when R is an isolated
singularity.
When R is a normal 3-sCY domain, below we prove the implications in the following picture which summarizes the relationship between CT modules, NCCRs and MM
modules:
[Diagram: CT modules ⇒ modules giving NCCRs (5.4), with the converse holding for generators (5.4); modules giving NCCRs ⇒ MM modules (4.5), with the converse holding whenever an NCCR exists (5.11); MM modules ⇒ modifying modules.]
We remark that the relationship given by 4.5 and 5.4 holds in arbitrary dimension d,
whereas 5.11 requires d = 3.
Definition 5.1. Let R be a d-dimensional CM ring with a canonical module ωR . We call
M ∈ CM R a CT module if
add M = {X ∈ CM R : HomR (M, X) ∈ CM R} = {X ∈ CM R : HomR (X, M ) ∈ CM R}.
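For orientation (an elementary example): if R is regular, say R = k[[x1 , . . . , xd ]], then CM R = add R and HomR (R, X) ≅ X for every X ∈ CM R, so both sets in the definition equal add R; in particular M = R is a CT module.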
The name CT is inspired by (but is normally different from) the notion of 'cluster
tilting’ modules. We explain the connection in 5.12.
Lemma 5.2. (1) Any CT module is a generator-cogenerator.
(2) Any CT module is maximal modifying.
Proof. Let M be a CT module.
(1) This is clear from HomR (M, ωR ) ∈ CM R and HomR (R, M ) ∈ CM R.
(2) Suppose N ∈ ref R with EndR (M ⊕N ) ∈ CM R, then certainly HomR (M, N ) ∈ CM R.
Since R ∈ add M by (1), we have N ∈ CM R. Hence since M is a CT module, necessarily
N ∈ add M .
Not every MM module is CT; however, in the situation when R has a CT module
(equivalently, by 5.9(2) below, R has a NCCR) we give a rather remarkable relationship
between CT modules, MM modules and NCCRs in 5.11 at the end of this subsection.
If R is a CM ring with a canonical module ωR we denote the duality (−)∨ :=
HomR (−, ωR ). We shall see shortly that if M or M ∨ is a generator then we may test
the above CT condition on one side (see 5.4 below), but before we do this we need the
following easy observation.
Lemma 5.3. Let R be a CM ring with a canonical module ωR , M ∈ CM R. If EndR (M )
is a non-singular R-order, then R ∈ add M ⇐⇒ ωR ∈ add M .
Proof. Since (−)∨ : CM R → CM R is a duality we know that EndR (M ∨ ) ≅ EndR (M )op and so EndR (M ∨ ) is also a non-singular R-order. Moreover R∨ = ωR and ωR∨ = R so by
the symmetry of this situation we need only prove the ‘only if’ part. Thus assume that
R ∈ add M . In this case since HomR (M, ωR ) = M ∨ ∈ CM R, by 2.17 HomR (M, ωR ) is a
projective EndR (M )-module and thus ωR ∈ add M by 2.5(1).
We reach one of our main characterizations of CT modules. Note that if R is a normal
d-sCY ring, then by 2.9 the definition of CT modules is equivalent to
add M = {X ∈ CM R | HomR (M, X) ∈ CM R},
however the following argument works in greater generality:
Theorem 5.4. Let R be a d-dimensional, equi-codimensional CM ring (e.g. if R is d-sCY) with a canonical module ωR . Then for any M ∈ CM R the following are equivalent
(1) M is a CT module.
(2) R ∈ add M and add M = {X ∈ CM R : HomR (M, X) ∈ CM R}.
(2′ ) ωR ∈ add M and add M = {X ∈ CM R : HomR (X, M ) ∈ CM R}.
(3) R ∈ add M and EndR (M ) is a non-singular R-order.
(3′ ) ωR ∈ add M and EndR (M ) is a non-singular R-order.
In particular CT modules are precisely the CM generators which give NCCRs.
Proof. (2)⇒(3) By assumption we have R ∈ add M and EndR (M ) ∈ CM R. Now let Y ∈
mod EndR (M ) and consider a projective resolution Pd−1 → Pd−2 → ... → P0 → Y → 0.
By 2.5(1) there is an exact sequence Md−1 --f--> Md−2 → ... → M0 with each Mi ∈ add M such that the projective resolution above is precisely
HomR (M, Md−1 ) --·f--> HomR (M, Md−2 ) → ... → HomR (M, M0 ) → Y → 0.
Denote Kd = Ker f . Then we have an exact sequence
0 → HomR (M, Kd ) → Pd−1 → Pd−2 → ... → P0 → Y → 0.
Localizing the above and counting depths we see that HomR (M, Kd )m ∈ CM Rm for all
m ∈ Max R, thus HomR (M, Kd ) ∈ CM R and so by definition Kd ∈ add M . Hence
proj.dimEndR (M) Y ≤ d and so gl.dim EndR (M ) ≤ d.
(3)⇒(2) Since EndR (M ) ∈ CM R, automatically add M ⊆ {X ∈ CM R : HomR (M, X) ∈
CM R}. To obtain the reverse inclusion assume that X ∈ CM R with HomR (M, X) ∈
CM R; then since EndR (M ) is a non-singular R-order, HomR (M, X) is a projective EndR (M )-module by 2.17. This implies that X ∈ add M by 2.5(1).
(2)′ ⇐⇒ (3)′ We have a duality (−)∨ : CM R → CM R thus apply (2) ⇐⇒ (3) to M ∨ and use the fact that EndR (M ∨ ) = EndR (M )op has finite global dimension if and only if EndR (M ) does.
(3) ⇐⇒ (3)′ This is immediate from 5.3.
In particular by the above we have (2) ⇐⇒ (2)′ . Since we clearly have (1) ⇐⇒ (2)+(2)′ ,
the proof is completed.
Note that the last assertion in 5.4 is improved when R is a 3-sCY normal domain in
5.11(3). From the definition it is not entirely clear that CT is a local property:
Corollary 5.5. Let R be a d-dimensional, equi-codimensional CM ring (e.g. if R is d-sCY) with a canonical module ωR . Then the following are equivalent
(1) M is a CT R-module.
(2) Mp is a CT Rp -module for p ∈ Spec R.
(3) Mm is a CT Rm -module for m ∈ Max R.
(4) M̂p is a CT R̂p -module for p ∈ Spec R.
(5) M̂m is a CT R̂m -module for m ∈ Max R.
Thus CT can be checked locally, or even complete locally.
Proof. By 5.4(3) M is a CT R-module if and only if R ∈ add M and EndR (M ) is a non-singular R-order. Non-singular R-orders can be checked either locally or complete locally
(2.17), and R ∈ add M can be checked locally or complete locally by 2.26.
Theorem 5.4 also gives an easy method to find examples of CT modules. Recall that
an element g ∈ GL(d, k) is called a pseudo-reflection if the rank of g − 1 is at most one.
A finite subgroup G of GL(d, k) is called small if it does not contain pseudo-reflections
except the identity. The following is well-known:
Proposition 5.6. Let k be a field of characteristic zero, let S be the polynomial ring
k[x1 , . . . , xd ] (respectively formal power series ring k[[x1 , . . . , xd ]]) and let G be a finite
subgroup of GL(d, k).
(1) If G is generated by pseudo-reflections, then S G is a polynomial ring (respectively a
formal power series ring) in d variables.
(2) If G is small, then the natural map S#G → EndR (S) given by sg 7→ (t 7→ s · g(t))
(s, t ∈ S, g ∈ G) is an isomorphism.
Proof. (1) See [Bou68, §5 no. 5] for example.
(2) This is due to Auslander [Aus86, §4], [Y90, 10.8]. See also [IT10, 3.2] for a detailed
proof.
This immediately gives us a rich source of CT modules. The following result is shown
in [Iya07, 2.5] under the assumption that G is a small subgroup of GL(d, k) and S G is an
isolated singularity. We can drop both assumptions under our definition of CT modules.
Theorem 5.7. Let k be a field of characteristic zero, and let S be the polynomial ring
k[x1 , . . . , xd ] (respectively formal power series ring k[[x1 , . . . , xd ]]). For a finite subgroup
G of GL(d, k), let R = S G . Then S is a CT R-module.
Proof. We prove the case when S is a polynomial ring, since the case when S is a formal
power series ring then follows by 5.5. We proceed by induction on |G|, the case |G| = 1
being trivial. If G is small, then EndR (S) is isomorphic to S#G by 5.6(2). This shows,
by 2.12, that EndR (S) is a non-singular R-order and so S is a CT R-module by 5.4.
Hence we can assume that G is not small, so if N denotes the subgroup of G generated
by pseudo-reflections, we have |G/N | < |G|. Now G/N acts on S N , which by 5.6(1) is
a polynomial ring. In fact the graded subring S N of S has a free generating set of
homogeneous polynomials [C55, Thm. A]. Let V (d) be the vector space of N -invariant
polynomials of degree d (with respect to the original grading of S). We prove, by induction
on d, that generators of S N can be chosen so that G/N acts linearly on these generators.
Clearly the action of G/N is linear on U (d1 ) := V (d1 ), where d1 > 0 is the smallest
such that V (d1 ) is non-zero. Consider now V (d). It has a subspace, say U (d), of linear
combinations of products of N -invariant polynomials of smaller degree. By induction,
G/N acts on U (d), so U (d) is a G/N -submodule. By Maschke’s theorem we can take
the G/N -complement to U (d) in V (d). Then G/N acts linearly on this piece, so it acts
linearly on V (d). Hence a k-basis of U (d) for each d > 0 gives a free generating set on
which G/N acts linearly.
Hence, with these new generators, we have S G = (S N )G/N ≅ k[X1 , . . . , Xd ]G/N where G/N is a subgroup of GL(d, k). Thus
CM S G ≃ CM k[X1 , . . . , Xd ]G/N
and further under this correspondence
addS G S ≃ addk[X1 ,...,Xd ]G/N S N = addk[X1 ,...,Xd ]G/N k[X1 , . . . , Xd ].
Hence EndS G (S) is Morita equivalent to Endk[X1 ,...,Xd ]G/N (k[X1 , . . . , Xd ]) := Λ. By induction Λ is a non-singular R-order, so it follows that EndS G (S) is a non-singular R-order
by 2.13. Consequently S is a CT S G -module by 5.4 since R is a direct summand of the
R-module S by the Reynolds operator.
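As a standard illustration of 5.7 (an example only, in characteristic zero): take S = k[[x, y]] and G = {±1} ⊂ GL(2, k), so that R = S G ≅ k[[u, v, w]]/(v2 − uw). Here G is small since −1 is not a pseudo-reflection, so EndR (S) ≅ S#G by 5.6(2); S is a CT R-module and hence, by 5.4, EndR (S) is an NCCR of R.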
As another source of CT modules, we have:
Example 5.8. Let f : Y → X = Spec R be a projective birational morphism such that
Rf∗ OY = OX and every fibre has dimension at most one, where R is a d-dimensional
normal Gorenstein ring finitely generated over a field. Then provided Y is smooth and
crepant there exists a NCCR EndR (M ) [V04a, 3.2.10] in which M is CM containing R as
a summand. By 5.4 M is a CT module.
We now show that for R normal 3-sCY, the existence of a CT module is equivalent
to the existence of a NCCR. Note that (2) below answers a question of Van den Bergh
[V04b, 4.4].
Corollary 5.9. Let R be a 3-sCY normal domain. Then
(1) CT modules are precisely those reflexive generators which give NCCRs.
(2) R has a NCCR ⇐⇒ R has a NCCR given by a CM generator M ⇐⇒ CM R
contains a CT module.
Proof. Notice that any reflexive generator M which gives a NCCR is CM since R is a
summand of M and further M ≅ HomR (R, M ) is a summand of EndR (M ) ∈ CM R as an
R-module.
(1) By 5.4 CT modules are precisely the CM generators which give NCCRs. The assertion
follows from the above remark.
(2) The latter equivalence was shown in 5.4. We only have to show (⇒) of the former
assertion. If R has a NCCR Λ, then Λ is an MMA (by 4.5) and so by 4.18(2) R has an
MMA Γ = EndR (M ) where M is a CM generator. But by 4.15(2) Γ and Λ are derived
equivalent, so since Λ is an NCCR, so too is Γ (4.6(2)).
Below is another characterization of CT modules, which is analogous to [Iya07, 2.2.3].
Compare this to the previous 4.12.
Proposition 5.10. Assume R is a 3-sCY normal domain and let M ∈ CM R with R ∈
add M . Then the following are equivalent
(1) M is a CT module.
(2) EndR (M ) ∈ CM R and further for all X ∈ CM R there exists an exact sequence
f
0 → M1 → M0 → X → 0 with M1 , M0 ∈ add M and a right (add M )-approximation f .
Proof. (1)⇒(2). Fix X ∈ CM R. Since R is 3-sCY, we have an exact sequence 0 → X →
P0 → P1 with each Pi ∈ add R. Applying HomR (M, −) gives an exact sequence
0 → HomR (M, X) → HomR (M, P0 ) --g--> HomR (M, P1 ) → Cok g → 0.
Since both HomR (M, Pi ) are projective EndR (M )-modules by 2.5(1), and gl.dim EndR (M ) =
3 by 5.4, it follows that proj.dimEndR (M) HomR (M, X) ≤ 1. Consequently we may take
a projective resolution 0 → HomR (M, M1 ) → HomR (M, M0 ) → HomR (M, X) → 0 which
necessarily comes from a complex 0 → M1 → M0 → X → 0, again using 2.5(1). This
complex is itself exact since M is a generator.
(2)⇒(1). Denote Γ = EndR (M ). By 2.22 and 2.21 Γ is a Gorenstein R-order. By
5.4 we only have to show that add M = {X ∈ CM R : HomR (M, X) ∈ CM R}. The
assumption EndR (M ) ∈ CM R shows that the inclusion ⊆ holds so let X ∈ CM R
be such that HomR (M, X) ∈ CM R. By assumption we may find M1 , M0 ∈ add M
such that 0 → HomR (M, M1 ) → HomR (M, M0 ) → HomR (M, X) → 0 is exact, hence
proj.dimΓm HomR (M, X)m ≤ 1 for all m ∈ Max R by 2.5(1). Since HomR (M, X)m ∈
CM Rm , Auslander–Buchsbaum (2.16) implies that proj.dimΓm HomR (M, X)m = 0 for all
m ∈ Max R and hence HomR (M, X) is a projective Γ-module. Since M is a generator,
X ∈ add M .
Provided an NCCR exists, the following shows the precise relationship between MM
modules, CT modules and NCCRs. Note that 5.11(2) says that CT modules are really a
special case of MM modules.
Proposition 5.11. Let R be a 3-sCY normal domain, and assume that R has a NCCR
(equivalently, by 5.9, a CT module). Then
(1) MM modules are precisely the reflexive modules which give NCCRs.
(2) MM modules which are CM (equivalently, by 4.2, the MM generators) are precisely
the CT modules.
(3) CT modules are precisely those CM modules which give NCCRs.
Proof. (1) (⇐) This is shown in 4.5 above.
(⇒) Suppose that M is an MM module, and let EndR (N ) be a NCCR. Then EndR (N )
is an MMA by 4.5 and so EndR (M ) and EndR (N ) are derived equivalent by 4.16. This
implies that EndR (M ) is also an NCCR by 4.6(2).
(2) By (1) MM generators are precisely the CM generators which give NCCRs. By 5.4
these are precisely the CT modules.
(3) Follows immediately from (1) and (2).
In the remainder of this section we relate our work to that of the more common
notions of n-rigid, maximal n-rigid and maximal n-orthogonal (=cluster tilting) modules
in the case when R is an isolated singularity.
Recall that M ∈ ref R is called n-rigid if ExtiR (M, M ) = 0 for all 1 ≤ i ≤ n. We call
M ∈ ref R maximal n-rigid if M is n-rigid and furthermore it is maximal with respect
to this property, namely if there exists X ∈ ref R such that M ⊕ X is n-rigid, then
X ∈ add M .
Recall that M ∈ CM R is called a maximal n-orthogonal module if
add M
= {X ∈ CM R | ExtiR (M, X) = 0 for all 1 ≤ i ≤ n}
= {X ∈ CM R | ExtiR (X, M ) = 0 for all 1 ≤ i ≤ n}.
Proposition 5.12. Let R be d-sCY with only isolated singularities, M ∈ CM R. Then
(1) M is a modifying module if and only if it is (d − 2)-rigid.
(2) M is a maximal modifying module if and only if it is maximal (d − 2)-rigid.
(3) M is a CT module if and only if it is maximal (d − 2)–orthogonal.
Proof. Let X, Y ∈ CM R. By 4.3 and 2.6, it follows that HomR (X, Y ) ∈ CM R if and
only if ExtiR (X, Y ) = 0 for all 1 ≤ i ≤ d − 2. Thus the assertions for (1), (2) and (3)
follow.
6. Mutations of Modifying Modules
6.1. Mutations and Derived Equivalences in Dimension d. Mutation is a technique used to obtain new modifying, maximal modifying and CT modules from a given
one. Many of our arguments work in the full generality of modifying modules although
sometimes it is necessary to restrict to the maximal modifying level to apply certain
arguments.
Throughout this section R will be a normal d-sCY ring, d ≥ 2, and M will be a
modifying module with N such that 0 ≠ N ∈ add M . Note that N may or may not be
decomposable. Given this, we define left and right mutation as in 1.21 in the introduction:
we have exact sequences
0 → K0 --c--> N0 --a--> M    (6.A)
0 → K1 --d--> N1∗ --b--> M ∗    (6.B)
where a is a right (add N )-approximation and b is a right (add N ∗ )-approximation. We call them exchange sequences. From this we define µ+N (M ) := N ⊕ K0 and µ−N (M ) := N ⊕ K1∗ .
Note that by the definition of approximations, N0 , N1 ∈ add N and we have exact
sequences
0 → HomR (N, K0 ) --·c--> HomR (N, N0 ) --·a--> HomR (N, M ) → 0    (6.C)
0 → HomR (N ∗ , K1 ) --·d--> HomR (N ∗ , N1∗ ) --·b--> HomR (N ∗ , M ∗ ) → 0.    (6.D)
Remark 6.1. (1) In general µ+N (M ) and µ−N (M ) are not the same. Nevertheless, we will see later in some special cases that µ+N (M ) = µ−N (M ) holds (6.25), as in cluster tilting theory [IY08, 5.3], [GLS, B06].
(2) A new feature of our mutation which is different from cluster tilting theory is that µ+N (M ) = M = µ−N (M ) can happen. A concrete example is given by taking R = k[x, y, z]G with G = ½(1, 1, 0), M = k[x, y, z] and N = R.
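As a trivial first illustration of these definitions (an example only): taking N = M , the identity map M → M is a right (add M )-approximation, so one may take K0 = 0 in (6.A), giving µ+M (M ) = M ; similarly µ−M (M ) = M .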
Remark 6.2. If d = 3, then both µ+N (M ) and µ−N (M ) are modifying R-modules by 4.11.
We will show in 6.10 that this is the case in any dimension.
We note that mutation is unique up to additive closure. This can be improved if R
is complete local.
Lemma 6.3. Suppose a : N0 → M and a′ : N0′ → M are two right (add N )-approximations
of M . Then add(N ⊕ Ker a) = add(N ⊕ Ker a′ ). A similar statement holds for left
approximations.
Proof. Let K := Ker a and K ′ := Ker a′ . Then we have a commutative diagram
0 → K --c--> N0 --a--> M
0 → K ′ --c′--> N0′ --a′--> M
of exact sequences (the vertical maps being s : K → K ′ , t : N0 → N0′ and the identity on M ), giving an exact sequence
0 → K --(−s, c)--> K ′ ⊕ N0 --(c′ t)--> N0′ .    (6.E)
From the commutative diagram
0 → HomR (N, K) --·c--> HomR (N, N0 ) --·a--> HomR (N, M ) → 0
0 → HomR (N, K ′ ) --·c′--> HomR (N, N0′ ) --·a′--> HomR (N, M ) → 0
(with vertical maps ·s, ·t and the identity)
we see that
HomR (N, K ′ ⊕ N0 ) --·(c′ t)--> HomR (N, N0′ ) → 0
is exact. Thus (6.E) is a split short exact sequence, so in particular K ∈ add(N ⊕ K ′ ).
Similarly K ′ ∈ add(N ⊕ K).
Proposition 6.4. Let R be a normal d-sCY ring and let M be a modifying module with
0 ≠ N ∈ add M (i.e. notation as above). Then
(1) Applying HomR (−, N ) to (6.A) induces an exact sequence
0 → HomR (M, N ) --a·--> HomR (N0 , N ) --c·--> HomR (K0 , N ) → 0.    (6.F)
In particular c is a left (add N )-approximation.
(2) Applying HomR (−, N ∗ ) to (6.B) induces an exact sequence
b·
d·
0 → HomR (M ∗ , N ∗ ) → HomR (N1∗ , N ∗ ) → HomR (K1 , N ∗ ) → 0
(6.G)
30
OSAMU IYAMA AND MICHAEL WEMYSS
In particular d is a left (add N ∗ )-approximation.
(3) We have that
a∗
c∗
0 → M ∗ → N0∗ → K0∗
∗
b
d
(6.H)
∗
0 → M → N1 → K1∗
(6.I)
are exact, inducing exact sequences
·a∗
·c∗
c∗ ·
a∗ ·
0 → HomR (N ∗ , M ∗ ) → HomR (N ∗ , N0∗ ) → HomR (N ∗ , K0∗ ) → 0
(6.J)
0 → HomR (K0∗ , N ∗ ) → HomR (N0∗ , N ∗ ) → HomR (M ∗ , N ∗ ) → 0
·b∗
(6.K)
·d∗
0 → HomR (N, M ) → HomR (N, N1 ) → HomR (N, K1∗ ) → 0
d∗ ·
(6.L)
b∗ ·
0 → HomR (K1∗ , N ) → HomR (N1 , N ) → HomR (M, N ) → 0
(6.M)
Proof. Denote Λ := End_R(N) and F := Hom_R(N, −).
(1) We note that (6.C) is
0 → FK_0 → FN_0 → FM → 0,
so applying Hom_Λ(−, FN) gives
0 → Hom_Λ(FM, FN) → Hom_Λ(FN_0, FN) → Hom_Λ(FK_0, FN) → Ext^1_Λ(FM, Λ).
But by 2.22 Λ is d-sCY and thus a Gorenstein R-order by 2.21. Since FM ∈ CM Λ and add Λ = add ω_Λ by 2.15, it follows that Ext^1_Λ(FM, Λ) = 0 and hence we have a commutative diagram of complexes with top row
0 → Hom_Λ(FM, FN) → Hom_Λ(FN_0, FN) → Hom_Λ(FK_0, FN) → 0
and bottom row
0 → Hom_R(M, N) --a·--> Hom_R(N_0, N) --c·--> Hom_R(K_0, N) → 0,
in which the top row is exact and the vertical maps are isomorphisms by reflexive equivalence 2.5(4). It follows that the bottom row is exact.
(2) is identical to (1) since Hom_R(N^*, M^*) ∈ CM R.
(3) As in (1), applying Hom_Λ(−, FR) to (6.C) gives a commutative diagram of complexes with top row
0 → Hom_Λ(FM, FR) → Hom_Λ(FN_0, FR) → Hom_Λ(FK_0, FR)
and bottom row
0 → Hom_R(M, R) --a^*--> Hom_R(N_0, R) --c^*--> Hom_R(K_0, R),
in which the top row is exact. Hence the bottom row (i.e. (6.H)) is exact. The proof that (6.I) is exact is identical. Now since (−)^* : ref R → ref R is a duality, the sequences (6.J), (6.K), (6.L) and (6.M) are identical to (6.F), (6.C), (6.G) and (6.D) respectively. Thus they are exact.
Proposition 6.5. µ+_N and µ−_N are mutually inverse operations, i.e. we have µ−_N(µ+_N(M)) = M and µ+_N(µ−_N(M)) = M, up to additive closure.
Proof. Since (6.H) and (6.J) are exact, we have µ−_N(µ+_N(M)) = M. The other assertion follows dually.
The following is standard in the theory of tilting mutation [RS91].
Lemma 6.6. Let Λ be a ring, let Q be a projective Λ-module and consider an exact sequence Λ --f--> Q′ --g--> Cok f → 0 where f is a left (add Q)-approximation. If f is injective then Q ⊕ Cok f is a tilting Λ-module of projective dimension at most one.
Proof. For the convenience of the reader we give a complete proof here. It is clear that proj.dim_Λ(Q ⊕ Cok f) ≤ 1 and that it generates the derived category. We need only check that Ext^1_Λ(Q ⊕ Cok f, Q ⊕ Cok f) = 0. Applying Hom_Λ(−, Q), we have an exact sequence
Hom_Λ(Q′, Q) --f·--> Hom_Λ(Λ, Q) → Ext^1_Λ(Cok f, Q) → 0.
Since (f·) is surjective, we have Ext^1_Λ(Cok f, Q) = 0. Applying Hom_Λ(−, Cok f), we have an exact sequence
Hom_Λ(Q′, Cok f) --f·--> Hom_Λ(Λ, Cok f) → Ext^1_Λ(Cok f, Cok f) → 0.
Here (f·) is surjective since Hom_Λ(Q′, Q′) --f·--> Hom_Λ(Λ, Q′) and Hom_Λ(Λ, Q′) --·g--> Hom_Λ(Λ, Cok f) are surjective. Thus we have Ext^1_Λ(Cok f, Cok f) = 0. Consequently Ext^1_Λ(Q ⊕ Cok f, Q ⊕ Cok f) = 0 since Q is projective.
The proof of 6.8 requires the following technical lemma.
Lemma 6.7. Let R be a normal domain, let Λ ∈ ref R be a module finite R-algebra and let T ∈ mod Λ be a height one projective (i.e. T_p is a projective Λ_p-module for all p ∈ Spec R with ht p ≤ 1) such that End_Λ(T) ∈ ref R. Then End_Λ(T) ≅ End_{Λ^op}(T^*)^op.
Proof. Consider the natural ring homomorphism
ψ := (−)^* : End_Λ(T) → End_{Λ^op}(T^*)^op,
where recall (−)^* := Hom_R(−, R). Note that T^* ∈ ref R by 2.5(2), i.e. T^* ∈ ref Λ^op. This implies End_{Λ^op}(T^*)^op ∈ ref R by 2.5(2).
Since T is a height one projective and Λ ∈ ref R, it follows that T_p ∈ ref Λ_p for all height one primes p. Hence, via the anti-equivalence
(−)^*_p : ref Λ_p → ref Λ_p^op,
we have that ψ is a height one isomorphism.
By assumption End_Λ(T) ∈ ref R holds. Since R is normal, ψ, being a height one isomorphism between reflexive R-modules, is actually an isomorphism.
Theorem 6.8. Let R be a normal d-sCY ring with modifying module M. Suppose 0 ≠ N ∈ add M. Then
(1) End_R(M) and End_R(µ−_N(M)) are derived equivalent.
(2) End_R(M) and End_R(µ+_N(M)) are derived equivalent.
Proof. (1) Denote Λ := End_R(M) and F := Hom_R(M, −) : ref R → ref Λ. Applying F to (6.I) and denoting V := Cok(·b^*), we have an exact sequence
0 → FM --(·b^*)--> FN_1 → V → 0,    (6.N)
together with the induced map h : V → FK_1^*.
We now claim that (·b^*) is a left (add Q)-approximation, where Q := Hom_R(M, N) = FN. Simply applying Hom_Λ(−, Q) = Hom_Λ(−, FN) to the above we obtain a commutative diagram whose top row is Hom_Λ(FN_1, FN) → Hom_Λ(FM, FN) → 0 and whose bottom row Hom_R(N_1, N) → Hom_R(M, N) → 0 is just (6.M) and so is exact, and whose vertical maps are isomorphisms by reflexive equivalence 2.5(4). Hence the top row is surjective, showing that (·b^*) is a left (add Q)-approximation. By 6.6 it follows that Q ⊕ V is a tilting Λ-module.
We now show that End_Λ(V ⊕ Q) ≅ End_R(µ−_N(M)) by using 6.7. To do this, note first that certainly Λ ∈ ref R since Λ ∈ CM R, and further Λ is d-sCY by 2.22(2). Hence End_Λ(V ⊕ Q), being derived equivalent to Λ, is also d-sCY and so End_Λ(V ⊕ Q) ∈ ref R by 2.21. We now claim that V ⊕ Q is a height one projective Λ-module.
Let p ∈ Spec R be a height one prime; then M_p ∈ ref R_p = add R_p. Hence M_p is a free R_p-module, and so add N_p = add M_p. Localizing (6.L) gives an exact sequence
0 → Hom_{R_p}(N_p, M_p) → Hom_{R_p}(N_p, (N_1)_p) → Hom_{R_p}(N_p, (K_1^*)_p) → 0
and so, since add N_p = add M_p,
0 → Hom_{R_p}(M_p, M_p) → Hom_{R_p}(M_p, (N_1)_p) → Hom_{R_p}(M_p, (K_1^*)_p) → 0
is exact. This is (6.N) localized at p, hence we conclude that h is a height one isomorphism. In particular V_p = Hom_{R_p}(M_p, (K_1^*)_p) with both M_p, (K_1^*)_p ∈ add R_p. Consequently V, and thus V ⊕ Q, is a height one projective Λ-module.
Thus by 6.7 we have an isomorphism
End_Λ(V ⊕ Q) ≅ End_{Λ^op}(V^* ⊕ Q^*)^op.
Now since h is a height one isomorphism, it follows that h^* is a height one isomorphism. But h^* is a morphism between reflexive modules, so h^* must be an isomorphism. We thus have
V^* ⊕ Q^* = (F(K_1^*))^* ⊕ Q^* = (F(K_1^*))^* ⊕ (FN)^* = (F(K_1^* ⊕ N))^*.
Consequently
End_Λ(V ⊕ Q) ≅ End_{Λ^op}((F(K_1^* ⊕ N))^*)^op ≅ End_Λ(F(K_1^* ⊕ N))
since (F(K_1^* ⊕ N))^* ∈ ref Λ^op and (−)^* : ref Λ^op → ref Λ is an anti-equivalence. This then yields
End_Λ(V ⊕ Q) ≅ End_Λ(F(K_1^* ⊕ N)) ≅ End_R(K_1^* ⊕ N) = End_R(µ−_N(M)),
where the second isomorphism follows from reflexive equivalence 2.5.
(2) Since M^* is a modifying R-module, by (1) End_R(M^*) and End_R(µ−_{N^*}(M^*)) are derived equivalent. But µ−_{N^*}(M^*) = (µ+_N(M))^*, so End_R(M^*) and End_R((µ+_N(M))^*) are derived equivalent. Hence End_R(M)^op and End_R(µ+_N(M))^op are derived equivalent, which forces End_R(M) and End_R(µ+_N(M)) to be derived equivalent [R89, 9.1].
Remark 6.9. By 6.8, for every 0 ≠ N ∈ add M we obtain an equivalence
T_N := RHom(V ⊕ Q, −) : D^b(mod End_R(M)) → D^b(mod End_R(µ−_N(M))).
Sometimes µ−_N(M) = M can happen (see the next subsection), but the functor T_N is never the identity provided add N ≠ add M. This gives a way of generating autoequivalences of the derived category.
Theorem 6.10. Let R be a normal d-sCY ring with modifying module M. Suppose 0 ≠ N ∈ add M. Then
(1) µ+_N(M) and µ−_N(M) are modifying R-modules.
(2) If M gives an NCCR, so do µ+_N(M) and µ−_N(M).
(3) Whenever N is a generator, if M is a CT module so are µ+_N(M) and µ−_N(M).
(4) Whenever dim Sing R ≤ 1 (e.g. if d = 3), if M is an MM module so are µ+_N(M) and µ−_N(M).
Proof. Set Λ := End_R(M). By 6.8, Λ, End_R(µ−_N(M)) and End_R(µ+_N(M)) are all derived equivalent. Hence (1) follows from 4.6(1), (2) follows from 4.6(2) and (4) follows from 4.8(2).
(3) Since M is CT, by definition M ∈ CM R. But N is a generator, so the maps a and b in the exchange sequences (6.A) and (6.B) are surjective. Consequently both µ+_N(M) and µ−_N(M) are CM R-modules, so the result follows from (2) and 5.4.
One further corollary to 6.10 is the following application to syzygies and cosyzygies. Usually syzygies and cosyzygies are only defined up to free summands, so let us first settle some notation. Suppose that R is a normal d-sCY ring and M is a modifying generator. Since M and M^* are finitely generated we can consider exact sequences
0 → K_0 → P_0 → M → 0    (6.O)
0 → K_1 → P_1^* → M^* → 0    (6.P)
where P_0, P_1 ∈ add R. We define ΩM := R ⊕ K_0 = µ+_R(M) and Ω^{-1}M := R ⊕ K_1^* = µ−_R(M). Inductively we define Ω^i M for all i ∈ Z.
Our next result shows that modifying modules often come in infinite families, and
that in particular NCCRs often come in infinite families:
Corollary 6.11. Suppose that R is a normal d-sCY ring and M ∈ ref R is a modifying
generator. Then
(1) End_R(Ω^i M) are derived equivalent for all i ∈ Z.
(2) Ω^i M ∈ CM R is a modifying generator for all i ∈ Z.
Proof. The assertions are immediate from 6.8 and 6.10.
6.2. Mutations and Derived Equivalences in Dimension 3. In the special case
d = 3, we can extend some of the above results, since we have more control over the
tilting modules produced from the procedure of mutation. Recall from the introduction
that given 0 ≠ N ∈ add M we define [N] to be the two-sided ideal of Λ := End_R(M)
consisting of morphisms M → M which factor through a member of add N .
The factors ΛN := Λ/[N ] are, in some sense, replacements for simple modules in the
infinite global dimension setting. For example, we have the following necessary condition
for a module to be MM.
Proposition 6.12. Suppose that R is a normal 3-sCY ring, let M be an MM R-module
and denote Λ = End_R(M). Then proj.dim_Λ Λ_N ≤ 3 for all N such that 0 ≠ N ∈ add M.
Proof. The sequence (6.A) 0 → K0 → N0 → M gives
0 → HomR (M, K0 ) → HomR (M, N0 ) → Λ → ΛN → 0
where HomR (M, N0 ) and Λ are projective Λ-modules. But K0 is a modifying module by 6.10, so by 4.12 we know that proj.dimΛ HomR (M, K0 ) ≤ 1. Hence certainly
proj.dimΛ ΛN ≤ 3.
Remark 6.13. The converse of 6.12 is not true, i.e. there exist non-maximal modifying modules M such that proj.dim_Λ Λ_N ≤ 3 for all 0 ≠ N ∈ add M. An easy example is given by M := R ⊕ (a, c²) for R := C[[a, b, c, d]]/(ab − c⁴). In this case the right (add R)-approximation
0 → (a, c²) → R ⊕ R → (a, c²) → 0
shows that proj.dim_Λ(Λ/[(a, c²)]) = 2, whilst the right (add(a, c²))-approximation
0 → R → (a, c²) ⊕ (a, c²) → R
shows that proj.dim_Λ(Λ/[R]) = 2. Also Λ/[M] = 0 and so trivially proj.dim_Λ(Λ/[M]) = 0. Hence proj.dim_Λ Λ_N ≤ 3 for all 0 ≠ N ∈ add M; however End_R(M ⊕ (a, c)) ∈ CM R with (a, c) ∉ add M, so M is not an MM module.
Roughly speaking, mutation in dimension d = 3 is controlled by the factor algebra Λ_N, in particular whether it is artinian or not. When it is artinian, the derived equivalence in 6.8 is given by a very explicit tilting module.
Theorem 6.14. Let R be a normal 3-sCY ring with modifying module M. Suppose that 0 ≠ N ∈ add M and denote Λ = End_R(M). If Λ_N = Λ/[N] is artinian then
(1) T_1 := Hom_R(M, µ−_N(M)) is a tilting Λ-module such that End_Λ(T_1) ≅ End_R(µ−_N(M)).
(2) T_2 := Hom_R(M^*, µ+_N(M)^*) is a tilting Λ^op-module such that End_{Λ^op}(T_2) ≅ End_R(µ+_N(M))^op.
Remark 6.15. In the setting of 6.14, we have the following.
(1) ΛN is artinian if and only if add Mp = add Np for all p ∈ Spec R with ht p = 2.
(2) If R is finitely generated over a field k then ΛN is artinian if and only if dimk ΛN < ∞.
Thus if the reader is willing to work over C, they may replace the condition ΛN is artinian
by dimC ΛN < ∞ throughout.
Proof of 6.14. (1) Denote G := Hom_R(N, −) and Γ := End_R(N). Applying Hom_R(M, −) to (6.I) and Hom_Γ(GM, −) to (6.L) gives a commutative diagram whose top row is
0 → Hom_Γ(GM, GM) → Hom_Γ(GM, GN_1) → Hom_Γ(GM, GK_1^*) → Ext^1_Γ(GM, GM),
whose bottom row is
0 → Hom_R(M, M) --·b^*--> Hom_R(M, N_1) --·d^*--> Hom_R(M, K_1^*) → C → 0,
and where the vertical maps are isomorphisms by 2.5(4); hence C ⊆ Ext^1_Γ(GM, GM). We first claim that C = 0. Since End_Γ(GM) ≅ Λ by reflexive equivalence 2.5, by 2.7 fl Ext^1_Γ(GM, GM) = 0. On the other hand Hom_R(N, −) applied to (6.I) is exact (by 6.4), so C is annihilated by [N] and consequently C is a Λ_N-module. Since Λ_N is artinian so too is C, thus it has finite length. Hence C = 0 and so
0 → Hom_R(M, M) → Hom_R(M, N_1) → Hom_R(M, K_1^*) → 0    (6.Q)
is exact. Thus the tilting module V ⊕ Q in the proof of 6.8(1) is simply Hom_R(M, K_1^*) ⊕ Hom_R(M, N) = T_1. The remaining statements are contained in 6.8(1).
(2) Similarly to the above, one can show that applying Hom_R(M^*, −) to (6.H) gives an exact sequence
0 → Hom_R(M^*, M^*) → Hom_R(M^*, N_0^*) → Hom_R(M^*, K_0^*) → 0    (6.R)
and so the tilting module V ⊕ Q in the proof of 6.8(2) is simply Hom_R(M^*, K_0^*) ⊕ Hom_R(M^*, N^*) = Hom_R(M^*, µ+_N(M)^*).
Remark 6.16. Note that the statement in 6.14 is quite subtle. There are examples where Hom_R(M, µ+_N(M)) (respectively, Hom_R(M^*, µ−_N(M)^*)) is not a tilting End_R(M)-module (respectively, End_R(M)^op-module). Note however that these are always tilting modules if M is an MM module, by combining 4.14(2) and 6.10(4).
If Λ_N is artinian, the module M changes under mutation:
Proposition 6.17. Let R be a normal 3-sCY ring with modifying module M. Suppose 0 ≠ N ∈ add M, denote Λ = End_R(M) and define Λ_N := Λ/[N]. If Λ_N is artinian then
(1) If add N ≠ add M then add µ+_N(M) ≠ add M.
(2) If add N ≠ add M then add µ−_N(M) ≠ add M.
Proof. (1) Since Λ_N is artinian, the sequence (6.R)
0 → Hom_R(M^*, M^*) → Hom_R(M^*, N_0^*) → Hom_R(M^*, K_0^*) → 0
is exact. If this splits then by reflexive equivalence (2.5(4)) M^* is a summand of N_0^*, contradicting add N ≠ add M. Thus the above cannot split, so Hom_R(M^*, K_0^*) cannot be projective, hence certainly K_0^* ∉ add M^* and so K_0 ∉ add M. This implies add µ+_N(M) ≠ add M.
(2) Similarly, the exact sequence (6.Q) cannot split, so K_1^* ∉ add M.
Remark 6.18. It is natural to ask under what circumstances the hypothesis that Λ_N is artinian in 6.14 and 6.17 holds. In the situation of 5.8 the answer seems to be related to the contractibility of the corresponding curves; we will come back to this question in future work.
One case where ΛN is always artinian is when R has isolated singularities:
Lemma 6.19. Suppose R is a normal 3-sCY ring. Let M be a modifying module with 0 ≠ N ∈ add M, denote Λ = End_R(M) and set Λ_N = Λ/[N]. Then
(1) dimR ΛN ≤ 1.
(2) depthRm (ΛN )m ≤ 1 for all m ∈ Max R.
(3) If R is an isolated singularity then ΛN is artinian.
(4) If proj.dimΛ ΛN < ∞ then inj.dimΛN ΛN ≤ 1.
Proof. (1) We have (EndR (M )/[N ])p = EndRp (Mp )/[Np ] for all p ∈ Spec R. Since R
is normal, add Mp = add Rp = add Np for all p ∈ Spec R with ht p = 1. Thus we have
(EndR (M )/[N ])p = EndRp (Mp )/[Np ] = 0 for all these primes, and so the assertion follows.
(2) is immediate from (1).
(3) If R is isolated then by the argument in the proof of (1) we have dimR ΛN = 0 and so
ΛN is supported only at a finite number of maximal ideals. Hence ΛN has finite length
and so ΛN is artinian.
(4) Notice that Λ is 3-sCY by 2.22. Hence the assertion follows from [IR08, 5.5(3)]
for 3-CY algebras, which is also valid for 3-sCY algebras under the assumption that
proj.dimΛ ΛN < ∞.
We now show that mutation does not change the factor algebra Λ_N. Suppose M is modifying and N is such that 0 ≠ N ∈ add M, and consider an exchange sequence (6.A)
0 → K_0 --c--> N_0 --a--> M.
We know by definition that a is a right (add N)-approximation, and by (6.F) that c is a left (add N)-approximation.
Since Λ_N is by definition End_R(M) factored out by the ideal of all morphisms M → M which factor through a module in add N, in light of the approximation property of the map a, this ideal is just the ideal I_a of all morphisms M → M which factor as xa where x is some morphism M → N_0. Thus Λ_N = End_R(M)/I_a.
On the other hand, taking the choice µ+_N(M) = K_0 ⊕ N coming from the above exchange sequence, Λ′_N is by definition End_R(µ+_N(M)) = End_R(K_0 ⊕ N) factored out by the ideal of all morphisms K_0 ⊕ N → K_0 ⊕ N which factor through a module in add N. Clearly this is just End_R(K_0) factored out by those morphisms which factor through add N. In light of the approximation property of the map c, Λ′_N = End_R(K_0)/I_c where I_c is the ideal of all morphisms K_0 → K_0 which factor as cy where y is some morphism N_0 → K_0.
Theorem 6.20. Let R be a normal d-sCY ring, and let M be a modifying module with 0 ≠ N ∈ add M. With the notation and choice of exchange sequence as above, we have Λ_N ≅ Λ′_N as R-algebras. In particular
(1) Λ′_N is independent of the choice of exchange sequence, up to isomorphism.
(2) Λ_N is artinian if and only if Λ′_N is artinian.
Proof. We construct a map α : Λ_N = End_R(M)/I_a → End_R(K_0)/I_c = Λ′_N as follows: given f ∈ End_R(M) we have a commutative diagram with both rows equal to 0 → K_0 --c--> N_0 --a--> M, right-hand vertical map f, middle vertical map g_f : N_0 → N_0 and left vertical map h_f : K_0 → K_0, where g_f exists (non-uniquely) since a is an approximation and h_f is then induced since c is the kernel of a. Define α by α(f + I_a) = h_f + I_c. We will show that α : Λ_N → Λ′_N is a well-defined map, which is independent of the choice of g_f. Take f′ ∈ Λ satisfying f − f′ ∈ I_a, with the analogous commutative diagram giving h_{f′} and g_{f′}. There exists x : M → N_0 such that xa = f − f′. Thus (g_f − g_{f′} − ax)a = 0, so there exists y : N_0 → K_0 such that yc = g_f − g_{f′} − ax. This implies cyc = c(g_f − g_{f′} − ax) = (h_f − h_{f′})c, so since c is a monomorphism we have cy = h_f − h_{f′}. Thus h_f + I_c = h_{f′} + I_c holds, and we have the assertion.
It is easy to check that α is an R-algebra homomorphism, since α is independent of the choice of g_f.
We now show that α is bijective by constructing the inverse map β : Λ′_N → Λ_N. Let t : K_0 → K_0 be any morphism; then on dualizing we have a commutative diagram with both rows equal to 0 → M^* --a^*--> N_0^* --c^*--> K_0^*, right-hand vertical map t^*, middle vertical map s and left vertical map r, where the rows are exact by (6.H), s exists (non-uniquely) by (6.J) and r exists since a^* is the kernel of c^*. Let β(t + I_c) := r^* + I_a. By the same argument as above, we have that β : Λ′_N → Λ_N is a well-defined map.
Dualizing back gives a commutative diagram with both rows equal to 0 → K_0 --c--> N_0 --a--> M and vertical maps t, s^* and r^*, which shows that β is the inverse of α.
6.3. Complete Local Case. In this subsection we assume that R is a complete local
normal Gorenstein d-dimensional ring, then since we have Krull–Schmidt decompositions
we can say more than in the previous section. Note that with these assumptions R is
automatically d-sCY by 2.20. For a modifying module M we write
M = M_1 ⊕ · · · ⊕ M_n = ⊕_{i∈I} M_i
as its Krull–Schmidt decomposition into indecomposable submodules, where I = {1, . . . , n}.
Throughout we assume that M is basic, i.e. the Mi ’s are mutually non-isomorphic. With
the new assumption on R we may take minimal approximations, and so the setup in the previous section can be simplified: for ∅ ≠ J ⊆ I set M_J := ⊕_{j∈J} M_j and M/M_J := ⊕_{i∈I\J} M_i (the complementary summand). Then
(a) Denote L_0 --a--> M_J to be a right (add M/M_J)-approximation of M_J which is right minimal. If M/M_J contains R as a summand then necessarily a is surjective.
(b) Similarly denote L_1^* --b--> M_J^* to be a right (add (M/M_J)^*)-approximation of M_J^* which is right minimal. Again if M/M_J contains R as a summand then b is surjective.
Recall that a morphism a : X → Y is called right minimal if any f ∈ EndR (X) satisfying
a = f a is an automorphism. In what follows we denote the kernels of the above right
minimal approximations by
0 → C_0 --c--> L_0 --a--> M_J    and    0 → C_1 --d--> L_1^* --b--> M_J^*.
This recovers the mutations from the previous subsection:
Lemma 6.21. With notation as above, µ+_{M_J}(M) = M/M_J ⊕ C_0 and µ−_{M_J}(M) = M/M_J ⊕ C_1^*.
Proof. There is an exact sequence
0 → C_0 --(c, 0)--> L_0 ⊕ M/M_J --(a 0; 0 1)--> M_J ⊕ M/M_J = M → 0
with right (add M/M_J)-approximation (a 0; 0 1). Thus the assertion follows.
Since we have minimal approximations, from now on we define our mutations in terms of them. We thus define µ+_J(M) := C_0 ⊕ M/M_J and µ−_J(M) := C_1^* ⊕ M/M_J. When J = {i} we often write µ+_i and µ−_i instead of µ+_{{i}} and µ−_{{i}} respectively. Note that, using this new definition of mutation involving minimal approximations, µ+_J and µ−_J are now inverse operations up to isomorphism, not just additive closure. This strengthens 6.5.
We now investigate, in dimension three, the mutation of an MM module at an indecomposable summand Mi . Let ei denote the idempotent in Λ := EndR (M ) corresponding
to the summand Mi , then the theory depends on whether or not Λi := Λ/Λ(1 − ei )Λ is
artinian.
Theorem 6.22. Suppose R is complete local normal 3-sCY and let M be an MM module with indecomposable summand M_i. Denote Λ = End_R(M), let e_i be the idempotent corresponding to M_i and denote Λ_i = Λ/Λ(1 − e_i)Λ. If Λ_i is artinian, then µ+_i(M) = µ−_i(M) and this is not equal to M.
Proof. We know that M, µ+_i(M) and µ−_i(M) are all MM modules by 6.10, thus by 4.17(1) it follows that Hom_R(M, µ+_i(M)) and Hom_R(M, µ−_i(M)) are both tilting End_R(M)-modules. But since µ+_i(M) ≠ M and µ−_i(M) ≠ M by 6.17, neither of these tilting modules equals Hom_R(M, M). Further, by construction, as End_R(M)-modules Hom_R(M, µ+_i(M)) and Hom_R(M, µ−_i(M)) share all summands except possibly one, thus by a Riedtmann–Schofield type theorem [IR08, 4.2], [RS91, 1.3], they must coincide, i.e. Hom_R(M, µ+_i(M)) ≅ Hom_R(M, µ−_i(M)). By reflexive equivalence 2.5(4) we deduce that µ+_i(M) ≅ µ−_i(M).
The case when Λ_i is not artinian is very different:
Theorem 6.23. Suppose R is complete local normal 3-sCY and let M be a modifying module with indecomposable summand M_i. Denote Λ = End_R(M), let e_i be the idempotent corresponding to M_i and denote Λ_i = Λ/Λ(1 − e_i)Λ. If Λ_i is not artinian, then
(1) If proj.dim_Λ Λ_i < ∞, then µ+_i(M) = µ−_i(M) = M.
(2) If M is an MM module, then always µ+_i(M) = µ−_i(M) = M.
Proof. (1) It is always true that depthR Λi ≤ dimR Λi ≤ inj.dimΛi Λi by [GN02, 3.5]
(see [IR08, 2.1]). Since proj.dimΛ Λi < ∞, by 6.19(4) we know that inj.dimΛi Λi ≤ 1.
Since Λi is local and inj.dimΛi Λi ≤ 1, depthR Λi = inj.dimΛi Λi by Ramras [Ram, 2.15].
If dimR Λi = 0 then Λi has finite length, contradicting the assumption that Λi is not
artinian. Thus depthR Λi = dimR Λi = inj.dimΛi Λi = 1. In particular Λi is a CM
R-module of dimension 1.
Now Λ is a Gorenstein R-order by 2.21 and 2.22 so by Auslander–Buchsbaum 2.16,
since proj.dimΛ Λi < ∞ necessarily proj.dimΛ Λi = 3 − depthR Λi = 2. Thus we have a
minimal projective resolution
0 → P_2 → P_1 --f--> Λe_i → Λ_i → 0.    (6.S)
where f is a minimal right (add Λ(1 − ei ))-approximation since it is a projective cover of
Λ(1 − ei )Λei . By [IR08, 3.4(5)] we have
Ext²_Λ(Λ_i, Λ) ≅ Ext²_R(Λ_i, R),
and this is a projective Λ_i^op-module by [GN02, 1.1(3)]. It is a free Λ_i^op-module since Λ_i is a local ring. Since Λ_i is a CM R-module of dimension 1, we have Ext²_R(Ext²_R(Λ_i, R), R) ≅ Λ_i as Λ_i^op-modules. Thus the rank has to be one and we have Ext²_R(Λ_i, R) ≅ Λ_i as Λ_i-modules. Applying Hom_Λ(−, Λ) to (6.S) gives an exact sequence
Hom_Λ(P_1, Λ) → Hom_Λ(P_2, Λ) → Λ_i → 0
which gives a minimal projective presentation of the Λ^op-module Λ_i. Thus we have Hom_Λ(P_2, Λ) ≅ e_iΛ and P_2 ≅ Λe_i.
Under the equivalence Hom_R(M, −) : add M → proj Λ, the sequence (6.S) corresponds to a complex
0 → M_i --h--> L_0 --g--> M_i
with g a minimal right (add M/M_i)-approximation. Since the induced morphism M_i → Ker g is sent to an isomorphism under the reflexive equivalence Hom_R(M, −) : ref R → ref Λ (2.5(4)), it is an isomorphism and so h = ker g. Consequently we have µ+_i(M) = M/M_i ⊕ M_i = M. This implies that µ−_i(M) = M by 6.5.
(2) This follows from (1) since proj.dim_Λ Λ_i < ∞ by 6.12.
Remark 6.24. The above theorem needs the assumption that M_i is indecomposable. If we assume that |J| ≥ 2 and Λ_J is still not artinian, then both examples with µ−_J(M) ≠ M and examples with µ−_J(M) = M exist. See for example [IW12, §5] for more details.
In dimension three, when the base R is complete local, we have the following summary, which completely characterizes mutation at an indecomposable summand.
Summary 6.25. Suppose R is complete normal 3-sCY with MM module M. Denote Λ = End_R(M), let M_i be an indecomposable summand of M and consider Λ_i := Λ/Λ(1 − e_i)Λ where e_i is the idempotent in Λ corresponding to M_i. Then
(1) If Λ_i is not artinian then µ+_i(M) = M = µ−_i(M).
(2) If Λ_i is artinian then µ+_i(M) = µ−_i(M) and this is not equal to M.
In either case denote µ_i := µ+_i = µ−_i; then it is also true that
(3) µ_i µ_i(M) = M.
(4) µ_i(M) is an MM module.
(5) End_R(M) and End_R(µ_i(M)) are derived equivalent, via the tilting End_R(M)-module Hom_R(M, µ_i(M)).
Proof. (1) is 6.23 and (2) is 6.22. The remainder is trivially true in the case when Λ_i is not artinian (by 6.23), thus we may assume that Λ_i is artinian. Now µ_i µ_i(M) = µ+_i(µ−_i M) = M by 6.3, proving (3). (4) is contained in 6.10 and (5) is 6.14(1).
Example 6.26. Consider the subgroup G = ½(1, 1, 0) ⊕ ½(0, 1, 1) of SL(3, k) and let R = k[[x, y, z]]^G. We know by 5.7 that M = k[[x, y, z]] is a CT R-module, and in this example it decomposes into 4 summands R ⊕ M_1 ⊕ M_2 ⊕ M_3 with respect to the characters of G. The quiver of End_R(M) is the McKay quiver (on the four vertices R, M_1, M_2, M_3, with arrows labelled x, y and z), and so to mutate at M_2 it is clear that the relevant approximation is
R ⊕ M_1 ⊕ M_3 --(z y x)--> M_2 → 0.
Thus the mutation at vertex M_2 changes M = R ⊕ M_1 ⊕ M_2 ⊕ M_3 into R ⊕ M_1 ⊕ K_2 ⊕ M_3, where K_2 is the kernel of the above map which (by counting ranks) has rank 2. On the level of quivers of the endomorphism rings, this induces the mutation µ_2, replacing the vertex M_2 by K_2. Due to the relations in the algebra End_R(µ_2(M)) (which we suppress), the mutations at R, M_1 and M_3 in the new quiver are trivial, thus in End_R(µ_2(M)) the only vertex we can non-trivially mutate at is K_2, which gives us back our original M. By the symmetry of the situation we obtain the beginning of the mutation graph: R ⊕ M_1 ⊕ M_2 ⊕ M_3 is joined by µ_1 to R ⊕ K_1 ⊕ M_2 ⊕ M_3, by µ_2 to R ⊕ M_1 ⊕ K_2 ⊕ M_3, and by µ_3 to R ⊕ M_1 ⊕ M_2 ⊕ K_3.
We remark that mutating at any of the decomposable modules M_1 ⊕ M_2, M_1 ⊕ M_3, M_2 ⊕ M_3 or M_1 ⊕ M_2 ⊕ M_3 gives a trivial mutation. Note that the mutation µ+_{M/R}(M) at the vertex R is not a CM R-module, and so we suppress the details.
Acknowledgements. We would like to thank Michel Van den Bergh, Vanya Cheltsov,
Constantin Shramov, Ryo Takahashi and Yuji Yoshino for stimulating discussions and
valuable suggestions. We also thank the anonymous referee for carefully reading the
paper, and offering many useful insights and suggested improvements.
References
[ASS] I. Assem, D. Simson, A. Skowroński, Elements of the representation theory of associative algebras. Vol. 1. Techniques of representation theory, London Mathematical Society Student Texts, 65, Cambridge University Press, Cambridge, 2006.
[Aus78] M. Auslander, Functors and morphisms determined by objects. Representation theory of algebras (Proc. Conf., Temple Univ., Philadelphia, Pa., 1976), pp. 1–244, Lecture Notes in Pure Appl. Math., Vol. 37, Dekker, New York, 1978.
[Aus84] M. Auslander, Isolated singularities and existence of almost split sequences. Representation theory, II (Ottawa, Ont., 1984), 194–242, Lecture Notes in Math., 1178, Springer, Berlin, 1986.
[Aus86] M. Auslander, Rational singularities and almost split sequences. Trans. Amer. Math. Soc. 293 (1986), no. 2, 511–531.
[AG60] M. Auslander and O. Goldman, Maximal orders, Trans. Amer. Math. Soc. 97 (1960), 1–24.
[AR] M. Auslander and I. Reiten, Almost split sequences for Cohen-Macaulay modules, Math. Ann. 277 (1987), 345–349.
[BBD] A. A. Beilinson, J. Bernstein and P. Deligne, Faisceaux pervers, Analysis and topology on singular spaces, I (Luminy, 1981), 5–171, Astérisque, 100, Soc. Math. France, Paris, 1982.
[B80] K. Bongartz, Tilted algebras. Representations of algebras (Puebla, 1980), pp. 26–38, Lecture Notes in Math., 903, Springer, Berlin-New York, 1981.
[Bou68] N. Bourbaki, Groupe et Algebre de Lie, Chapter 5, Hermann, Paris, 1968.
[BH] W. Bruns and J. Herzog, Cohen-Macaulay rings. Rev. ed., Cambridge Studies in Advanced Mathematics, 39, xiv+453 pp.
[B06] A. Buan, R. Marsh, M. Reineke, I. Reiten and G. Todorov, Tilting theory and cluster combinatorics, Adv. Math. 204 (2006), no. 2, 572–618.
[Bu86] R. O. Buchweitz, Maximal Cohen–Macaulay modules and Tate–cohomology over Gorenstein rings, preprint, 1986.
[BIRS] A. Buan, O. Iyama, I. Reiten and D. Smith, Mutation of cluster-tilting objects and potentials, Amer. J. Math. 133 (2011), no. 4, 835–887.
[CB] W. Crawley-Boevey, On the exceptional fibres of Kleinian singularities, Amer. J. Math. 122 (2000), no. 5, 1027–1037.
[CE99] H. Cartan and S. Eilenberg, Homological algebra. Reprint of the 1956 original. Princeton Landmarks in Mathematics, Princeton University Press, 1999.
[C02] J-C. Chen, Flops and equivalences of derived categories for threefolds with only terminal Gorenstein singularities, J. Differential Geom. 61 (2002), no. 2, 227–261.
[C55] C. Chevalley, Invariants of finite groups generated by reflections, Amer. J. Math. 77 (1955), 778–782.
[CR90] C. W. Curtis and I. Reiner, Methods of representation theory. Vol. I. With applications to finite groups and orders. Reprint of the 1981 original. John Wiley & Sons, Inc., New York, 1990.
[Ei95] D. Eisenbud, Commutative algebra. With a view toward algebraic geometry, Graduate Texts in Mathematics, 150, Springer-Verlag, New York, 1995, xvi+785 pp.
[EG85] E. G. Evans and P. Griffith, Syzygies, London Math. Soc. Lecture Note Ser., vol. 106, Cambridge University Press, Cambridge, 1985.
[GLS] C. Geiss, B. Leclerc and J. Schröer, Rigid modules over preprojective algebras, Invent. Math. 165 (2006), no. 3, 589–632.
[GN02] S. Goto and K. Nishida, Towards a theory of Bass numbers with application to Gorenstein algebras, Colloq. Math. 91 (2002), no. 2, 191–253.
[Iya07] O. Iyama, Higher-dimensional Auslander–Reiten theory on maximal orthogonal subcategories, Adv. Math. 210 (2007), no. 1, 22–50.
[IR08] O. Iyama and I. Reiten, Fomin-Zelevinsky mutation and tilting modules over Calabi-Yau algebras, Amer. J. Math. 130 (2008), no. 4, 1087–1149.
[IT10] O. Iyama and R. Takahashi, Tilting and cluster tilting for quotient singularities, Math. Ann. 356 (2013), no. 3, 1065–1105.
[IW08] O. Iyama and M. Wemyss, The classification of special Cohen Macaulay modules, Math. Z. 265 (2010), no. 1, 41–83.
[IW11] O. Iyama and M. Wemyss, Singular derived categories of Q-factorial terminalizations and maximal modification algebras, arXiv:1108.4518.
[IW12] O. Iyama and M. Wemyss, Reduction of triangulated categories and Maximal Modification Algebras for cAn singularities, arXiv:1304.5259.
[IY08] O. Iyama and Y. Yoshino, Mutation in triangulated categories and rigid Cohen-Macaulay modules, Invent. Math. 172 (2008), no. 1, 117–168.
[KY11] B. Keller and D. Yang, Derived equivalences from mutations of quivers with potential, Adv. Math. 226 (2011), 2118–2168.
[K10] H. Krause, Localization theory for triangulated categories, in: Triangulated categories, 161–235, London Math. Soc. Lecture Note Ser. 375, Cambridge Univ. Press, Cambridge, 2010.
[Mat] H. Matsumura, Commutative ring theory (Second edition), Cambridge Studies in Advanced Mathematics, 8, Cambridge University Press, Cambridge, 1989.
[M03] J. Miyachi, Recollement and tilting complexes, J. Pure Appl. Algebra 183 (2003), 245–273.
[O76] A. Ooishi, Matlis duality and the width of a module, Hiroshima Math. J. 6 (1976), 573–587.
[Ram] M. Ramras, Maximal orders over regular local rings of dimension two, Trans. Amer. Math. Soc. 142 (1969), 457–479.
[RV89] I. Reiten and M. Van den Bergh, Two-dimensional tame and maximal orders of finite representation type, Mem. Amer. Math. Soc. 408 (1989), vii+72 pp.
[R89] J. Rickard, Morita theory for derived categories, J. London Math. Soc. (2) 39 (1989), 436–456.
[RS91] C. Riedtmann and A. H. Schofield, On a simplicial complex associated with tilting modules, Comment. Math. Helv. 66 (1991), no. 1, 70–78.
[RZ03] R. Rouquier and A. Zimmermann, Picard groups for derived module categories, Proc. London Math. Soc. (3) 87 (2003), no. 1, 197–225.
[V04a] M. Van den Bergh, Three-dimensional flops and noncommutative rings, Duke Math. J. 122 (2004), no. 3, 423–455.
[V04b] M. Van den Bergh, Non-commutative crepant resolutions, The legacy of Niels Henrik Abel, 749–770, Springer, Berlin, 2004.
[V09] J. Vitoria, Mutations vs. Seiberg duality, J. Algebra 321 (2009), no. 3, 816–828.
[Y90] Y. Yoshino, Cohen-Macaulay modules over Cohen-Macaulay rings, London Mathematical Society Lecture Note Series, 146, Cambridge University Press, Cambridge, 1990.
Osamu Iyama, Graduate School of Mathematics, Nagoya University, Chikusa-ku, Nagoya,
464-8602, Japan
E-mail address: [email protected]
Michael Wemyss, The Maxwell Institute, School of Mathematics, James Clerk Maxwell
Building, The King’s Buildings, Mayfield Road, Edinburgh, EH9 3JZ, UK.
E-mail address: [email protected]
Learning Topology of Distribution Grids using
only Terminal Node Measurements
arXiv:1608.05031v1 [math.OC] 17 Aug 2016
Deepjyoti Deka†, Scott Backhaus†, and Michael Chertkov†
†Los Alamos National Laboratory, USA
Email: [email protected], [email protected], [email protected]
Abstract—Distribution grids include medium and low voltage
lines that are involved in the delivery of electricity from
substation to end-users/loads. A distribution grid is operated
in a radial/tree-like structure, determined by switching on or
off lines from an underling loopy graph. Due to the presence
of limited real-time measurements, the critical problem of fast
estimation of the radial grid structure is not straightforward.
This paper presents a new learning algorithm that uses measurements only at the terminal or leaf nodes in the distribution
grid to estimate its radial structure. The algorithm is based on
results involving voltages of node triplets that arise due to the
radial structure. The polynomial computational complexity of
the algorithm is presented along with a detailed analysis of its
working. The most significant contribution of the approach is
that it is able to learn the structure in certain cases where available measurements are confined to only half of the nodes. This
represents learning under minimum permissible observability.
Performance of the proposed approach in learning structure is
demonstrated by experiments on test radial distribution grids.
Index Terms—Distribution Networks, Power Flows, Tree
learning, Voltage measurements, Missing data, Complexity
I. INTRODUCTION
The power grid is operationally divided hierarchically into
transmission and distribution grids. While the transmission
grid connects the generators and includes high voltage lines,
the distribution grid comprises of medium and low voltage
lines that connect the distribution substation to the end
users/loads. Aside from low voltages, distribution grids are
structurally distinguished from the transmission side by their
operational radial structure. A distribution grid is operated
as a tree with a substation placed at the root node/bus that
is connected via intermediate nodes and lines to the terminal
end nodes/households. This radial operational structure is
derived from an underlying loopy network by switching on
and off some of the lines (network edges) [1]. The specific
structure may be changed from one radial configuration to
another, by reversing the switch statuses. Fig. 1 presents an
illustrative example of a radial distribution grid derived from
an underlying graph.
Historically, the distribution grid has had limited presence
of real time devices on the lines, buses and feeders [2].
Due to this, real-time monitoring of the operating radial
structure and the states of the resident buses on the distribution side is not straightforward. These estimation problems,
previously neglected, have become of critical importance due
to introduction of new devices like batteries and electric
vehicles, and intermittent generation resources like rooftop
solar panels. Optimal control in today’s distribution grid
requires fast topology and state estimation, often from limited
real-time measurements. In this context, it needs to be
mentioned that smart meters, micro-PMUs [3], frequency
measurements devices (FNETs) [4] and advanced sensors
(internet-of-things) capable of reporting real-time measurements are being deployed on the distribution side. However, in the current scenario, such devices are often limited
to households/terminal nodes as their primary purpose for
installation is services like price controllable demand and
load monitoring. A majority of the intermediate lines and
buses that connect the distribution substation to the terminal
nodes do not have real-time measurement devices and are
thus unobserved in terms of their structure and state. This
hinders real-time topology and state estimation.
This paper is aimed at developing a learning framework
that is able to overcome the lack of measurements at the intermediate nodes. Specifically, the primary goal of our work is
to propose an algorithm to learn the operating radial topology
using only real-time voltage magnitude measurements from
the terminal nodes. The reliance only on measurements from
terminal nodes is crucial as it makes our work applicable to
realistic deployment of smart devices in today’s distribution
grids. Further, our learning algorithm is able to tolerate a
much higher fraction of missing nodes (with unavailable
data) compared to prior work in this area. Our approach is
based on provable relations in voltage magnitudes of triplets
and pairs of terminal nodes that are used to discover the
operational edges iteratively. Computationally, the algorithm
has polynomial complexity in the number of nodes in the
system.
A. Prior Work
Topology learning in radial distribution grids has received
attention in recent times. Learning techniques in this area
vary, primarily based on the operating conditions and measurements available for estimation. The authors of [5] use
a Markov random field model for nodal phases to identify
faults in the grids. [6] uses conditional independence tests to
identify the grid topology in radial grids. [7] uses signs of
elements in the inverse covariance matrix of nodal voltages
to learn the operational topology. Signature/enevelope based
identification of topology changes is proposed in [8]. Such
comparison based schemes are used for parameter estimation
in [9], [10]. In contrast with nodal voltage measurements, line
flow measurements are used in a maximum likelihood based
scheme for topology estimation in [11]. In previous work
[1], [12], authors have analyzed topology learning schemes
that rely on trends in second moments of nodal voltage
magnitudes. Further, a spanning tree based topology learning
algorithm is proposed in [13]. This line of work [1], [12], [13]
is close in spirit to machine learning schemes [14], [15] developed to learn the structure of general probabilistic graphical models [16]. A major limitation of the cited literature is
that they assume measurement collection at most, if not all,
nodes in the system. To the best of our knowledge, work that
discuss learning in the presence of missing/unobserved nodes
[12], [13] assume missing nodes to be separated by greater
than two hops in the grid. As discussed earlier, distribution
grids often have real-time meters only at terminal nodes and
none at intermediate nodes that may be adjacent (one hop
away).
B. Contribution of This Work
In this paper, we discuss topology learning in the radial
distribution grid when measurements are limited to real-time
voltage readings at the terminal nodes (end-users) alone. All
intermediate nodes are unobserved and hence assumed to be
missing nodes. We analyze voltages in the distribution grid
using a linear lossless AC power flow model [1], [12] that is
analogous to the popular [17], [18] LinDistFlow equations.
For uncorrelated fluctuations of nodal power consumption,
we construct functions of voltage magnitudes at a pair or
triple of terminal nodes such that their values depend on
the edges that connect the group of nodes. These functions
provide the necessary tool to develop our learning scheme
that iteratively identifies operational edges from the leaf
onward to the substation root node. We discuss the computational complexity of the learning scheme and show that
it is a third order polynomial in the number of nodes. In
comparison to existing work, our approach is able to learn the
topology and thereby estimate usage statistics in the presence
of much greater fraction of missing/unobserved nodes. In
fact, in limiting configurations, our learning algorithm is able
to determine the true structure and statistics even when half
of the nodes are unobserved/missing. We demonstrate the
performance of our algorithm through experiments on test
distribution grids.
The rest of the manuscript is organized as follows. Section
II introduces notations used in the paper and describes the
grid topology and power flow models used for analysis in
later sections. Section III mentions assumptions made and describes important properties involving voltage measurements
at terminal nodes. Our algorithm to learn the operating radial
structure of the grid is discussed in Section IV with detailed
examples. We detail the computational complexity of the
algorithm in Section V. Simulation results of our learning
algorithm on test radial networks are presented in Section
VI. Finally, Section VII contains conclusions and discussion
of future work.
Fig. 1. Radial distribution grid tree with substation/root node colored in
red. Dotted grey lines represent open switches. Terminal leaf nodes (d)
represent end-users from which real-time measurements are collected. The
intermediate missing nodes (b) are unobserved. Here, nodes a and c are
descendants of node b. Dashed lines represent the paths from nodes a and
d to the root node.
II. DISTRIBUTION GRID: STRUCTURE AND POWER FLOWS
Radial Structure: We denote the underlying graph of the
distribution grid by the graph G = (V, E), where V is the set
of buses/nodes and E is the set of all undirected lines/edges.
We term nodes by alphabets (a, b,...). An edge between nodes
a and b is denoted by (ab). An illustrative example is given
in Fig. 1. The operational grid consisting of one tree T with
nodes VT and operational edge set ET ⊂ E as shown in Fig. 1.
Our results are immediately extended to the case K > 1. In
tree T, Pa denote the set of edges in the unique path from
node a to the root node (reference bus). A node c is called
a descendant of node a if the path from node c to the root
passes through a, i.e., Pc ⊂ Pa . We use Da to denote the set
of descendants of a and include node a in Da by definition.
If edge (ac) ∈ ET and c is a descendant of a, we term a as
parent and c as its child node. Further, as discussed in the
Introduction, terminal nodes/leaf nodes in the tree represent
end-users or households that are assumed to be equipped with
real-time nodal meters. The remaining intermediate nodes
(not terminal nodes) are not observed and hence termed as
missing nodes. These definitions are illustrated in Fig 1. Next
we describe the notation used in the power flow equations.
Power Flow Models: We use zab = rab +ixab to denote the
complex impedances of line (ab) (i2 = −1) where rab and
xab represent the line resistance and reactance respectively.
Let real valued scalars, va , θa , pa and qa denote the voltage
magnitude, voltage phase, active and reactive power injection
respectively at node a. At each node a, Kirchhoff’s law of
power flow relate them as follows:
P_a = p_a + i q_a = Σ_{b:(ab)∈E_T} ( v_a² − v_a v_b exp(i θ_a − i θ_b) ) / z*_{ab}    (1)
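As a concrete illustration of Eq. (1), the short sketch below (not part of the original paper) evaluates the complex injection at one node from its neighbours' voltages and the line impedances. The three-node values used here are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical data: line impedances z_ab = r_ab + i x_ab, voltage magnitudes
# and phases.  All numbers are assumptions chosen only for illustration.
z = {('a', 'b'): 0.10 + 0.20j, ('a', 'c'): 0.12 + 0.25j}
v = {'a': 1.00, 'b': 0.98, 'c': 0.99}       # voltage magnitudes
th = {'a': 0.00, 'b': -0.01, 'c': -0.015}   # voltage phases (rad)

def injection(a, neighbours):
    """Complex injection P_a = p_a + i q_a at node a, per Eq. (1)."""
    P = 0j
    for b in neighbours:
        zab = z[(a, b)] if (a, b) in z else z[(b, a)]
        P += (v[a]**2 - v[a] * v[b] * np.exp(1j * (th[a] - th[b]))) / np.conj(zab)
    return P

print(injection('a', ['b', 'c']))
```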
Note that Eq. (1) is nonlinear and non-convex. Under the realistic assumption that the active and reactive power losses on each line of tree T are small, we can neglect second-order terms in Eq. (1) to obtain the following linearized form [1], [7], [12]:
p_a = Σ_{b:(ab)∈E_T} ( β_{ab}(θ_a − θ_b) + g_{ab}(v_a − v_b) ),    (2)
q_a = Σ_{b:(ab)∈E_T} ( −g_{ab}(θ_a − θ_b) + β_{ab}(v_a − v_b) ),    (3)
where g_{ab} = r_{ab}/(x²_{ab} + r²_{ab}) and β_{ab} = x_{ab}/(x²_{ab} + r²_{ab}).    (4)
As shown in [1], Eqs. (2),(3) are equivalent to the LinDistFlow equations for power flow in distribution grid [17]–[19],
when deviations in voltage magnitude are assumed to be
small. Similar to LinDistFlow model, Eqs. (2),(3) are lossless
with sum power equal to zero (∑a∈VT Pa = 0). Further, note
that both active and reactive power injections are functions of
difference in voltage magnitudes and phases of neighboring
nodes. Thus, the analysis of the system can be reduced by
one node termed as reference node with voltage magnitude
and phase at all other nodes being measured relative to
this node. In our case, we take the substation/root node as
the reference node with voltage magnitude 1 and phase 0
respectively. The reference node’s injection also balances
the power injections in the remaining network. Inverting
Eqs. (2),(3) for the reduced system (without the reference
node), we express voltages as a function of nodal power
injections in the following vector form:
v = H_{1/r}^{-1} p + H_{1/x}^{-1} q,    θ = H_{1/x}^{-1} p − H_{1/r}^{-1} q    (5)
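To make Eq. (5) concrete, the following sketch builds the reduced weighted Laplacians H_{1/r} and H_{1/x} for a small hypothetical feeder and recovers v and θ from given injections. The four-node tree and all numerical values are assumptions chosen only for illustration.

```python
import numpy as np

# Hypothetical 4-node feeder: node 0 is the substation/reference; each edge is
# (node, node, r, x).  Values are illustrative assumptions only.
edges = [(0, 1, 0.10, 0.20), (1, 2, 0.15, 0.25), (1, 3, 0.12, 0.22)]
n = 4

def reduced_laplacian(edges, n, weight):
    """Weighted Laplacian with the reference row/column removed."""
    L = np.zeros((n, n))
    for a, b, r, x in edges:
        w = weight(r, x)
        L[a, a] += w; L[b, b] += w
        L[a, b] -= w; L[b, a] -= w
    return L[1:, 1:]  # drop reference node 0

H_r = reduced_laplacian(edges, n, lambda r, x: 1.0 / r)  # H_{1/r}
H_x = reduced_laplacian(edges, n, lambda r, x: 1.0 / x)  # H_{1/x}

p = np.array([-0.02, -0.01, -0.015])   # active injections at nodes 1..3
q = np.array([-0.01, -0.005, -0.008])  # reactive injections at nodes 1..3

# Eq. (5): v = H_{1/r}^{-1} p + H_{1/x}^{-1} q,  theta = H_{1/x}^{-1} p - H_{1/r}^{-1} q
v = np.linalg.solve(H_r, p) + np.linalg.solve(H_x, q)
theta = np.linalg.solve(H_x, p) - np.linalg.solve(H_r, q)
print(v, theta)
```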
We term this as the Linear Coupled Power Flow (LCPF) model where p, q, v and θ are the vectors of real
power, reactive power injections, voltage magnitudes and
phase angles respectively at the non-substation nodes. H1/r
and H1/x are the reduced weighted Laplacian matrices for
tree T where reciprocal of resistances and reactances are used
respectively as edge weights. The reduction is achieved by
removing the row and column corresponding to the reference
bus in the original weighted Laplacian matrix. We denote
the mean of a random vector X by µX = E[X]. For two
random vectors X and Y , the covariance matrix is denoted by
ΩXY = E[(X − µX )(Y − µY )T ]. Using Eq. (5), we can relate
the means and covariances of voltage magnitudes with those
of active and reactive injection as follows:
µ_v = H_{1/r}^{-1} µ_p + H_{1/x}^{-1} µ_q    (6)
Ω_v = H_{1/r}^{-1} Ω_p H_{1/r}^{-1} + H_{1/x}^{-1} Ω_q H_{1/x}^{-1} + H_{1/r}^{-1} Ω_pq H_{1/x}^{-1} + H_{1/x}^{-1} Ω_qp H_{1/r}^{-1}    (7)
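A minimal sketch of Eq. (7) follows, assuming the same hypothetical feeder as above: diagonal injection covariances (reflecting the uncorrelated-injection assumption used later) are propagated to the voltage-magnitude covariance Ω_v. All numbers are illustrative assumptions.

```python
import numpy as np

edges = [(0, 1, 0.10, 0.20), (1, 2, 0.15, 0.25), (1, 3, 0.12, 0.22)]  # assumed feeder
n = 4

def reduced_laplacian(edges, n, weight):
    L = np.zeros((n, n))
    for a, b, r, x in edges:
        w = weight(r, x)
        L[a, a] += w; L[b, b] += w
        L[a, b] -= w; L[b, a] -= w
    return L[1:, 1:]

Hr_inv = np.linalg.inv(reduced_laplacian(edges, n, lambda r, x: 1 / r))
Hx_inv = np.linalg.inv(reduced_laplacian(edges, n, lambda r, x: 1 / x))

# Diagonal (uncorrelated) injection covariances at nodes 1..3 -- assumed values.
Sp  = np.diag([1e-4, 2.0e-4, 1.5e-4])   # Omega_p
Sq  = np.diag([5e-5, 8.0e-5, 6.0e-5])   # Omega_q
Spq = np.diag([2e-5, 3.0e-5, 2.5e-5])   # Omega_pq (Omega_qp = Omega_pq^T)

# Eq. (7)
Sv = (Hr_inv @ Sp @ Hr_inv + Hx_inv @ Sq @ Hx_inv
      + Hr_inv @ Spq @ Hx_inv + Hx_inv @ Spq.T @ Hr_inv)
print(Sv)
```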
As considered in prior literature [7], [12], this assumption of uncorrelated injection fluctuations (Assumption 1, stated in Section III)
is reasonable over short time-intervals where fluctuations in
nodal power usage at households/terminal nodes are independent and hence uncorrelated. Intermediate nodes that do not
represent end-users are uncorrelated if they have independent
usage patterns. Specifically, for intermediate nodes involved
in separation of power into downstream lines and without
any major nodal usage, the net power injection is contributed
by leakage or device losses and hence uncorrelated from
the rest. Note that Assumption 1 does not specify the class
of distributions that can model individual node’s power
injection. It is applicable when nodal injections are negative
(loads), positive (due to local generation) or a mixture of
both. In future work, we will relax this assumption and
discuss learning in the presence of positively correlated enduser injection profiles.
Next, we mention an analytical statement relating the
inverse of the reduced Laplacian matrix for a radial graph
that we use in our later results.
Lemma 1. [1], [20] The reduced weighted Laplacian matrix H_{1/r} for tree T satisfies
H_{1/r}^{-1}(a, b) = Σ_{(cd) ∈ P_a ∩ P_b} r_{cd}    (8)
In other words, the (a, b)th entry in H_{1/r}^{-1} is equal to the sum of line resistances of edges common to the paths from nodes a and b to the root. For example, in Fig. 1, H_{1/r}^{-1}(a, d) = r_{be} + r_{e0}.
Using Eq. (8) it follows immediately that if node b is the parent of node a, then for all c,
H_{1/r}^{-1}(a, c) − H_{1/r}^{-1}(b, c) = r_{ab} if node c ∈ D_a, and 0 otherwise.    (9)
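The path-sum characterization in Lemma 1 is easy to check numerically. The sketch below, on a hypothetical 5-node tree, compares the entries of H_{1/r}^{-1} obtained by matrix inversion with the common-path resistance sums of Eq. (8); the tree and resistance values are assumptions for illustration.

```python
import numpy as np

parent = {1: 0, 2: 1, 3: 1, 4: 2}              # assumed tree rooted at node 0
r_edge = {1: 0.10, 2: 0.15, 3: 0.12, 4: 0.20}  # resistance of edge (parent[c], c)

def path_to_root(a):
    """Edges (identified by their child endpoint) on the path from a to the root."""
    path = []
    while a != 0:
        path.append(a)
        a = parent[a]
    return set(path)

def common_path_resistance(a, b):
    return sum(r_edge[e] for e in path_to_root(a) & path_to_root(b))

# Explicit inverse of the reduced Laplacian with weights 1/r.
n = 5
L = np.zeros((n, n))
for c, p in parent.items():
    w = 1.0 / r_edge[c]
    L[c, c] += w; L[p, p] += w; L[c, p] -= w; L[p, c] -= w
H_inv = np.linalg.inv(L[1:, 1:])

for a in range(1, n):
    for b in range(1, n):
        assert np.isclose(H_inv[a - 1, b - 1], common_path_resistance(a, b))
```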
Next, consider the function φ defined over two nodes a and
b as φab = E[(va − µva ) − (vb − µvb )]2 . φab represents the
variance of the difference in voltage magnitudes at nodes
a and b. Using Eq. (7) we can write φab in terms of power
injection statistics in tree T as follows
φ_{ab} = Ω_v(a, a) − 2 Ω_v(a, b) + Ω_v(b, b)    (10)
     = Σ_{d∈T} [ (H_{1/r}^{-1}(a, d) − H_{1/r}^{-1}(b, d))² Ω_p(d, d) + (H_{1/x}^{-1}(a, d) − H_{1/x}^{-1}(b, d))² Ω_q(d, d)
       + 2 (H_{1/r}^{-1}(a, d) − H_{1/r}^{-1}(b, d)) (H_{1/x}^{-1}(a, d) − H_{1/x}^{-1}(b, d)) Ω_pq(d, d) ]    (11)
Using these statistical quantities, we discuss useful identities
that arise in radial distribution grids in the next section that
form the basis of our learning algorithms.
Note that Lemma 1 and Eq. (9) can be inserted in Eq. (11) to simplify it. In fact, doing so lets us derive properties of φ for terminal nodes in tree T, as discussed next.
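In practice φ_ab is estimated from synchronized voltage-magnitude samples at the two terminal nodes. A minimal sketch of such an empirical estimator of Eq. (10) is given below; the synthetic samples are an assumption used only to exercise the code.

```python
import numpy as np

# Assumed synthetic data: m synchronized voltage-magnitude samples at nodes a and b.
rng = np.random.default_rng(0)
m = 5000
v_a = 1.0 + 0.01 * rng.standard_normal(m)
v_b = 1.0 + 0.01 * rng.standard_normal(m)

def phi(v_a, v_b):
    """Empirical phi_ab = E[((v_a - mu_a) - (v_b - mu_b))^2], cf. Eq. (10)."""
    d = (v_a - v_a.mean()) - (v_b - v_b.mean())
    return np.mean(d ** 2)

print(phi(v_a, v_b))  # equals Var(v_a) + Var(v_b) - 2 Cov(v_a, v_b)
```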
III. PROPERTIES OF VOLTAGE MAGNITUDES IN RADIAL GRIDS
First, we make the following assumption regarding statistics of power injections at the non-substation grid nodes, under which our results hold.
Assumption 1: Fluctuations of active and reactive powers at different nodes are uncorrelated. Thus, for all non-substation nodes a ≠ b, Ω_p(a, b) = Ω_q(a, b) = Ω_qp(a, b) = 0.
Theorem 1. Let node b be the parent of nodes a and c in T such that (ab) and (bc) are operational edges (see Fig. 2(a)). Then
φ_{ac} = Σ_{d∈D_a} [ r²_{ab} Ω_p(d, d) + x²_{ab} Ω_q(d, d) + 2 r_{ab} x_{ab} Ω_pq(d, d) ]
       + Σ_{d∈D_c} [ r²_{bc} Ω_p(d, d) + x²_{bc} Ω_q(d, d) + 2 r_{bc} x_{bc} Ω_pq(d, d) ]    (12)
     = φ_{ab} + φ_{bc}    (13)
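The additivity in Eq. (13) can be sanity-checked numerically. The sketch below builds a toy grid in which node b is the common parent of terminal nodes a and c, computes Ω_v via Eq. (7) under Assumption 1, and compares φ_ac with φ_ab + φ_bc; the topology and covariance values are illustrative assumptions.

```python
import numpy as np

# Assumed toy grid: root 0 -- b(=1) -- {a(=2), c(=3)}; edges are (node, node, r, x).
edges = [(0, 1, 0.10, 0.20), (1, 2, 0.15, 0.25), (1, 3, 0.12, 0.22)]
n = 4

def red_lap(weight):
    L = np.zeros((n, n))
    for u, w_, r, x in edges:
        wt = weight(r, x)
        L[u, u] += wt; L[w_, w_] += wt; L[u, w_] -= wt; L[w_, u] -= wt
    return L[1:, 1:]

Hr = np.linalg.inv(red_lap(lambda r, x: 1 / r))
Hx = np.linalg.inv(red_lap(lambda r, x: 1 / x))

# Diagonal injection covariances at nodes 1..3 (Assumption 1) -- assumed values.
Sp  = np.diag([1e-4, 2.0e-4, 1.5e-4])
Sq  = np.diag([5e-5, 8.0e-5, 6.0e-5])
Spq = np.diag([2e-5, 3.0e-5, 2.5e-5])

Sv = Hr @ Sp @ Hr + Hx @ Sq @ Hx + Hr @ Spq @ Hx + Hx @ Spq @ Hr  # Eq. (7)

def phi(i, j):
    """phi between non-substation nodes i, j (labels 1..3), per Eq. (10)."""
    i, j = i - 1, j - 1
    return Sv[i, i] - 2 * Sv[i, j] + Sv[j, j]

b, a, c = 1, 2, 3
print(phi(a, c), phi(a, b) + phi(b, c))  # the two values agree, per Eq. (13)
```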
the root follow Pa ∩ Pd = Pb ∩ Pd . Using Lemma 1, we thus
−1
−1
have ∀d 6= a, b, H1/r
(a, d) = H1/r
(b, d) and
𝑘2
𝑘
𝑘
𝑟𝑎 2
𝑏
𝑐
𝑐
𝑎
(17)
−1
−1
−1
−1
(c, b) = rbk2
(b, b) − H1/r
(c, a) = rak2 , H1/r
(a, a) − H1/r
H1/r
𝐷𝑎
𝐷𝑐
(16)
−1
−1
−1
−1
(b, a) = rbk1
(b, b) − H1/r
(a, b) = rak1 , H1/r
(a, a) − H1/r
H1/r
As k1 and c are descendants of node k2 and Pk1 ∩ Pc = Pk2 ,
𝑘1
𝑎
−1
−1
−1
−1
(c, d)
(b, d) − H1/r
(c, d) = H1/r
(a, d) − H1/r
H1/r
𝑏
(18)
Further, using Eqs. (17, 18), we get
(a)
(b)
Fig. 2. (a) Distribution grid tree with nodes a and c as children of common
parent node b. (b)Terminal nodes a and b have common parent k1 . k1 and
k
terminal node c are descendants of k2 . ra2 is the sum of resistance on lines
(ak1 ), (k1 k) and (kk2 ) connecting nodes a and k.
Proof: Observe Lemma 1. As (ab) and (bc) are op−1
erational edges, the only nodes d such that (H1/r
(a, d) −
−1
H1/r (c, d)) 6= 0 are either descendants of a (set Da ) or
of c (set Dc ). Further Dc and Da are disjoint. When
−1
−1
d ∈ Da , (H1/r
(a, d) − H1/r
(c, d)) = rab , while when d ∈ Dc ,
−1
−1
(H1/r (a, d) − H1/r (c, d)) = −rbc . Using this in the formula
for φac in Eq. (11) gives us the relation. The equality
φac = φab + φbc is verified by plugging values in Eq. (11)
for φab , φbc and φac .
A few points are in order. First, note that the only
operational lines whose impedances appear on the right side
of Eq. (12) are (ab) and (bc). For the special case where a
and c are terminal nodes with parent b, the relation reduces
to the following:
2
2
φac = rab
Ω p (a, a) + xab
Ωq (a, a) + 2rab xab Ω pq (a, a)
2
2
+ rbc
Ω p (c, c) + xbc
Ωq (c, c) + 2rbc xbc Ω pq (c, c)
(14)
If the covariance of injections at a and b are known, the
above condition can be checked for each potential parent
node ‘b’ in linear time to identify the true parent. Second,
the equality in Eq. (13) is true only when Pa ∩ Pc = Pb . It is
replaced by a strict inequality for other configurations that we
omit discussing as they are outside the scope of our learning
conditions. The next theorem gives a result that relates the
voltages at three terminal nodes.
Theorem 2. Let terminal nodes a and b have common parent
node k1 . Let c be another terminal node such that c, k1 ∈
Dk2 and Pk1 ∩ Pc = Pk2 for some intermediate node k2 (see
Fig. 2(b)). Let rak2 and xak2 denote the sum of resistance and
reactance respectively on lines on the path from node a to
node k2 , i.e., rak2 = ∑ re f , xak2 = ∑ xe f . Define rbk2 , rkk12 etc. in
(e f )∈Pa −Pk2
the same way. Then
(e f )∈Pa −Pk2
φac − φbc = Ω p (a, a)((rak2 )2 − (rkk12 )2 ) + Ωq (a, a)((xak2 )2 − (xkk12 )2 )
+ Ω pq (a, a)(rak2 xak2 − rkk12 xkk12 ) − Ω p (b, b)((rbk2 )2 − (rkk12 )2 )
+ Ωq (b, b)((xbk2 )2 − (xkk12 )2 ) + Ω pq (b, b)(rbk2 xbk2 − rkk12 xkk12 )
(15)
Proof: As a and b are terminal nodes with same parent
k1 , for each node d 6= a 6= b, the paths from d, a and b to
−1
−1
−1
−1
(c, b) = rkk12
(a, b) − H1/r
(c, a) = H1/r
(b, a) − H1/r
H1/r
(19)
where rak2 , rkk12 etc. are defined in the statement of the
−1
theorem. Similar relations can be written for H1/x
terms as
well. We now expand φac and φbc using Eq. (11). Using
Eq. (16,18,19) in the expression for φac − φbc gives Eq. (15).
Observe that the lines whose impedances appear on the
right side of Eq. (15) are (ak1 ), (bk1 ) and the ones on the path
from node k1 to k2 . If this path is known till the penultimate
node k before k2 (see Fig. 2(b)), then Eq. (15) can be used
to learn edge (kk2 ) through a linear search among candidate
nodes for k2 . In the next section, we discuss the use of this
relation to learn the path from terminal pairs with common
parent (here a and b) to the root iteratively. Further, note
that the value of φac − φbc is independent of injections at
intermediate nodes and at terminal node c as long as c is
a descendant of k2 . Thus, Eq. (15) only helps identify c’s
relative position in the graph with respect to k2 . As shown
in the next section, we are able to locate c’s parent using a
post-order node traversal [21] in the grid graph.
IV. ALGORITHM TO LEARN TOPOLOGY USING TERMINAL NODE MEASUREMENTS
Using the relations described in the previous section,
we discuss our algorithm to learn the operational topology of the radial distribution grid. As mentioned in the
Introduction, voltage measurements are available only at the
terminal/leaf nodes. The observer also has access to power
injection statistics (variances) at the terminal nodes. These
statistics are either known from historical data or computed
empirically from power injection measurements collected at
the terminal nodes. All intermediate nodes are missing and
their voltage measurements and injection statistics are not
observed/available. Further, we assume that impedances of
all lines (open or operational) in E for the underlying loopy
graph G are available as input. The task of the learning
algorithm is to identify the set ET of operational edges in the
radial grid T. The substation root node in tree T is assumed to
be connected to a single intermediate node. Otherwise each
sub-tree connected to the substation node can be considered
as a disjoint tree. First, we make the following restriction
regarding the degree of missing intermediate nodes.
Assumption 2: All missing intermediate nodes are assumed to have a degree greater than 2.
This assumption is necessary as without it, the solution
to the topology learning problem will not be unique. As
Fig. 3. Distribution grid tree where terminal node a is connected to node
d via two unknown intermediate nodes of degree 2. Either configuration
d, c, b, a or d, b, c, a for the intermediate nodes is a feasible structure given
no measurements at nodes b and c.
an example, consider the test distribution grid given in
Fig. 3 where the path from leaf node a to node d passes
through two intermediate nodes b and c of degree 2 each.
Both configuration A (operations edges (ab), (bc), (cd)) and
configuration B (operations edges (ac), (cb), (bd)) are feasible operational topologies if voltage and power injection
measurements are available only at the terminal node a. In
other words, different values of injection covariances at nodes
b and c can be taken such that relations between voltage
and injections are satisfied in either configuration. Similar
assumptions for uniqueness in learning general graphical
models are mentioned in [22]. For radial configurations that
respect Assumption 2, Algorithm 1 learns the topology using
measurements of voltage magnitudes and information of
injection statistics at terminal nodes.
Working: Algorithm 1 learns the structure of operational
tree T iteratively from leaf node pairs up to the root. Set
L represents the current set of terminal nodes/leaves with
unknown parents, while set M represents the set of all
missing intermediate nodes. Note that for each node a, para
denotes its identified parent while desa denotes a set of two
leaf nodes that are its descendants. Φ represents the empty
set. The two sets (par and des) are used to keep track of
the current stage of the learnt graph. There are three major
stages in the edge learning process in Algorithm 1.
First, we build edges between leaf nodes pairs a, c ∈ L
to their common missing parent node b in Steps 3 to 6.
Here the relation for φab derived in Theorem 1 and Eq. (14)
is used to identify the true missing parent in set M. Note
that only parents with two or more leaf nodes as children
can be identified at this stage. Parents that have at most one
leaf node as child (other children being missing intermediate
nodes) are not identified by this check.
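For concreteness, the second-moment statistics of Step 1 can be estimated directly from the m voltage magnitude samples collected at the terminal nodes. The following Python sketch is illustrative only (the function name and the NumPy array layout are our assumptions, not part of the algorithm specification):

import numpy as np

def pairwise_phi(v_samples):
    # v_samples: shape (m, n_leaves); column j holds the m voltage magnitude
    # observations of terminal node j.
    centered = v_samples - v_samples.mean(axis=0)    # subtract the empirical mean per node
    n = centered.shape[1]
    phi = np.zeros((n, n))
    for a in range(n):
        for c in range(n):
            diff = centered[:, a] - centered[:, c]
            phi[a, c] = np.mean(diff ** 2)           # empirical E[((v_a - mu_a) - (v_c - mu_c))^2]
    return phi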
Second, we remove nodes with discovered parents from
the set of leaf nodes L then iteratively learn the edges
between intermediate nodes in Steps 10 to 24. At each
stage of the iteration, M2 denotes the set of intermediate
nodes with unknown parents whose descendants (des) were
discovered in the previous iteration, while M1 denotes the
set of all intermediate nodes with unknown parent (par) and
known descendants. Clearly M2 ⊂ M1 . The parent k2 of each
node k in M2 is identified by using relation Eq. (15) in
Theorem 2 for its descendants a and b. There are two cases
considered for candidate parent k2 : (A) k2 belongs to M1
and has known descendant c (Step 15), and (B) k2 belongs
to M − M1 and has no discovered descendant (Step 18). For
the second case, the algorithm tries to find some leaf node
c that is potentially an undiscovered descendant of k2 by
Algorithm 1 Topology Learning using Terminal Node Data
Input: Injection covariances Ωp, Ωq, Ωpq at terminal nodes L, missing node set M = VT − L, m voltage magnitude observations v for nodes in L, set of all edges E with line impedances.
Output: Operational edge set ET.
 1: ∀ nodes a, c ∈ L, compute φac = E[((va − µva) − (vc − µvc))²]
 2: ∀a ∈ VT, define para ← Φ, desa ← Φ
 3: for all a ∈ L do
 4:     if para = Φ and ∃c ∈ L, b ∈ M s.t. φab, c satisfy Eq. (14) then
 5:         ET ← ET ∪ {(ab), (bc)}
 6:         para ← b, parc ← b, desb ← {a, c}
 7:     end if
 8: end for
 9: L ← {a : a ∈ L, para = Φ}, tp ← 1, M1 ← Φ
10: while tp > 0 do
11:     M2 ← {k1 : k1 ∈ M − M1, park1 = Φ, desk1 ≠ Φ}
12:     M1 ← {k1 : k1 ∈ M, park1 = Φ, desk1 ≠ Φ}
13:     for all k ∈ M1 with a, b ∈ desk do
14:         k1 ← para
15:         if ∃k2 ∈ M1 with M2 ∩ {k, k2} ≠ Φ and c ∈ desk2 s.t. φac − φbc satisfy Eq. (15) then
16:             ET ← ET ∪ {(kk2)}, park ← k2
17:         else
18:             if k ∈ M1 and ∃k2 ∈ M − M1, c ∈ L s.t. φac − φbc satisfy Eq. (15) then
19:                 ET ← ET ∪ {(kk2)}, park ← k2, desk2 ← desk
20:             end if
21:         end if
22:     end for
23:     tp ← |{k1 : k1 ∈ M1, park1 ≠ Φ}|
24: end while
25: if |M1| = 1 then
26:     Join k ∈ M1 to root
27: end if
28: Form a post-order traversal node set W using para for a : ∀a ∈ M, desa ≠ Φ
29: for all c ∈ L do
30:     for j ← 1 to |W| do
31:         k2 ← W(j) with a, b ∈ desk2, k1 ← para
32:         if φac − φbc satisfy Eq. (15) then
33:             ET ← ET ∪ {(ck2)}, W ← W − {k2}, j ← |W|
34:         end if
35:     end for
36: end for
checking for the relation φac − φbc specified in Eq. (15). If
the relation holds, node k2 is recorded as the parent of k. As
it is not clear whether leaf c is an immediate descendant (child) of
k2, no edge to c is added. The iterations terminate when no
additional edge between intermediate nodes is discovered.
Third, in Steps 29 to 36, the parents of the remaining leaf
nodes in L are identified. Note that Eq. (15) also holds for
a non-immediate descendant c of k2, hence it cannot be used
directly to identify c's location. To overcome this, for each
c in L, we first check the descendants of k2 before k2 itself. This
is ensured by creating a post-order traversal list W [21] in
Step 28. In post-order traversal W, each potential parent is
reached only after all its descendants have been reached. The
node k2 which satisfies Eq. (15) in Step 32 now gives the true
parent of leaf c. The algorithm terminates after searching for
all leaves in L.
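To illustrate this third stage, the sketch below (ours) builds the post-order list W from the learnt parent pointers and then scans it for each remaining leaf; the Eq. (15) test itself is abstracted behind a hypothetical predicate satisfies_eq15, since the relation is given in the previous section and not repeated here.

from collections import defaultdict

def postorder_from_parents(par, discovered):
    # par: node -> identified parent; discovered: intermediate nodes with known descendants.
    children, node_set = defaultdict(list), set(discovered)
    roots = []
    for node in discovered:
        if par.get(node) in node_set:
            children[par[node]].append(node)
        else:
            roots.append(node)
    order = []
    def visit(node):
        for child in children[node]:
            visit(child)
        order.append(node)          # a node appears only after all of its descendants
    for r in roots:
        visit(r)
    return order

def place_remaining_leaves(leaves, W, des, satisfies_eq15, ET):
    # Attach each remaining leaf c to the first node k2 (in post-order)
    # whose descendant pair (a, b) passes the Eq. (15) check.
    for c in leaves:
        for k2 in list(W):
            a, b = des[k2]
            if satisfies_eq15(a, b, c, k2):
                ET.add((c, k2))
                W.remove(k2)
                break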
Fig. 4. Sequence in which edges are learnt by Algorithm 1 for an example
radial grid using only measurements at leaf nodes. In the first step, leaf pairs
with common parent are learnt, then intermediate nodes appear and finally
connections to leaf nodes with no leaf sibling are discovered.
Fig. 5. Cases where the fraction of missing nodes approaches 50%: (a) A binary distribution grid tree with each intermediate node having degree 3. (b) A line-like distribution grid tree where each intermediate node has one leaf child.
Note: Empirically computed second moments of voltages
may differ from their true values and hence may not satisfy
relations from Theorems 1 and 2 exactly. Hence we use
tolerances τ1 and τ2 to check the correctness of Eq. (14) and
Eq. (15) respectively in Algorithm 1. We defer the selection
of optimal thresholds to future work.
We discuss the operation of Algorithm 1 through a test
example given in Fig. 4. As described in the previous paragraph, in the first step, the common parents of leaf node pairs
are discovered. In the next step, edges between intermediate
nodes are discovered iteratively. Finally, positions of leaf
nodes that do not have common parent with any other leaf
node are located by a post-order traversal and then the
substation connects to the node with no parent.
V. COMPUTATIONAL COMPLEXITY AND PERFORMANCE
Consider the original loopy graph G to be complete (all
node pairs belong to E). We compute the computational complexity of learning the operational tree in terms of the number of nodes N = |VT|.
Algorithm Complexity: To measure the computational
complexity, we focus on the iteration steps. Let the number
of leaves in tree T be l0. Detecting edges between leaf pairs and their common parent (Steps 3 to 6) requires O(l0²(N − l0)) comparisons, which is O(N³) in the worst case.
Next we analyze the complexity of identifying edges between
intermediate nodes. Let the number of ‘new’ intermediate
nodes with unknown parents (set M2 ) in each iteration of
(Steps 10 to 24) be li . Each ‘new’ node is compared with
the set of all intermediate nodes with unknown parents
(M1 ⊇ M2 ) first. As addition of a ‘new’ node leads to
removal of its children (≥ 1) from M1, the size of M1 never exceeds its initial value (≤ l0/2), giving this step a worst-case complexity of O(li l0). Comparison of nodes in M2 with missing nodes in M − M1 and leaf nodes in L (Step
18) has worst case complexity O(li l0 (N − l0 )). Using l0 as
O(N) and ∑ li = O(N), the computational complexity of all
iterations in Steps 10 to 24 is thus O(N 3 ). Finally composing
the post-order tree traversal and checking at node location
(O(N)) for each leaf (O(N)) has complexity less than or
equal to O(N 2 ). The overall complexity of the algorithm is
thus O(N 3 ).
Maximum Missing Node Fraction: Consider the two radial graphs given in Fig. 5 that satisfy the condition in Assumption 2. The first is a binary tree of depth d in which all nodes up to depth d − 1 are missing. The second graph has the structure of a line graph with one additional leaf node connected to each intermediate node on the line. In either graph, as the number of nodes increases, the fraction of missing nodes approaches 50%. In other words, configurations exist such that Algorithm 1 is able to learn the grid structure with approximately half the nodes missing/unobserved.
This is crucial because a 50% fraction of missing nodes represents the threshold beyond which state estimation is not possible even in the presence of knowledge of the operational topology: the LC-PF Eqs. (4) cannot be solved if less than half the nodes are observed.
VI. EXPERIMENTS
Here we demonstrate performance of Algorithm 1. We
consider a radial network [23], [24] (Fig. 6) with 20
nodes and one substation. Out of those 20 nodes, 12 are
terminal/leaf nodes while 8 are intermediate nodes which
are missing. In each of our simulation runs, we construct
complex power injection samples at the non-substation nodes
from a multivariate Gaussian distribution that is uncorrelated
between different nodes as per Assumption 1. Next we use
LC-PF Eqs. (5) to generate nodal voltage magnitude measurements. To understand the performance of our algorithm,
we introduce 30 additional edges (at random) to construct the
loopy edge set E. The additional edges are given impedances
comparable to those of operational lines. The input to the
observer consists of voltage magnitude measurements and
injection statistics at the terminal nodes and impedances of
all the lines within the loopy edge set E. The average fractional error (number of errors divided by the size of the operational edge set) in reconstructing the grid structure is shown in Fig. 7(a) for
different sizes of terminal voltage magnitude measurements
used. Different curves in the plot depict different values of
tolerances τ1 and τ2 to check the correctness of Eq. (14) and
Eq. (15) in Algorithm 1. Note that the average fractional error decreases steadily with an increase in the number of measurements, as the empirical second-moment statistics used
in Algorithm 1 get more accurate. The values of tolerance
to achieve the most accurate results are selected manually.
We plan to develop a theoretical selection of the optimal
tolerance values in future work.
Fig. 6. Layout of the grid tested with 20 non-substation nodes. Black
lines represent operational edges. Some of the additional open lines (actual
number 30) are represented by dotted green lines.
Legend: (τ1, τ2) = (3.8×10⁻², 2×10⁻¹), (3.5×10⁻², 2×10⁻¹), (3.5×10⁻², 1.5×10⁻¹), (3.2×10⁻², 1.5×10⁻¹), (3×10⁻², 1×10⁻¹); vertical axis: average fraction errors in topology learning; horizontal axis: number of voltage measurements (×10⁴).
Fig. 7. Average fractional errors in learning operational edges vs number
of samples used in Algorithm 1 for different values of tolerances τ1 and τ2 .
VII. CONCLUSIONS
Topology learning is an important problem in distribution
grids as it influences several other control and management
applications that form part of the smart grid. The learning
problem is complicated by the low placement of real-time
measurements in the distribution grid, primarily being placed
at terminal nodes that represent end-users or households. In
this paper, we study the problem of learning the operational
radial structure from a dense underlying loopy grid graph
in this realistic regime where only terminal node (end-user)
voltage measurements and injection statistics are available
as input and all other nodes are unobserved/missing. We use
a linear-coupled power flow model and show that voltages at
terminal node pairs and triplets satisfy relations that depend
on their individual injection statistics and impedances on the
lines connecting them. We use these properties to propose
a learning algorithm that iteratively learns the grid structure
from the leaves onward towards the substation root. We show
that the algorithm has polynomial time complexity comparable to existing work, despite being capable of tolerating a much greater and more realistic fraction of missing nodes. For specific
cases, this algorithm is capable of learning the structure with
50% unobserved nodes, beyond which state estimation is not
possible even in the presence of topology information. We
demonstrate performance of the learning algorithm through
experiments on a distribution grid test case. Computing the sample complexity of the learning algorithm and optimizing the selection of tolerance values based on the number of samples are directions of our future research in this area. Further, we plan to expand this learning regime to cases with correlated injections.
REFERENCES
[1] D. Deka, S. Backhaus, and M. Chertkov, “Structure learning and
statistical estimation in distribution networks - part i,” arXiv preprint
arXiv:1501.04131, 2015.
[2] R. Hoffman, “Practical state estimation for electric distribution networks,” in Power Systems Conference and Exposition, 2006. PSCE’06.
2006 IEEE PES. IEEE, 2006, pp. 510–517.
[3] A. von Meier, D. Culler, A. McEachern, and R. Arghandeh, “Microsynchrophasors for distribution systems,” in Innovative Smart Grid
Technologies Conference (ISGT). IEEE, 2014, pp. 1–5.
[4] Z. Zhong, C. Xu, B. J. Billian, L. Zhang, S.-J. S. Tsai, R. W.
Conners, V. A. Centeno, A. G. Phadke, and Y. Liu, “Power system
frequency monitoring network (fnet) implementation,” Power Systems,
IEEE Transactions on, vol. 20, no. 4, pp. 1914–1921, 2005.
[5] M. He and J. Zhang, “A dependency graph approach for fault detection and localization towards secure smart grid,” Smart Grid, IEEE
Transactions on, vol. 2, no. 2, pp. 342–351, 2011.
[6] D. Deka, S. Backhaus, and M. Chertkov, “Estimating distribution grid
topologies: A graphical learning based approach,” in Power Systems
Computation Conference (accepted), 2016.
[7] S. Bolognani, N. Bof, D. Michelotti, R. Muraro, and L. Schenato,
“Identification of power distribution network topology via voltage
correlation analysis,” in Decision and Control (CDC), 2013 IEEE 52nd
Annual Conference on. IEEE, 2013, pp. 1659–1664.
[8] G. Cavraro, R. Arghandeh, A. von Meier, and K. Poolla, “Data-driven
approach for distribution network topology detection,” arXiv preprint
arXiv:1504.00724, 2015.
[9] J. Peppanen, J. Grimaldo, M. J. Reno, S. Grijalva, and R. G. Harley,
“Increasing distribution system model accuracy with extensive deployment of smart meters,” in PES General Meeting— Conference
& Exposition, 2014 IEEE. IEEE, 2014, pp. 1–5.
[10] J. Peppanen, M. J. Reno, M. Thakkar, S. Grijalva, and R. G. Harley,
“Leveraging ami data for distribution system model calibration and
situational awareness,” 2015.
[11] R. Sevlian and R. Rajagopal, “Feeder topology identification,” arXiv
preprint arXiv:1503.07224, 2015.
[12] D. Deka, S. Backhaus, and M. Chertkov, “Structure learning and
statistical estimation in distribution networks - part ii,” arXiv preprint
arXiv:1502.07820, 2015.
[13] ——, “Learning topology of the power distribution grid with and
without missing data,” in European Control Conference (accepted),
2015.
[14] P. Ravikumar, M. J. Wainwright, J. D. Lafferty et al., “Highdimensional ising model selection using 1-regularized logistic regression,” The Annals of Statistics, vol. 38, no. 3, pp. 1287–1319, 2010.
[15] A. Anandkumar, V. Tan, and A. S. Willsky, “High-dimensional graphical model selection: tractable graph families and necessary conditions,”
in Advances in Neural Information Processing Systems, 2011, pp.
1863–1871.
[16] M. J. Wainwright and M. I. Jordan, “Graphical models, exponential
families, and variational inference,” Foundations and Trends R in
Machine Learning, vol. 1, no. 1-2, pp. 1–305, 2008.
[17] M. Baran and F. Wu, “Optimal sizing of capacitors placed on a radial
distribution system,” Power Delivery, IEEE Transactions on, vol. 4,
no. 1, pp. 735–743, Jan 1989.
[18] ——, “Optimal capacitor placement on radial distribution systems,”
Power Delivery, IEEE Transactions on, vol. 4, no. 1, pp. 725–734,
Jan 1989.
[19] ——, “Network reconfiguration in distribution systems for loss reduction and load balancing,” Power Delivery, IEEE Transactions on,
vol. 4, no. 2, pp. 1401–1407, Apr 1989.
[20] J. Resh, “The inverse of a nonsingular submatrix of an incident matrix,”
IEEE Transactions on Circuit Theory, vol. 10, p. 131.
[21] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction
to Algorithms. The MIT Press, 2001.
[22] J. Pearl, Probabilistic reasoning in intelligent systems: networks of
plausible inference. Morgan Kaufmann, 2014.
[23] U. Eminoglu and M. H. Hocaoglu, “A new power flow method for
radial distribution systems including voltage dependent load models,”
Electric Power Systems Research, vol. 76, no. 13, pp. 106 – 114, 2005.
[24] “Reds: Repository of distribution systems,” accessed: 2016-05-20.
[Online]. Available: http://www.dejazzer.com/reds.html
Social Hash Partitioner:
A Scalable Distributed Hypergraph Partitioner
Igor Kabiljo1 , Brian Karrer1 , Mayank Pundir1 , Sergey Pupyrev1 , and Alon Shalita1
arXiv:1707.06665v1 [] 20 Jul 2017
1
Facebook
Abstract
We design and implement a distributed algorithm for balanced k-way hypergraph partitioning that minimizes fanout, a fundamental hypergraph quantity also known as the communication
volume and (k − 1)-cut metric, by optimizing a novel objective called probabilistic fanout. This
choice allows a simple local search heuristic to achieve comparable solution quality to the best
existing hypergraph partitioners.
Our algorithm is arbitrarily scalable due to a careful design that controls computational
complexity, space complexity, and communication. In practice, we commonly process hypergraphs with billions of vertices and hyperedges in a few hours. We explain how the algorithm’s
scalability, both in terms of hypergraph size and bucket count, is limited only by the number
of machines available. We perform an extensive comparison to existing distributed hypergraph
partitioners and find that our approach is able to optimize hypergraphs roughly 100 times bigger
on the same set of machines.
We call the resulting tool Social Hash Partitioner (SHP), and accompanying this paper, we
open-source the most scalable version based on recursive bisection.
1 Introduction
The goal of graph partitioning is to divide the vertices of a graph into a number of equal size
components, so as to minimize the number of edges that cross components. It is a classical and
well-studied problem with origins in parallel scientific computing and VLSI design placement [9].
Hypergraph partitioning is relatively less well-studied than graph partitioning. Unlike a graph, in a
hypergraph, an edge, referred to as a hyperedge, can connect to any number of vertices, as opposed
to just two. The revised goal is to divide the vertices of a hypergraph into a number of equal size
components, while minimizing the number of components hyperedges span.
While graph partitioning is utilized in a variety of applications, hypergraph partitioning can be
a more accurate model of many real-world problems [10,11,14]. For example, it has been successfully
applied for optimizing distributed systems [19,26,29] as well as distributed scientific computation [10,
14]. For another example, hypergraph partitioning accurately models the problem of minimizing the
number of transactions in distributed data placement [11].
Our primary motivation for studying hypergraph partitioning comes from the problem of storage
sharding common in distributed databases. Consider a scenario with a large dataset whose data
records are distributed across several storage servers. A query to the database may consume several
data records. If the data records are located on multiple servers, the query is answered by sending
requests to each server. Hence, the assignment of data records to servers determines the number of
requests needed to process a query; this number is often called the fanout of the query. Queries with
(a) Storage sharding. (b) Model with bipartite graph. (c) Model with hypergraph.
Figure 1: The storage sharding problem modeled with the (b) bipartite graph and (c) hypergraph
partitioning. Given three queries (red), {1, 2, 6}, {1, 2, 3, 4}, {4, 5, 6}, the goal is to split six data
vertices (blue) into two buckets, V1 and V2 , so that the average query fanout is minimized. For
V1 = {1, 2, 3} and V2 = {4, 5, 6}, fanout of the queries is 2, 2, and 1, respectively.
low fanout can be answered more quickly, as there is less chance of contacting a slow server [12, 29].
Thus, a common optimization is to choose an assignment of data records that collocates the data
required by different queries.
Storage sharding is naturally modeled by the hypergraph partitioning problem; see Figure 1. For
convenience, and entirely equivalent to the hypergraph formulation, we follow the notation of [15,19]
and define the problem using a bipartite graph. Let G = (Q∪D, E) be an undirected bipartite graph
with disjoint sets of query vertices, Q, and data vertices, D. The goal is to partition D into k parts,
that is, find a collection of k disjoint subsets V1 , . . . , Vk covering D that minimizes an objective
function. The resulting subsets, also called buckets, should be balanced, that is, |Vi| ≤ (1 + ε)n/k for
all 1 ≤ i ≤ k and some ε ≥ 0, where n = |D|.
This optimization is exactly equivalent to balanced hypergraph partitioning with the set of
vertices D and hyperedges {v1 , . . . , vr }, vi ∈ D for every q ∈ Q with {q, vi } ∈ E. The data vertices
are shared between the bipartite graph and its hypergraph representation, and each query vertex
corresponds to a single hyperedge that spans all data vertices connected to that query vertex in the
bipartite graph. Figures 1b and 1c show this equivalent representation.
For a given partitioning P = {V1 , . . . , Vk } and a query vertex q ∈ Q, we define the fanout of q
as the number of distinct buckets having a data vertex incident to q:
fanout(P, q) = |{Vi : ∃{q, v} ∈ E, v ∈ Vi }|.
The quality of partitioning P is the average query fanout:
fanout(P) = (1/|Q|) ∑_{q∈Q} fanout(P, q).
The fanout minimization problem is, given a graph G, an integer k > 1, and a real number ε ≥ 0,
find a partitioning of G into k subsets with the minimum average fanout.
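As a concrete illustration (a minimal Python sketch, not part of SHP; the adjacency-list layout is an assumption), the average fanout of a partitioning can be computed directly from this definition:

def average_fanout(queries, bucket):
    # queries: dict q -> iterable of adjacent data vertices; bucket: dict v -> bucket index.
    total = sum(len({bucket[v] for v in data}) for data in queries.values())
    return total / len(queries)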
Modulo irrelevant constant factors, the fanout is also called the communication volume [13] and
the (k - 1) cut metric. Arguably this is the most widely used objective for hypergraph partitioning [7,
13].1 Fanout is also closely related to the sum of external degrees, but not identical [22].2
1
Although communication volume seems a more popular term for the measure in hypergraph partitioning community, we utilize fanout here to be consistent with database sharding terminology.
2
The sum of external degrees (SOED) objective is equivalent to unnormalized fanout plus the number of query
vertices with fanout greater than one (that is, SOED is the communication volume plus the hyperedge cut).
Not surprisingly, the hypergraph partitioning problem is NP-hard [6]. Hence, exact algorithms are
only capable of dealing with fairly small problem sizes. Practical approaches all use heuristics. Even
then, a significant problem with the majority of proposed algorithms is scalability. Scalability is
desirable because modern graphs can be massive; for example, the Facebook Social Graph contains
billions of vertices and trillions of edges, consuming many hundred of petabytes of storage space [29].
Scalability can be achieved through distributed computation, where the algorithm is distributed
across many workers that compute a solution in parallel. Earlier works on distributed hypergraph
partitioning have proposed and implemented such algorithms, but as we argue in Section 2 and
experimentally demonstrate in Section 4, none of the existing tools are capable of solving the problem
at large scale. No existing partitioner was able to partition a hypergraph with a billion hyperedges
using four machines, and hypergraphs relevant to storage sharding at Facebook can be two or three
orders of magnitude larger.
With this in mind, we design, implement, and open source a scalable distributed algorithm for
fanout minimization. We denote the resulting implementation Social Hash Partitioner (SHP) because
it can be used as the hypergraph partitioning component of the Social Hash framework introduced
in [29]. The greater framework has several other major components and the specific choice of hypergraph partitioner was only briefly mentioned previously, and this entirely self-contained paper
delves into our partitioner in detail. The contributions of the paper are the following:
• We design an iterative algorithm for hypergraph partitioning aimed at optimizing fanout
through a classical local search heuristic [23] with two substantive modifications. The first
is a novel objective function that generalizes fanout called probabilistic fanout; using this
objective for fanout minimization improves result quality and algorithm convergence. Our
second modification facilitates a distributed implementation of the algorithm.
• We describe SHP and provide a detailed implementation of the algorithm that runs in a parallel
manner, carefully explaining how it limits memory usage, computation, and communication.
SHP relies on the vertex-centric programming model and scales to hypergraphs with billions
of vertices and hyperedges with a reasonable number of machines. In addition to optimizing
fanout, we also show that SHP can optimize other hypergraph objectives at scale. We have
open sourced the simple code for this implementation.
• We present results of extensive experiments on a collection of real-world and synthetic hypergraphs. These results show that SHP is arbitrarily scalable while producing partitions of
comparable quality to existing partitioners. SHP is able to partition hypergraphs 100x larger
than existing distributed partitioners, making it the only solution capable of partitioning
billion-vertex bipartite graphs using a cluster of machines.
We emphasize that the paper describes an algorithmic solution utilized in our Social Hash
framework. We refer the reader to [29] for the details of the framework, additional applications, and
real-world experiments.
2 Related Work
There exists a rich literature on graph partitioning from both theoretical and practical points of
view. We refer the reader to surveys by Bichot and Siarry [8] and by Buluç et al. [9]. Next we discuss
existing approaches for storage sharding that utilize graph partitioning and review theoretical results
on the problem. Then we describe existing libraries for hypergraph partitioning, focusing on the
publicly available ones. Finally, we analyze limitations of the tools, which motivate the development
of our new solution, SHP.
Storage sharding. Data partitioning is a core component of many existing distributed data
management systems [11, 19, 24, 26, 29]. The basic idea is to co-locate related data so as to minimize
communication overhead. In many systems, the data partitioning problem is reduced to a variant
of graph partitioning [11, 19, 24]; a solution is then found using a publicly available library such
as Metis [21]. However, hypergraph partitioning is a better model in several scenarios of storage
sharding [26, 29]. There exists much fewer tools for partitioning of hypergraphs, as the problem
tends to be harder and more computationally expensive than graph partitioning.
Theoretical aspects. The fanout minimization problem is a generalization of balanced k-way
graph partitioning (also called minimum bisection when k = 2), which is a central problem in
design and analysis of approximation algorithms. Andreev and Räcke [6] show that, unless P=NP,
there is no algorithm with a finite approximation factor for the problem when one requires a perfect
balance, that is, ε = 0. Hence, most works focus on the case ε > 0. Leighton et al. [27] and Simon
and Teng [30] achieved an O(log k log n) approximation algorithm for ε = 1, that is, when the maximum size of the resulting buckets is 2n/k. The bound has been improved to O(√(log k log n)) for ε = 1 by Krauthgamer et al. [25] and to O(log n) for ε > 0 by Feldmann and Foschini [17].
We stress that being of high theoretical importance, the above approximation algorithms are
too slow to be used for large graphs, as they require solving linear programs. Hence, most of the
existing methods for graph and hypergraph partitioning are heuristics based on a simple local search
optimization [9]. We follow the same direction and utilize a heuristic that can be implemented in
an efficient way. It is unlikely that one can provide strong theoretical guarantees on a local search
algorithm or improve existing bounds: Since fanout minimization is a generalization of minimum
bisection, it would imply a breakthrough result.
Existing tools for hypergraph partitioning. As a generalization of graph partitioning, hypergraph partitioning is a more complicated topic and the corresponding algorithms are typically
more compute and memory intensive. Here we focus on existing solutions designed for hypergraph
partitioning. PaToH [10], hMetis [20], and Mondriaan [32] are software packages providing single-machine algorithms. The tools implement different variants of local refinement algorithms, such as
Kernighan-Lin [23] or Fiduccia-Mattheyses [18], that incrementally swap vertices among partitions
to reduce an optimization objective, until the process reaches a local minimum. Since such local
search algorithms can suffer from getting stuck in local minima, a multi-level paradigm is often used.
The idea is to create a sequence of “coarsened” hypergraphs that approximates the original hypergraph but have a smaller size. Then the refinement heuristic is applied on the smaller hypergraph,
and the process is reverted by an uncoarsening procedure. Note that the above software packages
all require random-access to the hypergraph located in memory; as a result these packages can only
handle smaller hypergraphs.
Parkway [31] and Zoltan [14] are distributed hypergraph partitioners that are based on a parallel
version of the multi-level technique. Unfortunately, as we argue below and show in our experiments,
the provided implementations do not scale well. We also mention a number of more recent tools for
hypergraph partitioning. UMPa [13] is a serial partitioner with a novel refinement heuristic. It aims
at minimizing several objective functions simultaneously. HyperSwap [33] is a distributed algorithm
that partitions hyperedges, rather than vertices. rFM [28], allows replication of vertices in addition
to vertex moves. We do not include these algorithms in our experimental study as they are not
open-source and they are rather complex to be re-implemented in a fair way.
Limitations of existing solutions. Parkway and Zoltan are two hypergraph partitioners that
are designed to work in a distributed environment. Both of the tools implement a multi-level
coarse/refine technique [20]. We analyzed the algorithms and identified the following scalability
limitations.
• First, multi-level schemes rely on an intermediate step in which the coarsest graph is partitioned on a single machine before it gets uncoarsened. While the approach is applicable for
graph partitioning (when the coarsest graph is typically fairly small), it does not always work
for hypergraphs. For large instances, the number of distinct hyperedges can be substantial,
and even the coarsest hypergraph might not fit the memory of a single machine.
• Second, the refinement phase itself is often equivalent to the local refinement scheme presented
in our work, which if not done carefully can lead to scalability issues. For example, Parkway is
using a single coordinator to approve vertex swaps while retaining balance. This coordinator
holds the concrete lists of vertices and their desired movements, which leads to yet another
single machine bottleneck.
• Third, the amount of communication messages between different machines/processors is an
important aspect for a distributed algorithm. Neither Zoltan nor Parkway provide strong
guarantees on communication complexity. For example, the authors of Zoltan present their
evaluation for mesh-like graphs (commonly used in scientific computing) and report relatively
low communication overhead. Their results might not hold for general non-planar graphs.
In contrast, our algorithm is designed (as explained in the next section) to avoid these single
machine bottlenecks and communication overheads.
3 Social Hash Partitioner
Our algorithm for the fanout minimization problem, as mentioned in Section 1, assumes that the
input is represented as a bipartite graph G = (Q ∪ D, E) with vertices representing queries and data
objects, and edges representing which data objects are needed to answer the queries. The input
also specifies the number of servers that are available to serve the queries, k > 1, and the allowed
imbalance, ε > 0.
3.1 Algorithm
For ease of presentation, we start with a high-level overview of our algorithm. The basic idea is
inspired by the Kernighan-Lin heuristic [23] for graph partitioning; see Algorithm 1. The algorithm
begins with an initial random partitioning of data vertices into k buckets. For every vertex, we
independently pick a random bucket, which for large graphs guarantees an initial perfect balance.
Algorithm 1: Fanout Optimization
Input : graph G = (Q ∪ D, E), the number of buckets k, imbalance ratio ε
Output: buckets V1, V2, . . . , Vk
for v ∈ D do
    bucket[v] ← random(1, k);                      /* initial partitioning */
repeat
    for v ∈ D do                                   /* local refinement */
        for i = 1 to k do
            gain_i[v] ← ComputeMoveGain(v, i);
        target[v] ← arg max_i gain_i[v];           /* find best bucket */
        if gain_target[v][v] > 0 then              /* update matrix */
            S[bucket[v], target[v]] ← S[bucket[v], target[v]] + 1;
    for i, j = 1 to k do                           /* compute move probabilities */
        probability[i, j] ← min(S[i, j], S[j, i]) / S[i, j];
    for v ∈ D do                                   /* change buckets */
        if gain_target[v][v] > 0 and random(0, 1) < probability[bucket[v], target[v]] then
            bucket[v] ← target[v];
until converged or iteration limit exceeded;
The algorithm then proceeds in multiple steps by performing vertex swaps between the buckets in
order to improve an objective function. The process is repeated until a convergence criterion is met
(e.g., no swapped vertices) or the maximum number of iterations is reached.
Although the algorithm is similar to the classical one [23], we introduce two critical modifications.
The first concerns the objective function and is intended to improve the quality of the final result.
The second is related to how we choose and swap vertices between buckets; this modification is
needed to make distributed implementations efficient.
Optimization objective. Empirically, we found that fanout is rather hard to minimize with a
local search heuristic. Such a heuristic can easily get stuck in a local minimum for fanout minimization. Figure 2 illustrates this with an example which lacks a single move of a data vertex that
improves fanout. All move gains are non-positive, and the local search algorithm stops in the nonoptimal state. To alleviate this problem, we propose a modified optimization objective and assume
that a query q ∈ Q only requires an adjacent data vertex v ∈ D, {q, v} ∈ E for some probability
p ∈ (0, 1) fixed for all queries. This leads to so-called probabilistic fanout. The probabilistic fanout
of a given query q, denoted by p-fanout(q), is the expected number of servers that need to be contacted to answer the query given that each adjacent server needs to be contacted with independent
Figure 2: An example in which no single move/swap of data vertices improves query fanout. Probabilistic fanout (for every 0 < p < 1) can be improved by exchanging buckets of vertices 4 and 5 or
by exchanging buckets of vertices 3 and 6. Applying both of the swaps reduces (non-probabilistic)
fanout of q1 and q3 , which yields an optimal solution.
probability p.
Formally, let P = {V1 , . . . , Vk } be a given partitioning of D, and let the number of data vertices
in bucket Vi adjacent to query vertex q be ni (q) = |{v : v ∈ Vi and {q, v} ∈ E}|. Then
server i is expected to be queried with probability 1 − (1 − p)^{ni(q)}. Thus, the p-fanout of q is ∑_{i=1}^{k} (1 − (1 − p)^{ni(q)}), and our probabilistic fanout objective, denoted p-fanout, for Algorithm 1 is, given p ∈ (0, 1), minimize
(1/|Q|) ∑_{q∈Q} p-fanout(q) = (1/|Q|) ∑_{q∈Q} ∑_{i=1}^{k} (1 − (1 − p)^{ni(q)}).
This revised objective is a smoother version of the (non-probabilistic) fanout. It is simple to
observe that p-fanout(q) is less than or equal to fanout(q) for all q ∈ Q. If the number of data
adjacencies of q in bucket i is large enough, that is, ni(q) ≫ 1, then the bucket contributes to the
objective a value close to 1. If ni (q) = 1, then the contribution is p, which is smaller for p < 1. In
contrast, the non-probabilistic fanout contribution is simply 1, the same for all cases with ni (q) ≥ 1.
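A direct computation of the new objective, again as an illustrative Python sketch with an assumed adjacency-list layout, mirrors the formula above:

from collections import Counter

def average_p_fanout(queries, bucket, p):
    # queries: dict q -> iterable of adjacent data vertices; bucket: dict v -> bucket index.
    total = 0.0
    for data in queries.values():
        n = Counter(bucket[v] for v in data)                   # n_i(q) for each bucket i
        total += sum(1.0 - (1.0 - p) ** ni for ni in n.values())
    return total / len(queries)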
From a theoretical perspective, the way probabilistic fanout smooths fanout is by averaging the
fanout objective over an ensemble of random graphs similar to the bipartite graph being partitioned.
A bipartite graph from this random graph ensemble is created by independently removing edges
in the original bipartite graph with probability 1 − p (equivalently, keeping each edge with probability p). Then the probabilistic fanout is precisely the
expectation of fanout across this random graph ensemble. In essence, p-fanout minimization is forced
to select a partition that performs robustly across a collection of similar hypergraphs, reducing the
impact of local minima. With the new objective the state in Figure 2 is no longer a local minimum,
as data vertices 1 and 2 could be swapped to improve the p-fanout of both q1 and q3 .
An interesting aspect of p-fanout is how it behaves in extreme cases. As we show next, when
p → 1, p-fanout becomes the (non-probabilistic) fanout. When p → 0, the new measure is equivalent
to optimizing a weighted edge-cut, an objective suggested in prior literature [4, 5, 10]. In practice
it means that p-fanout is a generalization of these measures and Algorithm 1 can be utilized to
optimize either by setting small or large values of p.
Lemma 1. Minimizing p-fanout in the limit as p → 1 is equivalent to minimizing fanout.
Proof. We write p-fanout(q) for a query q ∈ Q as
∑_{i=1}^{k} (1 − (1 − p)^{ni(q)}) = ∑_{i=1}^{k} (1 − e^{ni(q) log(1−p)}).
Now as p → 1, log(1 − p) goes to negative infinity and the exponential term is zero unless ni (q) = 0
in which case it equals one. Let δ(x) = 1 if x is true and δ(x) = 0, otherwise. In the limit, the above
expression equals
∑_{i=1}^{k} (1 − δ(ni(q) = 0)) = ∑_{i=1}^{k} δ(ni(q) > 0),
which is fanout of q.
Next we show that optimizing p-fanout as p → 0 is equivalent to a graph partitioning problem
on an edge-weighted graph constructed from data vertices. For a pair of data vertices, u, v ∈ D, let
w(u, v) be the number of common queries shared by these data vertices, that is, w(u, v) = |{q ∈ Q :
{q, u} ∈ E and {q, v} ∈ E}|. Consider a (complete) graph with vertex set D and let w(u, v) be the
weight of an edge between u, v ∈ D. For a given graph and a partition of its vertices, an edge-cut is
the sum of edge weights between vertices in different buckets.
Lemma 2. Minimizing p-fanout in the limit as p → 0 is equivalent to graph partitioning amongst
the data vertices while minimizing weighted edge-cut, where the edge weight between u ∈ D and
v ∈ D is given by w(u, v).
Proof. We begin from the definition of p-fanout and consider the Taylor expansion around p = 0:
p-fanout = ∑_{q∈Q} ∑_{i=1}^{k} (1 − (1 − p)^{ni(q)})
         = ∑_{q∈Q} ∑_{i=1}^{k} ( ni(q) p − ni(q)(ni(q) − 1) p²/2 + O(p³) )
         = C − (p²/2) ∑_{q∈Q} ∑_{i=1}^{k} ni(q)² + O(p³)
The first term, C = ∑_{q∈Q} ∑_{i=1}^{k} ni(q)(p + p²/2), is a constant proportional to the number of edges
in the graph. Thus, it is irrelevant to the minimization. The last term, O(p3 ), can also be ignored
for optimization when p → 0. We simplify the second term further.
−(p²/2) ∑_{q∈Q} ∑_{i=1}^{k} ni(q)² = −(p²/2) ∑_{i=1}^{k} ∑_{q∈Q} ni(q)²
                                  = −(p²/2) ∑_{i=1}^{k} ∑_{u∈Vi} ∑_{v∈Vi} w(u, v)
Therefore, minimizing probabilistic fanout in the limit as p → 0 is equivalent to maximizing
the sum, taken over all buckets, of edge weights between data vertices within the same bucket, or
maximizing within-bucket edge weights. Alternatively, this is also equivalent to minimizing intrabucket edge weights, that is, minimizing weighted edge-cut between buckets with edge weights given
by w(u, v).
Figure 3: Distributed implementation of the fanout minimization algorithm in the vertex-centric
framework with four supersteps and synchronization barriers between them: (1) collecting query
neighbor data, (2) computing move gains, (3) proposal of target buckets, (4) sending move probabilities and performing moves.
This p → 0 limit is interesting because the resulting optimization is an instance of the clique-net model suggested as a heuristic for hypergraph partitioning [4, 5, 10]. The idea is to convert
the hypergraph partitioning problem to the (simpler) graph partitioning problem. To this end, a
hypergraph is transformed to an edge-weighted unipartite graph, Gc , on the same set of vertices,
by adding a clique amongst all pairs of vertices connected to a hyperedge. Multiple edges between a
vertex pair in the resulting graph are combined by summing their respective weights. The buckets
produced by a graph partitioning algorithm on the new graph are then used as a solution for
hypergraph partitioning.
An obstacle for utilizing the clique-net model is the size of the resulting (unipartite) graph Gc .
If there is a hyperedge connecting Ω(n) vertices, then Gc contains Ω(n2 ) edges, even if the original
hypergraph is sparse. Hence a common strategy is to use some variant of edge sampling to filter
out edges with low weight in Gc [4, 5, 10]. Lemma 2 shows that this is unnecessary: One can apply
Algorithm 1 with a small value of fanout probability, p, for solving the hypergraph partitioning
problem in the clique-net model.
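The clique-net weights w(u, v) can be accumulated in a single pass over the query vertices; the sketch below (ours, not part of SHP) also makes the quadratic blow-up explicit, since every pair of data neighbours of a query contributes an entry:

from collections import defaultdict
from itertools import combinations

def clique_net_weights(queries):
    # queries: dict q -> iterable of adjacent data vertices.
    # Returns w[(u, v)] = number of queries adjacent to both u and v.
    w = defaultdict(int)
    for data in queries.values():
        for u, v in combinations(sorted(data), 2):
            w[(u, v)] += 1
    return w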
Performing swaps. In order to iteratively swap vertices between buckets, we compute move
gains for the vertices, that is, the difference of the objective function after moving a vertex from
its current bucket to another one. For every vertex v ∈ D, we compute k values, referred to as
gaini [v], indicating the move gains to every bucket 1 ≤ i ≤ k. Then every vertex chooses a target
bucket that corresponds to the highest move gain. (For minimization, we select the bucket with
the lowest move gain, or equivalently the highest negative move gain.) This information is used to
calculate the number of vertices in bucket i that chose target bucket j, denoted Si,j , for all pairs
of buckets 1 ≤ i, j ≤ k. Ideally, we would move all these vertices from i to j to maximally improve
the objective. However, to preserve balance across buckets, we exchange only min(Si,j , Sj,i ) pairs of
vertices between buckets i and j.
In a distributed environment, we cannot pair vertices together for swaps easily, and so we
instead perform the swaps in an approximate manner. Our implementation defines the probability for each vertex in bucket i with target bucket j to be moved as min(Si,j, Sj,i)/Si,j. All vertices are then
simultaneously moved to their target buckets respecting the computed probabilities, for all pairs of
buckets. With this choice of probabilities, the expected number of vertices moved from i to j and
from j to i is the same; that is, the balance constraint is preserved in expectation. After that swap,
we recompute move gains for all data vertices and proceed with the next iteration.
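The following Python sketch (ours; move gains are taken as positive for improving moves) renders this approximate swap step compactly:

import random
from collections import defaultdict

def probabilistic_swap(bucket, target, gain):
    # Count proposals per ordered bucket pair, then move each proposer with
    # probability min(S_ij, S_ji) / S_ij so that the expected flows cancel.
    S = defaultdict(int)
    movers = [v for v in bucket if gain[v] > 0 and target[v] != bucket[v]]
    for v in movers:
        S[(bucket[v], target[v])] += 1
    for v in movers:
        i, j = bucket[v], target[v]
        if random.random() < min(S[(i, j)], S[(j, i)]) / S[(i, j)]:
            bucket[v] = j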
We compute the move gains as follows. Assume that a vertex v ∈ D is moved from bucket i to
bucket j. Our objective function, p-fanout, might change only for queries adjacent to v and for the
terms corresponding to Vi and Vj . Let N (v) ⊆ Q be the subset of queries adjacent to v. The move
gain is then
gainj(v) = ∑_{q∈N(v)} [ (1 − (1 − p)^{ni(q)−1}) + (1 − (1 − p)^{nj(q)+1}) ] − ∑_{q∈N(v)} [ (1 − (1 − p)^{ni(q)}) + (1 − (1 − p)^{nj(q)}) ]
         = ∑_{q∈N(v)} [ −(1 − p)^{ni(q)−1} + (1 − p)^{ni(q)} − (1 − p)^{nj(q)+1} + (1 − p)^{nj(q)} ]          (1)
         = ∑_{q∈N(v)} [ (1 − p)^{ni(q)−1} · ((1 − p) − 1) + (1 − p)^{nj(q)} · (1 − (1 − p)) ]
         = p · ∑_{q∈N(v)} [ (1 − p)^{nj(q)} − (1 − p)^{ni(q)−1} ].
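For illustration, the per-vertex gain of Eq. (1) is cheap to evaluate once the neighbor data ni(q) is available. In the sketch below (ours) the sign is flipped so that a positive value corresponds to a move that decreases p-fanout, matching the convention of moving only on positive gain in Algorithm 1; the data layout is an assumption.

def compute_move_gain(v, j, bucket, neighbors, n, p):
    # neighbors[v]: queries adjacent to v; n[q]: collections.Counter with n[q][i] the
    # number of data neighbors of q in bucket i (missing buckets count as zero).
    i = bucket[v]
    gain = 0.0
    for q in neighbors[v]:
        gain += (1.0 - p) ** (n[q][i] - 1) - (1.0 - p) ** n[q][j]
    return p * gain    # positive when moving v from bucket i to bucket j reduces p-fanout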
Next we provide details of the implementation.
3.2 Implementation
Our implementation relies on the vertex-centric programming model and runs in the Giraph framework [1]. In Giraph, the input graph is stored as a collection of vertices that maintain some local
data (e.g., a list of adjacent vertices). The vertices are distributed to multiple machines in a cluster and communicate with each other via sending messages. A computation in Giraph is split into
supersteps that each consist of the following processing steps: (i) a vertex executes a user-defined
function based on local vertex data and on received messages, (ii) the resulting output is sent along
outgoing edges. Note that since vertices operate only with local data, such processing can easily
be executed in parallel and in a distributed environment. Supersteps end with a synchronization
barrier, which guarantees that messages sent in a given superstep are received at the beginning of
the next superstep. The whole computation is executed iteratively for a certain number of rounds,
or until a convergence property is met.
Algorithm 1 is implemented in the vertex-centric model in the following way; see Figure 3. The
first two supersteps compute move gains for all data vertices. As can be seen from Equation 1, a
move gain of v ∈ D depends on the state of adjacent query vertices. Specifically, we need to know the
values ni (q) for every q ∈ Q and all buckets 1 ≤ i ≤ k; we call this information the neighbor data of
query q. The first superstep is used to collect the neighbor data; to this end, every data vertex sends
its current bucket to the adjacent queries, which aggregate the received messages into the neighbor
data. On the second superstep, the query vertices send their neighbor data back to adjacent data
vertices, which use this information to compute their move gains according to Equation 1.
Once data vertices have computed move gains, we choose their target buckets and the next
superstep aggregates this information in matrix S, that stores the number of candidate data vertices
moving between pairs of buckets. The matrix is collected on a dedicated machine, called master,
which computes move probabilities for the vertices. On the last superstep, the probabilities are
propagated from master to all data vertices and the corresponding moves take effect. This sequence
of four supersteps continues until convergence.
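For intuition only, the same four-superstep structure can be mimicked in a few lines of single-process Python; this sketch reuses the compute_move_gain and probabilistic_swap sketches above and is not the distributed Giraph implementation.

from collections import Counter

def refinement_iteration(queries, neighbors, bucket, k, p):
    # Superstep 1: data vertices report their buckets; queries build neighbor data n_i(q).
    n = {q: Counter(bucket[v] for v in data) for q, data in queries.items()}
    # Superstep 2: queries send neighbor data back; each data vertex picks its best target.
    gain, target = {}, {}
    for v in neighbors:
        options = {j: compute_move_gain(v, j, bucket, neighbors, n, p)
                   for j in range(k) if j != bucket[v]}
        target[v] = max(options, key=options.get)
        gain[v] = options[target[v]]
    # Supersteps 3-4: the master aggregates proposals into S and broadcasts move probabilities.
    probabilistic_swap(bucket, target, gain)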
3.3 Complexity
Our primary consideration in designing the algorithm is to keep the implementation scalable to
large instances. To this end, we limit space, computational, and communication complexity such
that each is bounded by O(k|E|), where k is the number of buckets.
Space complexity. We ensure every vertex v of G consumes only O(|N (v)| + k) memory, where
|N (v)| is the number of neighbors of vertex v, so the total memory consumption is O(|E| + k|V |).
Every vertex keeps its adjacency list, which is within the claimed limits. Additionally, every query
vertex q ∈ Q stores its neighbor data containing fanout(q) = O(|N(q)|) entries. Every data vertex
v ∈ D stores move gains from its current bucket to all other buckets, which includes up to O(k)
values per vertex. The master machine stores the information of move proposals for every pair of
buckets, that is, its memory consumption is O(k 2 ) = O(k|V |), which is again within the claimed
memory bound. Notice that since Giraph distributes vertices among machines in a Giraph cluster
randomly, the scheme does not have a single memory bottleneck. All the machines are equally
loaded, and in order to have enough memory for large graphs at a fixed number of buckets k, it is
sufficient to increase the cluster size, while keeping the implementation unchanged.
Computational complexity. Our algorithm is within a bound of O(k|E|), assuming a constant
number of refinement iterations. The computationally resource intensive steps are calculating the
query neighbor data, which is performed in O(|E|) time, and processing this information by data
vertices. The latter step is bounded by O(k|N (v)|) steps for every v ∈ D, as this is the amount
of information being sent to v in superstep 2. Finally, computing and processing matrix S requires
O(|V | + k 2 ) = O(k|V |) time.
Communication complexity. Another important aspect of a distributed algorithm is its communication complexity, that is, the amount of messages sent between machines during its execution.
The supersteps 1, 3, and 4 are relatively “lightweight” and require only |E|, |V |, and |V | messages
of constant size, respectively. The “heavy” one is superstep 2 in which every query vertex sends
its neighbor data to all its neighbors. We can upper bound the amount of sent information by
k|E|, as the neighbor data for every q ∈ Q contains k entries. In practice, however, this is bounded by ∑_{q∈Q} fanout(q) · |N(q)|, as the zero entries of neighbor data (having ni(q) = 0) need not be sent as messages. Hence, a reasonable estimate of the amount of sent messages is fanout · |E| per iteration of Algorithm 1, where fanout is the average fanout of queries on the current iteration.
We stress here that Giraph has several built-in optimizations that can further reduce the amount
of sent and received messages. For example, if exactly the same message is sent between a pair of
machines several times (which might happen, for example, on superstep 2), then the messages are
combined into a single one. Similarly, if a message is sent between two vertices residing on the same
machine, then the message can be replaced with a read from the local memory of the machine.
Another straightforward optimization is to maintain some state of the vertices and only recompute the state between iterations of the algorithm when necessary. A natural candidate is the query
neighbor data, which is recomputed only when an adjacent data vertex is moved; if the data vertex
stays in the same bucket on an iteration, then it does not send messages on superstep 1 for the next
iteration. Similarly, a data vertex v ∈ D may hold some state and recompute move gains only in
the case when necessary, that is, when another vertex u ∈ D with {q, v} ∈ E and {q, u} ∈ E for
some q ∈ Q changes its bucket.
Recursive partitioning. The discussion above suggests that Algorithm 1 is practical for small
values of k, e.g., when k = O(1). In this case the complexity of the algorithm is linear in the size
of the input hypergraph. However in some applications, substantially more buckets are needed. In
the extreme with k = Ω(|V |), the implementation requires quadratic time and memory, and the
run-time for large instances could be unreasonable even on a large Giraph cluster.
A common solution to this problem in the hypergraph partitioning literature is to observe that
the partitioning can be constructed recursively. In recursive partitioning, the algorithm splits data
vertices into r > 1 parts V1 , . . . , Vr . This splitting algorithm is recursively applied to all the graphs
induced by vertices Q ∪ V1 , . . . , Q ∪ Vr independently. The process continues until we achieve k
buckets, which requires ⌈log_r k⌉ levels of recursion. A typical strategy is to employ r = 2; that is,
recursive bisection [15, 30].
Notice that Algorithm 1 can be utilized for recursive partitioning with just a single modification.
At every recursion step, data vertices are constrained as to which buckets they are allowed to be
moved to. For example, at the first level of recursive bisection, all data vertices are split into V1
and V2 . At the second level, the vertices are split into four buckets V3 , V4 , V5 , V6 so that the vertices
v ∈ V1 are allowed to move between buckets V3 and V4 , and the vertices v ∈ V2 are allowed to move
between V5 and V6 . In general, the vertices of Vi for 1 ≤ i ≤ k/2 are split into V2i+1 and V2i+2 .
An immediate implication of the constraint is that every data vertex only needs to compute r
move gains on each iteration. Similarly, a query vertex needs to send only a subset of its neighbor
data to data vertices that contains at most r values. Therefore, the memory requirement as well as
the computational complexity of recursive partitioning is O(r|E|) per iteration, while the amount
of messages sent on each iteration does not exceed O(r|E|), which is a significant reduction over
direct (non-recursive) partitioning when r ≪ k. This improvement sometimes comes with the price
of reduced quality; see Section 4 for a discussion. Accompanying the paper, we open-source a version
that performs recursive bisection, as it is the most scalable.
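A driver for this scheme is straightforward; the sketch below assumes that k is a power of two and that split_in_two runs one constrained pass of Algorithm 1 with r = 2 on the sub-hypergraph induced by the given vertices (both the helper and the assumption are ours):

def recursive_bisection(data_vertices, split_in_two, k):
    buckets = [list(data_vertices)]
    while len(buckets) < k:
        next_level = []
        for part in buckets:
            left, right = split_in_two(part)   # one level of constrained bisection
            next_level.extend([left, right])
        buckets = next_level
    return buckets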
3.4 Advanced implementation
While the basic algorithm described above performs well, this subsection describes additional improvements that we have included in our implementation, motivated by our practical experience.
First, randomly selecting vertices to swap between a pair of buckets i and j may not select those
with the highest move gains to swap. In the ideal serial implementation, we would have two queues
of gains, one corresponding to vertices in bucket i that want to move to j, and the other for vertices
in bucket j that want to move to i, sorted by move gain from highest to lowest. We would then pair
vertices off for swapping from highest to lowest.
This is difficult to implement exactly in a distributed environment. Instead of maintaining two
queues for each pair of buckets, we maintain two histograms that contain the number of vertices
with move gains in exponentially sized bins. We then match bins in the two histograms for maximal
swapping with probability one, and then probabilistically pair the remaining vertices in the final
matched bins. In superstep 4, the master distributes move probabilities for each bin, most of which
are either one or zero. This change allows our implementation to focus on moving the most important
gains first. A further benefit is that we can allow non-positive move gains for target buckets. A pair
of positive and negative histogram bins can swap if the sum of the gains is expected to be positive,
which frees up additional movement in the local search.
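One plausible reading of this bin-matching step, for a single ordered pair of buckets, is sketched below: the number of swaps is capped by the smaller side, the highest-gain bins are served first, and only the bin where the quota runs out receives a fractional probability. This is our interpretation rather than the exact production logic.

def bin_move_probabilities(hist_ij, hist_ji):
    # hist_ij[b]: number of candidates in bucket i targeting j whose gain falls in bin b
    # (larger b = larger gain).  Returns per-bin move probabilities for both directions.
    quota = min(sum(hist_ij.values()), sum(hist_ji.values()))

    def fill(hist, budget):
        probs = {}
        for b in sorted(hist, reverse=True):      # serve the highest-gain bins first
            take = min(hist[b], budget)
            probs[b] = take / hist[b] if hist[b] else 0.0
            budget -= take
        return probs

    return fill(hist_ij, quota), fill(hist_ji, quota)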
Additionally, we utilize the imbalance allowed by ε to consider imbalanced swaps. For recursive
partitioning, we typically do not want to allow ε imbalance for the early recursive splits since that
will substantially constrain movement at later steps of the recursion. Instead, using ε multiplied
by the ratio of the current number of recursive splits to the final number of recursive splits works
reasonably well in practice.
Finally, when performing recursive partitioning, instead of optimizing p-fanout for the current
buckets, we approximately optimize for the final p-fanout after all splits. Consider a recursion step
where an existing bucket will be eventually split into t buckets. If a query has r neighbors in a
particular bucket, this query-bucket pair gives a contribution to the current p-fanout of 1 − (1 − p)r .
We can (pessimistically) approximate the contribution of this pair to the final p-fanout by assuming
each of these r neighbors has a probability of 1/t to end up in any of the t final buckets. Under this
assumption, the contribution of this query to p-fanout from a single final bucket is (1 − (1 − p/t)r ),
and summed across each of the t buckets is t × (1 − (1 − p/t)^r).
4 Experiments
Here we describe our experiments that are designed to answer the following questions:
• What is the impact of reducing fanout on query processing in a sharded database (Section 4.2.1)?
• How well does our algorithm perform in terms of quality and scalability compared to existing
hypergraph partitioners (Sections 4.2.2 and 4.2.3)?
• How do various parameters of our algorithm contribute to its performance and final result
(Section 4.2.4)?
4.1 Datasets
We use a collection of hypergraphs derived from large social networks and web graphs; see Table 1.
We transform the input hypergraph into a bipartite graph, G = (Q∪D, E), as described in Section 1.
In addition, we use five large synthetically generated graphs that have similar characteristics as
the Facebook friendship graph [16]. These generated graphs are a natural source of hypergraphs;
in our storage sharding application, to render a profile-page of a Facebook user, one might want to
fetch information about a user’s friends. Hence, every user of a social network serves both as query
and as data in a bipartite graph.
In all experiments, isolated queries and queries of degree one (single-vertex hyperedges) are
removed, since they do not contribute to the objective, having fanout equal to one in every partition.
4.2 Evaluation
We evaluate two versions of our algorithm, direct partitioning into k buckets (SHP-k) and recursive
bisection with log2 k levels (SHP-2). The algorithms are implemented in Java and SHP-2 is available
at [2]. Both versions can be run in a single-machine environment using one or several threads running
in parallel, or in a distributed environment using a Giraph cluster. Throughout the section, we use
imbalance ratio ε = 0.05.
hypergraph      source    |Q|           |D|           |E|
email-Enron     [3]       25,481        36,692        356,451
soc-Epinions    [3]       31,149        75,879        479,645
web-Stanford    [3]       253,097       281,903       2,283,863
web-BerkStan    [3]       609,527       685,230       7,529,636
soc-Pokec       [3]       1,277,002     1,632,803     30,466,873
soc-LJ          [3]       3,392,317     4,847,571     68,077,638
FB-10M          [16]      32,296        32,770        10,099,740
FB-50M          [16]      152,263       154,551       49,998,426
FB-2B           [16]      6,063,442     6,153,846     2 × 10^9
FB-5B           [16]      15,150,402    15,376,099    5 × 10^9
FB-10B          [16]      30,302,615    40,361,708    10 × 10^9
Table 1: Properties of hypergraphs used in our experiments.
4.2.1 Storage Sharding
Here we argue and experimentally demonstrate that fanout is a suitable objective function for our
primary application, storage sharding. We refer the reader to the description of the Social Hash
framework [29] for more detailed evaluation of the system and other applications of SHP.
In many applications, queries issue requests to multiple storage servers, and they do so in parallel.
As such, the latency of a multi-get query is determined by the slowest request. By reducing fanout,
the probability of encountering a request that is unexpectedly slower than the others is reduced, thus
reducing the latency of the query. This is the fundamental argument for using fanout as the objective
function for the assignment problem in the context of storage sharding. We ran a simple experiment
to confirm our understanding of the relationship between fanout and latency. We issued trivial
remote requests and measured (i) the latency of a single request (fanout = 1) and (ii) the latency
of several requests sent in parallel (fanout > 1) (that is, the maximum over the latencies of single
requests). Figure 4a shows the results of this experiment and illustrates the dependency between
various percentiles of multi-get query latency and fanout of the query. The observed latencies match
our expectations and indicate that reducing fanout is important for database sharding; for example,
one can almost halve the average latency by reducing fanout from 40 to 10.
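As a rough illustration of why the slowest of several parallel requests dominates, one can simulate a multi-get query as the maximum of independent single-request latencies. The lognormal latency model below is an assumption made purely for illustration, not the distribution measured in our experiments:

    import random

    def simulate_multiget_latency(fanout, trials=10000):
        """Average latency of a multi-get query issued to `fanout` servers in
        parallel, where each single request has a heavy-tailed latency."""
        total = 0.0
        for _ in range(trials):
            total += max(random.lognormvariate(0.0, 1.0) for _ in range(fanout))
        return total / trials

    # The average grows with fanout: the more servers a query touches, the more
    # likely it is to hit one unexpectedly slow request.
    for f in (1, 10, 40):
        print(f, round(simulate_multiget_latency(f), 2))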
There are several possible caveats to our analysis of the relationship between fanout and latency
in the simplistic experiment. For example, reducing fanout generally increases the size of the largest
request to a server, which could increase latency. With this in mind, we conduct a more realistic
experiment with 40 servers storing a subset of the Facebook friendship graph. For the experiment,
the data is stored in a memory-based, key-value store, and there is one data record per user. In
order to shard the data, we minimize fanout using our SHP algorithm applied to the graph. We
sampled a live traffic pattern and issued the same set of queries, while measuring the fanout and latency
of each query. The dependency between fanout and latency is shown in Figure 4b. Here the queries
needed to issue requests to only 9.9 servers on average. Notice that we do not include measurements
for fanout > 35, as there are very few such queries; this also explains several “bumps” in latency for
queries with large fanout. The results demonstrate that decreasing fanout from 40 (corresponding
to a “random” sharding) to 10 (“social” sharding) yields a 2x lower average latency for the queries,
which agrees with the results of the earlier experiment.
Figure 4: Distribution of latency for multi-get queries with various fanout, where t is the average
latency of a single call: (a) synthetic queries, (b) real-world queries. Each panel shows the 50th, 90th,
95th, and 99th latency percentiles as a function of fanout (from 5 to 40).
Finally, we mention that after we deployed storage sharding optimized with SHP to one of the
graph databases at Facebook, containing thousands of storage servers, we found that measured
latencies of queries decreased by over 50% on average, and CPU utilization also decreased by over
50%; see [29] for more details.
4.2.2 Quality
Next we compare the quality of our algorithm as measured by the optimized fanout with existing
hypergraph partitioners. We identified the following tools for hypergraph partitioning whose implementations are publicly available: hMetis [20], Mondriaan [32], Parkway [31], PaToH [10], and
Zoltan [14]. These are tools that can process hypergraphs and can optimize fanout (or the closely
related sum of external degrees) as the objective function. Unfortunately, one of the best (according to the recent DIMACS Implementation Challenge [7]) single-machine hypergraph packages,
UMPa [13], is not publicly available, and hence we do not include it in the evaluation.
For a fair comparison, we set allowed imbalance ε = 0.05 and used default optimization flags for
all partitioners. We also require that partitions can be computed within 10 hours without errors. We
computed the fanout of partitions produced with these algorithms for various numbers of buckets, k,
and found that hMetis and PaToH generally produced higher fanout than the other partitioners on
our hypergraphs. So for clarity, we focus on results from SHP-2 and SHP-k, along with Mondriaan,
Parkway, and Zoltan.
Table 2 compares the fanout of partitions produced by these algorithms for various bucket counts
against the minimum fanout partition produced across all algorithms. No partitioner is consistently
the best across all hypergraphs and bucket count. However, Zoltan and Mondriaan generally produce
high quality partitions in all circumstances.
SHP-2, SHP-k and to a lesser extent Parkway are more inconsistent. For example, both versions
of SHP have around a 10–30% increase in fanout, depending on the bucket count, over
the minimum fanout on the web hypergraphs. On the other hand, SHP generally performs well on
FB-10M and FB-50M, likely the hypergraphs closest to our storage sharding application.
We also note a trade-off between quality and speed for recursive bisection compared to direct
k-way optimization. The fanout from SHP-2 is typically, but not always, 5–10% larger than that of SHP-k.
Table 2: Fanout optimization of different single-machine partitioners across hypergraphs from Table 1 for k ∈ {2, 8, 32, 128, 512}: (left) relative quality over the lowest computed fanout, and (right) raw values of fanout. The shown results are the ones computed on a single machine within 10 hours without failures.

Because no partitioner consistently provides the lowest fanout, we conclude that using all partitioners and taking the lowest fanout across all partitions is an attractive option if possible. However,
as we shall see in the next section, SHP-2 may be the only option for truly large hypergraphs due
to its superior scalability.
Figure 5: Scalability of SHP-2 in a distributed setting for k ∈ {2, 32, 512, 8192, 131072} across the largest
hypergraphs from Table 1: (a) total time in a cluster with 4 machines as a function of the number of
edges, |E|; (b) run-time and total time in a cluster with 4, 8, and 16 machines on FB-10B. The run-time
is the time of processing a hypergraph in a cluster; the total time is the processing time of a single
machine multiplied by the number of machines.
4.2.3 Scalability
In this section, we evaluate SHP’s performance in a distributed setting using the four largest hypergraphs in Table 1. We use 4 machines for our experiments, each having the same configuration:
Intel(R) Xeon(R) CPU E5-2660 @ 2.20GHz with 144GB RAM.
First, we numerically verify the computational complexity estimates from Section 3.3. Figure 5a
shows SHP-2’s total wall time as a function of the number of hyperedges across the four largest
hypergraphs. Notice that the y-axis is on a log-scale in the figure. The data verifies that SHP-2’s
computational complexity is O(|E| log k), as predicted.
Next we analyze the scalability of our approach with various numbers of worker machines in a cluster.
Figure 5b (left) illustrates the run-time of SHP-2 using 4, 8, and 16 machines; Figure 5b (right)
illustrates the total wall time using the same configuration. While there is a reduction in the run-time, the speedup is not proportional to the number of added machines. We attribute this to the increased
communication between the machines, which limits the achievable speedup of our algorithm.
Now we compare SHP’s performance to the two existing distributed partitioning packages
(Parkway and Zoltan). The comparison with Zoltan is particularly relevant since it provides generally lower fanout in the quality comparison.
Table 3 shows the run-time of these partitioners in minutes across the hypergraphs and various bucket counts. If a result was not computed within 10 hours without errors, we display the
maximum value. Parkway only successfully ran on one of these graphs within the time
allotted, because it runs out of memory on the other hypergraphs in the 4-machine setting. Similarly,
Zoltan also failed to partition hypergraphs larger than soc-LJ. On the other hand, SHP-k ran on
FB-10B for 32 buckets, and only SHP-2 was able to successfully run on all tests.
Further, note that the time axis in Table 3 (left) is on a log-scale. So not only can SHP run on larger
graphs with more buckets than Zoltan and Parkway on these 4 machines, its run-time is generally
substantially faster. SHP-2 finished partitioning every hypergraph in less than 5 hours, and for the
hypergraphs on which SHP-k succeeded, it ran less than 8 hours.
While not observable in Table 3, Zoltan’s run-time was largely independent of the bucket count,
such that for 8192 buckets on FB-50M it was faster than SHP-k. This is a relatively rare case, and
typically SHP-k, despite having a run-time that scales linearly with bucket count, was faster in our
experiments.

Table 3: Run-time in minutes of distributed hypergraph partitioners across the hypergraphs from Table 1 for k ∈ {32, 512, 8192}: (left) visual representation, and (right) raw values. All tests were run on 4 machines. Partitioners that failed to process an instance, or whose run-time exceeded 10 hours, are shown with the maximum value.

While in all examples Zoltan was much slower than SHP-2, for a division of a small
hypergraph into a very large number of buckets, Zoltan could conceivably be faster, since SHP-2’s
run-time scales logarithmically with bucket count.
4.2.4 Parameters of SHP
There are two parameters affecting Algorithm 1: the fanout probability and the number of refinement
iterations. To investigate the effect of these parameters, we apply SHP-2 for various values of 0 <
p < 1; see Figure 6 illustrating the resulting percentage reduction in (non-probabilistic) fanout on
soc-Pokec. Values between 0.4 ≤ p ≤ 0.8 tend to produce the lowest fanout, with p = 0.5 being
a reasonable default choice for all tested values of bucket count k. The point p = 1 in the figure
corresponds to optimizing fanout directly with SHP-2, and yields worse results than p = 0.5.
One explanation, as mentioned in Section 3.1, is that the local search is more likely to land in
a local minimum with p = 1. This is illustrated in Figure 7 for SHP-k, where the number of moved
vertices per iteration on soc-LJ is significantly lower for p = 1 than for p = 0.5 at earlier iterations.
The number of moved vertices for p = 0.5 is below 0.1% after iteration 35; this number is below
0.01% after iteration 49. This behavior of SHP on soc-Pokec and soc-LJ was observed to be typical
Figure 6: Fanout optimization with SHP-2 on soc-Pokec as a function of fanout probability, p (from 0.0
to 1.0), for different bucket counts k ∈ {2, 8, 32, 128, 512}; the vertical axis shows the resulting fanout
reduction (%).

Figure 7: Progress of fanout optimization with SHP-k for p = 0.5 and p = 1.0 on soc-LJ using k = 8
buckets: (a) average fanout per iteration, (b) percentage of moved vertices per iteration.
across many hypergraphs, and it motivates our default parameters. We set p = 0.5 as the default
for p-fanout, 60 for the maximum number of refinement iterations of SHP-k, and 20 iterations per
bisection for SHP-2.
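A schematic of how these defaults and the observed convergence behavior translate into a stopping rule (the names, the callback, and the 0.1% threshold are illustrative assumptions; the production implementation is the Java/Giraph code referenced above):

    DEFAULT_P = 0.5                 # fanout probability for the p-fanout objective
    MAX_ITERS_SHP_K = 60            # refinement iterations for direct k-way SHP-k
    MAX_ITERS_PER_BISECTION = 20    # refinement iterations per SHP-2 bisection
    MIN_MOVED_FRACTION = 1e-3       # stop early once fewer than 0.1% of vertices move

    def refine(partition, run_iteration, num_vertices, max_iters):
        """Run local-search iterations until the move rate becomes negligible."""
        for _ in range(max_iters):
            moved = run_iteration(partition)  # one refinement pass, returns #moved vertices
            if moved / num_vertices < MIN_MOVED_FRACTION:
                break
        return partition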
Figure 8 quantifies the impact of using probabilistic fanout for SHP-2 across a variety of hypergraphs for 2, 8, and 32 buckets. The left-hand plot displays the substantial percentage increases in
fanout caused by using direct fanout optimization over the p = 0.5 probabilistic fanout optimization. Similar behavior is seen with SHP-k, and these results demonstrate the importance of using
probabilistic fanout for SHP. On average across these hypergraphs, direct fanout optimization would
produce fanout values 45% larger than probabilistic fanout with p = 0.5.
The right-hand plot of Figure 8 compares p-fanout with the clique-net model defined in Section 3.1, where the clique-net objective is optimized using SHP-2. The comparison to p = 0.5 reveals
that clique-net optimization is often worse, but typically similar, depending on the graph and the
number of buckets. In practice, we suggest optimizing for both p-fanout and clique-net, as which
surrogate objective performs better for fanout minimization depends on the graph.
Figure 8: The comparison of different objective functions of SHP-2 for k ∈ {2, 8, 32}, measured as the
fanout increase (%) over the p = 0.5 objective on email-Enron, soc-Epinions, web-BerkStan, web-Stanford,
soc-Pokec, and soc-LJ: (a) 0.5-fanout vs 1-fanout (direct) optimization, (b) 0.5-fanout vs 0-fanout
(clique-net) optimization.
5 Discussion
Storage sharding for production systems has many additional practical challenges [29]. Two requirements that arise from these challenges are (i) incremental updates of an existing partition and
(ii) balance across multiple dimensions.
(i) Incremental updates can be needed to avoid overhead from substantially changing a partition.
Our algorithm simply adapts to incremental updates by initializing with a previous partition
and running a local search. If a limited search moves too many data vertices, we can modify the
move gain calculation to punish movement from the existing partition or artificially lower the
movement probabilities returned by the master in superstep four.
(ii) Basic k-way hypergraph partitioning balances the number of data vertices per bucket. Trivially,
we can consider weighted data vertices, but additionally, a data vertex might have multiple
associated dimensions (e.g., CPU cost, memory, disk resources etc.) that each require balance.
In practice, we have found that requiring strict balance on many dimensions substantially harms
solution quality. Instead, we favor a simple heuristic that produces c · k buckets for some c > 1
that have loose balance requirements on all but one dimension, and merges them into k buckets
to satisfy load balance across all dimensions.
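A greedy sketch of this merge step (the assignment rule and all names are illustrative assumptions, not the production heuristic):

    def merge_buckets(fine_buckets, k):
        """Merge c*k loosely balanced buckets into k buckets while keeping every
        load dimension roughly balanced.

        fine_buckets -- list of per-bucket load vectors (one entry per dimension)
        k            -- target number of merged buckets
        Returns a list of (fine_bucket_index, merged_bucket_index) assignments.
        """
        dims = len(fine_buckets[0])
        merged = [[0.0] * dims for _ in range(k)]
        assignment = []
        # Place the heaviest fine buckets first, always into the merged bucket
        # whose largest dimension load is currently smallest.
        order = sorted(range(len(fine_buckets)), key=lambda i: -max(fine_buckets[i]))
        for i in order:
            target = min(range(k), key=lambda j: max(merged[j]))
            for d in range(dims):
                merged[target][d] += fine_buckets[i][d]
            assignment.append((i, target))
        return assignment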
We stress that the storage sharding problem might have additional requirements that are not
captured by our model. For example, one could introduce some replication by allowing data records
to be present on multiple servers, as is done in [11, 26]. However, that would bring in new operational complications (e.g., synchronization between the servers when a data record is modified),
which are not always feasible in an existing framework. Another potential extension of our model
is to consider a better optimization goal that more accurately captures the relation between query
latency and distribution of data records among servers. We recently observed that the query fanout
is not the only objective affecting its latency; the size of a request to a server also plays a role. For
example, a query with fanout = 2 that needs 100 data records can be answered faster if the two
servers contain an even split of records, 50 and 50, in comparison with a distribution of 99 and 1.
We leave a deeper investigation of this and other potential extensions of the model as an interesting
future direction.
6 Conclusion and Future Work
In this paper, we presented Social Hash Partitioner, SHP, a distributed hypergraph partitioner that
can optimize p-fanout, as well as the clique-net objective among others, through local search, and
scales to far larger hypergraphs than existing hypergraph partitioning packages. Because the careful
design of SHP limits space, computational, and communication complexity, the applicability of our
implementation to a hypergraph is only constrained by the number of Giraph machines available.
We regularly apply SHP to hypergraphs that contain billions of vertices and hundreds of millions
to even billions of hyperedges. These hypergraphs correspond to bipartite graphs with billions of
vertices and trillions of edges.
Despite the improved scalability and simplicity of the algorithm, our experiments demonstrate
that SHP achieves comparable solutions to both single-machine and distributed hypergraph partitioners. We note these results and conclusions might be different for other input hypergraphs (e.g.
matrices from scientific computing, planar networks or meshes, etc.), and in cases where the hypergraph is small enough and solution quality is essential, running all available tools is recommended.
While SHP occasionally produced the best solution in our experiments, other packages, especially
Zoltan and Mondriaan, often claimed the lowest fanout.
Although our motivation for designing SHP is practical, it would be interesting to study fanout
minimization from a theoretical point of view. For example, the classical balanced partitioning
problem on unipartite graphs admits an O(√(log k log n))-approximation for ε = 2 [25], but it is
unclear if a similar bound holds for our problem. Alternatively, it would be interesting to know
whether there is an optimal algorithm for some classes of hypergraphs, or an algorithm that provably
finds a correct solution for certain random hypergraphs (e.g., generated with a planted partition
model). Finally, it would be interesting to understand when and how minimizing p-fanout speeds
up algorithm convergence and improves solution quality over direct fanout minimization.
7 Additional Authors
Yaroslav Akhremtsev (Karlsruhe Institute of Technology) and Alessandro Presta (Google) – work
done while at Facebook.
8 Acknowledgments
We thank Herald Kllapi for contributing to an earlier graph partitioner that evolved into SHP. We
also thank Michael Stumm for providing comments on this work.
References
[1] Apache Giraph. http://giraph.apache.org/.
[2] Social Hash Partitioner. https://issues.apache.org/jira/browse/GIRAPH-1131.
[3] Stanford large network dataset collection. https://snap.stanford.edu/data.
[4] C. J. Alpert, L. W. Hagen, and A. B. Kahng. A hybrid multilevel/genetic approach for circuit
partitioning. In Asia Pacific Conference on Circuits and Systems, pages 298–301. IEEE, 1996.
[5] C. J. Alpert and A. B. Kahng. Recent directions in netlist partitioning: A survey. Integration,
the VLSI journal, 19(1-2):1–81, 1995.
[6] K. Andreev and H. Räcke. Balanced graph partitioning. Theory of Computing Systems,
39(6):929–939, 2006.
[7] D. A. Bader, H. Meyerhenke, P. Sanders, and D. Wagner. Graph partitioning and graph
clustering, 10th DIMACS implementation challenge workshop. Contemporary Mathematics,
588, 2013.
[8] C.-E. Bichot and P. Siarry. Graph partitioning. John Wiley & Sons, 2013.
[9] A. Buluç, H. Meyerhenke, I. Safro, P. Sanders, and C. Schulz. Recent advances in graph
partitioning. In Algorithm Engineering, pages 117–158. Springer, 2016.
[10] Ü. V. Çatalyürek and C. Aykanat. Hypergraph-partitioning-based decomposition for parallel
sparse-matrix vector multiplication. IEEE Transactions on Parallel and Distributed Systems,
10(7):673–693, 1999.
[11] C. Curino, E. Jones, Y. Zhang, and S. Madden. Schism: a workload-driven approach to database
replication and partitioning. VLDB Endowment, 3(1-2):48–57, 2010.
[12] J. Dean and L. A. Barroso. The tail at scale. Communications of the ACM, 56:74–80, 2013.
[13] M. Deveci, K. Kaya, B. Uçar, and Ü. V. Çatalyürek. Hypergraph partitioning for multiple communication cost metrics: Model and methods. Journal of Parallel and Distributed Computing,
77:69–83, 2015.
[14] K. D. Devine, E. G. Boman, R. T. Heaphy, R. H. Bisseling, and Ü. V. Çatalyürek. Parallel hypergraph partitioning for scientific computing. In International Parallel & Distributed
Processing Symposium, pages 10–pp. IEEE, 2006.
[15] L. Dhulipala, I. Kabiljo, B. Karrer, G. Ottaviano, S. Pupyrev, and A. Shalita. Compressing
graphs and indexes with recursive graph bisection. In International Conference on Knowledge
Discovery and Data Mining, pages 1535–1544. ACM, 2016.
[16] S. Edunov, D. Logothetis, C. Wang, A. Ching, and M. Kabiljo. Darwini: Generating realistic
large-scale social graphs. arXiv:1610.00664, 2016.
[17] A. E. Feldmann and L. Foschini. Balanced partitions of trees and applications. Algorithmica,
71(2):354–376, 2015.
[18] C. M. Fiduccia and R. M. Mattheyses. A linear-time heuristic for improving network partitions.
In 19th Conference on Design Automation, pages 175–181. IEEE, 1982.
[19] L. Golab, M. Hadjieleftheriou, H. Karloff, and B. Saha. Distributed data placement to minimize
communication costs via graph partitioning. In International Conference on Scientific and
Statistical Database Management, pages 20:1–20:12. ACM, 2014.
[20] G. Karypis, R. Aggarwal, V. Kumar, and S. Shekhar. Multilevel hypergraph partitioning:
applications in VLSI domain. IEEE Transactions on Very Large Scale Integration Systems,
7(1):69–79, 1999.
[21] G. Karypis and V. Kumar. METIS–unstructured graph partitioning and sparse matrix ordering
system, version 2.0. 1995.
[22] G. Karypis and V. Kumar. Multilevel k-way hypergraph partitioning. VLSI design, 11(3):285–
300, 2000.
[23] B. W. Kernighan and S. Lin. An efficient heuristic procedure for partitioning graphs. Bell
system technical journal, 49(2):291–307, 1970.
[24] T. Kiefer. Allocation Strategies for Data-Oriented Architectures. PhD thesis, Dresden, Technische Universität Dresden, 2016.
[25] R. Krauthgamer, J. S. Naor, and R. Schwartz. Partitioning graphs into balanced components.
In Symposium on Discrete Algorithms, pages 942–949. SIAM, 2009.
[26] K. A. Kumar, A. Quamar, A. Deshpande, and S. Khuller. SWORD: workload-aware data
placement and replica selection for cloud data management systems. The VLDB Journal,
23(6):845–870, 2014.
[27] T. Leighton, F. Makedon, and S. Tragoudas. Approximation algorithms for VLSI partition
problems. In Circuits and Systems, pages 2865–2868. IEEE, 1990.
[28] R. O. Selvitopi, A. Turk, and C. Aykanat. Replicated partitioning for undirected hypergraphs.
Journal of Parallel and Distributed Computing, 72(4):547–563, 2012.
[29] A. Shalita, B. Karrer, I. Kabiljo, A. Sharma, A. Presta, A. Adcock, H. Kllapi, and M. Stumm.
Social Hash: An assignment framework for optimizing distributed systems operations on social
networks. In 13th Usenix Conference on Networked Systems Design and Implementation, pages
455–468, 2016.
[30] H. D. Simon and S.-H. Teng. How good is recursive bisection? Journal on Scientific Computing,
18(5):1436–1445, 1997.
[31] A. Trifunović and W. J. Knottenbelt. Parallel multilevel algorithms for hypergraph partitioning.
Journal of Parallel and Distributed Computing, 68(5):563–581, 2008.
[32] B. Vastenhouw and R. H. Bisseling. A two-dimensional data distribution method for parallel
sparse matrix-vector multiplication. SIAM Review, 47(1):67–95, 2005.
[33] W. Yang, G. Wang, L. Ma, and S. Wu. A distributed algorithm for balanced hypergraph
partitioning. In Advances in Services Computing, pages 477–490. Springer, 2016.
The power of sum-of-squares for detecting hidden structures
arXiv:1710.05017v1 [] 13 Oct 2017
Samuel B. Hopkins∗          Pravesh K. Kothari†          Aaron Potechin
Prasad Raghavendra          Tselil Schramm‡          David Steurer§

October 31, 2017
Abstract
We study planted problems—finding hidden structures in random noisy inputs—through
the lens of the sum-of-squares semidefinite programming hierarchy (SoS). This family of powerful semidefinite programs has recently yielded many new algorithms for planted problems,
often achieving the best known polynomial-time guarantees in terms of accuracy of recovered
solutions and robustness to noise. One theme in recent work is the design of spectral algorithms which match the guarantees of SoS algorithms for planted problems. Classical spectral
algorithms are often unable to accomplish this: the twist in these new spectral algorithms is
the use of spectral structure of matrices whose entries are low-degree polynomials of the input
variables.
We prove that for a wide class of planted problems, including refuting random constraint
satisfaction problems, tensor and sparse PCA, densest-k-subgraph, community detection in
stochastic block models, planted clique, and others, eigenvalues of degree-d matrix polynomials
are as powerful as SoS semidefinite programs of size roughly n^d. For such problems it is
therefore always possible to match the guarantees of SoS without solving a large semidefinite
program.
Using related ideas on SoS algorithms and low-degree matrix polynomials (and inspired
by recent work on SoS and the planted clique problem [BHK+ 16]), we prove new nearly-tight
SoS lower bounds for the tensor and sparse principal component analysis problems. Our
lower bounds are the first to suggest that improving upon the signal-to-noise ratios handled by
existing polynomial-time algorithms for these problems may require subexponential time.
∗ Cornell University, [email protected]. Partially supported by an NSF GRFP under grant no. 1144153, by a
Microsoft Research Graduate Fellowship, and by David Steurer’s NSF CAREER award.
† Princeton University and IAS, [email protected]
‡ UC Berkeley, [email protected]. Supported by an NSF Graduate Research Fellowship (1106400).
§ Cornell University, [email protected]. Supported by a Microsoft Research Fellowship, an Alfred P. Sloan
Fellowship, an NSF CAREER award, and the Simons Collaboration for Algorithms and Geometry.
Contents

1 Introduction
  1.1 SoS and spectral algorithms for robust inference
  1.2 SoS and information-computation gaps
  1.3 Exponential lower bounds for sparse PCA and tensor PCA
  1.4 Related work
  1.5 Organization
2 Distinguishing Problems and Robust Inference
3 Moment-Matching Pseudodistributions
4 Proof of Theorem 2.6
  4.1 Handling Inequalities
5 Applications to Classical Distinguishing Problems
6 Exponential lower bounds for PCA problems
  6.1 Predicting SoS lower bounds from low-degree distinguishers for Tensor PCA
  6.2 Main theorem and proof overview for Tensor PCA
  6.3 Main theorem and proof overview for sparse PCA
References
A Bounding the sum-of-squares proof ideal term
B Lower bounds on the nonzero eigenvalues of some moment matrices
C From Boolean to Gaussian lower bounds
1 Introduction
Recent years have seen a surge of progress in algorithm design via the sum-of-squares (SoS)
semidefinite programming hierarchy. Initiated by the work of [BBH+ 12], who showed that
polynomial time algorithms in the hierarchy solve all known integrality gap instances for
Unique Games and related problems, a steady stream of works has developed efficient algorithms for both worst-case [BKS14, BKS15, BKS17, BGG+ 16] and average-case problems
[HSS15, GM15, BM16, RRS16, BGL16, MSS16a, PS17]. The insights from these works extend
beyond individual algorithms to characterizations of broad classes of algorithmic techniques. In
addition, for a large class of problems (including constraint satisfaction), the family of SoS semidefinite programs is now known to be as powerful as any semidefinite program (SDP) [LRS15].
In this paper we focus on recent progress in using Sum of Squares algorithms to solve averagecase, and especially planted problems—problems that ask for the recovery of a planted signal
perturbed by random noise. Key examples are finding solutions of random constraint satisfaction
problems (CSPs) with planted assignments [RRS16] and finding planted optima of random polynomials over the n-dimensional unit sphere [RRS16, BGL16]. The latter formulation captures a wide
range of unsupervised learning problems, and has led to many unsupervised learning algorithms
with the best-known polynomial time guarantees [BKS15, BKS14, MSS16b, HSS15, PS17, BGG+ 16].
In many cases, classical algorithms for such planted problems are spectral algorithms—i.e.,
using the top eigenvector of a natural matrix associated with the problem input to recover a
planted solution. The canonical algorithms for the planted clique [AKS98], principal components
analysis (PCA) [Pea01], and tensor decomposition (which is intimately connected to optimizaton of
polynomials on the unit sphere) [Har70] are all based on this general scheme. In all of these cases,
the algorithm employs the top eigenvector of a matrix which is either given as input (the adjacency
matrix, for planted clique), or is a simple function of the input (the empirical covariance, for PCA).
Recent works have shown that one can often improve upon these basic spectral methods
using SoS, yielding better accuracy and robustness guarantees against noise in recovering planted
solutions. Furthermore, for worst case problems—as opposed to the average-case planted problems
we consider here—semidefinite programs are strictly more powerful than spectral algorithms.1 A
priori one might therefore expect that these new SoS guarantees for planted problems would not
be achievable via spectral algorithms. But curiously enough, in numerous cases these stronger
guarantees for planted problems can be achieved by spectral methods! The twist is that the
entries of these matrices are low-degree polynomials in the input to the algorithm. The result
is a new family of low-degree spectral algorithms with guarantees matching SoS but requiring
only eigenvector computations instead of general semidefinite programming [HSSS16, RRS16,
AOW15a].
This leads to the following question which is the main focus of this work.
Are SoS algorithms equivalent to low-degree spectral methods for planted problems?
We answer this question affirmatively for a wide class of distinguishing problems which includes refuting random CSPs, tensor and sparse PCA, densest-k-subgraph, community detection
in stochastic block models, planted clique, and more. Our positive answer to this question implies
1For example, consider the contrast between the SDP algorithm for Max-Cut of Goemans and Williamson, [GW94],
and the spectral algorithm of Trevisan [Tre09]; or the SDP-based algorithms for coloring worst-case 3-colorable graphs
[KT17] relative to the best spectral methods [AK97] which only work for random inputs.
that a light-weight algorithm—computing the top eigenvalue of a single matrix whose entries are
low-degree polynomials in the input—can recover the performance guarantees of an often bulky
semidefinite programming relaxation.
To complement this picture, we prove two new SoS lower bounds for particular planted problems, both variants of component analysis: sparse principal component analysis and tensor principal component analysis (henceforth sparse PCA and tensor PCA, respectively) [ZHT06, RM14].
For both problems there are nontrivial low-degree spectral algorithms, which have better noise
tolerance than naive spectral methods [HSSS16, DM14b, RRS16, BGL16]. Sparse PCA, which
is used in machine learning and statistics to find important coordinates in high-dimensional
data sets, has attracted much attention in recent years for being apparently computationally intractable to solve with a number of samples which is more than sufficient for brute-force algorithms
[KNV+ 15, BR13b, MW15a]. Tensor PCA appears to exhibit similar behavior [HSS15]. That is, both
problems exhibit information-computation gaps.
Our SoS lower bounds for both problems are the strongest yet formal evidence for informationcomputation gaps for these problems. We rule out the possibility of subexponential-time SoS
algorithms which improve by polynomial factors on the signal-to-noise ratios tolerated by the
known low degree spectral methods. In particular, in the case of sparse PCA, it appeared possible
prior to this work that it might be possible in quasipolynomial time to recover a k-sparse unit vector
v in p dimensions from O(k log p) samples from the distribution N(0, Id +vv ⊤ ). Our lower bounds
suggest that this is extremely unlikely; in fact this task probably requires polynomial SoS degree
and hence exp(n Ω(1) ) time for SoS algorithms. This demonstrates that (at least with regard to SoS
algorithms) both problems are much harder than the planted clique problem, previously used as a
basis for reductions in the setting of sparse PCA [BR13b].
Our lower bounds for sparse and tensor PCA are closely connected to the failure of low-degree
spectral methods in high noise regimes of both problems. We prove them both by showing that
with noise beyond what known low-degree spectral algorithms can tolerate, even low-degree scalar
algorithms (the result of restricting low-degree spectral algorithms to 1 × 1 matrices) would require
subexponential time to detect and recover planted signals. We then show that in the restricted
settings of tensor and sparse PCA, ruling out these weakened low-degree spectral algorithms is
enough to imply a strong SoS lower bound.
1.1 SoS and spectral algorithms for robust inference
We turn to our characterization of SoS algorithms for planted problems in terms of low-degree
spectral algorithms. First, a word on planted problems. Many planted problems have several
formulations: search, in which the goal is to recover a planted solution, refutation, in which the goal
is to certify that no planted solution is present, and distinguishing, where the goal is to determine
with good probability whether an instance contains a planted solution or not. Often an algorithm
for one version can be parlayed into algorithms for the others, but distinguishing problems are
often the easiest, and we focus on them here.
A distinguishing problem is specified by two distributions on instances: a planted distribution
supported on instances with a hidden structure, and a uniform distribution, where samples w.h.p.
contain no hidden structure. Given an instance drawn with equal probability from the planted or
the uniform distribution, the goal is to determine with probability greater than 12 whether or not
2
the instance comes from the planted distribution. For example:
Planted clique Uniform distribution:
G(n, 12 ), the Erdős-Renyi distribution, which w.h.p.
contains no clique of size ω(log n). Planted distribution: The uniform distribution on graphs
containing a n ε -size clique, for some ε > 0. (The problem gets harder as ε gets smaller, since the
distance between the distributions shrinks.)
Planted 3xor. Uniform distribution: a 3xor instance on n variables and m > n equations
x_i x_j x_k = a_{ijk}, where all the triples (i, j, k) and the signs a_{ijk} ∈ {±1} are sampled uniformly and
independently. No assignment to x will satisfy more than a 0.51-fraction of the equations, w.h.p.
Planted distribution: The same, except the signs a_{ijk} are sampled to correlate with b_i b_j b_k for
randomly chosen b_i ∈ {±1}, so that the assignment x = b satisfies a 0.9-fraction of the equations.
(The problem gets easier as m/n gets larger, and the contradictions in the uniform case become
more locally apparent.)
We now formally define a family of distinguishing problems, in order to give our main theorem.
Let I be a set of instances corresponding to a product space (for concreteness one may think of
I to be the set of graphs on n vertices, indexed by {0, 1}^(n choose 2), although the theorem applies more
broadly). Let ν, our uniform distribution, be a product distribution on I.
With some decision problem P in mind (e.g. does G contain a clique of size > n ε ?), let X be a
set of solutions to P; again for concreteness one may think of X as being associated with cliques
in a graph, so that X ⊂ {0, 1} n is the set of all indicator vectors on at least n ε vertices.
For each solution x ∈ X, let µ |x be the uniform distribution over instances I ∈ I that contain x.
For example, in the context of planted clique, if x is a clique on vertices 1, . . . , n ε , then µ |x would
be the uniform distribution on graphs containing the clique 1, . . . , n^ε. We define the planted
distribution µ to be the uniform mixture over the µ_{|x}, that is, µ := E_{x∼X} µ_{|x}.
The following is our main theorem on the equivalence of sum of squares algorithms for distinguishing problems and spectral algorithms employing low-degree matrix polynomials.
Theorem 1.1 (Informal). Let N, n ∈ ℕ, and let A, B be sets of real numbers. Let I be a family of
instances over A^N, and let P be a decision problem over I with X ⊆ B^n the set of possible solutions to P
over I. Let {g_j(x, I)} be a system of n^{O(d)} polynomials of degree at most d in the variables x and constant
degree in the variables I that encodes P, so that
• for I ∼_ν I, with high probability the system is unsatisfiable and admits a degree-d SoS refutation, and
• for I ∼_µ I, with high probability the system is satisfiable by some solution x ∈ X, and x remains
feasible even if all but an n^{−0.01}-fraction of the coordinates of I are re-randomized according to ν.
Then there exists a matrix-valued function Q : I → ℝ^{n^{≤d} × n^{≤d}} whose entries are degree-O(d) polynomials such that

    E_{I∼ν} λ⁺_max(Q(I)) ≤ 1,   while   E_{I∼µ} λ⁺_max(Q(I)) ≥ n^{10d},

where λ⁺_max denotes the maximum non-negative eigenvalue.
The condition that a solution x remain feasible if all but a fraction of the coordinates of I ∼ µ |x
are re-randomized should be interpreted as a noise-robustness condition. To see an example, in
the context of planted clique, suppose we start with a planted distribution over graphs with a
clique x of size n ε+0.01 . If a random subset of n 0.99 vertices are chosen, and all edges not entirely
contained in that subset are re-randomized according to the G(n, 1/2) distribution, then with high
probability at least n ε of the vertices in x remain in a clique, and so x remains feasible for the
problem P: G has a clique of size > n ε ?
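As a concrete illustration of this re-randomization operation (a toy helper, not part of the formal development; the names and the numpy-based representation are assumptions made for this sketch):

    import numpy as np

    def rerandomize_outside_subset(adj, subset_size, seed=0):
        """Keep edges inside a random subset of `subset_size` vertices and
        re-randomize every other edge according to G(n, 1/2), mirroring the
        robustness condition described above. `adj` is a symmetric 0/1 matrix."""
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        keep = np.zeros(n, dtype=bool)
        keep[rng.choice(n, size=subset_size, replace=False)] = True
        noise = np.triu(rng.integers(0, 2, size=(n, n)), 1)
        noise = noise + noise.T
        protected = np.outer(keep, keep)  # edges with both endpoints in the subset
        return np.where(protected, adj, noise)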
1.2 SoS and information-computation gaps
Computational complexity of planted problems has become a rich area of study. The goal is to
understand which planted problems admit efficient (polynomial time) algorithms, and to study the
information-computation gap phenomenon: many problems have noisy regimes in which planted
structures can be found by inefficient algorithms, but (conjecturally) not by polynomial time
algorithms. One example is the planted clique problem, where the goal is to find a large clique in
a sample from the uniform distribution over graphs containing a clique of size n^ε for a small
constant ε > 0. While the problem is solvable for any ε > 0 by a brute-force algorithm requiring
n^{Ω(log n)} time, polynomial time algorithms are conjectured to require ε ≥ 1/2.
A common strategy to provide evidence for such a gap is to prove that powerful classes of
efficient algorithms are unable to solve the planted problem in the (conjecturally) hard regime.
SoS algorithms are particularly attractive targets for such lower bounds because of their broad
applicability and strong guarantees.
In a recent work, Barak et al. [BHK+16] show an SoS lower bound for the planted clique
problem, demonstrating that when ε < 1/2, SoS algorithms require n^{Ω(log n)} time to solve planted
clique. Intriguingly, they show that in the case of planted clique, SoS algorithms requiring
≈ n^d time can distinguish planted from random graphs only when there is a scalar-valued degree
≈ d · log n polynomial p : ℝ^{n×n} → ℝ (here A is the adjacency matrix of a graph) with

    E_{G(n,1/2)} p(A) = 0,    E_{planted} p(A) ≥ n^{Ω(1)} · ( E_{G(n,1/2)} p(A)² )^{1/2} .

That is, such a polynomial p has much larger expectation under the planted distribution than
its standard deviation under the uniform distribution. (The choice of n^{Ω(1)} is somewhat arbitrary, and
could be replaced with Ω(1) or n^{Ω(d)} with small changes in the parameters.) By showing that
as long as ε < 1/2 any such polynomial p must have degree Ω((log n)²), they rule out efficient SoS
algorithms when ε < 1/2. Interestingly, this matches the spectral distinguishing threshold—the
spectral algorithm of [AKS98] is known to work when ε > 1/2.
This stronger characterization of SoS for the planted clique problem, in terms of scalar distinguishing algorithms rather than spectral distinguishing algorithms, may at first seem insignificant.
To see why the scalar characterization is more powerful, we point out that if the degree-d moments
of the planted and uniform distributions are known, determining the optimal scalar distinguishing
polynomial is easy: given a planted distribution µ and a random distribution ν over instances I,
one just solves a linear algebra problem in the n^{d log n} coefficients of p to maximize the expectation
over µ relative to ν:

    max_p  E_{I∼µ}[p(I)]   s.t.   E_{I∼ν}[p(I)²] = 1 .

It is not difficult to show that the optimal solution to the above program has a simple form: it is
the projection of the relative density of µ with respect to ν onto the degree-d log n polynomials.
So given a pair of distributions µ, ν, in n^{O(d log n)} time, it is possible to determine whether there
exists a degree-d log n scalar distinguishing polynomial. Answering the same question about the
existence of a spectral distinguisher is more complex, and to the best of our knowledge cannot be
done efficiently.
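To make 'solving a linear algebra problem in the coefficients of p' concrete, the following sketch estimates the optimal scalar distinguisher from samples over a fixed basis of low-degree monomials; in the analytic setting the moment vector and Gram matrix would be computed exactly from the degree-d log n moments of µ and ν (all names here are illustrative):

    import numpy as np

    def best_scalar_distinguisher(monomials, planted_samples, uniform_samples):
        """Return coefficients c maximizing E_planted[p(I)] subject to
        E_uniform[p(I)^2] = 1, where p(I) = sum_j c_j * monomials[j](I)."""
        b = np.array([np.mean([m(I) for I in planted_samples]) for m in monomials])
        Phi = np.array([[m(I) for m in monomials] for I in uniform_samples])
        M = Phi.T @ Phi / len(uniform_samples)   # Gram matrix under the uniform distribution
        c = np.linalg.solve(M + 1e-9 * np.eye(len(monomials)), b)
        return c / np.sqrt(c @ M @ c)            # normalize so that E_uniform[p^2] = 1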
Given this powerful theorem for the case of the planted clique problem, one may be tempted
to conjecture that this stronger, scalar distinguisher characterization of the SoS algorithm applies
more broadly than just to the planted clique problem, and perhaps as broadly as Theorem 1.1. If
this conjecture is true, given a pair of distributions ν and µ with known moments, it would be
possible in many cases to efficiently and mechanically determine whether polynomial-time SoS
distinguishing algorithms exist!
Conjecture 1.2. In the setting of Theorem 1.1, the conclusion may be replaced with the conclusion that
there exists a scalar-valued polynomial p : I → ℝ of degree O(d · log n) so that

    E_uniform p(I) = 0   and   E_planted p(I) ≥ n^{Ω(1)} · ( E_uniform p(I)² )^{1/2} .
To illustrate the power of this conjecture, in the beginning of Section 6 we give a short and
self-contained explanation of how this predicts, via simple linear algebra, our n Ω(1) -degree SoS
lower bound for tensor PCA. As evidence for the conjecture, we verify this prediction by proving
such a lower bound unconditionally.
We also note why Theorem 1.1 does not imply Conjecture 1.2. While, in the notation of that
theorem, the entries of Q(I) are low-degree polynomials in I, the function M ↦ λ⁺_max(M) is not (to
the best of our knowledge) a low-degree polynomial in the entries of M (even approximately). (This
stands in contrast to, say, the operator norm or Frobenius norm of M, both of which are exactly
or approximately low-degree polynomials in the entries of M.) This means that the final output
of the spectral distinguishing algorithm offered by Theorem 1.1 is not a low-degree polynomial in
the instance I.
1.3 Exponential lower bounds for sparse PCA and tensor PCA
Our other main results are strong exponential lower bounds on the sum-of-squares method (specifically, against 2^{n^{Ω(1)}}-time or n^{Ω(1)}-degree algorithms) for the tensor and sparse principal component
analysis (PCA) problems. We prove the lower bounds by extending the techniques pioneered in [BHK+ 16].
In the present work we describe the proofs informally, leaving full details to a forthcoming full
version.
Tensor PCA. We start with the simpler case of tensor PCA, introduced by [RM14].
Problem 1.3 (Tensor PCA). Given an order-k tensor in (ℝ^n)^{⊗k}, determine whether it comes from:
• Uniform Distribution: each entry of the tensor sampled independently from N(0, 1).
• Planted Distribution: a spiked tensor, T = λ · v^{⊗k} + G, where v is sampled uniformly from the
unit sphere S^{n−1}, and where G is a random tensor with each entry sampled independently from N(0, 1).
Here, we think of v as a signal hidden by Gaussian noise. The parameter λ is a signal-to-noise
ratio. In particular, as λ grows, we expect the distinguishing problem above to get easier.
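As a small illustration of the planted distribution (for k = 3; this is only a sketch of the model, not an algorithm from the paper, and the names are assumptions):

    import numpy as np

    def spiked_tensor(n, lam, seed=0):
        """Sample T = lam * v^{tensor 3} + G, with v uniform on the unit sphere and
        G an i.i.d. standard Gaussian 3-tensor; returns (T, v)."""
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        G = rng.standard_normal((n, n, n))
        return lam * np.einsum("i,j,k->ijk", v, v, v) + G, v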
Tensor PCA is a natural generalization of the PCA problem in machine learning and statistics.
Tensor methods in general are useful when data naturally has more than two modalities: for
example, one might consider a recommender system which factors in not only people and movies
but also time of day. Many natural tensor problems are NP hard in the worst-case. Though this is
not necessarily an obstacle to machine learning applications, it is important to have average-case
models to in which to study algorithms for tensor problems. The spiked tensor setting we consider
here is one such simple model.
Turning to algorithms: consider first the ordinary PCA problem in a spiked-matrix model.
Given an n × n matrix M, the problem is to distinguish between the case where every entry of M
is independently drawn from the standard Gaussian distribution N(0, 1) and the case when M is
drawn from a distribution as above with an added rank one shift λvv ⊤ in a uniformly random
direction v. A natural and well-studied algorithm, which solves this problem to information-theoretic optimality, is to threshold on the largest singular value (spectral norm) of the input matrix.
Equivalently, one thresholds on the maximum of the degree-two polynomial ⟨x, Mx⟩ over x ∈ S^{n−1}.
A natural generalization of this algorithm to the tensor PCA setting (restricting for simplicity to
k = 3 for this discussion) is the maximum of the degree-three polynomial ⟨T, x^{⊗3}⟩ over the unit
sphere—equivalently, the (symmetric) injective tensor norm of T. This maximum can be shown
to be much larger in the case of the planted distribution so long as λ ≫ √n. Indeed, this approach
to distinguishing between planted and uniform distributions is information-theoretically optimal
[PWB16, BMVX16]. Since recovering the spike v and optimizing the polynomial ⟨T, x^{⊗3}⟩ on the
sphere are equivalent, tensor PCA can be thought of as an average-case version of the problem of
optimizing a degree-3 polynomial on the unit sphere (this problem is NP hard in the worst case,
even to approximate [HL09, BBH+12]).
Even in this average-case model, it is believed that there is a gap between which signal strengths
λ allow recovery of v by brute-force methods and which permit polynomial time algorithms. This is
quite distinct from the vanilla PCA setting, where eigenvector algorithms solve the spike-recovery
problem to information-theoretic optimality. Nevertheless, the best-known algorithms for tensor
PCA arise from computing convex relaxations of this degree-3 polynomial optimization problem.
Specifically, the SoS method captures the state-of-the-art algorithms for the problem; it is known
to recover the vector v to o(1) error in polynomial time whenever λ ≫ n^{3/4} [HSS15]. A major
open question in this direction is to understand the complexity of the problem for λ ≤ n^{3/4−ε}.
Algorithms (again captured by SoS) are known which run in 2^{n^{O(ε)}} time [RRS16, BGG+ 16]. We
show the following theorem, which shows that the sub-exponential algorithm above is in fact nearly
optimal among SoS algorithms.
Theorem 1.4. For a tensor T, let

    SoS_d(T) = max_{Ẽ} Ẽ[⟨T, x^{⊗k}⟩]  such that Ẽ is a degree-d pseudoexpectation and satisfies {‖x‖² = 1}.²

For every small enough constant ε > 0, if T is an order-k tensor with iid Gaussian or {±1} entries, then

    E_T SoS_d(T) ≥ n^{k/4−ε}   for every d ≤ n^{c·ε}, for some universal constant c > 0.
In particular, for third order tensors (i.e., k = 3), since degree-n^{Ω(ε)} SoS is unable to certify that a
random 3-tensor has maximum value much less than n^{3/4−ε}, this SoS relaxation cannot be used to
distinguish the planted and random distributions above when λ ≪ n^{3/4−ε}.³

²For definitions of pseudoexpectations and related matters, see the survey [BS14].
Sparse PCA. We turn to sparse PCA, which we formalize as the following planted distinguishing
problem.
Problem 1.5 (Sparse PCA (λ, k)). Given an n × n symmetric real matrix A, determine whether A
comes from:
• Uniform Distribution: each upper-triangular entry of the matrix A is sampled iid from
N(0, 1); other entries are filled in to preserve symmetry.
• Planted Distribution: a random k-sparse unit vector v with entries in {±1/√k, 0} is sampled,
and B is sampled from the uniform distribution above; then A = B + λ · vv⊺.
We defer significant discussion to Section 6, noting just a few things before stating our main
theorem on sparse PCA. First, the planted model above is sometimes called the spiked Wigner
model—this refers to the independence of the entries of the matrix B. An alternative model for
sparse PCA is the spiked Wishart model: A is replaced by Σ_{i ≤ m} x_i x_i⊺, where each x_i ∼ N(0, Id + βvv⊺),
for some number m ∈ ℕ of samples and some signal strength β ∈ ℝ. Though there are technical
differences between the models, to the best of our knowledge all known algorithms with provable
guarantees are equally applicable to either model; we expect that our SoS lower bounds also apply
in the spiked Wishart model.
We generally think of k, λ as small powers of n; i.e. n ρ for some ρ ∈ (0, 1); this allows us to
generally ignore logarithmic factors in our arguments. As in the tensor PCA setting, a natural
and information-theoretically optimal algorithm for sparse PCA is to maximize the quadratic
form ⟨x, Ax⟩, this time over k-sparse unit vectors. For A from the uniform distribution, standard
techniques (ε-nets and union bounds) show that the maximum value achievable is O(√(k log n))
with high probability, while for A from the planted model of course ⟨v, Av⟩ ≈ λ. So, when λ ≫ √k
one may distinguish the two models by this maximum value.
However, this maximization problem is NP hard for general quadratic forms A [CPR16]. So,
efficient algorithms must use some other distinguisher which leverages the randomness in the
instances. Essentially only two polynomial-time-computable distinguishers are known.⁴ If λ ≫ √n
then the maximum eigenvalue of A distinguishes the models. If λ ≫ k then the planted
model can be distinguished by the presence of large diagonal entries of A. Notice both of these
distinguishers fail for some choices of λ (that is, √k ≪ λ ≪ min(√n, k)) for which brute-force methods
(optimizing ⟨x, Ax⟩ over sparse x) could successfully distinguish planted from uniform A’s. The
theorem below should be interpreted as an impossibility result for SoS algorithms in the √k ≪
λ ≪ min(√n, k) regime. This is the strongest known impossibility result for sparse PCA among those
ruling out classes of efficient algorithms (one reduction-based result is also known, which shows
sparse PCA is at least as hard as the planted clique problem [BR13a]). It is also the first evidence
that the problem may require subexponential (as opposed to merely quasi-polynomial) time.
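For concreteness, the two polynomial-time distinguishers described above reduce to two scalar statistics of A; a minimal sketch follows (thresholds and the surrounding statistical test are omitted, and the names are illustrative):

    import numpy as np

    def sparse_pca_statistics(A):
        """Return the two known polynomial-time test statistics for the spiked
        Wigner model: the largest eigenvalue (informative when lambda >> sqrt(n))
        and the largest diagonal entry (informative when lambda >> k)."""
        top_eigenvalue = np.linalg.eigvalsh(A)[-1]
        max_diagonal = float(np.max(np.diag(A)))
        return top_eigenvalue, max_diagonal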
3In fact, our proof for this theorem will show somewhat more: that a large family of constraints—any valid constraint
which is itself a low-degree polynomial of T—could be added to this convex relaxation and the lower bound would still
obtain.
4If one studies the problem at much finer granularity than we do here, in particular studying λ up to low-order
additive terms and how precisely it is possible to estimate the planted signal v, then the situation is more subtle [DM14a].
Theorem 1.6. If A ∈ ℝ^{n×n}, let

    SoS_{d,k}(A) = max_{Ẽ} Ẽ⟨x, Ax⟩   s.t. Ẽ is degree d and satisfies {x_i³ = x_i, ‖x‖² = k} .

There are absolute constants c, ε* > 0 so that for every ρ ∈ (0, 1) and ε ∈ (0, ε*), if k = n^ρ, then for
d ≤ n^{c·ε},

    E_{A ∼ {±1}^(n choose 2)} SoS_{d,k}(A) ≥ min(n^{1/2−ε} k, n^{ρ−ε} k) .
For more thorough discussion of the theorem, see Section 6.3.
1.4 Related work
On interplay of SoS relaxations and spectral methods. As we have already alluded to, many
prior works explore the connection between SoS relaxations and spectral algorithms, beginning
with the work of [BBH+ 12] and including the followup works [HSS15, AOW15b, BM16] (plus many
more). Of particular interest are the papers [HSSS16, MS16b], which use the SoS algorithms to
obtain fast spectral algorithms, in some cases running in time linear in the input size (smaller even
than the number of variables in the associated SoS SDP).
In light of our Theorem 1.1, it is particularly interesting to note cases in which the known SoS
lower bounds match the known spectral algorithms—these problems include planted clique
(upper bound: [AKS98], lower bound:5 [BHK+ 16]), strong refutations for random CSPs (upper
bound:6 [AOW15b, RRS16], lower bounds: [Gri01b, Sch08, KMOW17]), and tensor principal
components analysis (upper bound: [HSS15, RRS16, BGG+ 16], lower bound: this paper).
We also remark that our work applies to several previously-considered distinguishing and
average-case problems within the sum-of-squares algorithmic framework: block models [MS16a],
densest-k-subgraph [BCC+ 10]; for each of these problems, we have by Theorem 1.1 an equivalence
between efficient sum-of-squares algorithms and efficient spectral algorithms, and it remains to
establish exactly what the tradeoff is between efficiency of the algorithm and the difficulty of
distinguishing, or the strength of the noise.
To the best of our knowledge, no previous work has attempted to characterize SoS relaxations
for planted problems by simpler algorithms in the generality we do here. Some works have
considered characterizing degree-2 SoS relaxations (i.e., basic semidefinite programs) in terms of
simpler algorithms. One such example is recent work of Fan and Montanari [FM16], who showed
that for some planted problems on sparse random graphs, a class of simple procedures called local
algorithms performs as well as semidefinite programming relaxations.
On strong SoS lower bounds for planted problems. By now, there’s a large body of work that
establishes lower bounds on SoS SDP for various average case problems. Beginning with the work
of Grigoriev [Gri01a], a long line of work has established tight lower bounds for random constraint
satisfaction problems [Sch08, BCK15, KMOW17] and planted clique [MPW15, DM15, HKP15, RS15,
5SDP lower bounds for the planted clique problem were known for smaller degrees of sum-of-squares relaxations
and for other SDP relaxations before; see the references therein for details.
6There is a long line of work on algorithms for refuting random CSPs, and 3SAT in particular; the listed papers
contain additional references.
BHK+ 16]. The recent SoS lower bound for planted clique of [BHK+ 16] was particularly influential
to this work, setting the stage for our main line of inquiry. We also draw attention to previous
work on lower bounds for the tensor PCA and sparse PCA problems in the degree-4 SoS relaxation
[HSS15, MW15b]—our paper improves on this and extends our understanding of lower bounds
for tensor and sparse PCA to any degree.
Tensor principal component analysis was introduced by Montanari and Richard [RM14], who
identified the information-theoretic threshold for recovery of the planted component and analyzed the
maximum likelihood estimator for the problem. The work of [HSS15] began the effort to analyze
the sum of squares method for the problem and showed that it yields an efficient algorithm
for recovering the planted component at signal strength ω̃(n^{3/4}). They also established that this
threshold is tight for the sum of squares relaxation of degree 4. Following this, Hopkins et al.
[HSSS16] showed how to extract a linear time spectral algorithm from the above analysis. Tomioka
and Suzuki derived tight information-theoretic thresholds for detecting planted components by
establishing tight bounds on the injective tensor norm of random tensors [TS14]. Finally, very
recently, Raghavendra et al. and Bhattiprolu et al. independently showed sub-exponential time
algorithms for tensor PCA [RRS16, BGL16]. Their algorithms are spectral and are captured by the
sum of squares method.
1.5 Organization
In Section 2 we set up and state our main theorem on SoS algorithms versus low-degree spectral
algorithms. In Section 5 we show that the main theorem applies to numerous planted problems—
we emphasize that checking each problem is very simple (and barely requires more than a careful
definition of the planted and uniform distributions). In Section 3 and Section 4 we prove the main
theorem on SoS algorithms versus low-degree spectral algorithms.
In Section 7 we prepare to prove our lower bound for tensor PCA by proving a structural
theorem on factorizations of low-degree matrix polynomials with well-behaved Fourier transforms.
In Section 8 we prove our lower bound for tensor PCA, using some tools proved in Section 9.
Notation. For two matrices $A, B$, let $\langle A, B\rangle \stackrel{\text{def}}{=} \mathrm{Tr}(AB)$. Let $\|A\|_{Fr}$ denote the Frobenius norm, and $\|A\|$ its spectral norm. For matrix-valued functions $A, B$ over $\mathcal{I}$ and a distribution $\nu$ over $\mathcal{I}$, we denote $\langle A, B\rangle_\nu \stackrel{\text{def}}{=} \mathbb{E}_{I\sim\nu}\langle A(I), B(I)\rangle$ and $\|A\|_{Fr,\nu} \stackrel{\text{def}}{=} \big(\mathbb{E}_{I\sim\nu}\langle A(I), A(I)\rangle\big)^{1/2}$.
For a vector of formal variables $x = (x_1, \ldots, x_n)$, we use $x^{\le d}$ to denote the vector consisting of all monomials of degree at most $d$ in these variables. Furthermore, let $X^{\le d} \stackrel{\text{def}}{=} (x^{\le d})(x^{\le d})^T$.
2 Distinguishing Problems and Robust Inference
In this section, we set up the formal framework within which we will prove our main result.
Uniform vs. Planted Distinguishing Problems
We begin by describing a class of distinguishing problems. For $A$ a set of real numbers, we will use $\mathcal{I} = A^N$ to denote a space of instances indexed by $N$ variables. For the sake of concreteness, it will be useful to think of $\mathcal{I}$ as $\{0,1\}^N$; for example, we could have $N = n^2$ and $\mathcal{I}$ the set of all graphs on $n$ vertices. However, the results that we will show here continue to hold in other contexts, where the space of all instances is $\mathbb{R}^N$ or $[q]^N$.
Definition 2.1 (Uniform Distinguishing Problem). Suppose that I is the space of all instances, and
suppose we have two distributions over I, a product distribution ν (the “uniform” distribution),
and an arbitrary distribution µ (the “planted” distribution).
In a uniform distinguishing problem, we are given an instance $I \in \mathcal{I}$ which is sampled with probability $\frac12$ from $\nu$ and with probability $\frac12$ from $\mu$, and the goal is to determine with probability greater than $\frac12 + \varepsilon$ which distribution $I$ was sampled from, for any constant $\varepsilon > 0$.
Polynomial Systems
In the uniform distinguishing problems that we are interested in, the planted distribution $\mu$ will be a distribution over instances that obtain a large value for some optimization problem of interest (e.g., the max clique problem). We define polynomial systems in order to formally capture optimization problems.

Program 2.2 (Polynomial System). Let $A, B$ be sets of real numbers, let $n, N \in \mathbb{N}$, and let $\mathcal{I} = A^N$ be a space of instances and $\mathcal{X} \subseteq B^n$ be a space of solutions. A polynomial system is a set of polynomial equalities
$$g_j(x, I) = 0 \quad \forall j \in [m]\,,$$
where $\{g_j\}_{j=1}^m$ are polynomials in the program variables $\{x_i\}_{i\in[n]}$, representing $x \in \mathcal{X}$, and in the instance variables $\{I_j\}_{j\in[N]}$, representing $I \in \mathcal{I}$. We define $\deg_{\mathrm{prog}}(g_j)$ to be the degree of $g_j$ in the program variables, and $\deg_{\mathrm{inst}}(g_j)$ to be the degree of $g_j$ in the instance variables.
Remark 2.3. For the sake of simplicity, the polynomial system Program 2.2 has no inequalities. Inequalities can be incorporated into the program by converting each inequality into an equality with an additional slack variable. Our main theorem still holds, up to some minor modifications of the proof, as outlined in Section 4.

A polynomial system allows us to capture problem-specific objective functions as well as problem-specific constraints. For concreteness, consider a quadratic program which checks whether a graph on $n$ vertices contains a clique of size $k$. We can express this with the polynomial system over program variables $x \in \mathbb{R}^n$ and instance variables $I \in \{0,1\}^{\binom{n}{2}}$, where $I_{ij} = 1$ iff there is an edge from $i$ to $j$, as follows:
$$\Big\{\sum_{i\in[n]} x_i - k = 0\Big\} \cup \{x_i(x_i - 1) = 0\}_{i\in[n]} \cup \{(1 - I_{ij})\,x_i x_j = 0\}_{i,j\in\binom{[n]}{2}}\,.$$
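To make the encoding concrete, here is a minimal Python sketch (assuming a numpy adjacency matrix `adj` and a candidate vector `x`; these names are illustrative, not from the paper) that evaluates the three constraint families of the clique system above.

```python
import numpy as np

def clique_system_residuals(adj, x, k):
    """Evaluate the polynomial system 'x indicates a k-clique in adj'.

    adj : (n, n) symmetric 0/1 adjacency matrix
    x   : (n,) candidate assignment to the program variables
    k   : target clique size
    Returns the maximum absolute constraint violation.
    """
    size_residual = abs(x.sum() - k)                 # sum_i x_i - k = 0
    boolean_residual = np.abs(x * (x - 1)).max()     # x_i(x_i - 1) = 0
    non_edge = (1 - adj) * np.outer(x, x)            # (1 - I_ij) x_i x_j = 0
    np.fill_diagonal(non_edge, 0)
    non_edge_residual = np.abs(non_edge).max()
    return max(size_residual, boolean_residual, non_edge_residual)

# Example: a triangle inside a 5-vertex graph is a feasible solution for k = 3.
adj = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
x = np.array([1, 1, 1, 0, 0], dtype=float)
assert clique_system_residuals(adj, x, k=3) == 0
```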
Planted Distributions
We will be concerned with planted distributions of a particular form. First, we fix a polynomial system of interest $S = \{g_j(x,I)\}_{j\in[m]}$ and some set $\mathcal{X}\subseteq B^n$ of feasible solutions for $S$, so that the program variables $x$ represent elements of $\mathcal{X}$. Again, for concreteness, if $\mathcal{I}$ is the set of graphs on $n$ vertices, we can take $\mathcal{X}\subseteq\{0,1\}^n$ to be the set of indicators for subsets of at least $n^\varepsilon$ vertices.
For each fixed $x\in\mathcal{X}$, let $\mu|_x$ denote the uniform distribution over $I\in\mathcal{I}$ for which the polynomial system $\{g_j(x,I)\}_{j\in[m]}$ is feasible. The planted distribution $\mu$ is given by taking the uniform mixture over the $\mu|_x$, i.e., $\mu = \mathbb{E}_{x\sim\mathcal{X}}[\mu|_x]$ with $x$ drawn uniformly from $\mathcal{X}$.
SoS Relaxations
If we have a polynomial system $\{g_j\}_{j\in[m]}$ where $\deg_{\mathrm{prog}}(g_j) \le 2d$ for every $j \in [m]$, then the degree-$2d$ sum-of-squares SDP relaxation for the polynomial system Program 2.2 can be written as follows.

Program 2.4 (SoS Relaxation for Polynomial System). Let $S = \{g_j(x,I)\}_{j\in[m]}$ be a polynomial system in instance variables $I \in \mathcal{I}$ and program variables $x \in \mathcal{X}$. If $\deg_{\mathrm{prog}}(g_j) \le 2d$ for all $j \in [m]$, then an SoS relaxation for $S$ is
$$\langle G_j(I), X\rangle = 0 \quad \forall j\in[m]\,,\qquad X \succeq 0\,,$$
where $X$ is an $[n]^{\le d}\times[n]^{\le d}$ matrix containing the variables of the SDP and $G_j : \mathcal{I} \to \mathbb{R}^{[n]^{\le d}\times[n]^{\le d}}$ are matrices containing the coefficients of $g_j(x, I)$ in $x$, so that the constraint $\langle G_j(I), X\rangle = 0$ encodes the constraint $g_j(x,I) = 0$ in the SDP variables. Note that the entries of $G_j$ are polynomials of degree at most $\deg_{\mathrm{inst}}(g_j)$ in the instance variables.
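For illustration, here is a minimal sketch of the smallest case $d = 1$ of Program 2.4 for the clique system above, written with the SDP constraints spelled out directly rather than through the coefficient matrices $G_j$; following the planted-clique program of Section 5, the size constraint is replaced by maximizing $\sum_i x_i$. It assumes the cvxpy library, and all names are illustrative.

```python
import cvxpy as cp

def clique_sos_degree2(adj):
    """Degree-2 SoS relaxation of the clique system; the moment matrix X is
    indexed by the monomials {1, x_1, ..., x_n}."""
    n = adj.shape[0]
    X = cp.Variable((n + 1, n + 1), PSD=True)        # X ~ E[(1, x)(1, x)^T]
    constraints = [X[0, 0] == 1]
    constraints += [X[i + 1, i + 1] == X[0, i + 1] for i in range(n)]   # x_i^2 = x_i
    constraints += [X[i + 1, j + 1] == 0                                 # x_i x_j = 0 on non-edges
                    for i in range(n) for j in range(i + 1, n) if adj[i, j] == 0]
    prob = cp.Problem(cp.Maximize(cp.sum(X[0, 1:])), constraints)
    prob.solve()
    return prob.value   # an upper bound on the maximum clique size
```

Since every clique indicator is feasible for this SDP, the returned value upper-bounds the clique number; for a typical sample from $G(n, \frac12)$ this degree-2 value is known to be of order $\sqrt{n}$.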
Sub-instances
Suppose that $\mathcal{I} = A^N$ is a family of instances; then given an instance $I\in\mathcal{I}$ and a subset $S\subseteq[N]$, let $I_S$ denote the sub-instance consisting of the coordinates within $S$. Further, for a distribution $\Theta$ over subsets of $[N]$, let $I_S\sim_\Theta I$ denote a sub-instance generated by sampling $S\sim\Theta$. Let $I^{\downarrow}$ denote the set of all sub-instances of an instance $I$, and let $\mathcal{I}^{\downarrow}$ denote the set of all sub-instances of all instances.
Robust Inference
Our result will pertain to polynomial systems that define planted distributions whose solutions to
sub-instances generalize to feasible solutions over the entire instance. We call this property “robust
inference.”
Definition 2.5. Let $\mathcal{I} = A^N$ be a family of instances, let $\Theta$ be a distribution over subsets of $[N]$, let $S$ be a polynomial system as in Program 2.2, and let $\mu$ be a planted distribution over instances feasible for $S$. The polynomial system $S$ is said to satisfy the robust inference property for probability distribution $\mu$ on $\mathcal{I}$ and subsampling distribution $\Theta$ if, given a subsample $I_S$ of an instance $I$ from $\mu$, one can infer a setting of the program variables $x^*$ that remains feasible for $S$ for most completions of $I_S$.
Formally, there exists a map $x : \mathcal{I}^{\downarrow}\to\mathbb{R}^n$ such that
$$\Pr_{I\sim\mu,\ S\sim\Theta,\ \tilde I\sim\nu|_{I_S}}\big[x(I_S)\text{ is feasible for }S\text{ on }I_S\circ\tilde I\big] \ \ge\ 1 - \varepsilon(n,d)$$
for some negligible function $\varepsilon(n,d)$. To specify the error probability, we will say that the polynomial system is $\varepsilon(n,d)$-robustly inferable.
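As a concrete illustration (planted clique, treated formally in Section 5), the following sketch checks the robust-inference property empirically: the clique restricted to a vertex subsample remains feasible no matter how the edges outside the subsample are resampled. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def planted_clique_instance(n, clique_size):
    """G(n, 1/2) with a planted clique; returns (adjacency matrix, clique vertices)."""
    adj = rng.integers(0, 2, size=(n, n))
    adj = np.triu(adj, 1); adj = adj + adj.T
    clique = rng.choice(n, size=clique_size, replace=False)
    adj[np.ix_(clique, clique)] = 1
    np.fill_diagonal(adj, 0)
    return adj, clique

def is_clique(adj, verts):
    sub = adj[np.ix_(verts, verts)]
    return np.all(sub + np.eye(len(verts)) >= 1)

n, k, rho = 200, 40, 0.5
adj, clique = planted_clique_instance(n, k)
keep = rng.random(n) < rho                       # Theta: keep each vertex w.p. rho
inferred = [v for v in clique if keep[v]]        # x(I_S): the clique inside the subsample
# Resample every edge with at least one endpoint outside the subsample.
resampled = adj.copy()
outside = ~keep
mask = np.triu(outside[:, None] | outside[None, :], 1)
resampled[mask] = rng.integers(0, 2, size=mask.sum())
resampled = np.triu(resampled, 1); resampled = resampled + resampled.T
assert is_clique(resampled, inferred)            # feasibility survives the resampling
```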
Main Theorem
We are now ready to state our main theorem.
Theorem 2.6. Suppose that $S$ is a polynomial system as defined in Program 2.2, of degree at most $2d$ in the program variables and degree at most $k$ in the instance variables. Let $B \ge d\cdot k$, $B\in\mathbb{N}$, be such that
1. The polynomial system $S$ is $\frac{1}{n^{8B}}$-robustly inferable with respect to the planted distribution $\mu$ and the sub-sampling distribution $\Theta$.
2. For $I\sim\nu$, the polynomial system $S$ admits a degree-$d$ SoS refutation with numbers bounded by $n^B$ with probability at least $1 - \frac{1}{n^{8B}}$.
Let $D\in\mathbb{N}$ be such that for any subset $\alpha\subseteq[N]$ with $|\alpha| \ge D - 2dk$,
$$\Pr_{S\sim\Theta}[\alpha\subseteq S] \ \le\ \frac{1}{n^{8B}}\,.$$
Then there exists a degree-$2D$ matrix polynomial $Q : \mathcal{I}\to\mathbb{R}^{[n]^{\le d}\times[n]^{\le d}}$ such that
$$\frac{\mathbb{E}_{I\sim\mu}[\lambda^+_{\max}(Q(I))]}{\mathbb{E}_{I\sim\nu}[\lambda^+_{\max}(Q(I))]} \ \ge\ n^{B/2}\,.$$
Remark 2.7. Our argument implies a stronger result that can be stated in terms of the eigenspaces of the subsampling operator. Specifically, suppose we define
$$\mathcal{S}_\varepsilon \stackrel{\text{def}}{=} \Big\{\alpha \ \Big|\ \Pr_{S\sim\Theta}\{\alpha\subseteq S\} \le \varepsilon\Big\}\,.$$
Then the distinguishing polynomial exhibited by Theorem 2.6 satisfies $Q \in \mathrm{span}\{\text{monomials } I_\alpha \mid \alpha\in\mathcal{S}_\varepsilon\}$. This refinement can yield tighter bounds in cases where all monomials of a certain degree are not equivalent to each other. For example, in the Planted Clique problem, each monomial corresponds to a subgraph, and the right measure of the degree of a subgraph is the number of vertices in it, as opposed to the number of edges in it.
In Section 5, we will make the routine verifications that the conditions of this theorem hold for a variety of distinguishing problems: planted clique (Lemma 5.2), refuting random CSPs (Lemma 5.4), stochastic block models (Lemma 5.6), densest-k-subgraph (Lemma 5.8), tensor PCA (Lemma 5.10), and sparse PCA (Lemma 5.12). Now we will proceed to prove the theorem.
3 Moment-Matching Pseudodistributions
We assume the setup from Section 2: we have a family of instances $\mathcal{I} = A^N$, a polynomial system $S = \{g_j(x,I)\}_{j\in[m]}$ with a family of solutions $\mathcal{X}\subseteq B^n$, a "uniform" distribution $\nu$ which is a product distribution over $\mathcal{I}$, and a "planted" distribution $\mu$ over $\mathcal{I}$ defined by the polynomial system $S$ as described in Section 2.
The contrapositive of Theorem 2.6 is that if S is robustly inferable with respect to µ and a
distribution over sub-instances Θ, and if there is no spectral algorithm for distinguishing µ and
ν, then with high probability there is no degree-d SoS refutation for the polynomial system S (as
defined in Program 2.4). To prove the theorem, we will use duality to argue that if no spectral
algorithm exists, then there must exist an object which is in some sense close to a feasible solution
to the SoS SDP relaxation.
Since each $I$ in the support of $\mu$ is feasible for $S$ by definition, a natural starting point is the SoS SDP solution for instances $I\sim\mu$. With this in mind, we let $\Lambda : \mathcal{I}\to\mathbb{R}^{([n]^{\le d}\times[n]^{\le d})}_+$ be an arbitrary function from the support of $\mu$ over $\mathcal{I}$ to PSD matrices. In other words, we take
$$\Lambda(I) = \hat\mu(I)\cdot M(I)$$
where $\hat\mu$ is the relative density of $\mu$ with respect to $\nu$, so that $\hat\mu(I) = \mu(I)/\nu(I)$, and $M$ is some matrix-valued function such that $M(I)\succeq 0$ and $\|M(I)\|\le B$ for all $I\in\mathcal{I}$. Our goal is to find a PSD matrix-valued function $P$ that matches the low-degree moments of $\Lambda$ in the variables $I$, while being supported over most of $\mathcal{I}$ (rather than just over the support of $\mu$).
The function $P : \mathcal{I}\to\mathbb{R}^{([n]^{\le d}\times[n]^{\le d})}_+$ is given by the following exponentially large convex program over matrix-valued functions.

Program 3.1 (Pseudodistribution Program).
$$\min\ \|P\|^2_{Fr,\nu} \tag{3.1}$$
$$\text{s.t.}\quad \langle Q, P\rangle_\nu = \langle Q, \Lambda'\rangle_\nu \qquad \forall\, Q : \mathcal{I}\to\mathbb{R}^{[n]^{\le d}\times[n]^{\le d}},\ \deg_{\mathrm{inst}}(Q)\le D \tag{3.2}$$
$$P \succeq 0\,,\qquad \Lambda' = \Lambda + \eta\cdot\mathrm{Id}\,,\quad 2^{-2n} \ge \eta > 0 \tag{3.3}$$

The constraint (3.2) fixes $\mathrm{Tr}(P)$, and so the objective function (3.1) can be viewed as minimizing $\mathrm{Tr}(P^2)$, a proxy for the collision probability of the distribution, which is a measure of entropy.
Remark 3.2. We have perturbed Λ in (3.3) so that we can easily show that strong duality holds in
the proof of Claim 3.4. For the remainder of the paper we ignore this perturbation, as we can
accumulate the resulting error terms and set η to be small enough so that they can be neglected.
The dual of the above program will allow us to relate the existence of an SoS refutation to the
existence of a spectral algorithm.
Program 3.3 (Low-Degree Distinguisher).
$$\max\ \langle\Lambda, Q\rangle_\nu \quad\text{s.t.}\quad \|Q_+\|^2_{Fr,\nu}\le 1\,,\qquad Q : \mathcal{I}\to\mathbb{R}^{[n]^{\le d}\times[n]^{\le d}},\ \deg_{\mathrm{inst}}(Q)\le D\,,$$
where $Q_+$ is the projection of $Q$ onto the PSD cone.

Claim 3.4. Program 3.3 is a manipulation of the dual of Program 3.1, so that if Program 3.1 has optimum $c \ge 1$, then Program 3.3 has optimum at least $\Omega(\sqrt{c})$.
Before we present the proof of the claim, we summarize its central consequence in the following
theorem: if Program 3.1 has a large objective value (and therefore does not provide a feasible SoS
solution), then there is a spectral algorithm.
Theorem 3.5. Fix a function $M : \mathcal{I}\to\mathbb{R}^{([n]^{\le d}\times[n]^{\le d})}_+$ such that $\mathrm{Id}\succeq M\succeq 0$. Let $\lambda^+_{\max}(\cdot)$ be the function that gives the largest non-negative eigenvalue of a matrix. Suppose $\Lambda = \hat\mu\cdot M$; then the optimum of Program 3.1 is equal to $\mathrm{opt}\ge 1$ only if there exists a low-degree matrix polynomial $Q$ such that
$$\mathbb{E}_{I\sim\mu}\big[\lambda^+_{\max}(Q(I))\big] \ \ge\ \Omega\big(\sqrt{\mathrm{opt}}/n^d\big)$$
while
$$\mathbb{E}_{I\sim\nu}\big[\lambda^+_{\max}(Q(I))\big] \ \le\ 1\,.$$
Proof. By Claim 3.4, if the value of Program 3.1 is $\mathrm{opt}\ge 1$, then there is a polynomial $Q$ that achieves a value of $\Omega(\sqrt{\mathrm{opt}})$ for the dual. It follows that
$$\mathbb{E}_{I\sim\mu}\big[\lambda^+_{\max}(Q(I))\big] \ \ge\ \frac{1}{n^d}\,\mathbb{E}_{I\sim\mu}\big[\langle\mathrm{Id}, Q(I)\rangle\big] \ \ge\ \frac{1}{n^d}\,\langle\Lambda, Q\rangle_\nu \ =\ \Omega\big(\sqrt{\mathrm{opt}}/n^d\big)\,,$$
while
$$\mathbb{E}_{I\sim\nu}\big[\lambda^+_{\max}(Q(I))\big] \ \le\ \sqrt{\mathbb{E}_{I\sim\nu}\big[\lambda^+_{\max}(Q(I))^2\big]} \ \le\ \sqrt{\mathbb{E}_{I\sim\nu}\|Q_+(I)\|^2_{Fr}} \ \le\ 1\,. \qquad\square$$

It is interesting to note that the specific structure of the PSD matrix-valued function $M$ plays no role in the above argument—since $M$ serves as a proxy for monomials in the solution as represented by the program variables $x^{\otimes d}$, it follows that the choice of how to represent the planted solution is not critical. Although seemingly counterintuitive, this is natural because the property of being distinguishable by low-degree distinguishers or by SoS SDP relaxations is a property of $\nu$ and $\mu$.
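Concretely, the distinguisher promised by Theorem 3.5 is the following simple procedure (a minimal sketch: `Q` is any user-supplied low-degree matrix polynomial and `threshold` an illustrative cutoff, neither of which is specified by the theorem beyond the two expectation bounds).

```python
import numpy as np

def spectral_distinguisher(instance, Q, threshold):
    """Declare 'planted' iff the top non-negative eigenvalue of Q(instance) is large.

    Q         : callable mapping an instance to a symmetric matrix
                (a low-degree matrix polynomial in the instance variables)
    threshold : cutoff separating E_nu[lambda_max^+] from E_mu[lambda_max^+]
    """
    M = Q(instance)
    M = (M + M.T) / 2                      # symmetrize for numerical safety
    top = np.linalg.eigvalsh(M)[-1]        # largest eigenvalue
    return "planted" if max(top, 0.0) >= threshold else "uniform"
```

Since $\mathbb{E}_\nu\,\lambda^+_{\max}(Q(I)) \le 1$ while $\mathbb{E}_\mu\,\lambda^+_{\max}(Q(I))$ is polynomially larger, Markov's inequality shows that a threshold placed between the two expectations distinguishes the two distributions with constant advantage.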
We wrap up the section by presenting a proof of Claim 3.4.

Proof of Claim 3.4. We take the Lagrangian dual of Program 3.1. Our dual variables will be some combination of low-degree matrix polynomials, $Q$, and a PSD matrix $A$:
$$\mathcal{L}(P, Q, A) = \|P\|^2_{Fr,\nu} - \langle Q, P - \Lambda'\rangle_\nu - \langle A, P\rangle_\nu \qquad\text{s.t. } A\succeq 0\,.$$
It is easy to verify that if $P$ is not PSD, then $A$ can be chosen so that the value of $\mathcal{L}$ is $\infty$. Similarly, if there exists a low-degree polynomial upon which $P$ and $\Lambda$ differ in expectation, $Q$ can be chosen as a multiple of that polynomial so that the value of $\mathcal{L}$ is $\infty$.
Now, we argue that Slater's conditions are met for Program 3.1, as $P = \Lambda'$ is strictly feasible. Thus strong duality holds, and therefore
$$\min_P\ \max_{A\succeq0,\,Q}\ \mathcal{L}(P,Q,A) \ \le\ \max_{A\succeq0,\,Q}\ \min_P\ \mathcal{L}(P,Q,A)\,.$$
Taking the partial derivative of $\mathcal{L}(P,Q,A)$ with respect to $P$, we have
$$\frac{\partial}{\partial P}\mathcal{L}(P,Q,A) = 2\cdot P - Q - A\,,$$
where the derivative is taken in the space of functions from $\mathcal{I}$ to $\mathbb{R}^{[n]^{\le d}\times[n]^{\le d}}$. By the convexity of $\mathcal{L}$ as a function of $P$, it follows that setting $\frac{\partial}{\partial P}\mathcal{L} = 0$ gives the minimizer. Substituting, it follows that
$$\min_P\ \max_{A\succeq0,\,Q}\ \mathcal{L}(P,Q,A) \ \le\ \max_{A\succeq0,\,Q}\ \tfrac14\|A+Q\|^2_{Fr,\nu} - \tfrac12\langle Q,\, A+Q-2\Lambda'\rangle_\nu - \tfrac12\langle A,\, A+Q\rangle_\nu \ =\ \max_{A\succeq0,\,Q}\ \langle Q, \Lambda'\rangle_\nu - \tfrac14\|A+Q\|^2_{Fr,\nu}\,. \tag{3.4}$$
Now it is clear that the maximizing choice of $A$ is to set $A = -Q_-$, the negation of the negative-semi-definite projection of $Q$. Thus (3.4) simplifies to
$$\min_P\ \max_{A\succeq0,\,Q}\ \mathcal{L}(P,Q,A) \ \le\ \max_Q\ \langle Q, \Lambda'\rangle_\nu - \tfrac14\|Q_+\|^2_{Fr,\nu} \ \le\ \max_Q\ \langle Q, \Lambda\rangle_\nu + \eta\,\mathrm{Tr}_\nu(Q_+) - \tfrac14\|Q_+\|^2_{Fr,\nu}\,, \tag{3.5}$$
where we have used the shorthand $\mathrm{Tr}_\nu(Q_+) \stackrel{\text{def}}{=} \mathbb{E}_{I\sim\nu}\mathrm{Tr}(Q(I)_+)$. Now suppose that the low-degree matrix polynomial $Q^*$ achieves a right-hand-side value of
$$\langle Q^*, \Lambda\rangle_\nu + \eta\cdot\mathrm{Tr}_\nu(Q^*_+) - \tfrac14\|Q^*_+\|^2_{Fr,\nu} \ \ge\ c\,.$$
Consider $Q' = Q^*/\|Q^*_+\|_{Fr,\nu}$. Clearly $\|Q'_+\|_{Fr,\nu} = 1$. Now, multiplying the above inequality through by the scalar $1/\|Q^*_+\|_{Fr,\nu}$, we have that
$$\langle Q', \Lambda\rangle_\nu \ \ge\ \frac{c}{\|Q^*_+\|_{Fr,\nu}} - \eta\cdot\frac{\mathrm{Tr}_\nu(Q^*_+)}{\|Q^*_+\|_{Fr,\nu}} + \frac14\|Q^*_+\|_{Fr,\nu} \ \ge\ \frac{c}{\|Q^*_+\|_{Fr,\nu}} - \eta\cdot n^d + \frac14\|Q^*_+\|_{Fr,\nu}\,.$$
Therefore $\langle Q', \Lambda\rangle_\nu$ is at least $\Omega(c^{1/2})$: if $\|Q^*_+\|_{Fr,\nu} \ge \sqrt{c}$ then the third term gives the lower bound, and otherwise the first term gives the lower bound.
Thus, by substituting $Q'$, the square root of the maximum of (3.5), up to an additive $\eta n^d$, lower-bounds the maximum of the program
$$\max\ \langle Q, \Lambda\rangle_\nu \quad\text{s.t.}\quad \|Q_+\|^2_{Fr,\nu}\le 1\,,\qquad Q : \mathcal{I}\to\mathbb{R}^{[n]^{\le d}\times[n]^{\le d}},\ \deg_{\mathrm{inst}}(Q)\le D\,.$$
This concludes the proof. $\square$
4 Proof of Theorem 2.6
We will prove Theorem 2.6 by contradiction. Let us assume that there exists no degree-2D matrix
polynomial that distinguishes ν from µ. First, the lack of distinguishers implies the following fact
about scalar polynomials.
Lemma 4.1. Under the assumption that there are no degree-$2D$ distinguishers, for every degree-$D$ scalar polynomial $Q$,
$$\|Q\|^2_{Fr,\mu} \ \le\ n^B\,\|Q\|^2_{Fr,\nu}\,.$$
Proof. Suppose not; then the degree-$2D$, $1\times1$ matrix polynomial $\mathrm{Tr}(Q(I)^2)$ will be a distinguisher between $\mu$ and $\nu$. $\square$

Constructing $\Lambda$. First, we will use the robust inference property of $\mu$ to construct a pseudo-distribution $\Lambda$. Recall again that we have defined $\hat\mu$ to be the relative density of $\mu$ with respect to $\nu$, so that $\hat\mu(I) = \mu(I)/\nu(I)$. For each subset $S\subseteq[N]$, define a PSD matrix-valued function $\Lambda_S : \mathcal{I}\to\mathbb{R}^{([n]^{\le d}\times[n]^{\le d})}_+$ as
$$\Lambda_S(I) = \mathbb{E}_{I'_{\bar S}}\big[\hat\mu(I_S\circ I'_{\bar S})\big]\cdot x(I_S)^{\le d}\big(x(I_S)^{\le d}\big)^T\,,$$
where we use $I_S$ to denote the restriction of $I$ to $S\subseteq[N]$, and $I_S\circ I'_{\bar S}$ to denote the instance given by completing the sub-instance $I_S$ with the setting $I'_{\bar S}$. Notice that $\Lambda_S$ is a function depending only on $I_S$—this fact will be important to us. Define $\Lambda \stackrel{\text{def}}{=} \mathbb{E}_{S\sim\Theta}\Lambda_S$. Observe that $\Lambda$ is a PSD matrix-valued function that satisfies
$$\langle\Lambda_{\emptyset,\emptyset}, 1\rangle_\nu \ =\ \mathbb{E}_{I\sim\nu}\,\mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I'_{\bar S}\sim\nu}\big[\hat\mu(I_S\circ I'_{\bar S})\big] \ =\ \mathbb{E}_{S}\,\mathbb{E}_{I_S}\,\mathbb{E}_{I_S\circ I'_{\bar S}\sim\nu}\big[\hat\mu(I_S\circ I'_{\bar S})\big] \ =\ 1\,. \tag{4.1}$$
Since Λ(I) is an average over ΛS (I), each of which is a feasible solution with high probability,
Λ(I) is close to a feasible solution to the SDP relaxation for I. The following Lemma formalizes
this intuition.
Define $\mathcal{G} \stackrel{\text{def}}{=} \mathrm{span}\{\chi_S\cdot G_j \mid j\in[m],\ S\subseteq[N]\}$, and use $\Pi_{\mathcal{G}}$ to denote the orthogonal projection onto $\mathcal{G}$.

Lemma 4.2. Suppose Program 2.2 satisfies the $\varepsilon$-robust inference property with respect to planted distribution $\mu$ and subsampling distribution $\Theta$, and suppose $\|x(I_S)^{\le d}\|_2^2 \le K$ for all $I_S$. Then for every $G\in\mathcal{G}$ we have
$$\langle\Lambda, G\rangle_\nu \ \le\ \sqrt{\varepsilon}\cdot K\cdot\Big(\mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{\tilde I_{\bar S}\sim\nu}\,\mathbb{E}_{I\sim\mu}\,\|G(I_S\circ\tilde I_{\bar S})\|_2^2\Big)^{1/2}\,.$$
Proof. We begin by expanding the left-hand side by substituting the definition of $\Lambda$. We have
$$\langle\Lambda, G\rangle_\nu \ =\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\langle\Lambda_S(I_S), G(I)\rangle \ =\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\mathbb{E}_{I'_{\bar S}\sim\nu}\ \hat\mu(I_S\circ I'_{\bar S})\cdot\big\langle x(I_S)^{\le d}(x(I_S)^{\le d})^T,\ G(I)\big\rangle\,.$$
Because the inner product is zero whenever $x(I_S)$ is a feasible solution,
$$\le\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\mathbb{E}_{I'_{\bar S}\sim\nu}\ \hat\mu(I_S\circ I'_{\bar S})\cdot\mathbb{1}[x(I_S)\text{ is infeasible for }S(I)]\cdot\|x(I_S)^{\le d}\|_2^2\cdot\|G(I)\|_{Fr} \ \le\ \mathbb{E}\ \hat\mu(I_S\circ I'_{\bar S})\cdot\mathbb{1}[x(I_S)\text{ is infeasible for }S(I)]\cdot K\cdot\|G(I)\|_{Fr}\,.$$
Now, letting $\tilde I_{\bar S}$ denote the completion of $I_S$ to $I$, so that $I_S\circ\tilde I_{\bar S} = I$, we note that the above is like sampling $I'_{\bar S}, \tilde I_{\bar S}$ independently from $\nu$ and then reweighting by $\hat\mu(I_S\circ I'_{\bar S})$, or equivalently taking the expectation over $I_S\circ I'_{\bar S} = I'\sim\mu$ and $\tilde I_{\bar S}\sim\nu$:
$$=\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I'\sim\mu}\,\mathbb{E}_{\tilde I_{\bar S}\sim\nu}\ \mathbb{1}[x(I_S)\text{ is infeasible for }S(I_S\circ\tilde I_{\bar S})]\cdot K\cdot\|G(I_S\circ\tilde I_{\bar S})\|_{Fr}\,,$$
and by Cauchy-Schwarz,
$$\le\ K\cdot\Big(\mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I'\sim\mu}\,\mathbb{E}_{\tilde I_{\bar S}\sim\nu}\ \mathbb{1}[x(I_S)\text{ is infeasible for }S(I_S\circ\tilde I_{\bar S})]\Big)^{1/2}\cdot\Big(\mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I'\sim\mu}\,\mathbb{E}_{\tilde I_{\bar S}\sim\nu}\ \|G(I_S\circ\tilde I_{\bar S})\|^2_{Fr}\Big)^{1/2}\,.$$
The lemma follows by observing that the first term in the product above is exactly the probability that inference fails, which is at most $\varepsilon$. $\square$

Corollary 4.3. If $G\in\mathcal{G}$ is a degree-$D$ polynomial in $I$, then under the assumption that there are no degree-$2D$ distinguishers for $\nu, \mu$,
$$\langle\Lambda, G\rangle_\nu \ \le\ \sqrt{\varepsilon}\cdot K\cdot n^B\cdot\|G\|_{Fr,\nu}\,.$$
Proof. For each fixing of $\tilde I_{\bar S}$, $\|G(I_S\circ\tilde I_{\bar S})\|^2_{Fr}$ is a degree-$2D$ scalar polynomial in $I$. Therefore by Lemma 4.1 we have that
$$\mathbb{E}_{I\sim\mu}\,\|G(I_S\circ\tilde I_{\bar S})\|^2_{Fr} \ \le\ n^B\cdot\mathbb{E}_{I\sim\nu}\,\|G(I_S\circ\tilde I_{\bar S})\|^2_{Fr}\,.$$
Substituting back into the bound in Lemma 4.2, the corollary follows. $\square$
Now, since there are no degree-D matrix distinguishers Q, for each S in the support of Θ we can
apply reasoning similar to Theorem 3.5 to conclude that there is a high-entropy PSD matrix-valued
function PS that matches the degree-D moments of ΛS .
Lemma 4.4. If there are no degree-$D$ matrix distinguishers $Q$ for $\mu, \nu$, then for each $S\sim\Theta$, there exists a solution $P_S$ to Program 3.1 (with the variable $\Lambda := \Lambda_S$) and
$$\|P_S\|_{Fr,\nu} \ \le\ n^{(B+d)/4} \ \le\ n^{B/2}\,. \tag{4.2}$$
This does not follow directly from Theorem 3.5, because a priori a distinguisher for some
specific S may only apply to a small fraction of the support of µ. However, we can show that
Program 3.1 has large value for ΛS only if there is a distinguisher for µ, ν.
Proof. By Claim 3.4, it suffices for us to argue that there is no degree-$D$ matrix polynomial $Q$ which has large inner product with $\Lambda_S$ relative to its Frobenius norm. So suppose, by way of contradiction, that $Q$ is a degree-$D$ matrix polynomial that distinguishes $\Lambda_S$, so that $\langle Q, \Lambda_S\rangle_\nu \ge n^{B+d}$ but $\|Q\|_{Fr,\nu}\le 1$. It follows by the definition of $\Lambda_S$ that
$$n^{B+d} \ \le\ \langle Q, \Lambda_S\rangle_\nu \ =\ \mathbb{E}_{I\sim\nu}\,\mathbb{E}_{I'_{\bar S}\sim\nu}\ \hat\mu(I_S\circ I'_{\bar S})\cdot\big\langle Q(I),\ x(I_S)^{\le d}(x(I_S)^{\le d})^\top\big\rangle \ =\ \mathbb{E}_{I_S\circ I'_{\bar S}\sim\mu}\,\mathbb{E}_{\tilde I_{\bar S}\sim\nu}\ \big\langle Q(I_S\circ\tilde I_{\bar S}),\ x(I_S)^{\le d}(x(I_S)^{\le d})^\top\big\rangle \ \le\ \mathbb{E}_{\mu}\ \lambda^+_{\max}\Big(\mathbb{E}_{\tilde I_{\bar S}\sim\nu}\,Q(I_S\circ\tilde I_{\bar S})\Big)\cdot\big\|x(I_S)^{\le d}\big\|_2^2\,.$$
So we will show that $Q_S(I) = \mathbb{E}_{I'_{\bar S}\sim\nu}\,Q(I_S\circ I'_{\bar S})$ is a degree-$D$ distinguisher for $\mu$. The degree of $Q_S$ is at most $D$, since averaging over settings of the variables cannot increase the degree. Applying our assumption that $\|x(I_S)^{\le d}\|_2^2 \le K \le n^d$, we already have $\mathbb{E}_\mu\,\lambda^+_{\max}(Q_S) \ge n^B$. It remains to show that $\mathbb{E}_\nu\,\lambda^+_{\max}(Q_S)$ is bounded. For this, we use the following fact about the trace.

Fact 4.5 (See e.g. Theorem 2.10 in [CC09]). For a function $f : \mathbb{R}\to\mathbb{R}$ and a symmetric matrix $A$ with eigendecomposition $A = \sum\lambda\cdot vv^\top$, define $f(A) = \sum f(\lambda)\cdot vv^\top$. If $f$ is continuous and convex, then the map $A\mapsto\mathrm{Tr}(f(A))$ is convex for symmetric $A$.

The function $f(t) = (\max\{0,t\})^2$ is continuous and convex over $\mathbb{R}$, so the fact above implies that the map $A\mapsto\|A_+\|^2_{Fr}$ is convex for symmetric $A$. We can take $Q_S$ to be symmetric without loss of generality, as in the argument above we only consider the inner product of $Q_S$ with symmetric matrices. Now we have that
$$\big\|(Q_S(I))_+\big\|^2_{Fr} \ =\ \Big\|\Big(\mathbb{E}_{I'_{\bar S}}\big[Q(I_S\circ I'_{\bar S})\big]\Big)_+\Big\|^2_{Fr} \ \le\ \mathbb{E}_{I'_{\bar S}}\Big\|\big(Q(I_S\circ I'_{\bar S})\big)_+\Big\|^2_{Fr}\,,$$
where the inequality is the definition of convexity. Taking the expectation over $I\sim\nu$ gives us that $\|(Q_S)_+\|^2_{Fr,\nu} \le \|Q_+\|^2_{Fr,\nu} \le 1$, which gives us our contradiction. $\square$

Now, analogous to $\Lambda$, set $P \stackrel{\text{def}}{=} \mathbb{E}_{S\sim\Theta}\,P_S$.
Random Restriction. We will exploit the crucial property that $\Lambda$ and $P$ are averages over functions that depend on subsets of variables. This has the same effect as a random restriction, in that $\langle P, R\rangle_\nu$ essentially depends only on the low-degree part of $R$. Formally, we will show the following lemma.

Lemma 4.6 (Random Restriction). Fix $D, \ell\in\mathbb{N}$. For matrix-valued functions $R : \mathcal{I}\to\mathbb{R}^{\ell\times\ell}$ and a family of functions $\{P_S : \mathcal{I}_S\to\mathbb{R}^{\ell\times\ell}\}_{S\subseteq[N]}$, and a distribution $\Theta$ over subsets of $[N]$,
$$\mathbb{E}_{I\sim\nu}\,\mathbb{E}_{S\sim\Theta}\,\langle P_S(I_S), R(I)\rangle \ \ge\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\langle P_S(I_S), R^{<D}_S(I_S)\rangle\ -\ \rho(D,\Theta)^{1/2}\cdot\Big(\mathbb{E}_{S\sim\Theta}\|P_S\|^2_{Fr,\nu}\Big)^{1/2}\|R\|_{Fr,\nu}\,,$$
where
$$\rho(D,\Theta) \ =\ \max_{\alpha,\ |\alpha|\ge D}\ \Pr_{S\sim\Theta}[\alpha\subseteq S]\,.$$
Proof. We first re-express the left-hand side as
$$\mathbb{E}_{I\sim\nu}\,\mathbb{E}_{S\sim\Theta}\,\langle P_S(I_S), R(I)\rangle \ =\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\langle P_S(I_S), R_S(I_S)\rangle\,,$$
where $R_S(I_S) \stackrel{\text{def}}{=} \mathbb{E}_{I_{\bar S}}[R(I)]$ is obtained by averaging out all coordinates outside $S$. Splitting the function $R_S$ into its low-degree and high-degree parts, $R_S = R_S^{\le D} + R_S^{>D}$, and then applying a Cauchy-Schwarz inequality we get
$$\mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\langle P_S(I_S), R_S(I_S)\rangle \ \ge\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\langle P_S(I_S), R^{<D}_S(I_S)\rangle\ -\ \Big(\mathbb{E}_{S\sim\Theta}\|P_S\|^2_{Fr,\nu}\Big)^{1/2}\cdot\Big(\mathbb{E}_{S\sim\Theta}\|R_S^{>D}\|^2_{Fr,\nu}\Big)^{1/2}\,.$$
Expressing $R^{>D}_S$ in the Fourier basis, we have that over a random choice of $S\sim\Theta$,
$$\mathbb{E}_{S\sim\Theta}\,\|R_S^{>D}\|^2_{Fr,\nu} \ =\ \sum_{\alpha,\ |\alpha|\ge D}\Pr_{S\sim\Theta}[\alpha\subseteq S]\cdot\hat R_\alpha^2 \ \le\ \rho(D,\Theta)\cdot\|R\|^2_{Fr}\,.$$
Substituting into the above inequality, the conclusion follows. $\square$
Equality Constraints. Since Λ is close to satisfying all the equality constraints G of the SDP,
the function P approximately satisfies the low-degree part of G. Specifically, we can prove the
following.
Lemma 4.7. Let $k \ge \deg_{\mathrm{inst}}(G_j)$ for all $G_j\in S$. With $P$ defined as above and under the conditions of Theorem 2.6, for any function $G\in\mathcal{G}$,
$$\langle P, G^{\le D}\rangle_\nu \ \le\ \frac{2}{n^{2B}}\,\|G\|_{Fr,\nu}\,.$$
Proof. Recall that $\mathcal{G} = \mathrm{span}\{\chi_S\cdot G_j \mid j\in[m],\ S\subseteq[N]\}$ and let $\Pi_{\mathcal{G}}$ be the orthogonal projection onto $\mathcal{G}$. Now, since $G\in\mathcal{G}$,
$$G^{\le D} \ =\ (\Pi_{\mathcal{G}}G)^{\le D} \ =\ (\Pi_{\mathcal{G}}G^{\le D-2k})^{\le D} + (\Pi_{\mathcal{G}}G^{>D-2k})^{\le D}\,. \tag{4.3}$$
Now we make the following claim regarding the effect of projection onto the ideal $\mathcal{G}$ on the degree of a polynomial.

Claim 4.8. For every polynomial $Q$, $\deg(\Pi_{\mathcal{G}}Q) \le \deg(Q) + 2k$. Furthermore, for all $\alpha$, $\Pi_{\mathcal{G}}Q^{>\alpha}$ has no monomials of degree $\le \alpha - k$.
Proof. To establish the first part of the claim it suffices to show that $\Pi_{\mathcal{G}}Q \in \mathrm{span}\{\chi_S\cdot G_j \mid |S|\le\deg(Q)+k\}$, since $\deg(G_j)\le k$ for all $j\in[m]$. To see this, observe that $\Pi_{\mathcal{G}}Q$ is orthogonal to every $\chi_S\cdot G_j$ with $|S| > \deg(Q)+k$:
$$\langle\Pi_{\mathcal{G}}Q,\ \chi_S\cdot G_j\rangle_\nu \ =\ \langle Q,\ \Pi_{\mathcal{G}}\chi_S\cdot G_j\rangle_\nu \ =\ \langle Q,\ \chi_S\cdot G_j\rangle_\nu \ =\ \langle QG_j,\ \chi_S\rangle_\nu \ =\ 0\,,$$
where the final equality is because $\deg(\chi_S) > \deg(G_j) + \deg(Q)$. On the other hand, for every subset $S$ with $\deg(\chi_S) \le \alpha - k$,
$$\langle\Pi_{\mathcal{G}}Q^{>\alpha},\ \chi_S\cdot G_j\rangle \ =\ \langle Q^{>\alpha},\ \Pi_{\mathcal{G}}\chi_S\cdot G_j\rangle \ =\ \langle Q^{>\alpha},\ \chi_S\cdot G_j\rangle \ =\ 0\,,\quad\text{since } \alpha > \deg(G_j) + \deg(\chi_S)\,.$$
This implies that $\Pi_{\mathcal{G}}Q^{>\alpha} \in \mathrm{span}\{\chi_S\cdot G_j \mid |S| > \alpha - k\}$, which implies that $\Pi_{\mathcal{G}}Q^{>\alpha}$ has no monomials of degree $\le \alpha - k$. $\square$

Incorporating the above claim into (4.3), we have that
$$G^{\le D} \ =\ \Pi_{\mathcal{G}}G^{\le D-2k} + (\Pi_{\mathcal{G}}G^{>D-2k})^{[D-3k,\,D]}\,,$$
where the superscript $[D-3k, D]$ denotes the degree range. Now,
$$\langle P, G^{\le D}\rangle_\nu \ =\ \langle P,\ \Pi_{\mathcal{G}}G^{\le D-2k}\rangle_\nu + \langle P,\ (\Pi_{\mathcal{G}}G^{>D-2k})^{[D-3k,D]}\rangle_\nu\,.$$
Since $\Pi_{\mathcal{G}}G^{\le D-2k}$ is of degree at most $D$ we can replace $P$ by $\Lambda$:
$$=\ \langle\Lambda,\ \Pi_{\mathcal{G}}G^{\le D-2k}\rangle_\nu + \langle P,\ (\Pi_{\mathcal{G}}G^{>D-2k})^{[D-3k,D]}\rangle_\nu\,.$$
Bounding the first term using Corollary 4.3 with an $n^B$ bound on $K$,
$$\le\ \Big(\tfrac{1}{n^{8B}}\Big)^{1/2}\cdot n^B\cdot\big(n^B\cdot\|\Pi_{\mathcal{G}}G^{\le D-2k}\|_{Fr,\nu}\big) + \langle P,\ (\Pi_{\mathcal{G}}G^{>D-2k})^{[D-3k,D]}\rangle\,,$$
and for the latter term we use Lemma 4.6,
$$\le\ \frac{1}{n^{2B}}\,\|\Pi_{\mathcal{G}}G^{\le D-2k}\|_{Fr,\nu} + \frac{1}{n^{4B}}\Big(\mathbb{E}_S\|P_S\|^2_{Fr,\nu}\Big)^{1/2}\|G\|_{Fr,\nu}\,,$$
where we have used the fact that $(\Pi_{\mathcal{G}}G^{>D-2k})^{[D-3k,D]}$ is high degree. By properties of orthogonal projections, $\|\Pi_{\mathcal{G}}G^{>D-2k}\|_{Fr,\nu} \le \|G^{>D-2k}\|_{Fr,\nu} \le \|G\|_{Fr,\nu}$. Along with the bound on $\|P_S\|_{Fr,\nu}$ from (4.2), this implies the claim of the lemma. $\square$
Finally, we have all the ingredients to complete the proof of Theorem 2.6.
Proof of Theorem 2.6. Suppose we sample an instance $I\sim\nu$; by assumption, with high probability the SoS SDP relaxation is infeasible. In particular, this implies that there is a degree-$d$ sum-of-squares refutation of the form
$$-1 \ =\ a_I(x) + \sum_{j\in[m]} g^I_j(x)\cdot q^I_j(x)\,,$$
where $a_I$ is a sum of squares of polynomials of degree at most $2d$ in $x$, and $\deg(q^I_j) + \deg(g^I_j) \le 2d$. Let $A_I\in\mathbb{R}^{[n]^{\le d}\times[n]^{\le d}}$ be the matrix of coefficients for $a_I(x)$ on input $I$, and let $G_I$ be defined similarly for $\sum_{j\in[m]} g^I_j(x)\cdot q^I_j(x)$. We can rewrite the sum-of-squares refutation as a matrix equality,
$$-1 \ =\ \langle X^{\le d}, A_I\rangle + \langle X^{\le d}, G_I\rangle\,,$$
where $G_I\in\mathcal{G}$, the span of the equality constraints of the SDP.
Define $s : \mathcal{I}\to\{0,1\}$ as
$$s(I) \stackrel{\text{def}}{=} \mathbb{1}\big[\exists\text{ a degree-}2d\text{ SoS refutation for }S(I)\big]\,.$$
By assumption, $\mathbb{E}_{I\sim\nu}[s(I)] \ge 1 - \frac{1}{n^{8B}}$. Define matrix-valued functions $A, G : \mathcal{I}\to\mathbb{R}^{[n]^{\le d}\times[n]^{\le d}}$ by setting
$$A(I) \stackrel{\text{def}}{=} s(I)\cdot A_I\,,\qquad G(I) \stackrel{\text{def}}{=} s(I)\cdot G_I\,.$$
With this notation, we can rewrite the SoS-refutation identity as a polynomial identity in $X$ and $I$,
$$-s(I) \ =\ \langle X^{\le d}, A(I)\rangle + \langle X^{\le d}, G(I)\rangle\,.$$
Let $e_{\emptyset,\emptyset}$ denote the $[n]^{\le d}\times[n]^{\le d}$ matrix with the entry corresponding to $(\emptyset,\emptyset)$ equal to 1, while the remaining entries are zero. We can rewrite the above equality as
$$-\langle X^{\le d},\ s(I)\cdot e_{\emptyset,\emptyset}\rangle \ =\ \langle X^{\le d}, A(I)\rangle + \langle X^{\le d}, G(I)\rangle$$
for all $I$ and formal variables $X$.
Now, let $P = \mathbb{E}_{S\sim\Theta}P_S$, where each $P_S$ is obtained from Program 3.1 with $\Lambda_S$. Substituting $X^{\le d}$ with $P(I)$ and taking an expectation over $I$,
$$-\langle P,\ s(I)\cdot e_{\emptyset,\emptyset}\rangle_\nu \ =\ \langle P, A\rangle_\nu + \langle P, G\rangle_\nu \tag{4.4}$$
$$\ge\ \langle P, G\rangle_\nu\,, \tag{4.5}$$
where the inequality follows because $A, P\succeq 0$. We will show that the above equation is a contradiction by proving that the LHS is less than $-0.9$, while the right-hand side is at least $-0.5$.
First, the right-hand side of (4.4) can be bounded using Lemma 4.7:
$$\langle P, G\rangle_\nu \ =\ \mathbb{E}_{I\sim\nu}\,\mathbb{E}_{S\sim\Theta}\,\langle P_S(I_S), G(I)\rangle \ \ge\ \mathbb{E}_{I\sim\nu}\,\mathbb{E}_{S\sim\Theta}\,\langle P_S(I_S), G^{\le D}(I)\rangle - \frac{1}{n^{4B}}\cdot\Big(\mathbb{E}_S\|P_S\|^2_{Fr,\nu}\Big)^{1/2}\|G\|_{Fr,\nu} \qquad\text{(random restriction, Lemma 4.6)}$$
$$\ge\ -\frac{2}{n^{2B}}\cdot\|G\|_{Fr,\nu} - \frac{1}{n^{4B}}\Big(\mathbb{E}_S\|P_S\|^2_{Fr,\nu}\Big)^{1/2}\cdot\|G\|_{Fr,\nu} \qquad\text{(using Lemma 4.7)}$$
$$\ge\ -\frac12\,,$$
where the last step used the bound on $\|P_S\|_{Fr,\nu}$ from (4.2) and the bound on $\|G\|_{Fr,\nu}$ following from the $n^B$ bound assumed on the SoS proofs in Theorem 2.6.
Now the negation of the left-hand side of (4.4) is
$$\mathbb{E}_{I\sim\nu}\,\langle P(I),\ s(I)\cdot e_{\emptyset,\emptyset}\rangle \ \ge\ \mathbb{E}_{I\sim\nu}\big[P_{\emptyset,\emptyset}(I)\cdot 1\big] - \mathbb{E}\big[(s-1)^2\big]^{1/2}\cdot\|P\|_{Fr,\nu}\,.$$
The latter term can be simplified by noticing that the expectation of the square of a $0/1$ indicator equals the expectation of the indicator, which is in this case at most $\frac{1}{n^{8B}}$ by assumption. Also, since $1$ is a constant, the expectations of $P_{\emptyset,\emptyset}$ and $\Lambda_{\emptyset,\emptyset}$ against it are equal:
$$\ge\ \mathbb{E}_{I\sim\nu}\big[\Lambda_{\emptyset,\emptyset}(I)\cdot 1\big] - \frac{1}{n^{4B}}\cdot\|P\|_{Fr,\nu} \ =\ 1 - \frac{1}{n^{4B}}\cdot\|P\|_{Fr,\nu} \quad(\text{using (4.1)})$$
$$\ge\ 1 - \frac{1}{n^{3B}} \quad(\text{using (4.2)})\,.$$
We have the desired contradiction in (4.4). $\square$
4.1 Handling Inequalities
Suppose the polynomial system Program 2.2 includes inequalities of the form $h(I,x)\ge 0$. A natural approach would be to introduce a slack variable $z$ and set $h(I,x) - z^2 = 0$. Now we can view the vector $(x,z)$, consisting of the original variables along with the slack variables, as the hidden planted solution. The proof of Theorem 2.6 can be carried out as described earlier in this section with this setup. However, in many cases of interest, the inclusion of slack variables invalidates the robust inference property. This is because, although a feasible solution $x$ can be recovered from a sub-instance $I_S$, the value of the corresponding slack variables could potentially depend on $I_{\bar S}$. For instance, in a random CSP, the value of the objective function on the assignment $x$ generated from $I_S$ depends on all the constraints outside of $S$ too.
The proof described above is modified as follows.
• As earlier, construct $\Lambda_S$ using only the robust inference property of the original variables $x$, and the corresponding matrix functions $P_S$.
• Convert each inequality of the form $h_i(I,x)\ge 0$ into an equality by setting $h_i(I,x) = z_i^2$.
• Now define a pseudo-distribution $\tilde\Lambda_S(I_S)$ over original variables $x$ and slack variables $z$ as follows. It is convenient to describe the pseudo-distribution in terms of the corresponding pseudo-expectation operator. Specifically, if $x(I_S)$ is a feasible solution for Program 2.2 then define
$$\tilde{\mathbb{E}}[z^\sigma x^\alpha] \stackrel{\text{def}}{=} \begin{cases} 0 & \text{if } \sigma_i \text{ is odd for some } i\,,\\ \prod_{i\in\sigma}\big(h_i(I, x(I_S))\big)^{\sigma_i/2}\cdot x(I_S)_\alpha & \text{otherwise.}\end{cases}$$
Intuitively, the pseudo-distribution picks the sign for each $z_i$ uniformly at random, independently of all other variables. Therefore, all moments involving an odd power of $z_i$ are zero. On the other hand, the moments of even powers of $z_i$ are picked so that the equalities $h_i(I,x) = z_i^2$ are satisfied. It is easy to check that $\tilde\Lambda$ is PSD matrix-valued, satisfies (4.1), and satisfies all the equalities.
• While $\Lambda_S$ in the original proof was a function of $I_S$, $\tilde\Lambda_S$ is not. However, the key observation is that $\tilde\Lambda_S$ has degree at most $k\cdot d$ in the variables outside of $S$: each function $h_i(I, x(I_S))$ has degree at most $k$ in the instance variables, and the entries of $\tilde\Lambda_S(I_S)$ are a product of at most $d$ of these polynomials.
• The main ingredient of the proof that differs from the case of equalities is the random restriction lemma, which we outline below. The error in the random restriction is multiplied by $D^{dk/2} \le n^{B/2}$; however this does not substantially change our results, since Theorem 2.6 requires $\rho(D,\Theta) \le n^{-8B}$, which leaves us enough slack to absorb this factor (and in every application $\rho(D,\Theta) = p^{O(D)}$ for some $p < 1$ sufficiently small that we meet the requirement that $D^{dk}\rho(D-dk,\Theta)$ is monotone non-increasing in $D$).
Lemma 4.9 (Random Restriction for Inequalities). Fix $D, \ell\in\mathbb{N}$. Consider a matrix-valued function $R : \mathcal{I}\to\mathbb{R}^{\ell\times\ell}$ and a family of functions $\{P_S : \mathcal{I}\to\mathbb{R}^{\ell\times\ell}\}_{S\subseteq[N]}$ such that each $P_S$ has degree at most $dk$ in the variables outside of $S$. If $\Theta$ is a distribution over subsets of $[N]$ with
$$\rho(D,\Theta) \ =\ \max_{\alpha,\ |\alpha|\ge D}\ \Pr_{S\sim\Theta}[\alpha\subseteq S]\,,$$
and with the additional requirement that $D^{dk}\cdot\rho(D-dk,\Theta)$ is monotone non-increasing in $D$, then
$$\mathbb{E}_{I\sim\nu}\,\mathbb{E}_{S\sim\Theta}\,\langle P_S(I_S), R(I)\rangle \ \ge\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\langle P_S(I_S), \tilde R^{<D}_S(I_S)\rangle\ -\ D^{dk/2}\cdot\rho(D-dk,\Theta)^{1/2}\cdot\Big(\mathbb{E}_{S\sim\Theta}\|P_S\|^2_{2,\nu}\Big)^{1/2}\|R\|_{Fr,\nu}\,.$$
Proof. We have
$$\mathbb{E}_{I\sim\nu}\,\mathbb{E}_{S\sim\Theta}\,\langle P_S(I_S), R(I)\rangle \ =\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\langle P_S(I_S), \tilde R_S(I)\rangle\,,$$
where $\tilde R_S(I)$ is now obtained by averaging out all monomials whose degree outside of $S$ exceeds $dk$. Writing $\tilde R_S = \tilde R_S^{\le D} + \tilde R_S^{>D}$ and applying a Cauchy-Schwarz inequality we get
$$\mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\langle P_S(I_S), \tilde R_S(I)\rangle \ \ge\ \mathbb{E}_{S\sim\Theta}\,\mathbb{E}_{I\sim\nu}\,\langle P_S(I_S), \tilde R^{<D}_S(I)\rangle\ -\ \Big(\mathbb{E}_{S\sim\Theta}\|P_S\|^2_{Fr,\nu}\Big)^{1/2}\cdot\Big(\mathbb{E}_{S\sim\Theta}\|\tilde R_S^{>D}\|^2_{Fr,\nu}\Big)^{1/2}\,.$$
Over a random choice of $S$,
$$\mathbb{E}_{S\sim\Theta}\,\|\tilde R_S^{>D}\|^2_{Fr,\nu} \ =\ \sum_{\alpha,\ |\alpha|\ge D}\Pr_{S\sim\Theta}\big[|\alpha\cap\bar S|\le dk\big]\cdot\hat R_\alpha^2 \ \le\ D^{dk}\cdot\rho(D-dk,\Theta)\cdot\|R\|^2_{Fr}\,,$$
where we have used that $D^{dk}\rho(D-dk,\Theta)$ is a monotone non-increasing function of $D$. Substituting this into the earlier inequality, the lemma follows. $\square$
5 Applications to Classical Distinguishing Problems
In this section, we verify that the conditions of Theorem 2.6 hold for a variety of canonical distinguishing problems. We’ll rely upon the (simple) proofs in Appendix A, which show that the ideal
term of the SoS proof is well-conditioned.
Problem 5.1 (Planted clique with clique of size $n^\delta$). Given a graph $G = (V, E)$ on $n$ vertices, determine whether it comes from:
• Uniform Distribution: the uniform distribution over graphs on $n$ vertices ($G(n, \frac12)$).
• Planted Distribution: the uniform distribution over $n$-vertex graphs with a clique of size at least $n^\delta$.
The usual polynomial program for planted clique in variables $x_1, \ldots, x_n$ is:
$$\mathrm{obj} \le \sum_i x_i\,,\qquad x_i^2 = x_i \ \ \forall i\in[n]\,,\qquad x_i x_j = 0 \ \ \forall (i,j)\notin E\,.$$

Lemma 5.2. Theorem 2.6 applies to the above planted clique program, so long as $\mathrm{obj} \le n^{\delta-\varepsilon}$ for any $\varepsilon > \frac{c\cdot d}{D-6d}$ for a fixed constant $c$.

Proof. For planted clique, rather than the multiplicity of instance variables, our notion of "instance degree" of a monomial $I_\alpha$ will be the number of distinct vertices incident on the edges in $\alpha$. The proof of Theorem 2.6 proceeds identically with this notion of degree, but we will be able to achieve better bounds on $D$ relative to $d$.
In this case, the instance degree of the SoS relaxation is $k = 2$. We have from Corollary A.3 that the degree-$d$ SoS refutation is well-conditioned, with numbers bounded by $n^{c_1\cdot d}$ for some constant $c_1 \ge 2$. Define $B = c_1 d \ge dk$.
Our subsampling distribution $\Theta$ is the distribution given by including every vertex with probability $\rho$, producing an induced subgraph of $\approx\rho n$ vertices. For any set of edges $\alpha$ of instance degree at least $D - 6d$,
$$\Pr_{S\sim\Theta}[\alpha\subseteq S] \ \le\ \rho^{D-6d}\,,$$
since the instance degree corresponds to the number of vertices incident on $\alpha$.
This subsampling operation satisfies the subsample inference condition for the clique constraints with probability 1, since a clique in any subgraph of $G$ is also a clique in $G$. Also, if there is a clique of size $n^\delta$ in $G$, then by a Chernoff bound
$$\Pr_{S\sim\Theta}\big[\exists\text{ clique of size} \ge (1-\beta)\rho n^\delta \text{ in } S\big] \ \ge\ 1 - \exp\Big(-\frac{\beta^2\rho n^\delta}{2}\Big)\,.$$
Choosing $\beta = \sqrt{\frac{10B\log n}{\rho n^\delta}}$, this gives us that $\Theta$ gives $n^{-10B}$-robust inference for the planted clique problem, so long as $\mathrm{obj} \le \rho n^\delta/2$. Choosing $\rho = n^{-\varepsilon}$ for $\varepsilon$ so that
$$\rho^{D-6d} \le n^{-8B} \ \Rightarrow\ \varepsilon \ge \frac{c_2 d}{D-6d}$$
for some constant $c_2$, all of the conditions required by Theorem 2.6 now hold. $\square$
Problem 5.3 (Random CSP Refutation at clause density $\alpha$). Given an instance of a Boolean $k$-CSP with predicate $P : \{\pm1\}^k\to\{\pm1\}$ on $n$ variables with clause set $\mathcal{C}$, determine whether it comes from:
• Uniform Distribution: $m\approx\alpha n$ constraints are generated as follows. Each $k$-tuple of variables $S\in[n]^k$ is independently, with probability $p = \alpha n^{-k+1}$, given the constraint $P(x_S\circ z_S) = b_S$ (where $\circ$ is the entry-wise multiplication operation) for a uniformly random $z_S\in\{\pm1\}^k$ and $b_S\in\{\pm1\}$.
• Planted Distribution: a planted solution $y\in\{\pm1\}^n$ is chosen, and then $m\approx\alpha n$ constraints are generated as follows. Each $k$-tuple of variables $S\in[n]^k$ is independently, with probability $p = \alpha n^{-k+1}$, given the constraint $P(x_S\circ z_S) = b_S$ for a uniformly random $z_S\in\{\pm1\}^k$, but $b_S = P(y_S\circ z_S)$ with probability $1-\delta$ and $b_S$ is uniformly random otherwise.
The usual polynomial program for random CSP refutation in variables $x_1,\ldots,x_n$ is:
$$\mathrm{obj} \ \le\ \sum_{S\in[n]^k}\mathbb{1}[\exists\text{ constraint on }S]\cdot\frac{1 + P(x_S\circ z_S)\cdot b_S}{2}\,,\qquad x_i^2 = 1\ \ \forall i\in[n]\,.$$
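For intuition, the following sketch samples a planted instance as above for one concrete choice of predicate, $k$-XOR ($P$ = parity); the predicate choice and all names are illustrative, not fixed by the problem statement.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def planted_kxor(n, k, alpha, delta):
    """Sample a planted k-XOR instance: clauses (S, z, b) meaning P(x_S * z) = b."""
    y = rng.choice([-1, 1], size=n)              # planted solution
    p = alpha * n ** (-k + 1)                    # per-tuple inclusion probability
    clauses = []
    for S in itertools.permutations(range(n), k):
        if rng.random() < p:
            z = rng.choice([-1, 1], size=k)
            if rng.random() < 1 - delta:
                b = np.prod(y[list(S)] * z)      # satisfied by y (P = parity)
            else:
                b = rng.choice([-1, 1])          # noisy clause
            clauses.append((S, z, b))
    return y, clauses

def satisfied_fraction(x, clauses):
    ok = sum(np.prod(x[list(S)] * z) == b for S, z, b in clauses)
    return ok / max(len(clauses), 1)

y, clauses = planted_kxor(n=50, k=3, alpha=5.0, delta=0.1)
print(len(clauses), satisfied_fraction(y, clauses))   # about 1 - delta/2 of the clauses
```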
Lemma 5.4. If $\alpha \ge 1$, then Theorem 2.6 applies to the above random $k$-CSP refutation problem, so long as $\mathrm{obj} \le (1-\delta-\varepsilon)m$ for any $\varepsilon > \frac{c\cdot d\log n}{D-3d}$, where $c$ is a fixed constant.

Proof. In this case, the instance degree of the SoS relaxation is 1. We have from Corollary A.3 that the degree-$d$ SoS refutation is well-conditioned, with numbers bounded by $n^{c_1 d}$ for some constant $c_1$. Define $B = c_1 d$.
Our subsampling distribution $\Theta$ is the distribution given by including each constraint independently with probability $\rho$, producing an induced CSP instance on $n$ variables with approximately $\rho m$ constraints. Since each constraint survives the subsampling with probability $\rho$, for any $\alpha\in\binom{\mathcal{C}}{D-3d}$,
$$\Pr_{S\sim\Theta}[\alpha\subseteq S] \ \le\ \rho^{D-3d}\,.$$
The subsample inference property clearly holds for the Boolean constraints $\{x_i^2 = 1\}_{i\in[n]}$, as a Boolean assignment to the variables is valid regardless of the number of constraints. Before subsampling there are at least $(1-\delta)m$ satisfied constraints, and so letting $O_S$ be the number of constraints satisfied in the sub-instance $S$, we have by a Chernoff bound
$$\Pr_{S\sim\Theta}\big[O_S \ge (1-\beta)\cdot\rho(1-\delta)m\big] \ \ge\ 1 - \exp\Big(-\frac{\beta^2\rho(1-\delta)m}{2}\Big)\,.$$
Choosing $\beta = \sqrt{\frac{10B\log n}{\rho(1-\delta)m}} = o(1)$ (valid with overwhelming probability since $\alpha \ge 1$ implies $\mathbb{E}[m] \ge n$), we have that $\Theta$ gives $n^{-10B}$-robust inference for the random CSP refutation problem, so long as $\mathrm{obj} \le (1-o(1))\rho(1-\delta)m$. Choosing $\rho = (1-\varepsilon)$ so that
$$\rho^{D-3d} \le n^{-8B} \ \Rightarrow\ \varepsilon \ge \frac{c_2 d\log n}{D-3d}$$
for some constant $c_2$, the conclusion follows (after making appropriate adjustments to the constant). $\square$
Problem 5.5 (Community detection with average degree $d$ (stochastic block model)). Given a graph $G = (V, E)$ on $n$ vertices, determine whether it comes from:
• Uniform Distribution: $G(n, b/n)$, the distribution over graphs in which each edge is included independently with probability $b/n$.
• Planted Distribution: the stochastic block model—there is a partition of the vertices into two equally-sized sets, $Y$ and $Z$, and the edge $(u,v)$ is present with probability $a/n$ if $u, v\in Y$ or $u, v\in Z$, and with probability $(b-a)/n$ otherwise.
Let $x_1,\ldots,x_n$ be variables corresponding to each vertex's community membership, and let $A$ be the adjacency matrix of the graph. The canonical polynomial optimization problem is
$$\mathrm{obj} \le x^\top Ax\,,\qquad x_i^2 = 1\ \ \forall i\in[n]\,,\qquad \sum_i x_i = 0\,.$$

Lemma 5.6. Theorem 2.6 applies to the community detection problem so long as $\mathrm{obj} \le (1-\varepsilon)\frac{(2a-b)}{4}n$, for $\varepsilon > \frac{c\cdot d\log n}{D-3d}$ where $c$ is a fixed constant.

Proof. The degree of the SoS relaxation in the instance variables is 1. Since we have only hypercube and balancedness constraints, we have from Corollary A.3 that the SoS ideal matrix is well-conditioned, with no number in the SoS refutation larger than $n^{c_1 d}$ for some constant $c_1$. Let $B = c_1 d$.
Consider the solution $x$ which assigns $x_i = 1$ to $i\in Y$ and $x_i = -1$ to $i\in Z$. Our subsampling operation is to remove every edge independently with probability $1-\rho$. The resulting distribution $\Theta$ and the corresponding restriction of $x$ clearly satisfy the Booleanity and balancedness constraints with probability 1. Since each edge is included independently with probability $\rho$, for any $\alpha\in\binom{E}{D-3d}$,
$$\Pr_{S\sim\Theta}[\alpha\subseteq S] \ \le\ \rho^{D-3d}\,.$$
In the sub-instance, the expected value (over the choice of planted instance and over the choice of sub-instance) of the restricted solution $x$ is
$$\frac{\rho a}{n}\cdot\Big(\binom{|Y|}{2}+\binom{|Z|}{2}\Big) - \rho\cdot\frac{b-a}{n}\cdot|Y|\cdot|Z| \ =\ \frac{(2a-b)\rho n}{4} - \rho a\,,$$
and by a Chernoff bound, the value in the sub-instance is within a $(1-\beta)$-factor with probability $1 - n^{-10B}$ for $\beta = \sqrt{\frac{10B\log n}{n}}$. On resampling the edges outside the sub-instance from the uniform distribution, this value can only decrease by at most $(1-\rho)(1+\beta)nb/2$ w.h.p. over the choice of the outside edges.
If we set $\rho = 1 - \varepsilon(2a-b)/10b$, then $\rho^{D-3d} \le n^{-8B}$ for $\varepsilon \ge \frac{c_2(2a-b)\log n}{D-3d}$ for some constant $c_2$, while the objective value is at least $(1-\varepsilon)\frac{(2a-b)n}{4}$. The conclusion follows (after making appropriate adjustments to the constant). $\square$
Problem 5.7 (Densest-k-subgraph). Given a graph $G = (V, E)$ on $n$ vertices, determine whether it comes from:
• Uniform Distribution: $G(n, p)$.
• Planted Distribution: a graph from $G(n, p)$ with an instance of $G(k, q)$ planted on a random subset of $k$ vertices, $p < q$.
Letting $A$ be the adjacency matrix, the usual polynomial program for densest-$k$-subgraph in variables $x_1,\ldots,x_n$ is:
$$\mathrm{obj} \le x^\top Ax\,,\qquad x_i^2 = x_i\ \ \forall i\in[n]\,,\qquad \sum_i x_i = k\,.$$

Lemma 5.8. When $k^2(p+q) \gg d\log n$, Theorem 2.6 applies to the densest-$k$-subgraph problem with $\mathrm{obj} \le (1-\varepsilon)(p+q)\binom{k}{2}$ for any $\varepsilon > \frac{c\cdot d\log n}{D-3d}$ for a fixed constant $c$.

Proof. The degree of the SoS relaxation in the instance variables is 1. We have from Corollary A.3 that the SoS proof has no values larger than $n^{c_1 d}$ for a constant $c_1$; fix $B = c_1 d$.
Our subsampling operation is to include each edge independently with probability $\rho$, and take the subgraph induced by the included edges. Clearly, the Booleanity and sparsity constraints are preserved by this subsampling distribution $\Theta$. Since each edge is included independently with probability $\rho$, for any $\alpha\in\binom{E}{D-3d}$,
$$\Pr_{S\sim\Theta}[\alpha\subseteq S] \ \le\ \rho^{D-3d}\,.$$
Now, the expected objective value (over the instance and the sub-sampling) is at least $\rho(p+q)\binom{k}{2}$, and applying a Chernoff bound, we have that the probability the sub-sampled instance has value less than $(1-\beta)\rho(p+q)\binom{k}{2}$ is at most $n^{-10B}$ if we choose $\beta = \sqrt{\frac{10B\log n}{\rho(p+q)\binom{k}{2}}}$ (which is valid since we assumed that $d\log n \ll (p+q)k^2$). Further, a dense subgraph on a subset of the edges is still dense when more edges are added back, so we have the $n^{-10B}$-robust inference property.
Thus, choosing $\rho = (1-\varepsilon)$ and setting
$$\rho^{D-3d} \le n^{-8B} \ \Rightarrow\ \varepsilon \ge \frac{c_2 d\log n}{D-3d}$$
for some constant $c_2$, which concludes the proof (after making appropriate adjustments to the constant). $\square$

Problem 5.9 (Tensor PCA). Given an order-$k$ tensor in $(\mathbb{R}^n)^{\otimes k}$, determine whether it comes from:
• Uniform Distribution: each entry of the tensor sampled independently from $N(0,1)$.
• Planted Distribution: a spiked tensor, $T = \lambda\cdot v^{\otimes k} + G$, where $v$ is sampled uniformly from $\{\pm\frac{1}{\sqrt n}\}^n$, and where $G$ is a random tensor with each entry sampled independently from $N(0,1)$.
Given the tensor $T$, the canonical program for the tensor PCA problem in variables $x_1,\ldots,x_n$ is:
$$\mathrm{obj} \le \langle x^{\otimes k}, T\rangle\,,\qquad \|x\|_2^2 = 1\,.$$
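As a quick illustration of the two distributions (a minimal sketch for $k = 3$; all names and parameters are illustrative), the planted tensor gives the canonical objective value roughly $\lambda$ at the planted vector, while under the uniform distribution the same evaluation is a standard Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

def spiked_tensor(n, lam, k=3):
    """Return (T, v) with T = lam * v^{tensor k} + G and v uniform in {+-1/sqrt(n)}^n."""
    v = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
    signal = lam * np.einsum('i,j,k->ijk', v, v, v)
    noise = rng.standard_normal((n, n, n))
    return signal + noise, v

n, lam = 50, 20.0
T, v = spiked_tensor(n, lam)
planted_value = np.einsum('ijk,i,j,k->', T, v, v, v)   # <v^{tensor 3}, T> ~ lam + N(0, 1)
G = rng.standard_normal((n, n, n))
uniform_value = np.einsum('ijk,i,j,k->', G, v, v, v)   # ~ N(0, 1) since ||v^{tensor 3}|| = 1
print(planted_value, uniform_value)
```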
Lemma 5.10. For $\lambda n^{-\varepsilon} \gg \log n$, Theorem 2.6 applies to the tensor PCA problem with $\mathrm{obj} \le \lambda n^{-\varepsilon}$ for any $\varepsilon > \frac{c\cdot d}{D-3d}$ for a fixed constant $c$.

Proof. The degree of the SoS relaxation in the instance variables is 1. Since the entries of the noise component of the tensor are standard normal variables, with exponentially good probability over the input tensor $T$ we will have no entry of magnitude greater than $n^d$. This, together with Corollary A.3, gives us that except with exponentially small probability the SoS proof will have no values exceeding $n^{c_1 d}$ for a fixed constant $c_1$.
Our subsampling operation is to set to zero every entry of $T$ independently with probability $1-\rho$, obtaining a sub-instance $T'$ on the nonzero entries. Also, for any $\alpha\in\binom{[n]^k}{D-3d}$,
$$\Pr_{S\sim\Theta}[\alpha\subseteq S] \ \le\ \rho^{D-3d}\,.$$
This subsampling operation clearly preserves the planted solution's unit sphere constraint. Additionally, let $R$ be the operator that restricts a tensor to the nonzero entries. We have that $\langle R(\lambda\cdot v^{\otimes k}), v^{\otimes k}\rangle$ has expectation $\lambda\cdot\rho$, since every entry of $v^{\otimes k}$ has magnitude $n^{-k/2}$. Applying a Chernoff bound, this quantity will be at least $(1-\beta)\lambda\rho$ with probability at least $1 - n^{-10B}$ if we choose $\beta = \sqrt{\frac{10B\log n}{\lambda\rho}}$.
It remains to address the noise introduced by $G$ and by resampling all the entries outside of the sub-instance $T'$. Each of these entries is a standard normal entry. The quantity $\langle(\mathrm{Id}-R)(N), v^{\otimes k}\rangle$ is a sum over at most $n^k$ i.i.d. Gaussian entries each with standard deviation $n^{-k/2}$ (since that is the magnitude of $(v^{\otimes k})_\alpha$). The entire quantity is thus a Gaussian random variable with mean 0 and variance 1, and therefore with probability at least $1 - n^{-10B}$ this quantity will not exceed $\sqrt{10B\log n}$. So long as $\sqrt{10B\log n} \ll \lambda\rho$, the signal term will dominate, and the solution will have value at least $\lambda\rho/2$.
Now, we set $\rho = n^{-\varepsilon}$ so that
$$\rho^{D-3d} \le n^{-8B} \ \Rightarrow\ \varepsilon \ge \frac{2c_1 d}{D-3d}\,,$$
which concludes the proof (after making appropriate adjustments to the constant $c_1$). $\square$

Problem 5.11 (Sparse PCA). Given an $n\times m$ matrix $M\in\mathbb{R}^{n\times m}$, determine whether it comes from:
• Uniform Distribution: each entry of the matrix sampled independently from $N(0,1)$.
• Planted Distribution: a random vector with $k$ non-zero entries $v\in\{0, \pm1/\sqrt k\}^n$ is chosen, and then the $i$th column of the matrix is sampled independently by taking $s_i v + \gamma_i$ for a uniformly random sign $s_i\in\{\pm1\}$ and a standard Gaussian vector $\gamma_i\sim N(0, \mathrm{Id})$.
The canonical program for the sparse PCA problem in variables $x_1,\ldots,x_n$ is:
$$\mathrm{obj} \le \|M^\top x\|_2^2\,,\qquad x_i^2 = x_i\ \ \forall i\in[n]\,,\qquad \|x\|_2^2 = k\,.$$
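A minimal sketch of the planted data-generating process and of the gap in the program objective at the planted solution (names and parameter values are illustrative; the evaluation point $y = \sqrt{k}\,v$ is the one used in the proof below):

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_pca_planted(n, m, k):
    """Columns are s_i * v + gamma_i with v a k-sparse +-1/sqrt(k) vector."""
    v = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    v[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
    signs = rng.choice([-1.0, 1.0], size=m)
    return np.outer(v, signs) + rng.standard_normal((n, m)), v

n, m, k = 200, 400, 25
M, v = sparse_pca_planted(n, m, k)
y = np.sqrt(k) * v                          # the planted k-sparse solution
planted_obj = np.linalg.norm(M.T @ y) ** 2  # about 2*k*m in expectation
W = rng.standard_normal((n, m))             # a sample from the uniform distribution
uniform_obj = np.linalg.norm(W.T @ y) ** 2  # about k*m in expectation
print(planted_obj / (k * m), uniform_obj / (k * m))
```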
Lemma 5.12. For $kn^{-\varepsilon/2} \gg \log n$, Theorem 2.6 applies to the sparse PCA problem with $\mathrm{obj} \le k^{2-\varepsilon}m$ for any $\varepsilon > \frac{c\cdot d}{D-6d}$ for a fixed constant $c$.

Proof. The degree of the SoS relaxation in the instance variables is 2. Since the entries of the noise are standard normal variables, with exponentially good probability over the input matrix $M$ we will have no entry of magnitude greater than $n^d$. This, together with Corollary A.3, gives us that except with exponentially small probability the SoS proof will have no values exceeding $n^{c_1 d}$ for a fixed constant $c_1$.
Our subsampling operation is to set to zero every entry of $M$ independently with probability $1-\rho$, obtaining a sub-instance on the nonzero entries. Also, for any $\alpha\in\binom{M}{D-6d}$,
$$\Pr_{S\sim\Theta}[\alpha\subseteq S] \ \le\ \rho^{D-6d}\,.$$
This subsampling operation clearly preserves the constraints on the solution variables.
We take our sub-instance solution $y = \sqrt{k}\,v$, which is feasible. Let $R$ be the subsampling operator that zeros out a set of columns. On subsampling, and then resampling the zeroed-out columns from the uniform distribution, we can write the resulting $\tilde M$ as
$$\tilde M^\top = R(sv^\top) + G^\top\,,$$
where $G$ is a random Gaussian matrix. Therefore, the objective value obtained by the solution $y = \sqrt{k}\,v$ is
$$\tilde M^\top y \ =\ \sqrt{k}\cdot R(sv^\top)v + \sqrt{k}\cdot G^\top v\,.$$
The first term is a vector $u_{\mathrm{signal}}$ with $m$ entries, each of which is a sum of $k$ Bernoulli random variables, all of the same sign, with probability $\rho$ of being nonzero. The second term is a vector $u_{\mathrm{noise}}$ with $m$ entries, each of them an independent Gaussian variable with variance bounded by $k$. We have that
$$\mathbb{E}_\Theta\big[\|u_{\mathrm{signal}}\|_2^2\big] \ =\ (\rho k)^2 m\,,$$
and by Chernoff bounds this concentrates within a $(1-\beta)$ factor with probability $1 - n^{-10B}$ if we take $\beta = \sqrt{\frac{10B\log n}{(\rho k)^2 m}}$.
The expectation of $\langle u_{\mathrm{signal}}, u_{\mathrm{noise}}\rangle$ is zero, and applying similar concentration arguments we have that with probability $1 - n^{-10B}$, $|\langle u_{\mathrm{signal}}, u_{\mathrm{noise}}\rangle| \le (1+\beta)\rho k$. Taking the union bound over these events and applying Cauchy-Schwarz, we have that
$$\|R(M)y\|_2^2 \ \ge\ (\rho k)^2 m - 2km \ =\ \rho^2 k^2 m - 2km\,;$$
so long as $\rho k \gg 1$, the first term dominates.
Now, we set $\rho = n^{-\varepsilon}$ for $\varepsilon < 1$ so that
$$\rho^{D-6d} \le n^{-8B} \ \Rightarrow\ \varepsilon \ge \frac{c_2 d}{D-6d}$$
for some constant $c_2$, which concludes the proof. $\square$
Remark 5.13. For tensor PCA and sparse PCA, the underlying distributions were Gaussian. Applying Theorem 2.6 in these contexts yields the existence of distinguishers that are low-degree in a
non-standard sense. Specifically, the degree of a monomial will be the number of distinct variables
in it, irrespective of the powers to which they are raised.
6 Exponential lower bounds for PCA problems
In this section we give an overview of the proofs of our SoS lower bounds for the tensor and sparse
PCA problems. We begin by showing how Conjecture 1.2 predicts such a lower bound in the tensor
PCA setting. Following this we state the key lemmas to prove the exponential lower bounds; since
these lemmas can be proved largely by techniques present in the work of Barak et al. on planted
clique [BHK+ 16], we leave the details to a forthcoming full version of the present paper.
6.1 Predicting SoS lower bounds from low-degree distinguishers for Tensor PCA
In this section we demonstrate how to predict, using Conjecture 1.2, that when $\lambda \ll n^{3/4-\varepsilon}$ for $\varepsilon > 0$, SoS algorithms cannot solve Tensor PCA. This prediction is borne out in Theorem 1.4.

Theorem 6.1. Let $\mu$ be the distribution on $\mathbb{R}^{n\otimes n\otimes n}$ which places a standard Gaussian in each entry. Let $\nu$ be the density of the Tensor PCA planted distribution with respect to $\mu$, where we take the planted vector $v$ to have each entry uniformly chosen from $\{\pm\frac{1}{\sqrt n}\}$.7 If $\lambda \le n^{3/4-\varepsilon}$, there is no degree-$n^{o(1)}$ polynomial $p$ with
$$\mathbb{E}_\mu\, p(A) = 0\,,\qquad \mathbb{E}_{\text{planted}}\, p(A) \ \ge\ n^{\Omega(1)}\cdot\big(\mathbb{E}_\mu\, p(A)^2\big)^{1/2}\,.$$
We sketch the proof of this theorem. The theorem follows from two claims.
Claim 6.2.
$$\max_{\deg p\,\le\, n^{o(1)},\ \mathbb{E}_\mu p(T)=0}\ \frac{\mathbb{E}_\nu\, p(T)}{\big(\mathbb{E}_\mu\, p(T)^2\big)^{1/2}} \ =\ \Big(\mathbb{E}_\mu\big(\nu^{\le d}(T)-1\big)^2\Big)^{1/2}\,, \tag{6.1}$$
where $\nu^{\le d}$ is the orthogonal projection (with respect to $\mu$) of the density $\nu$ to the degree-$d$ polynomials. Note that the last quantity is just the 2-norm, or the variance, of the truncation to low-degree polynomials of the density $\nu$ of the planted distribution.

Claim 6.3. $\big(\mathbb{E}_\mu(\nu^{\le d}(T)-1)^2\big)^{1/2} \ll 1$ when $\lambda \le n^{3/4-\varepsilon}$ for $\varepsilon \ge \Omega(1)$ and $d = n^{o(1)}$.
The theorem follows immediately. We sketch proofs of the claims in order.
Sketch of proof for Claim 6.2. By definition of $\nu$, the maximization is equivalent to maximizing $\mathbb{E}_\mu\,\nu(T)\cdot p(T)$ among all $p$ of degree $d = n^{o(1)}$ with $\mathbb{E}_\mu\, p(T)^2 = 1$ and $\mathbb{E}_\mu\, p(T) = 0$. Standard Fourier analysis shows that this maximum is achieved by the orthogonal projection of $\nu - 1$ into the span of degree-$d$ polynomials.
To make this more precise, recall that the Hermite polynomials provide an orthonormal basis for real-valued polynomials under the multivariate Gaussian distribution. (For an introduction to Hermite polynomials, see the book [O'D14].) The tensor $T\sim\mu$ is an $n^3$-dimensional multivariate Gaussian. For a (multi-)set $W\subseteq[n]^3$, let $H_W$ be the $W$-th Hermite polynomial, so that $\mathbb{E}_\mu\, H_W(T)H_{W'}(T) = \delta_{WW'}$.
Then the best $p$ (ignoring normalization momentarily) will be the function
$$p(A) \ =\ \nu^{\le d}(A) - 1 \ =\ \sum_{1\le|W|\le d}\Big(\mathbb{E}_{T\sim\mu}\,\nu(T)H_W(T)\Big)\cdot H_W(A)\,.$$
Here $\mathbb{E}_\mu\,\nu(T)H_W(T) = \hat\nu(W)$ is the $W$-th Fourier coefficient of $\nu$. What value for (6.1) is achieved by this $p$? Again by standard Fourier analysis, in the numerator we have
$$\mathbb{E}_\nu\, p(T) \ =\ \mathbb{E}_\nu\big(\nu^{\le d}(T)-1\big) \ =\ \mathbb{E}_\mu\,\nu(T)\cdot\big(\nu^{\le d}(T)-1\big) \ =\ \mathbb{E}_\mu\big(\nu^{\le d}(T)-1\big)^2\,.$$
Comparing this to the denominator, the maximum value of (6.1) is $\big(\mathbb{E}_\mu(\nu^{\le d}(T)-1)^2\big)^{1/2}$. This is nothing more than the 2-norm of the projection of $\nu - 1$ onto degree-$d$ polynomials! $\square$

The following fact, used to prove Claim 6.3, is an elementary computation with Hermite polynomials.

Fact 6.4. Let $W\subseteq[n]^3$. Then $\hat\nu(W) = \lambda^{|W|}n^{-3|W|/2}$ if $W$, thought of as a 3-uniform hypergraph, has all even degrees, and is 0 otherwise.

7This does not substantially modify the problem but it will make calculations in this proof sketch more convenient.

To see that this calculation is straightforward, note that $\hat\nu(W) = \mathbb{E}_\mu\,\nu(T)H_W(T) = \mathbb{E}_\nu\, H_W(T)$, so it is enough to understand the expectations of the Hermite polynomials under the planted distribution.

Sketch of proof for Claim 6.3. Working in the Hermite basis (as described above), we get $\mathbb{E}_\mu(\nu^{\le d}(T)-1)^2 = \sum_{1\le|W|\le d}\hat\nu(W)^2$. For the sake of exposition, we will restrict attention in the sum to $W$ in which no element appears with multiplicity larger than 1 (other terms can be treated similarly).
What is the contribution to $\sum_{1\le|W|\le d}\hat\nu(W)^2$ of terms $W$ with $|W| = t$? By the fact above, to contribute a nonzero term to the sum, $W$, considered as a 3-uniform hypergraph, must have even degrees. So, if it has $t$ hyperedges, it contains at most $3t/2$ nodes. There are $n^{3t/2}$ choices for these nodes, and, having chosen them, at most $t^{O(t)}$ 3-uniform hypergraphs on those nodes. Hence,
$$\sum_{1\le|W|\le d}\hat\nu(W)^2 \ \le\ \sum_{t=1}^{d} n^{3t/2}\, t^{O(t)}\, \lambda^{2t}\, n^{-3t}\,.$$
So long as $\lambda^2 \le n^{3/2-\varepsilon}$ for some $\varepsilon = \Omega(1)$ and $t \le d \le n^{O(\varepsilon)}$, this is $o(1)$. $\square$
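A quick numeric sanity check of this bound (treating the $t^{O(t)}$ factor as $t^t$, which is an illustrative choice of the hidden constant): each term of the sum is tiny once $\lambda^2 \le n^{3/2-\varepsilon}$ and $d$ is small compared to $n^{\varepsilon}$.

```python
import math

def claim_6_3_bound(n, lam, d, c=1.0):
    """Evaluate sum_{t=1}^{d} n^{3t/2} * t^{c*t} * lam^{2t} * n^{-3t}, using logs for stability."""
    total = 0.0
    for t in range(1, d + 1):
        log_term = (1.5 * t - 3 * t) * math.log(n) + c * t * math.log(max(t, 1)) \
                   + 2 * t * math.log(lam)
        total += math.exp(log_term)
    return total

n, eps = 10**9, 0.1
lam = n ** (0.75 - eps)                 # below the predicted n^{3/4} threshold
print(claim_6_3_bound(n, lam, d=10))    # well below 1 for these parameters
```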
6.2 Main theorem and proof overview for Tensor PCA
In this section we give an overview of the proof of Theorem 1.4. The techniques involved in proving
the main lemmas are technical refinements of techniques used in the work of Barak et al. on SoS
lower bounds for planted clique [BHK+ 16]; we therefore leave full proofs to a forthcoming full
version of this paper.
To state and prove our main theorem on tensor PCA it is useful to define a Boolean version of
the problem. For technical convenience we actually prove an SoS lower bound for this problem;
then standard techniques (see Section C) allow us to prove the main theorem for Gaussian tensors.
Problem 6.5 (k-Tensor PCA, signal-strength $\lambda$, Boolean version). Distinguish the following two distributions on $\Omega_k \stackrel{\text{def}}{=} \{\pm1\}^{\binom{n}{k}}$.
• the uniform distribution: $A\sim\Omega_k$ chosen uniformly at random.
• the planted distribution: choose $v\sim\{\pm1\}^n$ and let $B = v^{\otimes k}$. Sample $A$ by re-randomizing every coordinate of $B$ with probability $1 - \lambda n^{-k/2}$.

We show that the natural SoS relaxation of this problem suffers from a large integrality gap when $\lambda$ is slightly less than $n^{k/4}$, even when the degree of the SoS relaxation is $n^{\Omega(1)}$. (When $\lambda \gg n^{k/4-\varepsilon}$, algorithms with running time $2^{n^{O(\varepsilon)}}$ are known for $k = O(1)$ [RM14, HSS15, HSSS16, BGL16, RRS16].)

Theorem 6.6. Let $k = O(1)$. For $A\in\Omega_k$, let
$$\mathrm{SoS}_d(A) \stackrel{\text{def}}{=} \max_{\tilde{\mathbb{E}}}\ \tilde{\mathbb{E}}\langle x^{\otimes k}, A\rangle \quad\text{s.t. } \tilde{\mathbb{E}}\text{ is a degree-}d\text{ pseudoexpectation satisfying }\{\|x\|^2 = 1\}\,.$$
There is a constant $c$ so that for every small enough $\varepsilon > 0$, if $d \le n^{c\cdot\varepsilon}$, then for large enough $n$,
$$\Pr_{A\sim\Omega_k}\big\{\mathrm{SoS}_d(A) \ge n^{k/4-\varepsilon}\big\} \ \ge\ 1 - o(1) \qquad\text{and}\qquad \mathbb{E}_{A\sim\Omega_k}\ \mathrm{SoS}_d(A) \ \ge\ n^{k/4-\varepsilon}\,.$$
Moreover, the latter also holds for $A$ with iid entries from $N(0,1)$.8

To prove the theorem we will exhibit, for a typical sample $A$ from the uniform distribution, a degree-$n^{\Omega(\varepsilon)}$ pseudodistribution $\tilde{\mathbb{E}}$ which satisfies $\{\|x\|^2 = 1\}$ and has $\tilde{\mathbb{E}}\langle x^{\otimes k}, A\rangle \ge n^{k/4-\varepsilon}$. The following lemma ensures that the pseudo-distribution we exhibit will be PSD.

Lemma 6.7. Let $d\in\mathbb{N}$ and let $N_d = \sum_{s\le d} n(n-1)\cdots(n-(s-1))$ be the number of $\le d$-tuples with unique entries from $[n]$. There is a constant $\varepsilon^*$ independent of $n$ such that for any $\varepsilon < \varepsilon^*$ also independent of $n$, the following is true. Let $\lambda = n^{k/4-\varepsilon}$. Let $\mu(A)$ be the density of the following distribution (with respect to the uniform distribution on $\Omega = \{\pm1\}^{\binom{n}{k}}$).
The Planted Distribution: choose $v\sim\{\pm1\}^n$ uniformly. Let $B = v^{\otimes k}$. Sample $A$ by
• replacing every coordinate of $B$ with a random draw from $\{\pm1\}$ independently with probability $1 - \lambda n^{-k/2}$,
• then choosing a subset $S\subseteq[n]$ by including every coordinate with probability $n^{-\varepsilon}$,
• then replacing every entry of $B$ with some index outside $S$ independently with a uniform draw from $\{\pm1\}$.
Let $\Lambda : \Omega\to\mathbb{R}^{N_d\times N_d}$ be the following function:
$$\Lambda(A) \ =\ \mu(A)\cdot\mathbb{E}_{v|A}\ v^{\otimes\le 2d}\,.$$
Here we abuse notation and denote by $v^{\otimes\le 2d}$ the matrix indexed by tuples of length $\le d$ with unique entries from $[n]$. For $D\in\mathbb{N}$, let $\Lambda^{\le D}$ be the projection of $\Lambda$ onto the degree-$D$ real-valued polynomials on $\{\pm1\}^{\binom{n}{k}}$. There is a universal constant $C$ so that if $Cd/\varepsilon < D < n^{\varepsilon/C}$, then for large enough $n$,
$$\Pr_{A\sim\Omega}\big\{\Lambda^{\le D}(A) \succeq 0\big\} \ \ge\ 1 - o(1)\,.$$

For a tensor $A$, the moment matrix of the pseudodistribution we exhibit will be $\Lambda^{\le D}(A)$. We will need it to satisfy the constraint $\{\|x\|^2 = 1\}$. This follows from the following general lemma. (The lemma is much more general than what we state here, and uses only the vector space structure of the space of real matrices and matrix-valued functions.)

Lemma 6.8. Let $k\in\mathbb{N}$. Let $V$ be a linear subspace of $\mathbb{R}^{N\times M}$. Let $\Omega = \{\pm1\}^{\binom{n}{k}}$. Let $\Lambda : \Omega\to V$. Let $\Lambda^{\le D}$ be the entrywise orthogonal projection of $\Lambda$ to polynomials of degree at most $D$. Then for every $A\in\Omega$, the matrix $\Lambda^{\le D}(A)\in V$.
Proof. The function $\Lambda$ is an element of the vector space $\mathbb{R}^{N\times M}\otimes\mathbb{R}^\Omega$. The projection $\Pi_V : \mathbb{R}^{N\times M}\to V$ and the projection $\Pi^{\le D}$ from $\mathbb{R}^\Omega$ to the degree-$D$ polynomials commute as projections on $\mathbb{R}^{N\times M}\otimes\mathbb{R}^\Omega$, since they act on separate tensor coordinates. It follows that $\Lambda^{\le D}\in V\otimes(\mathbb{R}^\Omega)^{\le D}$ takes values in $V$. $\square$
8For technical reasons we do not prove a tail bound type statement for Gaussian A, but we conjecture that this is also
true.
Last, we will require a couple of scalar functions of $\Lambda^{\le D}$ to be well concentrated.

Lemma 6.9. Let $\Lambda, d, \varepsilon, D$ be as in Lemma 6.7. The function $\Lambda^{\le D}$ satisfies
• $\Pr_{A\sim\Omega}\big\{\Lambda^{\le D}_{\emptyset,\emptyset}(A) = 1\pm o(1)\big\} \ge 1 - o(1)$. (Here $\Lambda_{\emptyset,\emptyset} = 1$ is the upper-left-most entry of $\Lambda$.)
• $\Pr_{A\sim\Omega}\big\{\langle\Lambda^{\le D}(A), A\rangle = (1\pm o(1))\cdot n^{3k/4-\varepsilon}\big\} \ge 1 - o(1)$. (Here we are abusing notation and write $\langle\Lambda^{\le D}(A), A\rangle$ for the inner product of the part of $\Lambda^{\le D}$ indexed by monomials of degree $k$ with $A$.)
The Boolean case of Theorem 6.6 follows from combining the lemmas. The Gaussian case can
be proved in a black-box fashion from the Boolean case following the argument in Section C.
The proofs of all the lemmas in this section follow analogous lemmas in the work of Barak et
al. on planted clique [BHK+ 16]; we defer them to the full version of the present work.
6.3 Main theorem and proof overview for sparse PCA
In this section we prove the following main theorem. Formally, the theorem shows that with high
probability for a random n × n matrix A, even high-degree SoS relaxations are unable to certify
that no sparse vector v has large quadratic form hv, Avi.
Theorem 6.10 (Restatement of Theorem 1.6). If A ∈ ℝ^{n×n}, let
    SoS_{d,k}(A) = max_{Ẽ} Ẽ⟨x, Ax⟩   s.t. Ẽ is degree d and satisfies { x_i³ = x_i , ‖x‖² = k } .
There are absolute constants c, ε* > 0 so that for every ρ ∈ (0, 1) and ε ∈ (0, ε*), if k = n^ρ, then for d ≤ n^{c·ε},
    Pr_{A∼{±1}^{\binom{n}{2}}} { SoS_{d,k}(A) ≥ min(n^{1/2−ε} k, n^{ρ−ε} k) } ≥ 1 − o(1)
and
    E_{A∼{±1}^{\binom{n}{2}}} SoS_{d,k}(A) ≥ min(n^{1/2−ε} k, n^{ρ−ε} k) .
Furthermore, the latter is true also if A is symmetric with iid entries from N(0, 1).9
We turn to some discussion of the theorem statement. First of all, though it is technically
convenient for A in the theorem statement above to be a ±1 matrix, the entries may be replaced by
standard Gaussians (see Section C).
Remark 6.11 (Relation to the spiked-Wigner model of sparse principal component analysis). To get
some intuition for the theorem statement, it is useful to return to a familiar planted problem: the
spiked-Wigner model of sparse principal component analysis. Let W be a symmetric matrix with iid entries from N(0, 1), and let v be a random k-sparse unit vector with entries {±1/√k, 0}. Let B = W + λvv^⊺. The problem is to distinguish between a single sample from B and a sample from W. There are two main algorithms for this problem, both captured by the SoS hierarchy. The first, applicable when λ ≫ √n, is vanilla PCA: the top eigenvalue of B will be larger than the top eigenvalue of W. The second, applicable when λ ≫ k, is diagonal thresholding: the diagonal entries of B which correspond to nonzero coordinates will be noticeably large. The theorem statement above (transferred to the Gaussian setting, though this has little effect) shows that once λ is well outside these parameter regimes, i.e. when λ < n^{1/2−ε}, k^{1−ε} for arbitrarily small ε > 0, even degree-n^{Ω(ε)} SoS programs do not distinguish between B and W.
^9 For technical reasons we do not prove a tail bound type statement for Gaussian A, but we conjecture that this is also true.
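The following small numerical sketch (ours, not from the paper; the function name and default parameters are hypothetical) illustrates the two detection statistics discussed above on a simulated spiked-Wigner instance.

```python
import numpy as np

def spiked_wigner_statistics(n=2000, k=40, lam=80.0, rng=None):
    """Toy illustration of Remark 6.11: B = W + lam * v v^T with a k-sparse spike.

    Reports the two test statistics: the top eigenvalue (vanilla PCA, informative
    when lam >> sqrt(n)) and the largest diagonal entry (diagonal thresholding,
    informative when lam >> k), for both the spiked and the null matrix.
    """
    rng = np.random.default_rng() if rng is None else rng
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2.0)                    # symmetric Gaussian noise

    v = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    v[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
    B = W + lam * np.outer(v, v)

    return {
        "top_eig_null": np.linalg.eigvalsh(W)[-1],   # about 2*sqrt(n)
        "top_eig_spiked": np.linalg.eigvalsh(B)[-1],
        "max_diag_null": W.diagonal().max(),         # about sqrt(2*log n)
        "max_diag_spiked": B.diagonal().max(),       # planted diagonals shift by lam/k
    }
```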
Remark 6.12 (Interpretation as an integrality gap). A second interpretation of the theorem statement,
independent of any planted problem, is as a strong integrality gap for random instances for the
problem of maximizing a quadratic form over k-sparse vectors. Consider the actual maximum of
⟨x, Ax⟩ for random ({±1} or Gaussian) A over k-sparse unit vectors x. There are roughly 2^{k log n} points in a 1/2-net for such vectors, meaning that by standard arguments,
    max_{‖x‖ = 1, x is k-sparse} ⟨x, Ax⟩ ≤ O(√(k log n)) .
With the parameters of the theorem, this means that the integrality gap of the degree-n^{Ω(ε)} SoS relaxation is at least min(n^{ρ/2−ε}, n^{1/2−ρ/2−ε}) when k = n^ρ.
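As a rough numerical sanity check of the √(k log n) scale (our illustration, not an argument from the paper), one can lower-bound the true maximum by sampling random k-sparse sign vectors; even this crude estimate stays on the √(k log n) scale, far below the n^{Ω(1)}·k value attained by the SoS relaxation.

```python
import numpy as np

def sparse_quadform_lower_estimate(n=400, k=20, trials=20000, rng=None):
    """Crude Monte Carlo lower estimate of max <x, A x> over k-sparse unit vectors.

    Only random k-sparse sign vectors are tried, so the true maximum is larger,
    but the estimate already sits on the O(sqrt(k log n)) scale of Remark 6.12.
    """
    rng = np.random.default_rng() if rng is None else rng
    A = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
    A = A + A.T                                     # symmetric +-1 off-diagonal, zero diagonal
    best = -np.inf
    for _ in range(trials):
        support = rng.choice(n, size=k, replace=False)
        x = np.zeros(n)
        x[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
        best = max(best, float(x @ A @ x))
    return best, np.sqrt(k * np.log(n))
```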
Remark 6.13 (Relation to spiked-Wishart model). Theorem 1.6 most closely concerns the spiked-Wigner model of sparse PCA; this refers to independence of the entries of the matrix A. Often, sparse PCA is instead studied in the (perhaps more realistic) spiked-Wishart model, where the input is m samples x_1, ..., x_m from an n-dimensional Gaussian vector N(0, Id + λ·vv^⊤), where v is a unit-norm k-sparse vector. Here the question is: as a function of the sparsity k, the ambient dimension n, and the signal strength λ, how many samples m are needed to recover the vector v? The natural approach to recovering v in this setting is to solve a convex relaxation of the problem of maximizing the quadratic form of the empirical covariance M = Σ_{i≤m} x_i x_i^⊺ over k-sparse unit vectors (the maximization problem itself is NP-hard even to approximate [CPR16]).
Theoretically, one may apply our proof technique for Theorem 1.6 directly to the spiked-Wishart model, but this carries the expense of substantial technical complication. We may however make intelligent guesses about the behavior of SoS relaxations for the spiked-Wishart model on the basis of Theorem 1.6 alone. As in the spiked-Wigner model, there are essentially two known algorithms to recover a planted sparse vector v in the spiked-Wishart model: vanilla PCA and diagonal thresholding [DM14b]. We conjecture that, as in the spiked-Wigner model, the SoS hierarchy requires degree n^{Ω(1)} to improve the number of samples required by these algorithms by any polynomial factor. Concretely, considering the case λ = 1 for simplicity, we conjecture that there are constants c, ε* such that for every ε ∈ (0, ε*), if m ≤ min(k^{2−ε}, n^{1−ε}) and x_1, ..., x_m ∼ N(0, Id) are iid, then with high probability for every ρ ∈ (0, 1) if k = n^ρ,
    SoS_{d,k}( Σ_{i≤m} x_i x_i^⊺ ) ≥ min(n^{1−ε} k, k^{2−ε})
for all d ≤ n^{c·ε}.
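For readers who want to experiment with the spiked-Wishart setting of this conjecture, the following sketch (ours; the function name and defaults are hypothetical) generates the m samples and the empirical covariance matrix to which the conjectured bound refers.

```python
import numpy as np

def spiked_wishart_covariance(n=500, k=25, m=400, lam=1.0, rng=None):
    """Sketch of the spiked-Wishart input of Remark 6.13 (lam = 1 case).

    Draws m samples x_i ~ N(0, Id + lam * v v^T) with v a k-sparse unit vector and
    returns the unnormalized empirical covariance M = sum_i x_i x_i^T.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    v[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)

    z = rng.standard_normal(m)                      # spike coefficients
    X = rng.standard_normal((m, n)) + np.sqrt(lam) * z[:, None] * v[None, :]
    M = X.T @ X                                     # empirical covariance (unnormalized)
    return M, v
```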
Lemmas for Theorem 1.6. Our proof of Theorem 1.6 is very similar to the analogous proof for
Tensor PCA, Theorem 6.6. We state the analogues of Lemma 6.7 and Lemma 6.9. Lemma 6.8 can
be used unchanged in the sparse PCA setting.
The main lemma, analogous to Lemma 6.7 is as follows.
Lemma 6.14. Let d ∈ ℕ and let N_d = Σ_{s≤d} n(n−1)···(n−(s−1)) be the number of ≤ d-tuples with unique entries from [n]. Let µ(A) be the density of the following distribution on n × n matrices A with respect to the uniform distribution on {±1}^{\binom{n}{2}}.
Planted distribution: Let k = k(n) ∈ ℕ and λ = λ(n) ∈ ℝ, and γ > 0, and assume λ ≤ k. Sample a uniformly random k-sparse vector v ∈ ℝ^n with entries ±1, 0. Form the matrix B = vv^⊤. For each nonzero entry of B independently, replace it with a uniform draw from {±1} with probability 1 − λ/k (maintaining the symmetry B = B^⊤). For each zero entry of B, replace it with a uniform draw from {±1} (maintaining the same symmetry). Finally, choose every i ∈ [n] with probability n^{−γ} independently; for those indices that were not chosen, replace every entry in the corresponding row and column of B with random ±1 entries.^{10} Output the resulting matrix A. (We remark that this matrix is a Boolean version of the more standard spiked-Wigner model B + λvv^⊤ where B has iid standard normal entries and v is a random k-sparse unit vector with entries from {±1/√k, 0}.)
Let Λ : {±1}^{\binom{n}{2}} → ℝ^{N_d × N_d} be the following function
    Λ(A) = µ(A) · E_{v|A} v^{⊗≤2d}
where the expectation is with respect to the planted distribution above. For D = D(n) ∈ ℕ, let Λ^{≤D} be the entrywise projection of Λ into the Boolean functions of degree at most D.
There are constants C, ε* > 0 such that for every γ > 0 and ρ ∈ (0, 1) and every ε ∈ (0, ε*) (all independent of n), if k = n^ρ and λ ≤ min{n^{ρ−ε}, n^{1/2−ε}}, and if Cd/ε < D < n^{ε/C}, then for large enough n
    Pr_{A∼{±1}^{\binom{n}{2}}} { Λ^{≤D}(A) ⪰ 0 } ≥ 1 − o(1) .
Remark 6.15. We make a few remarks about the necessity of some of the assumptions above.
A useful intuition is that the function Λ^{≤D}(A) is (with high probability) positive-valued when the parameters ρ, ε, γ of the planted distribution are such that there is no degree-D polynomial f : {±1}^{\binom{n}{2}} → ℝ whose values distinguish a typical sample from the planted distribution from a null model: a random symmetric matrix with iid entries.
At this point it is useful to consider a more familiar planted model, which the lemma above mimics. Let W be an n × n symmetric matrix with iid entries from N(0, 1). Let v ∈ ℝ^n be a k-sparse unit vector, with entries in {±1/√k, 0}. Let A = W + λvv^⊺. Notice that if λ ≫ k, then diagonal thresholding on the matrix A identifies the nonzero coordinates of v. (This is the analogue of the covariance-thresholding algorithm in the spiked-Wishart version of sparse PCA.) On the other hand, if λ ≫ √n then (since typically ‖W‖ ≈ √n), ordinary PCA identifies v. The lemma captures computational hardness for the problem of distinguishing a single sample from A from a sample from the null model W when both diagonal thresholding and ordinary PCA fail.
Next we state the analogue of Lemma 6.9.
Lemma 6.16. Let Λ, d, k, λ, γ, D be as in Lemma 6.14. The function Λ^{≤D} satisfies
• Pr_{A∼{±1}^{\binom{n}{2}}} { Λ^{≤D}_{∅,∅}(A) = 1 ± o(1) } ≥ 1 − o(1).
• Pr_{A∼{±1}^{\binom{n}{2}}} { ⟨Λ^{≤D}(A), A⟩ = (1 ± o(1)) · λ n^{Θ(−γ)} } ≥ 1 − o(1).
^{10} This additional n^{−γ} noising step is a technical convenience which has the effect of somewhat decreasing the number of nonzero entries of v and decreasing the signal strength λ.
References
[AK97]
Noga Alon and Nabil Kahale, A spectral technique for coloring random 3-colorable graphs,
SIAM J. Comput. 26 (1997), no. 6, 1733–1748. 1
[AKS98]
Noga Alon, Michael Krivelevich, and Benny Sudakov, Finding a large hidden clique in a
random graph, Random Struct. Algorithms 13 (1998), no. 3-4, 457–466. 1, 4, 8
[AOW15a] Sarah R. Allen, Ryan O’Donnell, and David Witmer, How to refute a random CSP, 2015
IEEE 56th Annual Symposium on Foundations of Computer Science—FOCS 2015,
IEEE Computer Soc., Los Alamitos, CA, 2015, pp. 689–708. MR 3473335 1
[AOW15b] Sarah R. Allen, Ryan O’Donnell, and David Witmer, How to refute a random CSP, FOCS,
IEEE Computer Society, 2015, pp. 689–708. 8
[BBH+ 12]
Boaz Barak, Fernando G. S. L. Brandão, Aram Wettroth Harrow, Jonathan A. Kelner, David Steurer, and Yuan Zhou, Hypercontractivity, sum-of-squares proofs, and their
applications, STOC, ACM, 2012, pp. 307–326. 1, 6, 8
[BCC+ 10]
Aditya Bhaskara, Moses Charikar, Eden Chlamtac, Uriel Feige, and Aravindan Vijayaraghavan, Detecting high log-densities—an O(n 1/4 ) approximation for densest k-subgraph,
STOC’10—Proceedings of the 2010 ACM International Symposium on Theory of Computing, ACM, New York, 2010, pp. 201–210. MR 2743268 8
[BCK15]
Boaz Barak, Siu On Chan, and Pravesh K. Kothari, Sum of squares lower bounds from
pairwise independence [extended abstract], STOC’15—Proceedings of the 2015 ACM Symposium on Theory of Computing, ACM, New York, 2015, pp. 97–106. MR 3388187
8
[BGG+ 16]
Vijay V. S. P. Bhattiprolu, Mrinal Kanti Ghosh, Venkatesan Guruswami, Euiwoong Lee,
and Madhur Tulsiani, Multiplicative approximations for polynomial optimization over the
unit sphere, Electronic Colloquium on Computational Complexity (ECCC) 23 (2016),
185. 1, 6, 8
[BGL16]
Vijay V. S. P. Bhattiprolu, Venkatesan Guruswami, and Euiwoong Lee, Certifying
random polynomials over the unit sphere via sum of squares hierarchy, CoRR abs/1605.00903
(2016). 1, 2, 9, 31
[BHK+ 16]
Boaz Barak, Samuel B. Hopkins, Jonathan A. Kelner, Pravesh Kothari, Ankur Moitra,
and Aaron Potechin, A nearly tight sum-of-squares lower bound for the planted clique
problem, FOCS, IEEE Computer Society, 2016, pp. 428–437. 1, 4, 5, 8, 9, 29, 31, 33
[BKS14]
Boaz Barak, Jonathan A. Kelner, and David Steurer, Rounding sum-of-squares relaxations,
STOC, ACM, 2014, pp. 31–40. 1
[BKS15]
, Dictionary learning and tensor decomposition via the sum-of-squares method, STOC,
ACM, 2015, pp. 143–151. 1
[BKS17]
Boaz Barak, Pravesh Kothari, and David Steurer, Quantum entanglement, sum of squares,
and the log rank conjecture, CoRR abs/1701.06321 (2017). 1
[BM16]
Boaz Barak and Ankur Moitra, Noisy tensor completion via the sum-of-squares hierarchy,
COLT, JMLR Workshop and Conference Proceedings, vol. 49, JMLR.org, 2016, pp. 417–
445. 1, 8
[BMVX16] Jess Banks, Cristopher Moore, Roman Vershynin, and Jiaming Xu, Information-theoretic
bounds and phase transitions in clustering, sparse pca, and submatrix localization, CoRR
abs/1607.05222 (2016). 6
[BR13a]
Quentin Berthet and Philippe Rigollet, Complexity theoretic lower bounds for sparse principal component detection, COLT, JMLR Workshop and Conference Proceedings, vol. 30,
JMLR.org, 2013, pp. 1046–1066. 7
[BR13b]
Quentin Berthet and Philippe Rigollet, Computational lower bounds for sparse pca, COLT
(2013). 2
[BS14]
Boaz Barak and David Steurer, Sum-of-squares proofs and the quest toward optimal algorithms, CoRR abs/1404.5236 (2014). 6
[CC09]
Eric Carlen, Trace inequalities and quantum entropy: An introductory course, 2009. 18
[CPR16]
Siu On Chan, Dimitris Papailliopoulos, and Aviad Rubinstein, On the approximability
of sparse PCA, COLT, JMLR Workshop and Conference Proceedings, vol. 49, JMLR.org,
2016, pp. 623–646. 7, 34
[DM14a]
Yash Deshpande and Andrea Montanari, Information-theoretically optimal sparse PCA,
ISIT, IEEE, 2014, pp. 2197–2201. 7
[DM14b]
, Sparse PCA via covariance thresholding, NIPS, 2014, pp. 334–342. 2, 34
[DM15]
, Improved sum-of-squares lower bounds for hidden clique and hidden submatrix
problems, COLT, JMLR Workshop and Conference Proceedings, vol. 40, JMLR.org,
2015, pp. 523–562. 9
[DX13]
Feng Dai and Yuan Xu, Spherical harmonics, arXiv preprint arXiv:1304.2585 (2013). 44
[Fil16]
Yuval Filmus, An orthogonal basis for functions over a slice of the boolean hypercube, Electr.
J. Comb. 23 (2016), no. 1, P1.23. 44
[FM16]
Zhou Fan and Andrea Montanari, How well do local algorithms solve semidefinite programs?, CoRR abs/1610.05350 (2016). 8
[GM15]
Rong Ge and Tengyu Ma, Decomposing overcomplete 3rd order tensors using sum-of-squares
algorithms, APPROX-RANDOM, LIPIcs, vol. 40, Schloss Dagstuhl - Leibniz-Zentrum
fuer Informatik, 2015, pp. 829–849. 1
[Gri01a]
Dima Grigoriev, Complexity of positivstellensatz proofs for the knapsack, Computational
Complexity 10 (2001), no. 2, 139–154. 8
[Gri01b]
, Linear lower bound on degrees of positivstellensatz calculus proofs for the parity,
Theor. Comput. Sci. 259 (2001), no. 1-2, 613–622. 8
[GW94]
Michel X. Goemans and David P. Williamson, .879-approximation algorithms for MAX
CUT and MAX 2sat, STOC, ACM, 1994, pp. 422–431. 1
[Har70]
Richard A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis. 1
[HKP15]
Samuel B. Hopkins, Pravesh K. Kothari, and Aaron Potechin, Sos and planted clique:
Tight analysis of MPW moments at all degrees and an optimal lower bound at degree four,
CoRR abs/1507.05230 (2015). 9
[HL09]
Christopher J. Hillar and Lek-Heng Lim, Most tensor problems are NP hard, CoRR
abs/0911.1393 (2009). 6
[HSS15]
Samuel B. Hopkins, Jonathan Shi, and David Steurer, Tensor principal component analysis
via sum-of-square proofs, COLT, JMLR Workshop and Conference Proceedings, vol. 40,
JMLR.org, 2015, pp. 956–1006. 1, 2, 6, 8, 9, 31
[HSSS16]
Samuel B. Hopkins, Tselil Schramm, Jonathan Shi, and David Steurer, Fast spectral
algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors,
STOC, ACM, 2016, pp. 178–191. 1, 2, 8, 9, 31
[KMOW17] Pravesh K. Kothari, Ryuhei Mori, Ryan O’Donnell, and David Witmer, Sum of squares
lower bounds for refuting any CSP, CoRR abs/1701.04521 (2017). 8
[KNV+ 15]
Robert Krauthgamer, Boaz Nadler, Dan Vilenchik, et al., Do semidefinite relaxations solve
sparse pca up to the information limit?, The Annals of Statistics 43 (2015), no. 3, 1300–1322.
2
[KT17]
Ken-ichi Kawarabayashi and Mikkel Thorup, Coloring 3-colorable graphs with less than
n1/5 colors, J. ACM 64 (2017), no. 1, 4:1–4:23. 1
[LRS15]
James R. Lee, Prasad Raghavendra, and David Steurer, Lower bounds on the size of
semidefinite programming relaxations, STOC, ACM, 2015, pp. 567–576. 1
[MPW15]
Raghu Meka, Aaron Potechin, and Avi Wigderson, Sum-of-squares lower bounds for
planted clique [extended abstract], STOC’15—Proceedings of the 2015 ACM Symposium
on Theory of Computing, ACM, New York, 2015, pp. 87–96. MR 3388186 9
[MS16a]
Andrea Montanari and Subhabrata Sen, Semidefinite programs on sparse random graphs
and their application to community detection, STOC, ACM, 2016, pp. 814–827. 8
[MS16b]
Andrea Montanari and Nike Sun, Spectral algorithms for tensor completion, CoRR
abs/1612.07866 (2016). 8
[MSS16a]
Tengyu Ma, Jonathan Shi, and David Steurer, Polynomial-time tensor decompositions with
sum-of-squares, CoRR abs/1610.01980 (2016). 1
[MSS16b]
, Polynomial-time tensor decompositions with sum-of-squares, FOCS, IEEE Computer Society, 2016, pp. 438–446. 1
[MW15a]
Tengyu Ma and Avi Wigderson, Sum-of-squares lower bounds for sparse PCA, NIPS, 2015,
pp. 1612–1620. 2
[MW15b]
, Sum-of-squares lower bounds for sparse PCA, CoRR abs/1507.06370 (2015). 9
[O’D14]
Ryan O’Donnell, Analysis of boolean functions, Cambridge University Press, 2014. 30
[Pea01]
Karl Pearson, On lines and planes of closest fit to systems of points in space, Philosophical
Magazine 2 (1901), 559–572. 1
[PS17]
Aaron Potechin and David Steurer, Exact tensor completion with sum-of-squares, CoRR
abs/1702.06237 (2017). 1
[PWB16]
Amelia Perry, Alexander S. Wein, and Afonso S. Bandeira, Statistical limits of spiked
tensor models, CoRR abs/1612.07728 (2016). 6
[RM14]
Emile Richard and Andrea Montanari, A statistical model for tensor PCA, NIPS, 2014,
pp. 2897–2905. 2, 5, 9, 31
[RRS16]
Prasad Raghavendra, Satish Rao, and Tselil Schramm, Strongly refuting random csps
below the spectral threshold, CoRR abs/1605.00058 (2016). 1, 2, 6, 8, 9, 31
[RS15]
Prasad Raghavendra and Tselil Schramm, Tight lower bounds for planted clique in the
degree-4 SOS program, CoRR abs/1507.05136 (2015). 9
[RW17]
Prasad Raghavendra and Benjamin Weitz, On the bit complexity of sum-of-squares proofs,
CoRR abs/1702.05139 (2017). 40, 42
[Sch08]
Grant Schoenebeck, Linear level lasserre lower bounds for certain k-csps, FOCS, IEEE
Computer Society, 2008, pp. 593–602. 8
[Tre09]
Luca Trevisan, Max cut and the smallest eigenvalue, STOC, ACM, 2009, pp. 263–272. 1
[TS14]
Ryota Tomioka and Taiji Suzuki, Spectral norm of random tensors, arXiv preprint
arXiv:1407.1870 (2014). 9
[Wei17]
Benjamin Weitz, Polynomial proof systems, effective derivations, and their applications in the
sum-of-squares hierarchy, Ph.D. thesis, UC Berkeley, 2017. 42
[ZHT06]
Hui Zou, Trevor Hastie, and Robert Tibshirani, Sparse principal component analysis,
Journal of Computational and Graphical Statistics 15 (2006), no. 2, 265–286. 2
A   Bounding the sum-of-squares proof ideal term
We give conditions under which sum-of-squares proofs are well-conditioned, using techniques
similar to those that appear in [RW17] for bounding the bit complexity of SoS proofs. We begin
with some definitions.
Definition A.1. Let P be a polynomial optimization problem and let D be the uniform distribution over the set of feasible solutions S for P. Define the degree-2d moment matrix of D to be X_D = E_{s∼D}[ŝ^{⊗2d}], where ŝ = [1 s]^⊤.
• We say that P is k-complete on D up to degree 2d if every zero eigenvector of X_D has a degree-k derivation from the ideal constraints of P.
Theorem A.2. Let P be a polynomial optimization problem over variables x ∈ ℝ^n of degree at most 2d, with objective function f(x) and ideal constraints {g_j(x) = 0}_{j∈[m]}. Suppose also that P is 2d-complete up to degree 2d. Let G be the matrix of ideal constraints in the degree-2d SoS proof for P. Then if
• the SDP optimum value is bounded by n^{O(d)},
• the coefficients of the objective function are bounded by n^{O(d)},
• there is a set of feasible solutions S ⊆ ℝ^n with the property that for each α ⊆ [n] with |α| ≤ d for which χ_α is not identically zero over the solution space, there exists some s ∈ S such that the square monomial χ_α(s)² ≥ n^{−O(d)},
it follows that the SoS certificate for the problem is well-conditioned, with no value larger than n^{O(d)}.
To prove this, we essentially reproduce the proof of the main theorem of [RW17], up to the very
end of the proof at which point we slightly deviate to draw a different conclusion.
Proof. Following our previous convention, the degree-2d sum-of-squares proof for P is of the form
    sdpOpt − f(x) = a(x) + g(x),
where g(x) is a polynomial in the span of the ideal constraints, and a(x) is a sum of squares of polynomials. Alternatively, we have the matrix characterization,
    sdpOpt − ⟨F, x̂^{⊗2d}⟩ = ⟨A, x̂^{⊗2d}⟩ + ⟨G, x̂^{⊗2d}⟩,
where x̂ = [1 x]^⊤; F, A, and G are matrix polynomials corresponding to f, a, and g respectively, and A ⪰ 0.
Now let s ∈ S be a feasible solution. Then we have that
    sdpOpt − ⟨F, s^{⊗2d}⟩ = ⟨A, s^{⊗2d}⟩ + ⟨G, s^{⊗2d}⟩ = ⟨A, s^{⊗2d}⟩,
where the second equality follows because each s ∈ S is feasible. By assumption the left-hand side is bounded by n^{O(d)}.
We will now argue that the diagonal entries of A cannot be too large. Our first step is to argue that A cannot have nonzero diagonal entries unless there is a solution element in the solution space. Let X_D = E_{s∼D}[ŝ^{⊗2d}] be the 2d-moment matrix of the uniform distribution of feasible solutions to
P. Define Π to be the orthogonal projection into the zero eigenspace of X D . By linearity and
orthonormality, we have that
    ⟨X_D, A⟩ = ⟨X_D, (Π + Π^⊥)A(Π + Π^⊥)⟩
             = ⟨X_D, Π^⊥AΠ^⊥⟩ + ⟨X_D, ΠAΠ^⊥⟩ + ⟨X_D, Π^⊥AΠ⟩ + ⟨X_D, ΠAΠ⟩ .
By assumption P is 2d-complete on D up to degree 2d, and therefore Π is derivable in degree 2d from the ideal constraints {g_j}_{j∈[m]}. Therefore, the latter three terms may be absorbed into G, or more formally, we can set A′ = Π^⊥AΠ^⊥, G′ = G + (Π + Π^⊥)A(Π + Π^⊥) − Π^⊥AΠ^⊥, and re-write the original proof
    sdpOpt − ⟨F, x̂^{⊗2d}⟩ = ⟨A′, x̂^{⊗2d}⟩ + ⟨G′, x̂^{⊗2d}⟩.   (A.1)
The left-hand side remains unchanged, so we still have that it is bounded by n^{O(d)} for any feasible solution s ∈ S. Furthermore, the nonzero eigenspaces of X_D and A′ are identical, and so A′ cannot be nonzero on any diagonal entry which is orthogonal to the space of feasible solutions.
Now, we argue that every diagonal entry of A′ is at most n^{O(d)}. To see this, for each diagonal term χ_α², we choose the solution s ∈ S for which χ_α(s)² ≥ n^{−O(d)}. We then have by the PSDness of A′ that
    A′_{α,α} · χ_α(s)² ≤ ⟨s^{⊗2d}, A′⟩ ≤ n^{O(d)},
which then implies that A′_{α,α} ≤ n^{O(d)}. It follows that Tr(A′) ≤ n^{O(d)}, and again since A′ is PSD,
    ‖A′‖_F ≤ √Tr(A′) ≤ n^{O(d)} .   (A.2)
Putting things together, we have from our original matrix identity (A.1) that
    ‖G′‖_F = ‖sdpOpt − A′ − F‖_F
           ≤ ‖sdpOpt‖_F + ‖A′‖_F + ‖F‖_F   (triangle inequality)
           ≤ ‖sdpOpt‖_F + n^{O(d)} + ‖F‖_F   (from (A.2)).
Therefore by our assumptions that ‖sdpOpt‖, ‖F‖_F ≤ n^{O(d)}, the conclusion follows.
We now argue that the conditions of this theorem are met by several general families of
problems.
Corollary A.3. The following problems have degree-2d SoS proofs with all coefficients bounded by n O(d) :
1. The hypercube: Any polynomial optimization problem with the only constraints being {x_i² = x_i}_{i∈[n]} or {x_i² = 1}_{i∈[n]} and objective value at most n^{O(d)} over the set of integer feasible solutions. (Including max k-CSP.)
2. The hypercube with balancedness constraints: Any polynomial optimization problem with the only constraints being {x_i² = 1}_{i∈[n]} ∪ {Σ_i x_i = 0}. (Including community detection.)
3. The unit sphere: Any polynomial optimization problem with the only constraints being {Σ_{i∈[n]} x_i² = 1} and objective value at most n^{O(d)} over the set of integer feasible solutions. (Including tensor PCA.)
4. The sparse hypercube: As long as 2d ≤ k, any polynomial optimization problem with the only constraints being {x_i² = x_i}_{i∈[n]} ∪ {Σ_{i∈[n]} x_i = k}, or {x_i³ = x_i}_{i∈[n]} ∪ {Σ_{i∈[n]} x_i² = k}, and objective value at most n^{O(d)} over the set of integer feasible solutions. (Including densest k-subgraph and the Boolean version of sparse PCA.)
5. The max clique problem.
We prove this corollary below. For each of the above problems, it is clear that the objective value
is bounded and the objective function has no large coefficients. To prove this corollary, we need to
verify the completeness of the constraint sets, and then demonstrate a set of feasible solutions so
that each square term receives non-negligible mass from some solution.
A large family of completeness conditions were already verified by [RW17] and others (see the
references therein):
Proposition A.4 (Completeness of canonical polynomial optimization problems (from Corollary 3.5 of [RW17])). The following pairs of polynomial optimization problems P and distributions over solutions D are complete:
1. If the feasible set is x ∈ ℝ^n with {x_i² = 1}_{i∈[n]} or {x_i² = x_i}_{i∈[n]}, P is d-complete up to degree d (e.g. if P is a CSP). This is still true of the constraints {x_i² = 1}_{i∈[n]} ∪ {Σ_i x_i = 0} (e.g. if P is a community detection problem).
2. If the feasible set is x ∈ ℝ^n with Σ_{i∈[n]} x_i² = α, then P is d-complete on D up to degree d (e.g. if P is the tensor PCA problem).
3. If P is the max clique problem with feasible set x ∈ ℝ^n with {x_i² = x_i}_{i∈[n]} ∪ {x_i x_j = 0}_{(i,j)∉E}, then P is d-complete on D up to degree d.
A couple of additional examples can be found in the upcoming thesis of Benjamin Weitz
[Wei17]:
Proposition A.5 (Completeness of additional polynomial optimization problems [Wei17]). The following pairs of polynomial optimization problems P and distributions over solutions D are complete:
1. If P is the densest k-subgraph relaxation, with feasible set x ∈ ℝ^n with {x_i² = x_i}_{i∈[n]} ∪ {Σ_{i∈[n]} x_i = k}, P is d-complete on D up to degree d ≤ k.
2. If P is the sparse PCA relaxation with sparsity k, with feasible set x ∈ ℝ^n with {x_i³ = x_i}_{i∈[n]} ∪ {Σ_{i∈[n]} x_i² = k}, P is d-complete up to degree d ≤ k/2.
Proof of Corollary A.3. We verify the conditions of Theorem A.2 separately for each case.
1. The hypercube: the completeness conditions are satisfied by Proposition A.4. We choose the set of feasible solutions to contain a single point, s = 1⃗, for which χ_α(s)² = 1 always.
2. The hypercube with balancedness constraints: the completeness conditions are satisfied by Proposition A.4. We choose the set of feasible solutions to contain a single point, s, some perfectly balanced vector, for which χ_α(s)² = 1 always.
3. The unit sphere: the completeness conditions are satisfied by Proposition A.4. We choose the set of feasible solutions to contain a single point, s = (1/√n)·1⃗, for which χ_α(s)² ≥ n^{−d} as long as |α| ≤ d, which meets the conditions of Theorem A.2.
4. The sparse hypercube: the completeness conditions are satisfied by Proposition A.5. Here, we choose the set of solutions S = {x ∈ {0,1}^n | Σ_i x_i = k}. As long as k ≥ d, for any |α| ≤ d we have that χ_α(s)² = 1 when s is 1 on α.
5. The max clique problem: the completeness conditions are satisfied by Proposition A.4. We choose the solution set S to be the set of 0,1 indicators for cliques in the graph. Any α that corresponds to a non-clique in the graph has χ_α identically zero in the solution space. Otherwise, χ_α(s)² = 1 when s ∈ S is the indicator vector for the clique on α.
This concludes the proof.
B Lower bounds on the nonzero eigenvalues of some moment matrices
In this appendix, we prove lower bounds on the magnitude of nonzero eigenvalues of covariance
matrices for certain distributions over solutions. Many of these bounds are well-known, but we
re-state and re-prove them here for completeness. We first define the property we want:
Definition B.1. Let P be a polynomial optimization problem and let D be the uniform distribution
over the set of feasible solutions S for P. Define the degree-2d moment matrix of D to be
X_D = E_{x∼D}[ x̂^{⊗2d} ], where x̂ = [1 x]^⊤.
• We say that D is δ-spectrally rich up to degree 2d if every nonzero eigenvalue of X D is at least
δ.
Proposition B.2 (Spectral richness of polynomial optimization problems). The following distributions
over solutions D are polynomially spectrally rich:
1. If D is the uniform distribution over {±1} n , then D is polynomially spectrally rich up to degree
d 6 n.
2. If D is the uniform distribution over α · Sn−1 , then D is polynomially spectrally rich up to degree
d 6 n.
3. If D is the uniform distribution over x ∈ {1, 0} n with kx k0 k, then if 2d 6 k, D is polynomially
spectrally rich up to degree d.
4. If D is the uniform distribution over x ∈ {±1, 0} n with kx k0 k, then if 2d 6 k, D is polynomially
spectrally rich up to degree d.
Proof. In the proof of each statement, denote the 2d-th moment matrix of D by X_D := E_{x∼D}[x^{⊗2d}].
Because X D is a sum of rank-1 outer-products, an eigenvector of X D has eigenvalue 0 if and only if it
is orthogonal to every solution in the support of D, and therefore the zero eigenvectors correspond
exactly to the degree at most d constraints that can be derived from the ideal constraints.
Now, let p_1(x), ..., p_r(x) be a basis for polynomials of degree at most 2d in x which is orthonormal with respect to D, so that
    E_{x∼D}[ p_i(x) p_j(x) ] = 1 if i = j, and 0 otherwise.
If p̂_i is the representation of p_i in the monomial basis, we have that
    (p̂_i)^⊤ X_D p̂_j = E_{x∼D}[ p_i(x) p_j(x) ] .
Therefore, the matrix R = Σ_i e_i (p̂_i)^⊤ diagonalizes X_D,
    R X_D R^⊤ = Id .
It follows that the minimum non-zero eigenvalue of X_D is equal to the smallest eigenvalue of (RR^⊤)^{−1}, which is in turn equal to 1/σ_max(R)², where σ_max(R) is the largest singular value of R. Therefore, for each of these cases it suffices to bound the singular values of the change-of-basis matrix between the monomial basis and an orthogonal basis over D. We now proceed to handle each case separately.
1. D uniform over hypercube: In this case, the monomial basis is an orthogonal basis, so R is the identity on the space orthogonal to the ideal constraints, and σ_max(R) = 1, which completes the proof.
2. D uniform over sphere: Here, the canonical orthonormal basis is the spherical harmonic polynomials. Examining an explicit characterization of the spherical harmonic polynomials (given for example in [DX13], Theorem 5.1), we have that when expressing p_i in the monomial basis, no coefficient of a monomial (and thus no entry of p̂_i) exceeds n^{O(d)}, and since there are at most n^d polynomials each with Σ_{i=0}^{d} \binom{n}{i} ≤ n^d coefficients, employing the triangle inequality we have that σ_max(R) ≤ n^{O(d)}, which completes the proof.
3. D uniform over {x ∈ {0,1}^n | ‖x‖_0 = k}: In this case, the canonical orthonormal basis is the correctly normalized Young's basis (see e.g. [Fil16], Theorems 3.1, 3.2 and 5.1), and again we have that when expressing an orthonormal basis polynomial p_i in the monomial basis, no coefficient exceeds n^{O(d)}. As in the above case, this implies that σ_max(R) ≤ n^{O(d)} and completes the proof.
4. D uniform over {x ∈ {±1,0}^n | ‖x‖_0 = k}: Again the canonical orthonormal basis is Young's basis with a correct normalization. We again apply [Fil16], Theorems 3.1, 3.2, but this time we calculate the normalization by hand: we have that in expressing each p_i, no element of the monomial basis has coefficient larger than n^{O(d)} multiplied by the quantity
    E_{x∼D}[ Π_{i=1}^{d} (x_{2i−1} − x_{2i})² ] = O(1) .
This gives the desired conclusion.
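The hypercube case can easily be checked numerically. The sketch below (ours, not from the paper; the function name is hypothetical) builds the moment matrix X_D for small n and d and confirms that all of its eigenvalues equal 1, i.e. σ_max(R) = 1 in the notation of the proof above.

```python
import numpy as np
from itertools import combinations, product

def hypercube_moment_spectrum(n=5, d=2):
    """Eigenvalues of X_D = E_{x ~ {+-1}^n}[ m(x) m(x)^T ] for multilinear monomials
    chi_alpha with |alpha| <= d; over the hypercube these monomials are orthonormal,
    so every eigenvalue is (numerically) 1."""
    alphas = [()] + [c for s in range(1, d + 1) for c in combinations(range(n), s)]
    rows = []
    for x in product([-1.0, 1.0], repeat=n):
        x = np.asarray(x)
        rows.append([x[list(a)].prod() if a else 1.0 for a in alphas])
    M = np.asarray(rows)
    X_D = M.T @ M / M.shape[0]
    return np.linalg.eigvalsh(X_D)
```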
C   From Boolean to Gaussian lower bounds
In this section we show how to prove our SoS lower bounds for Gaussian PCA problems using the
lower bounds for Boolean problems in a black-box fashion. The techniques are standard and more
broadly applicable than the exposition here but we prove only what we need.
The following proposition captures what is needed for tensor PCA; the argument for sparse
PCA is entirely analogous so we leave it to the reader.
Proposition C.1. Let k ∈ ℕ and let A ∼ {±1}^{\binom{n}{k}} be a symmetric random Boolean tensor. Suppose that for every A ∈ {±1}^{\binom{n}{k}} there is a degree-d pseudodistribution Ẽ_A satisfying {‖x‖² = 1} such that
    E_A Ẽ_A ⟨x^{⊗k}, A⟩ = C .
Let T ∼ N(0, 1)^{\binom{n}{k}} be a Gaussian random tensor. Then
    E_T max_{Ẽ} Ẽ ⟨x^{⊗k}, T⟩ ≥ Ω(C)
where the maximization is over pseudodistributions of degree d which satisfy {‖x‖² = 1}.
Proof. For a tensor T ∈ (ℝ^n)^{⊗k}, let A(T) have entries A(T)_α = sign(T_α). Now consider
    E_T Ẽ_{A(T)} ⟨x^{⊗k}, T⟩ = Σ_α E_T Ẽ_{A(T)} x^α T_α
where α ranges over multi-indices of size k over [n]. We rearrange each term above to
    E_{A(T)} ( Ẽ_{A(T)} x^α ) · E_{T_α | A(T)} T_α = E_{A(T)} ( Ẽ_{A(T)} x^α ) · A(T)_α · E|g|
where g ∼ N(0, 1). Since E|g| is a constant independent of n, all of this is
    Ω(1) · Σ_α E_A Ẽ_A x^α · A_α = Ω(1) · C .
| 8 |
A stabilized finite element formulation for liquid shells and its
application to lipid bilayers
Roger A. Sauer∗1 , Thang X. Duong∗ , Kranthi K. Mandadapu†§ and David J. Steigmann‡
∗
†
Aachen Institute for Advanced Study in Computational Engineering Science (AICES),
RWTH Aachen University, Templergraben 55, 52056 Aachen, Germany
Department of Chemical and Biomolecular Engineering, University of California at Berkeley,
110 Gilman Hall, Berkeley, CA 94720-1460, USA
§
Chemical Sciences Division, Lawrence Berkeley National Laboratory, CA 94720, USA
arXiv:1601.03907v2 [] 18 Jan 2016
‡
Department of Mechanical Engineering, University of California at Berkeley,
6141 Etcheverry Hall, Berkeley, CA 94720-1740, USA
Abstract
This paper presents a new finite element (FE) formulation for liquid shells that is based on an
explicit, 3D surface discretization using C 1 -continuous finite elements constructed from NURBS
interpolation. Both displacement-based and mixed FE formulations are proposed. The latter is
needed for area-incompressible material behavior, where penalty-type regularizations can lead
to misleading results. In order to obtain quasi-static solutions, several numerical stabilization
schemes are proposed based on either stiffness, viscosity or projection. Several numerical examples are considered in order to illustrate the accuracy and the capabilities of the proposed
formulation, and to compare the different stabilization schemes. The presented formulation is
capable of simulating non-trivial surface shapes associated with tube formation and proteininduced budding of lipid bilayers. In the latter case, the presented formulation yields nonaxisymmetric solutions, which have not been observed in previous simulations. It is shown that
those non-axisymmetric shapes are preferred over axisymmetric ones.
Keywords: cell budding, cell tethering, Helfrich energy, isogeometric analysis, non-linear
finite elements, non-linear shell theory
Contents
1 Introduction
2 Summary of thin liquid shell theory
  2.1 Thin shell kinematics
  2.2 Quasi-static equilibrium
  2.3 Constitution
    2.3.1 Area-compressible lipid bilayer
    2.3.2 Area-incompressible lipid bilayer
    2.3.3 Model properties
  2.4 Weak form
3 Liquid shell stabilization
  3.1 Adding stiffness
    3.1.1 In-plane shear and bulk stabilization
    3.1.2 Sole in-plane shear stabilization
    3.1.3 Relation to physical viscosity
  3.2 Normal projection
  3.3 Summary of the stabilization schemes
4 FE formulation
  4.1 FE approximation
  4.2 Discretized weak form
  4.3 Area constraint
  4.4 Solution procedure
  4.5 Rotational constraints
  4.6 Normalization
5 Numerical examples
  5.1 Pure bending and stretching of a flat strip
  5.2 Inflation of a sphere
  5.3 Drawing of a tube
  5.4 Cell budding
    5.4.1 Motivation
    5.4.2 Computational setup
    5.4.3 Bud shapes
    5.4.4 Surface energy
    5.4.5 Surface tension
    5.4.6 Effective shear stiffness
    5.4.7 Influence of the area-compressibility
6 Conclusion
A Linearization of f_int^e
B Linearization of g^e
1 corresponding author, email: [email protected]
List of important symbols
1           identity tensor in R3
a_α         co-variant tangent vectors of surface S at point x; α = 1, 2
A_α         co-variant tangent vectors of surface S0 at point X; α = 1, 2
a^α         contra-variant tangent vectors of surface S at point x; α = 1, 2
A^α         contra-variant tangent vectors of surface S0 at point X; α = 1, 2
a_α,β       parametric derivative of a_α w.r.t. ξ^β
a_α;β       co-variant derivative of a_α w.r.t. ξ^β
a_αβ        co-variant metric tensor components of surface S at point x
A_αβ        co-variant metric tensor components of surface S0 at point X
a           class of stabilization methods based on artificial shear viscosity
A           class of stabilization methods based on artificial shear stiffness
b_αβ        co-variant curvature tensor components of surface S at point x
B_αβ        co-variant curvature tensor components of surface S0 at point X
B           left surface Cauchy-Green tensor
c^αβγδ      contra-variant components of the material tangent
C           right surface Cauchy-Green tensor
γ           surface tension of S
Γ^γ_αβ      Christoffel symbols of the second kind
da          differential surface element on S
dA          differential surface element on S0
δ...        variation of ...
e           index numbering the finite elements; e = 1, ..., nel
ε           penalty parameter
f^e         finite element force vector of element Ω^e
g           expression for the area-incompressibility constraint
G           expression for the weak form
G^e         contribution to G from finite element Ω^e
g^e         finite element 'force vector' of element Ω^e due to constraint g
H           mean curvature of S at x
H0          spontaneous curvature prescribed at x
I           index numbering the finite element nodes
I1, I2      first and second invariants of the surface Cauchy Green tensors
i           surface identity tensor on S
I           surface identity tensor on S0
J           surface area change
k           bending modulus
k*          Gaussian modulus
K           initial in-plane membrane bulk modulus
K_eff       effective in-plane membrane bulk modulus
k^e         finite element tangent matrix associated with f^e and g^e
κ           Gaussian curvature of surface S at x
κ1, κ2      principal curvatures of surface S at x
L_I         pressure shape function of finite element node I
λ1, λ2      principal surface stretches of S at x
m_e         number of pressure nodes of finite element Ω^e
m_ν, m_τ    bending moment components acting at x ∈ ∂S
m̄_ν, m̄_τ    prescribed bending moment components
M^αβ        contra-variant bending moment components
µ           initial in-plane membrane shear stiffness
µ_eff       effective in-plane membrane shear stiffness
n_no        total number of finite element nodes used to discretize S
n_el        total number of finite elements used to discretize S
n_mo        total number of finite element nodes used to discretize pressure q
n_e         number of displacement nodes of finite element Ω^e
N^αβ        total, contra-variant, in-plane membrane stress components
N_I         displacement shape function of finite element node I
n           surface normal of S at x
N           surface normal of S0 at X
N           array of the shape functions for element Ω^e
ν           in-plane membrane shear viscosity
ν           normal vector on ∂S
ξ^α         convective surface coordinates; α = 1, 2
P           class of stabilization methods based on normal projection; projection matrix
q           Lagrange multiplier associated with area-incompressibility
q           array of all Lagrange multipliers q_I in the system; I = 1, ..., n_mo
q^e         array of all Lagrange multipliers q_I for finite element Ω^e; I = 1, ..., m_e
S^α         contra-variant, out-of-plane shear stress components
S           current configuration of the surface
S0          initial configuration of the surface
σ           Cauchy stress tensor of the shell
σ^αβ        stretch related, contra-variant, in-plane membrane stress components
t           effective traction acting on the boundary ∂S normal to ν
t̄           prescribed boundary tractions on Neumann boundary ∂_t S
T           traction acting on the boundary ∂S normal to ν
T^α         traction acting on the boundary ∂S normal to a^α
V, Q        admissible function spaces
ϕ           deformation map of surface S
ϕ̄           prescribed boundary deformations on boundary ∂_x S
w           hyperelastic stored surface energy density per current surface area
W           hyperelastic stored surface energy density per reference surface area
x           current position of a surface point on S
X           initial position of x on the reference surface S0
x_I         position vector of finite element node I lying on S
X_I         initial position of finite element node I on S0
x           array of all nodal positions x_I of the discretized surface; I = 1, ..., n_no
x^e         array of all nodal positions x_I for finite element Ω^e; I = 1, ..., n_e
X^e         array of all nodal positions X_I for finite element Ω^e_0; I = 1, ..., n_e
Ω^e         current configuration of finite element e
Ω^e_0       reference configuration of finite element e
1 Introduction
Biological membranes form the boundaries of cells and cell-internal organelles such as the endoplasmic reticulum, the golgi complex, mitochondria and endosomes. Mechanically they are
liquid shells that exhibit fluid-like behavior in-plane and solid-like behavior out-of-plane. They
mainly consist of self-assembled lipid bilayers and proteins. At the macroscopic level, these
membranes exist in different shapes such as invaginations, buds and cylindrical tubes (Zimmerberg and Kozlov, 2006; Shibata et al., 2009; McMahon and Gallop, 2005; Shemesh et al.,
2014). These shapes arise as a result of the lateral loading due to cytoskeletal filaments and
protein-driven spontaneous curvature. Cell membranes undergo many morphological and topological shape transitions to enable important biological processes such as endocytosis (Buser
and Drubin, 2013; Kukulski et al., 2012; Peter et al., 2004), cell motility (Keren, 2011) and
vesicle formation (Gruenberg and Stenmark, 2004; Budin et al., 2009). The shape transitions
occur as a result of lateral loading on the membranes from cytoskeletal filaments, such as actin,
from osmotic pressure gradients across the membrane, or from membrane-protein interactions.
For example, in endocytosis – a primary mode of transport of cargo between the exterior of
the cell and its interior – proteins bind to the flat membrane and induce invaginations or bud
shapes, followed by a tube formation by actin-mediated pulling forces (Kukulski et al., 2012;
Walani et al., 2015).
Most of the computational studies regarding biological membranes are restricted to shapes
resulting from axisymmetric conditions. However, many of the processes in cells are nonaxisymmetric in nature. To the best of our knowledge, there exist only a few studies that allow
for general, non-axisymmetric shapes. Therefore, it is important to advance computational
methods that can yield solutions for general conditions.
In the past, several computational models have been proposed for cell membranes. Depending
on how the membrane is discretized, two categories can be distinguished: Models based on
an explicit surface discretization, and models based on an implicit surface discretization. In
the second category, the surface is captured by a phase field (Du and Wang, 2007) or level
set function (Salac and Miksis, 2011) that is defined on the surrounding volume mesh. In the
first category, the surface is captured directly by a surface mesh. The approach is particularly
suitable if only surface effects are studied, such that no surrounding volume mesh is needed.
This is the approach taken here. An example is to use Galerkin surface finite elements: The
first corresponding 3D FE model for lipid bilayer membranes seems to be the formulation
of Feng and Klug (2006) and Ma and Klug (2008). Their FE formulation is based on socalled subdivision surfaces (Cirak and Ortiz, 2001), which provide C 1 -continuous FE surface
discretizations. Such discretizations are advantageous, since they do not require additional
degrees of freedom as C 0 -continuous FE formulations do. Still, C 0 -continuous FEs have been
considered to model red blood cell (RBC) membranes and their supporting protein skeleton
(Dao et al., 2003; Peng et al., 2010), phase changes of lipid bilayers (Elliott and Stinner, 2010),
and viscous cell membranes (Tasso and Buscaglia, 2013). Subdivision finite elements have been
used to study confined cells (Kahraman et al., 2012). Lipid bilayers can also be modeled with socalled ‘solid shell’ (i.e. classical volume) elements instead of surface shell elements (Kloeppel and
Wall, 2011). Using solid elements, C 0 -continuity is sufficient, but the formulation is generally
less efficient. For two-dimensional and axisymmetric problems also C 1 -continuous B-Spline and
Hermite finite elements have been used to study membrane surface flows (Arroyo and DeSimone,
2009; Rahimi and Arroyo, 2012), cell invaginations (Rim et al., 2014), and cell tethering and
adhesion (Rangarajan and Gao, 2015). The latter work also discusses the generalization to
three-dimensional B-spline FE. For some problems it is also possible to use specific, Mongepatch FE discretizations (Rangamani et al., 2013, 2014).
There are also several works that do not use finite element approaches. Examples are numerical
ODE integration (Agrawal and Steigmann, 2009), Monte Carlo methods (Ramakrishnan et al.,
2010), molecular dynamics (Li and Lykotrafitis, 2012), finite difference methods (Lau et al.,
2012; Gu et al., 2014) and mesh-free methods (Rosolen et al., 2013). There are also non-Galerkin FE approaches that use triangulated surfaces, e.g. see Jarić et al. (1995); Jie et al.
(1998).
For quasi-static simulations of liquid membranes and shells, the formulation needs to be stabilized. Therefore, various stabilization methods have been proposed considering artificial viscosity (Ma and Klug, 2008; Sauer, 2014), artificial stiffness (Kahraman et al., 2012) and normal
offsets – either as a projection of the solution (with intermediate mesh update steps) (Sauer,
2014), or as a restriction of the surface variation (Rangarajan and Gao, 2015). The instability
problem is absent, if shear stiffness is present, e.g. due to an underlying cytoskeleton, like in
RBCs (Dao et al., 2003; Peng et al., 2010; Kloeppel and Wall, 2011). However, this setting does
not apply to (purely) liquid membranes and shells.
Here, a novel FE formulation is presented for liquid shells that is based on an explicit, 3D
surface discretization using NURBS (Cottrell et al., 2009). The following aspects are new:
• A theoretical study of the effective bulk and shear stiffness of the Helfrich bending model,
• the use of 3D, C 1 -continuous NURBS-based surface discretizations, considering purely
displacement-based and
• mixed, LBB-conforming2 finite elements,
• a comparison of both these FE formulations, illustrating the limitations of the former,
• the formulation, application and comparison of various stabilization schemes for quasistatic simulations
• that allow to accurately compute the in-plane stress state of the liquid, like the surface
tension,
• the verification of the formulation by various analytical solutions,
• the possibility to capture complex, non-axisymmetric solutions, and
• new insight into the budding of cells and vesicles.
To the best of our knowledge, non-axisymmetric shapes have not been simulated to the same
detail as is presented here.
The remainder of this paper is organized as follows: Sec. 2 presents an overview of the underlying
theory of thin liquid shells. The governing weak form is presented and the model properties are
discussed. Stabilization methods are then addressed in Sec. 3. The finite element formulation
follows in Sec. 4. The formulation is verified and illustrated by several 3D numerical examples
in Sec. 5. In particular, tethering and budding are studied. The paper then concludes with
Sec. 6.
2 Summary of thin liquid shell theory
This section gives a brief overview of the governing equations of thin shell theory considering
liquid material behavior. A much more detailed discussion can be found in Sauer and Duong
(2015) and in the references therein.
^2 satisfying the Ladyzhenskaia-Babuška-Brezzi condition, see e.g. Babuška (1973); Bathe (1996)
2.1 Thin shell kinematics
The surface of the shell, denoted by S, is described by the parameterization
x = x(ξ α ) ,
α = 1, 2,
(1)
where ξ 1 and ξ 2 are coordinates that can be associated with a flat 2D domain that is then
mapped to S by function (1). The surface is considered deformable. The deformation of
S is assessed in relation to a reference configuration S0 , described by the parameterization
X = X(ξ α ). S is considered here to coincide with S0 initially (e.g. at time t = 0), such that
x = X for t = 0. From these mappings, the tangent vectors Aα = X ,α and aα = x,α can be
determined. Here ...,α := ∂.../∂ξ α denotes the parametric derivative. From the tangent vectors,
the surface normals
N = (A_1 × A_2) / ‖A_1 × A_2‖ ,   (2)
n = (a_1 × a_2) / ‖a_1 × a_2‖ ,   (3)
and the metric tensor components
Aαβ = Aα · Aβ ,
(4)
aαβ = aα · aβ
(5)
can be defined. From the inverses [Aαβ ] := [Aαβ ]−1 and [aαβ ] := [aαβ ]−1 , the dual tangent
vectors
Aα = Aαβ Aβ ,
(6)
aα = aαβ aβ
(7)
α
(with summation implied on repeated indices) can be introduced, such that A · Aβ = δβα and
aα · aβ = δβα , where δβα is the Kronecker symbol. Further, we can define the surface identity
tensors
I = Aα ⊗ Aα ,
(8)
i = aα ⊗ aα ,
(9)
such that the usual, 3D identity becomes
1=I +N ⊗N =i+n⊗n .
(10)
From the second derivatives of Aα and aα one can define the curvature tensor components
Bαβ = Aα,β · N ,
(11)
bαβ = aα,β · n ,
(12)
allowing us to compute the mean, Gaussian and principal curvatures
H = (1/2) b_αβ a^{αβ} ,   κ = det[b_αβ] / det[a_αβ] ,   κ_{1/2} = H ± √(H² − κ)   (13)
of surface S, and likewise for S0. Here, b^{αβ} = a^{αγ} b_{γδ} a^{βδ}.
For thin shells, the deformation between S0 and S is fully characterized by aαβ and bαβ , and
their relation to Aαβ and Bαβ . Two important quantities that characterize the in-plane part of
the deformation are
I1 = Aαβ aαβ
(14)
and
I2 = det [Aαβ ] · det [aαβ ] .
(15)
They define the invariants of the surface Cauchy-Green tensors C = a_αβ A^α ⊗ A^β and B = A^{αβ} a_α ⊗ a_β. The quantity J = √I_2 determines the change of surface area between S0 and S.
Given the parametric derivative of aα , the so-called co-variant derivative of aα is given by
aα;β = (n ⊗ n) aα,β ,
(16)
i.e. as the change of aα along n. Introducing the Christoffel symbol Γγαβ := aγ · aα,β , we can
also write
aα;β = aα,β − Γγαβ aγ .
(17)
For general vectors, v = v α aα + v n, the parametric and co-variant derivative are considered
to agree, i.e. v ,α = v ;α . Therefore, v;α = v,α , n;α = n,α , (v α aα );β = (v α aα ),β and
v^α_{;β} = v^α_{,β} + Γ^α_{βγ} v^γ ,   (18)
due to (10). Likewise definitions follow for the reference surface.
The variation and linearization of the above quantities can be found in Sauer and Duong (2015).
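As an illustration of the kinematic quantities above (not part of the paper's FE formulation; the function names and the finite-difference approach are ours), the following sketch evaluates a_α, n, a_αβ, b_αβ, H and κ for an explicitly parameterized surface.

```python
import numpy as np

def surface_kinematics(x_fun, xi1, xi2, h=1e-5):
    """Numerically evaluate the quantities of Sec. 2.1 for a surface x = x_fun(xi1, xi2)
    (returning a 3-vector), using central differences; names follow Eqs. (3), (5), (12), (13)."""

    def tangents(u, v):
        a1 = (x_fun(u + h, v) - x_fun(u - h, v)) / (2 * h)
        a2 = (x_fun(u, v + h) - x_fun(u, v - h)) / (2 * h)
        return a1, a2

    a1, a2 = tangents(xi1, xi2)
    n = np.cross(a1, a2)
    n /= np.linalg.norm(n)                                   # surface normal, Eq. (3)

    a_cov = np.array([[a1 @ a1, a1 @ a2],
                      [a2 @ a1, a2 @ a2]])                   # metric a_{alpha beta}, Eq. (5)

    # a_{alpha,beta} via central differences of the tangent vectors
    a1_p1, a2_p1 = tangents(xi1 + h, xi2)
    a1_m1, a2_m1 = tangents(xi1 - h, xi2)
    a1_p2, a2_p2 = tangents(xi1, xi2 + h)
    a1_m2, a2_m2 = tangents(xi1, xi2 - h)
    da = np.empty((2, 2, 3))
    da[0, 0] = (a1_p1 - a1_m1) / (2 * h)
    da[0, 1] = (a1_p2 - a1_m2) / (2 * h)
    da[1, 0] = (a2_p1 - a2_m1) / (2 * h)
    da[1, 1] = (a2_p2 - a2_m2) / (2 * h)

    b_cov = np.einsum('abi,i->ab', da, n)                    # curvature b_{alpha beta}, Eq. (12)
    a_con = np.linalg.inv(a_cov)                             # a^{alpha beta}
    H = 0.5 * np.einsum('ab,ab->', b_cov, a_con)             # mean curvature, Eq. (13)
    kappa = np.linalg.det(b_cov) / np.linalg.det(a_cov)      # Gaussian curvature, Eq. (13)
    return a1, a2, n, a_cov, b_cov, H, kappa

# Example: a sphere of radius R gives |H| = 1/R (sign depends on the orientation of n)
# and kappa = 1/R^2:
# R = 2.0
# sphere = lambda u, v: R * np.array([np.sin(u)*np.cos(v), np.sin(u)*np.sin(v), np.cos(u)])
# print(surface_kinematics(sphere, 0.7, 0.3)[-2:])
```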
2.2 Quasi-static equilibrium
For negligible inertia, equilibrium of the thin shell is governed by the field equation (Steigmann,
1999)
T α;α + f = 0 ,
(19)
where f is a source term and
T α = N αβ aβ + S α n
(20)
is the traction vector acting on the surface normal to aα . It can be related to the stress tensor
σ = N αβ aα ⊗ aβ + S α aα ⊗ n
(21)
via Cauchy’s formula T α = σ T aα , or equivalently T = σ T ν with ν = να aα and T = να T α . In
Eqs. (20) and (21),
N αβ = σ αβ + bαγ M γβ ,
(22)
S α = −M βα;β
are the in-plane and shear stress components acting on the cross section. The stress and bending
moment components σ αβ and M αβ are given through constitution. Eqs. (19) and (22) are a
consequence of momentum balance (Sauer and Duong, 2015). At the boundary of the surface,
∂S, the boundary conditions
x = ϕ̄    on ∂_x S ,
t = t̄    on ∂_t S ,   (23)
m_τ = m̄_τ    on ∂_m S
can be prescribed. Here, mτ is the bending moment component parallel to boundary ∂S. For
Kirchhoff-Love shells, bending moments perpendicular to boundary ∂S, denoted mν , affect the
boundary traction. Therefore the effective traction
t := T − (m_ν n)′   (24)
is introduced (Sauer and Duong, 2015). We will consider m_ν = 0 in the following.
2.3 Constitution
The focus here is on the quasi-static behavior of lipid bilayers, which can be described in the
framework of hyperelasticity. For thin shells the stored energy function (per reference surface
area) takes the form W = W (aαβ , bαβ ), such that (Steigmann, 1999; Sauer and Duong, 2015)
σ^{αβ} = (2/J) ∂W/∂a_{αβ} ,   M^{αβ} = (1/J) ∂W/∂b_{αβ} .   (25)
From this we can then evaluate the in-plane stress N αβ and the shear S α according to (22). For
convenience, we further define τ αβ := Jσ αβ , M0αβ := JM αβ and N0αβ := JN αβ . The bending
behavior of lipid bilayers is commonly described by the bending model of Helfrich (1973)
w = k (H − H0 )2 + k ∗ κ ,
(26)
which is an energy density per current surface area. Here k is the bending modulus, k ∗ is the
Gaussian modulus and H0 denotes the so-called spontaneous curvature caused by the presence
of certain proteins. Based on (26), we consider the following two constitutive models:
2.3.1 Area-compressible lipid bilayer
Combining the Helfrich energy with an energy resulting from the surface area change, we write
W = J w + (K/2) (J − 1)² .   (27)
Here a simple quadratic term for the compressible part is considered, since the area change of
lipid bilayers is very small before rupture occurs (typically |J − 1| < 4%). According to (25)
and (22), the stress and moment components then become
σ^{αβ} = [K (J − 1) + k ΔH² − k* κ] a^{αβ} − 2 k ΔH b^{αβ} ,
M^{αβ} = [k ΔH + 2 k* H] a^{αβ} − k* b^{αβ} ,   (28)
N^{αβ} = [K (J − 1) + k ΔH²] a^{αβ} − k ΔH b^{αβ} ,
where ∆H := H − H0 .
Remark: Here, k and k ∗ are material constants that have the units strain energy per current
surface area. In principle, also Jk and Jk ∗ could be regarded as material constants that now
have the units strain energy per reference surface area. This alternative would lead to different
expressions for σ αβ and N αβ .
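For illustration (this is our sketch, not the paper's implementation; the function and variable names are ours), the constitutive relations (28) and the surface tension (33) can be evaluated directly from the metric and curvature components computed as in Sec. 2.1:

```python
import numpy as np

def helfrich_stress_compressible(a_cov, b_cov, A_cov, K, k, k_star, H0):
    """Evaluate sigma^{ab}, M^{ab}, N^{ab} of Eq. (28) and the surface tension of Eq. (33)
    from the covariant metrics of the current (a_cov) and reference (A_cov) surface and
    the covariant curvature b_cov (all 2x2 arrays). Illustrative sketch only."""
    a_con = np.linalg.inv(a_cov)                                 # a^{alpha beta}
    J = np.sqrt(np.linalg.det(a_cov) / np.linalg.det(A_cov))     # area change, J = sqrt(I2)
    b_con = a_con @ b_cov @ a_con                                # b^{alpha beta}
    H = 0.5 * np.trace(a_con @ b_cov)                            # mean curvature
    kappa = np.linalg.det(b_cov) / np.linalg.det(a_cov)          # Gaussian curvature
    dH = H - H0

    sigma = (K * (J - 1.0) + k * dH**2 - k_star * kappa) * a_con - 2.0 * k * dH * b_con
    M = (k * dH + 2.0 * k_star * H) * a_con - k_star * b_con
    N = (K * (J - 1.0) + k * dH**2) * a_con - k * dH * b_con
    gamma = K * (J - 1.0) - k * H0 * dH                          # Eq. (33) with q = K g
    return sigma, M, N, gamma
```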
2.3.2 Area-incompressible lipid bilayer
Since K is usually very large for lipid bilayers, one may as well consider the surface to be fully area-incompressible. Using the Lagrange multiplier approach, we now have
W = J w + q g ,   (29)
where the incompressibility constraint
g := J − 1 = 0   (30)
is enforced by the Lagrange multiplier q, which is an independent variable that needs to be accounted for. Physically, it corresponds to a surface tension. The stress and moment components now become
σ^{αβ} = [q + k ΔH² − k* κ] a^{αβ} − 2 k ΔH b^{αβ} ,
M^{αβ} = [k ΔH + 2 k* H] a^{αβ} − k* b^{αβ} ,   (31)
N^{αβ} = [q + k ΔH²] a^{αβ} − k ΔH b^{αβ} ,
which is identical to (28) for q = Kg.
As K becomes larger and larger both models approach the same solution. So from a physical
point of view it may not make a big difference which model is used. Computationally, model
(27) is easier to handle but can become inaccurate for large K as is shown in Sec. 5. In analytical
approaches, often (29) is preferred as it usually simplifies the solution. Examples for the latter
case are found in Baesu et al. (2004) and Agrawal and Steigmann (2009); the former case is
considered in the original work of Helfrich (1973).
2.3.3 Model properties
In both models, the membrane part only provides bulk stiffness, but lacks shear stiffness. For
quasi-static computations the model can thus become unstable and should be stabilized, as is
discussed in Sec. 3. Interestingly, the bending part of the Helfrich model can contribute an
in-plane shear stiffness, which is shown in the following.
We first introduce the surface tension γ of the surface as the average trace of the stress tensor,
giving
γ := 21 σ : i = 12 Nαα .
(32)
For both (28) and (31) we find
γ = q − k H0 ∆H ,
(33)
where q = Kg in the former case. It can be seen that for H0 6= 0, the bending part contributes
to the surface tension. The surface tension is therefore not given by the membrane part alone
(Rangamani et al., 2014). For the compressible case, the effective bulk modulus can then be
determined from
∂γ
(34)
Keff :=
,
∂J
i.e. as the change of γ w.r.t J. We find
Keff = K + k H0 H/J ,
(35)
since ∂H/∂J = −H/J. Likewise we can define the effective shear modulus from
µeff := J aαγ
αβ
∂Ndev
aβδ ,
2 ∂aγδ
(36)
i.e. as the change of the deviatoric stress w.r.t to the deviatoric deformation (characterized by
aγδ /J). The deviatoric in-plane stress is given by
αβ
Ndev
:= N αβ − γ aαβ .
(37)
We find
αβ
Ndev
= k ∆H H aαβ − bαβ
10
(38)
for both (28) and (31). Evaluating (36) thus gives
µeff = Jk 3H 2 − 2HH0 − κ /2 .
(39)
The model therefore provides stabilizing shear stiffness if 3H 2 > 2HH0 + κ. Since this is not
always the case (e.g. for flat surface regions), additional shear stabilization should be provided
in general. This is discussed in Sec. 3. The value of µeff is discussed in detail in the examples
of Sec. 5. It is shown that µeff can sufficiently stabilize the problem such that no additional
shear stabilization is needed. It is also shown that µeff does not necessarily need to be positive
to avoid instabilities. Geometric stiffening, arising in large deformations, can also stabilize the
shell.
2.4 Weak form
The computational solution technique proposed here is based on the weak form governing the
mechanics of the lipid bilayer. Consider a kinematically admissible variation of the surface
denoted by δx ∈ V. Such a variation of S causes variations of aα , n, aαβ and bαβ . Contracting
field equation (19) with δx, integrating over S and applying Stokes’ theorem leads to the weak
form (Sauer and Duong, 2015)
G = G_int − G_ext = 0   ∀ δx ∈ V ,   (40)

with

G_int = ∫_S (1/2) δa_αβ σ^αβ da + ∫_S δb_αβ M^αβ da ,
G_ext = ∫_S δx · f da + ∫_{∂t S} δx · t ds + ∫_{∂m S} δn · m_τ ν ds .   (41)
Denoting the in-plane and out-of-plane components of δx by w^α and w, such that δx := w^α a_α + w n, we find that

δa_αβ = w_α;β + w_β;α − 2w b_αβ .   (42)
Thus, the first part of G_int can be split into in-plane and out-of-plane contributions as

∫_S (1/2) δa_αβ σ^αβ da = G_σ^in + G_σ^out ,   (43)

with

G_σ^in = ∫_S w_α;β σ^αβ da   (44)

and

G_σ^out = − ∫_S w b_αβ σ^αβ da .   (45)
In principle – although not needed here – the second part of Gint can also be split into in-plane
and out-of-plane contributions (Sauer and Duong, 2015). In the area-incompressible case, we
additionally have to satisfy the weak form of constraint (30),

G_g = ∫_{S0} δq g dA = 0   ∀ δq ∈ Q ,   (46)
where Q is an admissible space for the variation of Lagrange multiplier q.
3 Liquid shell stabilization
As noted above, the system is unstable for quasi-static computations. There are two principal ways to stabilize the system without modifying/affecting the original problem. They are
discussed in the following two sections and then summarized in Sec. 3.3.
3.1 Adding stiffness
Firstly, the system can be stabilized by adding a stabilization stress σ_sta^αβ to σ^αβ in order to
provide additional stiffness. This stress can be defined from a (convex) shear energy or from
numerical viscosity. An elegant and accurate way to stabilize the system is to add the stabilization stress only to the in-plane contribution (44) while leaving the out-of-plane contribution
(45) unchanged. The advantage of this approach is that the out-of-plane part, responsible for
the shape of the bilayer, is not affected by the stabilization, at least not in the continuum limit
of the surface discretization. There are several different ways to define the stabilization stress,
which we will group into two categories. An overview of all the options is then summarized in
Tab. 1.
3.1.1 In-plane shear and bulk stabilization
The first category goes back to Sauer (2014), who used it to stabilize liquid membranes governed
by constant surface tension. The stabilization stress for such membranes requires shear and bulk
contributions. Those are given for example by the stabilization stress
σ_sta^αβ = µ/J (A^αβ − a^αβ) ,   (47)

based on numerical stiffness, and

σ_sta^αβ = µ/J (a_pre^αβ − a^αβ) ,   (48)

based on numerical viscosity. Here a_pre^αβ denotes the value of a^αβ at the preceding computational step. These stabilization stresses are then only included within Eq. (44) and not in Eq. (45),
and the resulting two stabilization schemes are denoted ‘A’ (for (47)) and ‘a’ (for (48)) following
Sauer (2014). This reference shows that scheme ‘a’ is highly accurate and performs much better
than scheme 'A'. It is also shown that applying the stabilization stresses (47) and (48) only to the in-plane part is much more accurate than applying them throughout the system (i.e. in both Eqs. (44) and (45)); the latter variants are denoted as schemes 'A-t' and 'a-t'.
3.1.2 Sole in-plane shear stabilization
If the surface tension is not constant, as in the lipid bilayer models introduced above, only
shear stabilization is required. Therefore, the following new category of stabilization schemes
is defined. Consider a split of the surface Cauchy-Green tensor into dilatational and deviatoric parts, such that C = J Ĉ, where Ĉ := J⁻¹ C describes only the deviatoric deformation (since det Ĉ = 1). The stored energy function of an elastic membrane can then be defined for example by

W = (K/4) (J² − 1 − 2 ln J) + (µ/2) (Î₁ − 2) ,   (49)

where Î₁ = I₁/J is the first invariant of Ĉ. The first term in (49) captures purely dilatational deformations, while the second part captures purely deviatoric deformations. The formulation
is analogous to the 3D case described for example in Wriggers (2008). Since the bilayer energy
introduced in Sec. 2.3 already contains a bulk part, we now only need to consider the contribution
W_sta = (µ/2) (Î₁ − 2)   (50)
to derive the new stabilization scheme. From (25) we then find the stabilization stress
σ_sta^αβ = µ/J² (A^αβ − (I₁/2) a^αβ) ,   (51)

or

τ_sta^αβ = µ/J (A^αβ − (I₁/2) a^αβ) .   (52)

As before, this stress will only be applied to Eq. (44) and not in Eq. (45), even though it has been derived from a potential and should theoretically apply to both terms. Following earlier nomenclature we denote this scheme by 'A-s'. Replacing A^αβ by a_pre^αβ in (52) gives

τ_sta^αβ = µ/J* (a_pre^αβ − (I₁*/2) a^αβ) ,   (53)

with J* := ( det[a_αβ] / det[a_pre,αβ] )^(1/2) and I₁* := a_pre^αβ a_αβ, which is an alternative shear-stabilization scheme based on numerical viscosity. We denote it 'a-s'. If stresses (52) and (53) are applied throughout the system (i.e. to both (44) and (45)), we denote the corresponding schemes 'A-st' and 'a-st'.
If the shell is (nearly) area-incompressible, the two stabilization methods of Sec. 3.1.1 and 3.1.2 can behave identically, as can be seen in example 5.1.
3.1.3 Relation to physical viscosity
Schemes (48) and (53) are related to physical viscosity. Considering (near) area-incompressibility
(J ∗ = J = 1), the viscous stress for a Newtonian fluid is given by
σ_visc^αβ = −ν ȧ^αβ   (54)

(Aris, 1989; Rangamani et al., 2013, 2014), where ȧ^αβ = −a^αγ ȧ_γδ a^δβ. Considering the first order rate approximation

ȧ^αβ ≈ (1/∆t) (a^αβ − a_pre^αβ) ,   (55)

and a small time step (I₁* ≈ 2), immediately leads to expressions (48) and (53) with

ν = µ ∆t .   (56)

3.2 Normal projection
The second principal way to stabilize the system consists of a simple projection of the formulation onto the solution space defined by the normal surface direction. We apply this step directly
to the discretized formulation as was proposed by Sauer (2014). According to this, for the given
discrete system of linear equations for displacement increment ∆u given by K ∆u = −f, the reduced system for increment ∆u_red = P ∆u is simply obtained as

K_red ∆u_red = −f_red ,   K_red := P K Pᵀ ,   f_red := P f ,   (57)

where

P := [ n₁ᵀ   0ᵀ   ···   0ᵀ
       0ᵀ   n₂ᵀ   ···   0ᵀ
       ⋮     ⋮     ⋱    ⋮
       0ᵀ   0ᵀ   ···   n_nnoᵀ ]   (58)
is a projection matrix defined by the nodal normal vectors nI . Since this method can lead
to distorted FE meshes, a mesh update can be performed by applying any of the stabilization
techniques discussed above. If this is followed by a projection step at the same load level, a
dependency on parameter µ is avoided.
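A dense-matrix sketch of the projection (57)–(58) is given below; in an actual implementation K would be sparse and the boundary conditions would be handled as discussed in the text. Function and variable names are illustrative.

```python
import numpy as np

def normal_projection_step(K, f, normals):
    """Sketch: reduce K du = -f to the nodal search directions n_I, Eqs. (57)-(58).
    K: (3n x 3n) tangent matrix, f: (3n,) residual, normals: (n, 3) nodal vectors n_I."""
    n = normals.shape[0]
    P = np.zeros((n, 3 * n))
    for I in range(n):
        P[I, 3 * I:3 * I + 3] = normals[I]     # block structure of Eq. (58)
    K_red = P @ K @ P.T                        # reduced tangent, Eq. (57)
    f_red = P @ f
    du_red = np.linalg.solve(K_red, -f_red)    # scalar increment per node
    return P.T @ du_red                        # expanded 3D increment du = P^T du_red
```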
For NURBS discretizations, the computation of the normal nI corresponding to control point I
is not trivial, since the control points generally do not lie on the surface. The control points can
be projected onto the actual surface in order to evaluate the normal at the projection point. As
a simplification of this approach one can also work with the approximate normal obtained at the
current location of the initial projection point. The difference between the two approaches is
very small as Fig. 1 shows. There are minor differences in the locations of the projection points
Figure 1: Search direction nI for stabilization scheme ‘P’: a. Considering the normals at the
projected control points; b. Considering the normals at the current location of the initially
projected control points.
where nI is evaluated, but nI itself does not change much. In the example considered here and in Sauer (2014), no significant difference was therefore found in the numerical results. In order
to solve the reduced problem (57) appropriate boundary conditions are needed. Boundary
conditions can be applied before reduction or they can be suitably adapted to the reduced
system. This is also shown in Fig. 1. In this example (see Sec. 5.3) the vertical displacement at
the outer two and inner two rings of nodes is prescribed. After a mesh update at a given solution
step no further vertical displacement can therefore occur on those nodes. But the nodes can
still move radially. Therefore one can simply replace nI at those nodes by radial unit vectors.
Remark: For liquid droplets, stabilization scheme ‘P’ shows superior accuracy over all other
schemes (Sauer, 2014). However this is not the case for area-incompressible surfaces: For a
curved surface, non-zero values of ∆ured lead to a change in surface area. In regions of non-zero
curvature, ∆u_red will therefore tend to be zero, rendering scheme 'P' ineffective. This is confirmed
by the results in Sec. 5.3.
3.3 Summary of the stabilization schemes
The nine stabilization schemes presented above are summarized in Tab. 1.

class   scheme   stab. stress σ_sta^αβ/µ         application of σ_sta^αβ   dependence
A       A        (A^αβ − a^αβ)/J                  only in (44)              only on µ
A       A-t      (A^αβ − a^αβ)/J                  both in (44) & (45)       only on µ
A       A-s      (A^αβ − ½ I₁ a^αβ)/J²            only in (44)              only on µ
A       A-st     (A^αβ − ½ I₁ a^αβ)/J²            both in (44) & (45)       only on µ
a       a        (a_pre^αβ − a^αβ)/J              only in (44)              on µ and nt
a       a-t      (a_pre^αβ − a^αβ)/J              both in (44) & (45)       on µ and nt
a       a-s      (a_pre^αβ − ½ I₁* a^αβ)/J*²      only in (44)              on µ and nt
a       a-st     (a_pre^αβ − ½ I₁* a^αβ)/J*²      both in (44) & (45)       on µ and nt
P       P        0                                –                         on nodal nI

Table 1: Summary of the stabilization schemes presented in Sec. 3.1.1 and 3.1.2.

They can be grouped
into three classes: A, a and P. The schemes of class A depend only on µ but require this value
to be quite low. The schemes of class a also depend on the number of computational steps, nt . If
this number is high the schemes provide stiffness without adding much stress. The shell is then
stabilized without modifying the solution much, even when µ is high. Scheme ‘P’ depends on
the nodal projection vector nI , which is usually taken as the surface normal. The performance
of the different stabilization schemes is investigated in the examples of Sec. 5.
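The stabilization stresses of Tab. 1 can be collected in a single helper. The sketch below assumes that the contravariant metric components of the reference, current and previous-step surface are available as 2×2 NumPy arrays; the 'a-s'/'a-st' entries follow the Tab. 1 form, and scheme 'P' adds no stress (it acts on the solver instead).

```python
import numpy as np

def stabilization_stress(scheme, mu, a_con, A_con, a_pre_con):
    """Sketch: stabilization stress sigma_sta^{ab} of Tab. 1 (classes A and a)."""
    a_cov = np.linalg.inv(a_con)                                    # covariant metric a_{ab}
    J   = np.sqrt(np.linalg.det(A_con) / np.linalg.det(a_con))      # area change w.r.t. reference
    Js  = np.sqrt(np.linalg.det(a_pre_con) / np.linalg.det(a_con))  # area change w.r.t. previous step
    I1  = np.tensordot(A_con, a_cov, axes=2)                        # I1  = A^{ab} a_{ab}
    I1s = np.tensordot(a_pre_con, a_cov, axes=2)                    # I1* = a_pre^{ab} a_{ab}
    if scheme in ('A', 'A-t'):                                      # Eq. (47)
        s = (A_con - a_con) / J
    elif scheme in ('a', 'a-t'):                                    # Eq. (48)
        s = (a_pre_con - a_con) / J
    elif scheme in ('A-s', 'A-st'):                                 # Eq. (51)
        s = (A_con - 0.5 * I1 * a_con) / J**2
    elif scheme in ('a-s', 'a-st'):                                 # Tab. 1 entry, cf. Eq. (53)
        s = (a_pre_con - 0.5 * I1s * a_con) / Js**2
    else:                                                           # scheme 'P': no stress added
        s = np.zeros((2, 2))
    return mu * s
```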
4 FE formulation
This section presents a general finite element formulation for lipid bilayers using both material
models of Sec. 2.3. The formulation applies to general 3D surfaces. It is based on the solid shell
formulation of Duong et al. (2015).
4.1 FE approximation
The geometry of the reference surface and the current surface are approximated by the finite element interpolations

X ≈ N Xe   (59)

and

x ≈ N xe ,   (60)

where N := [N₁ 1, ..., N_ne 1] is a (3 × 3ne) array containing the ne nodal shape functions N_I = N_I(ξ¹, ξ²) of element Ω^e defined in parameter space. Xe := [X₁ᵀ, ..., X_neᵀ]ᵀ and xe := [x₁ᵀ, ..., x_neᵀ]ᵀ contain the ne nodal position vectors of Ω^e. In order to ensure C¹ continuity of the surface, NURBS-based shape functions are used (Borden et al., 2011). The tangent vectors of the surface are thus approximated by

A_α = ∂X/∂ξ^α ≈ N,α Xe   (61)

and

a_α = ∂x/∂ξ^α ≈ N,α xe .   (62)

From these, the normal vectors of the surface are then determined from (2) and (3). The variations of x and a_α are approximated in the same manner, i.e.

δx ≈ N δxe   (63)

and

δa_α ≈ N,α δxe .   (64)
Based on these expressions, all the kinematical quantities discussed in Sec. 2.1 as well as their
variation (Sauer and Duong, 2015) can be evaluated.
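As an illustration, the kinematic quantities at a single quadrature point can be computed from the shape function values and their parametric derivatives as follows (the array layout and the availability of these values from the NURBS discretization library are assumptions):

```python
import numpy as np

def surface_kinematics(N_val, dN_val, x_e):
    """Sketch: current surface point, tangents, metric and normal at one quadrature
    point, following Eqs. (60), (62) and the normal definition (2)-(3).
    N_val: (n_e,) shape function values, dN_val: (n_e, 2) parametric derivatives,
    x_e:   (n_e, 3) nodal (control point) positions of the element."""
    x  = N_val @ x_e                          # surface point, Eq. (60)
    a1 = dN_val[:, 0] @ x_e                   # covariant tangent a_1, Eq. (62)
    a2 = dN_val[:, 1] @ x_e                   # covariant tangent a_2
    a_cov = np.array([[a1 @ a1, a1 @ a2],
                      [a2 @ a1, a2 @ a2]])    # metric a_{ab}
    n = np.cross(a1, a2)
    n = n / np.linalg.norm(n)                 # unit normal
    return x, a1, a2, a_cov, n
```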
4.2 Discretized weak form
Inserting the above interpolations into the weak form of Sec. 2.4 leads to the approximated
weak form
G ≈ Σ_{e=1}^{nel} G^e = Σ_{e=1}^{nel} (G^e_int − G^e_ext) .   (65)
The internal virtual work of each finite element is given by (Duong et al., 2015)

G^e_int = δxeᵀ (f^e_intσ + f^e_intM) ,   (66)

with the FE force vectors due to the membrane stress σ^αβ and the bending moment M^αβ

f^e_intσ := ∫_{Ω^e} σ^αβ Nᵀ,α a_β da ,   (67)

and

f^e_intM := ∫_{Ω^e} M^αβ (Nᵀ,αβ − Γ^γ_αβ Nᵀ,γ) n da .   (68)

Following decomposition (43), f^e_intσ can be split into the in-plane and out-of-plane contributions (Sauer et al., 2014)

f^e_inti := f^e_intσ − f^e_into ,
f^e_into := − ∫_{Ω^e} σ^αβ b_αβ Nᵀ n da .   (69)
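The force vector (67) is evaluated element-wise by Gaussian quadrature in the parameter domain. The sketch below illustrates this for one element; the stress callback, the derivative callback and the quadrature rule are hypothetical placeholders.

```python
import numpy as np

def f_int_sigma(quad_points, quad_weights, sigma_at, dN_at, x_e):
    """Sketch: assemble f_intsigma^e of Eq. (67) by quadrature.
    sigma_at(xi): 2x2 contravariant stress components sigma^{ab} at point xi,
    dN_at(xi):    (n_e, 2) parametric shape function derivatives at xi,
    x_e:          (n_e, 3) nodal positions of the element."""
    n_e = x_e.shape[0]
    f = np.zeros(3 * n_e)
    for xi, w in zip(quad_points, quad_weights):
        dN = dN_at(xi)
        a = [dN[:, 0] @ x_e, dN[:, 1] @ x_e]          # covariant tangents a_1, a_2
        da = np.linalg.norm(np.cross(a[0], a[1]))     # area element per unit dxi1 dxi2
        sig = sigma_at(xi)
        for al in range(2):
            for be in range(2):
                # contribution sigma^{ab} N^T_{,a} a_b, cf. Eq. (67)
                f += sig[al, be] * np.kron(dN[:, al], a[be]) * da * w
    return f
```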
The external virtual work for each FE due to the external loads f, t and m_τ is given by (Duong et al., 2015)

G^e_ext = δxeᵀ (f^e_extf + f^e_extt + f^e_extm) ,   (70)

with

f^e_extf := ∫_{Ω^e} Nᵀ f da ,
f^e_extt := ∫_{∂t Ω^e} Nᵀ t ds ,   (71)
f^e_extm := ∫_{∂m Ω^e} Nᵀ,α ν^α m_τ n ds .
For the complete linearization of these terms, see Sauer et al. (2014), Duong et al. (2015) and
Appendix A. The linearization requires the material tangents of σ αβ and M αβ . For the material
models in (28) and (31) these are given in Sauer and Duong (2015). For the stabilization schemes
in Sec. 3.1.1 the tangent is given in Sauer (2014), while the tangent for schemes ‘A-s’ and ‘A-st’
(see (52)) in Sec. 3.1.2 is
c^αβγδ := 2 ∂τ^αβ/∂a_γδ = µ/J ( (I₁/2) a^αβ a^γδ − I₁ a^αβγδ − a^αβ A^γδ − A^αβ a^γδ ) .   (72)
For schemes ‘a-s’ and ‘a-st’, the terms I1 and J are simply substituted by I1∗ and J ∗ , respectively.
Here, aαβγδ is given in Sauer and Duong (2015).
4.3 Area constraint
For the area-incompressible case, a further step is to discretize the constraint and the corresponding Lagrange multiplier. For the latter, we write
q ≈ L qe ,   (73)

as in Eq. (60). Here L := [L₁, ..., L_me] is a (1 × me) array containing the me nodal shape functions L_I = L_I(ξ¹, ξ²) of surface element Ω^e, and qe := [q₁, ..., q_me]ᵀ contains the me nodal Lagrange multipliers of the element. It follows that

δq ≈ L δqe ,   (74)

such that weak constraint (46) becomes

G_g ≈ Σ_{e=1}^{nel} G^e_g ,   (75)

where

G^e_g = δqeᵀ g^e ,   (76)

with

g^e := ∫_{Ω^e_0} Lᵀ g dA .   (77)
The linearization of ge , needed for the following solution procedure, is provided in Appendix B.
4.4 Solution procedure
The elemental vectors f^e_intσ, f^e_intM, f^e_extf, f^e_extt, f^e_extm and g^e are assembled into the global vectors f and g by adding corresponding entries. The discretized weak form then reads

δxᵀ f(x, q) + δqᵀ g(x) = 0   ∀ δx ∈ V^h & δq ∈ Q^h ,   (78)
where x, q, δx and δq are global vectors containing all nodal deformations, Lagrange multipliers
and their variations. V h and Qh are corresponding discrete spaces. Eq. (78) is satisfied if f = 0
and g = 0 at nodes where no Dirichlet BC apply. These two nonlinear equations are then solved
with Newton’s method for the unknowns x and q. We note that the discretization of x and
q should satisfy the LBB-condition (Babuška, 1973; Bathe, 1996). For that, we consider here
C 1 -continuous, bi-quadratic NURBS interpolation for x and C 0 -continuous, bi-linear Lagrange
interpolation for q. If no constraint is present (like in model (27)), the parts containing q and
g are simply skipped. A comparison between the different models ((27) and (29)) is presented
in Sec. 5.
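A compact sketch of the resulting Newton iteration for the coupled unknowns x and q is given below. The assembly callback, the dense solve and the block layout of the tangent are simplifying assumptions; a real implementation works with sparse matrices and the consistent linearizations of Appendices A and B.

```python
import numpy as np

def newton_solve(assemble, x, q, tol=1e-8, max_iter=30):
    """Sketch: Newton iteration for f(x, q) = 0 and g(x) = 0, cf. Eq. (78).
    'assemble' returns the residuals f, g and the tangent blocks
    K = df/dx, Gq = df/dq, Gx = dg/dx (names are illustrative)."""
    for _ in range(max_iter):
        f, g, K, Gq, Gx = assemble(x, q)
        if np.linalg.norm(np.concatenate([f, g])) < tol:
            return x, q
        # saddle-point tangent of the Lagrange multiplier formulation
        A = np.block([[K, Gq],
                      [Gx, np.zeros((g.size, g.size))]])
        d = np.linalg.solve(A, -np.concatenate([f, g]))
        x = x + d[:f.size]
        q = q + d[f.size:]
    raise RuntimeError("Newton iteration did not converge")
```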
4.5 Rotational constraints
To constrain rotations, we add the constraint potential

Π_n = ∫_{L0} (ε/2) (n − n̄) · (n − n̄) dS   (79)

to the shell formulation. This approach can be used to apply rotations at boundaries, to fix rotations at symmetry boundaries, and to equalize normals at patch boundaries. The particular boundary under consideration is denoted as L0 in the reference configuration, and ε is a penalty parameter. The variation, linearization and FE discretization of (79) are discussed in Duong et al. (2015).
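Evaluated by quadrature along the boundary, the penalty term (79) takes the following form; the quadrature data passed to the function are assumed to be provided by the discretization.

```python
import numpy as np

def rotation_penalty_energy(eps, n_qp, n_bar_qp, w_qp):
    """Sketch: penalty potential of Eq. (79) along a boundary L0.
    n_qp, n_bar_qp: (nq, 3) current and prescribed normals at the quadrature points,
    w_qp: quadrature weights including the reference line element dS."""
    d = n_qp - n_bar_qp
    return 0.5 * eps * np.sum(w_qp * np.einsum('ij,ij->i', d, d))
```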
4.6 Normalization
For a numerical implementation, the above expressions need to be normalized. We normalize
the geometry and deformation by some length scale L, i.e. X̄ = X/L and x̄ = x/L, where the
bar indicates non-dimensional quantities, and non-dimensionalize all the kinematics based on
this. The material parameters are chosen to be normalized by parameter k, which has the unit
[force × length]. The non-dimensional material parameters thus are
k̄ = 1 ,   k̄* = k*/k ,   K̄ = K L²/k ,   µ̄ = µ L²/k ,   ε̄ = ε L/k .   (80)

With the chosen normalization parameters k and L, the normalization of the stress and moment components becomes³

q̄ = q L²/k ,   σ̄^αβ = σ^αβ L²/k ,   M̄^αβ = M^αβ L/k ,   (81)

while the normalization of the loading follows as

f̄ = f L³/k ,   t̄ = t L²/k ,   m̄_τ = m_τ L/k .   (82)
5 Numerical examples
To illustrate the performance of the proposed finite element model, four examples – marked
by increasing computational complexity – are presented here. The first three examples have
analytical solutions that are used for model verification.
5.1 Pure bending and stretching of a flat strip
The first example considers the bending of a flat strip, with dimension S × L, by applying the
rotation Θ/2 at the ends of the strip as is shown in Fig. 2a. Further, a uniform stretch, with
³ Supposing that the parameters ξ^α carry units of length, so that a_αβ and a^αβ become dimensionless.
Figure 2: Pure bending: a. initial FE configuration and boundary conditions (for S = πL
discretized with m = 8 elements); b. current FE configuration for an imposed rotation and
stretch of Θ/2 = π/3 and λ2 = 1.5. The color shows the relative error in the mean curvature,
defined as eH := 1 − HFE /Hexact , considering model (29) without any stabilization.
magnitude λ2 , is applied as shown in the figure. The remaining boundaries are supported as
shown. In particular, the rotation along the boundaries at Y = 0 and Y = L is not constrained.
We will study the problem by examining the support reactions M = M (Θ, λ2 ) (the distributed
moment along X = 0) and N = N (Θ, λ2 ) (the traction along Y = 0). The analytical solution
for this problem is given by Sauer and Duong (2015). According to this, the strip deforms into
a curved sheet with dimension s × ` = λ1 S × λ2 L and constant mean curvature
H = κ₁/2 ,   (83)
where λ1 = s/S, λ2 = `/L and κ1 = Θ/s are the in-plane stretches and the out-of-plane
curvature of the strip. Further,
[a^αβ] = [ λ₁⁻²   0
           0      λ₂⁻² ]   (84)

and

[b^αβ] = [ κ₁ λ₁⁻²   0
           0         0 ] .   (85)
With this, the in-plane stress components become

N₁₁ = q − k H² ,   N₂₂ = q + k H²   (86)

both for the area-incompressible model of (29) and the compressible model of (27) with q = K(J − 1). For the considered boundary conditions, N₁₁ = 0, so that

q = k H² ,   (87)

and we thus have the support reaction (per current length) N := N₂₂ = 2k H² along Y = 0 and Y = ℓ. Per reference length this becomes N₀ = λ₁ N. The bending moment necessary to
support the applied rotation (along X = 0 and X = πR) becomes (Sauer and Duong, 2015)
M = k H   (88)
per current length of the support (or M0 = λ2 M per reference length). If the special case
k ∗ = −k/2 (Canham, 1970) is considered, there is no bending in the Y -direction.
For the area-incompressible model of (29), we have λ1 = 1/λ2 . For the area-compressible case
according to model (27), we can determine λ1 from (87) with J = λ1 λ2 , giving
λ₁ = (1/λ₂) ( (k/K) H² + 1 ) .   (89)
The two cases are solved numerically using the computational setup shown in Fig. 2. The FE
mesh is discretized by m elements along X. The parameter t is introduced to apply the rotation
Θ = tπ/6 and stretch λ2 = 1 + t/2 by increasing t linearly from 0 to 1 in nt steps, where nt
is chosen as a multiple of m. The mean curvature then follows as H = Θ/(2λ1 S). For the
unconstrained case, λ1 is then the solution of the cubic equation
λ₁³ λ₂ − λ₁² − Θ²/(4K̄) = 0 ,   (90)

with K̄ = K S²/k. Numerically, the rotation is applied according to (79) considering the penalty parameter ε = 100 nx k/L. Fig. 3 shows the FE solution and analytical solution for M₀(t) and N₀(t), normalizing M by k/L and N by k/L².
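The analytical reference values M0 and N0 used in Fig. 3 can be computed as follows; the root of the cubic (90) is obtained numerically, and the load parameterization is passed in explicitly since only Θ and λ2 enter the solution. The function is an illustrative sketch.

```python
import numpy as np

def pure_bending_reference(Theta, lam2, S, k, K=None):
    """Sketch: analytical support reactions M0, N0 (per reference length) of Sec. 5.1.
    K=None selects the area-incompressible case; otherwise the cubic (90) is solved."""
    if K is None:
        lam1 = 1.0 / lam2                              # area incompressibility
    else:
        Kbar = K * S**2 / k
        roots = np.roots([lam2, -1.0, 0.0, -Theta**2 / (4.0 * Kbar)])   # Eq. (90)
        lam1 = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    H = Theta / (2.0 * lam1 * S)                       # mean curvature, Eq. (83)
    M = k * H                                          # moment per current length, Eq. (88)
    N = 2.0 * k * H**2                                 # traction per current length
    return lam2 * M, lam1 * N                          # M0 = lam2 M, N0 = lam1 N
```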
Figure 3: Pure bending: distributed boundary moment M0 (t) and normal traction N0 (t) as
obtained analytically and computationally for a. the area-incompressible model (29) and b. the
area-compressible model (27) for K̄ = 2.5.
Next, the accuracy of the different stabilization schemes is studied in detail by examining the L2-error of the solution, defined by

L2 := sqrt( 1/(S L) ∫_{S0} ‖u_exact − u_FE‖² dA ) ,   (91)

and the error in M and N, defined by

E_MN := |M_exact − M_FE| / M_exact + |N_exact − N_FE| / N_exact ,   (92)

where M_FE and N_FE are the computed mean values along the respective boundaries. The first error is a measure of the kinematic accuracy, while the second is a measure of the kinetic
Figure 4: Pure bending: accuracy for the area-incompressible model (29): a. L2-error vs. m considering stabilization classes A and a with µ̄ = 10 and nt = 12.5 m; b. L2-error vs. µ considering stabilization class A with m = 32; c.–d. same as a.–b., but now for error E_MN. Considered is Θ/2 = π/3 and λ2 = 1.5.
accuracy. Fig. 4 shows the two errors for the area-incompressible model of Eq. (29). Looking at the L2-error, schemes 'A-t', 'A-st', 'a-t' and 'a-st' perform best. In case of error E_MN, schemes 'a' and 'a-s' perform best. Class A generally converges with µ, but it may not converge with the number of elements for high values of µ. Interestingly, the L2-error of schemes 'A-t' and 'A-st' is not affected by µ, whereas that of schemes 'A' and 'A-s' is. For sufficiently low µ (for m = 32 about µ̄ < 10⁻³), the accuracy of class A (both in L2 and E_MN) reaches that of class a and then only improves with mesh refinement. Class A with low µ may even surpass class a with high µ. But generally, class a is more accurate and robust (as µ does not need to be very small). There is no clear favourite in class a for this test case.
Fig. 5 shows the two errors for the area-compressible model of Eq. (27) considering K̄ = 2.5.
In case of the L2 -error, scheme ‘a’ performs best, while for error EM N , scheme ‘a-s’ is best. As
before, class A is poor for large µ, but reaches (and may surpass) the accuracy of class a at
some µ depending on m.
As the plots show, no single stabilization scheme stands out here and the accuracy depends
both on the model and the error measure. In general, all schemes are suitable to solve the
problem. If class A is used, the value of µ needs to be suitably low. For class a even large
values for µ can be used. In this example it is even possible to set µ = 0 in the code. This works
Figure 5: Pure bending: accuracy of the area-compressible model (27): a. L2 -error vs. m
considering stabilization classes A and a with µ̄ = 10 and nt = 12.5 m; b. L2 -error vs. µ
considering stabilization class A with m = 32; c.–d. same as a.–b., but now for error EM N .
Considered is K̄ = 2.5, Θ/2 = π/3 and λ2 = 1.5.
since the effective shear stiffness according to (39) is positive here, i.e. µeff = 3JkH 2 /2 > 0. For
other problems µeff can be negative, and stabilization is required.
5.2 Inflation of a sphere
The second example considers the inflation of a spherical cell. Contrary to the previous example,
the FE mesh now also contains interfaces between NURBS patches. Since the surface area
increases during inflation, potential (27) is considered. For this model, the in-plane traction
component, given in (28), is
N^αβ = N_a a^αβ + N_b b^αβ ,   (93)

with

N_a := k ∆H² + K (J − 1) ,   N_b := −k ∆H .   (94)
The initial radius of the sphere is denoted by R, the initial volume is denoted by V0 = 4πR3 /3.
The cell remains spherical during inflation, so that we can obtain an analytical reference solution.
The current radius during inflation shall be denoted by r, and the current volume by V = 4πr³/3. Considering the surface parameterization

x(φ, θ) = ( r cos φ sin θ,  r sin φ sin θ,  −r cos θ )ᵀ ,   (95)

we find

[a^αβ] = (1/r²) [ 1/sin²θ   0
                  0         1 ] ,   (96)

b^αβ = −a^αβ/r and H = −1/r for this example. The traction vector T = ν_α T^α on a cut ⊥ ν thus becomes

T = (N_a − N_b/r) ν + S^α ν_α n   (97)

according to (20). The in-plane component T_ν := N_a − N_b/r must equilibrate the current pressure according to the well-known relation

p = 2 T_ν / r .   (98)

We can thus establish the analytical pressure-volume relation

p̄(V̄) = 2 H̄0 V̄^(−2/3) + 2 H̄0² V̄^(−1/3) + 2 K̄ ( V̄^(1/3) − V̄^(−1/3) ) ,   (99)

normalized according to the definitions p̄ := pR³/k, V̄ := V/V0, H̄0 := H0 R and K̄ := KR²/k.
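The relation (99) can be assembled directly from (93)–(98); the following sketch returns the normalized pressure p̄ for a given normalized volume (lengths measured in units of R, energies in units of k; the function name is an illustrative assumption).

```python
def sphere_pressure(Vbar, H0bar, Kbar):
    """Sketch: pressure-volume relation of the inflated sphere, built from
    Eqs. (93)-(98) in normalized form (p_bar = p R^3/k, V_bar = V/V0,
    H0_bar = H0 R, K_bar = K R^2/k)."""
    rbar = Vbar**(1.0 / 3.0)          # current radius r/R
    J = rbar**2                       # area change J = r^2/R^2
    H = -1.0 / rbar                   # normalized mean curvature H R
    dH = H - H0bar
    Na = dH**2 + Kbar * (J - 1.0)     # Eq. (94), normalized
    Nb = -dH
    Tnu = Na - Nb / rbar              # in-plane traction, Eq. (97)
    return 2.0 * Tnu / rbar           # Laplace relation, Eq. (98)
```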
Fig. 6 shows the computational setup of the problem. The computational domain consists of
Figure 6: Sphere inflation: a. initial FE configuration and boundary conditions (for mesh
m = 8); b. current FE configuration for an imposed volume of V̄ = 2 compared to the initial
configuration; the colors show the relative error in the surface tension Tν .
a quarter sphere discretized with four NURBS patches. The quarter sphere contains 3m2 /2
elements where m is the number of elements along the equator of the quarter sphere. At the
boundaries and at the patch interfaces C¹-continuity is enforced using (79) with ε = 4000 m k/R.
The area bulk modulus is taken as K = 5k/R2 , while k ∗ is taken as zero. Two cases are
considered: H0 = 0 and H0 = 1/R. Fig. 7 shows that the computational p(V )-data converge to
the exact analytical result of (99). Here the pressure error
e_p = |p_exact − p_FE| / p_exact   (100)
Figure 7: Sphere inflation: a. pressure-volume relation; b. FE convergence for the different
stabilization schemes.
is examined for H0 = 1/R and V̄ = 2 considering the 9 stabilization schemes of Tab. 1 with
µ̄ = 0.01 for class A and µ̄ = 1 and nt = 5m for class a. For schemes ‘A’, ‘A-s’, ‘A-st’, ‘a’, ‘a-s’,
‘a-st’ and ‘P’ this error converges nicely (and is indistinguishable in the figure). Only schemes
‘A-t’ and ‘a-t’ behave significantly different. They introduce further errors that only converge if
µ is decreased or nt is increased. The reason why all other schemes have equal error, is that here
the error is actually determined by the penalty parameter used within patch constraint (79).
The error stemming from the stabilization methods (apart from ‘A-t’ and ‘a-t’) is insignificant
compared to that. It is interesting to note that ‘A-st’ and ‘a-st’ perform much better than
‘A-t’ and ‘a-t’, even though no shear is present in the analytical solution. ‘A-st’ and ‘a-st’ can
therefore be considered as the best choices here, since they are the most efficient schemes to
implement.
We finally note that for a sphere µeff = JkH(H − H0 ), where H = −1/r. Thus µeff > 0 for
H < H0 , which is the case here.
5.3 Drawing of a tube
Tube-like shapes are one of the most common non-trivial shapes in biological cell membranes.
They can be observed in the endoplasmic reticulum (Shibata et al., 2006, 2009; Hu et al., 2009)
and mitochondria (Fawcett, 1981; Griparic and van der Bliek, 2001; Shibata et al., 2009), where
individual tubules or a complex network of tubules co-exist. The tubes can also be formed when
a membrane is pulled by actin or microtubule polymerization, or by molecular motors attached
to membranes but walking on cytoskeletal filaments (Terasaki et al., 1986; Waterman-Storer
and Salmon, 1998; Koster et al., 2003; Itoh et al., 2005). These situations can be modeled by
means of a lateral force that acts on a membrane. An analytical solution for this problem has
been obtained by Derényi et al. (2002) by assuming axisymmetry and infinitely long tubes,
which we use to verify our finite element formulation. The dynamics of tether formation and
relaxation have been also studied by Rahimi and Arroyo (2012).
To simulate the tube drawing process we consider the setup shown in Fig. 8. The cell membrane
is modeled as a circular, initially flat disc with initial radius L. The effect of the surrounding
membrane is captured by the boundary tension σ (w.r.t. the current boundary length). The
surface is described by material model (27). L and k are used for normalization. The remaining
Figure 8: Tube drawing: a. boundary conditions and b. coarse FE mesh of the initial configuration. The dots show the control points of the NURBS discretization.
material parameters are chosen as k* = −0.7 k and K = 20,000 k/L². H0 is taken as zero. We consider σ ∈ {100, 200, 400, 800} k/L². Stabilization scheme 'A-s' is considered with µ = 0.1 k/L². The shell is clamped at the boundary, but free to move in the in-plane direction. The traction t = σν is imposed and applied numerically via (71.2). Even though t is constant during deformation, the boundary length ds appearing in f^e_extt changes and has to be linearized. This is, for example, discussed in Sauer (2014). At the center, the displacement u is imposed
This is, for example, discussed in Sauer (2014). At the center, the displacement u is imposed
on the initially flat, circular surface. To prevent rigid rotations, selected nodes are supported
as shown in Fig. 8b.
Fig. 8b also shows one of the chosen finite element discretizations of the initial configuration.
Quadratic, NURBS-based, C 1 -continuous finite elements are used. For those, the outer ring of
control points lies outside of the mesh; their distance to the center is therefore slightly larger
than L. A finer discretization is chosen at the center, where the tube is going to form. The
chosen NURBS discretization degenerates at the center, such that the C 1 -continuity is lost
there. It is regained if displacement u is applied not only to the central control point but also to
the first ring of control points around the center. This ensures that the tangent plane remains
horizontal at the tip. Likewise, a horizontal tangent is enforced at the outer boundary by fixing
the height of the outer two rings of control points.
Fig. 9 shows the deformed surface at u = L, considering different values of the far-field surface
tension σ. The surface tension affects the slenderness of the appearing tube. Derényi et al.
(2002) showed from theoretical considerations⁴ that the tube radius is

a = (1/2) √(k/σ) ,   (101)

while the steady force during tube drawing is

P0 = 2π √(σ k) .   (102)

These values are confirmed by our computations, as illustrated in Fig. 10.

⁴ Assuming that the tube is sufficiently long and can be idealized by a perfect cylinder.
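Expressed in code, the reference values (101) and (102) read as follows (a minimal sketch, assuming consistent units for k and σ):

```python
import numpy as np

def tube_reference(k, sigma):
    """Sketch: analytical tube radius (101) and steady drawing force (102)."""
    a = 0.5 * np.sqrt(k / sigma)           # tube radius, Eq. (101)
    P0 = 2.0 * np.pi * np.sqrt(sigma * k)  # steady drawing force, Eq. (102)
    return a, P0
```

For σ = 200 k/L² this gives a ≈ 0.0354 L, so the imposed displacement u = L corresponds to u/a ≈ 28.3, consistent with the value u = L (= 28.28 a) quoted in the convergence study below.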
Figure 9: Tube drawing: Results for σ ∈ {100, 200, 400, 800} k/L2 (bottom to top); colorscale
shows the mean curvature H normalized by L−1 .
Figure 10: Tube drawing: a. load-displacement curve; b. FE convergence.
The left inset shows the force-displacement relation during drawing. Oscillations appear in the
numerical solution due to the mesh discretization error. They are more pronounced for more
slender tubes, as the black curve in Fig. 10a shows. They disappear upon mesh refinement, as
the solution converges. The convergence of P0 for σ = 200k/L2 and u = L (= 28.28a) is shown
in Fig. 10b by examining the error
e(P0^FE) := |P0^ref − P0^FE| / P0^ref ,   (103)
where P0^ref is the FE solution for m = 256 and µ = 0. The mesh sequence given in Tab. 2 is used for the convergence study.

m               4      8      16     32     64      128     256
nel = 4m²       64     256    1024   4096   16384   65536   262144
nno = 4m(m+1)   80     288    1088   4224   16640   66048   263168
µ̄               1/10   1/20   1/40   1/80   1/160   1/320   1/640

Table 2: Tube drawing: computational parameters (number of surface elements nel, number of control points nno and stability parameter µ̄) for the convergence study of Fig. 10b.

For the convergence study, the following stabilization cases are considered:
1. scheme ‘A-s’ with fixed µ̄ = 0.1,
2. scheme ‘A-s’ with varying µ̄, as specified in Tab. 2,
3. scheme ‘P’ using the solution of case 1 as initial guess,
4. scheme ‘P’ using the solution of case 2 as initial guess,
5. no stabilization (µ = 0), using the solution of case 2 as initial guess.
It turns out that scheme ‘P’ does not at all improve the results obtained by scheme ‘A-s’,
probably due to the issue noted in the remark of Sec. 3.2. Case 1 does not converge below an
error of about 0.6%, which reflects the error caused by µ̄ = 0.1. Case 5 works due to the inherent
shear stiffness of the Helfrich model given in (39). For the cylindrical part, µ̄eff = 3H 2 /2 > 0.
With H = −1/(2a) follows µ̄eff = 3/(2a2 ). At the tip, the principal curvatures are equal
(κ1 = κ2 ), so that µ̄eff = κ21 > 0. At the tip the curvature is almost twice as large as the
cylinder curvature (i.e. κ1 ≈ −2/a), so that µ̄eff ≈ 4k/a2 . In the cap, µ̄eff ranges in between
these two extreme values, and so µeff is always positive. The initial flat disk has µeff = 0, so
that stabilization is needed for the initial step. In all cases, the error reported in Fig. 10b is
assessed by comparison to the finest mesh of case 2. From this we can find that the analytical
solution itself has an error of about 0.2%, due to its underlying assumptions.
5.4 Cell budding
The last example considers the budding of spherical cells. The example is used to demonstrate
the capabilities of the proposed computational model, in particular in the context of non-trivial
and non-axisymmetric deformations.
5.4.1 Motivation
It is known that protein adsorption can mediate shape changes in biological membranes (Zimmerberg and Kozlov, 2006; McMahon and Gallop, 2005; Kozlov et al., 2014; Shi and Baumgart,
2015). The lipid membrane deforms whenever its curvature is incompatible with the inherent
structure of a protein, giving rise to a spontaneous curvature. Such protein-induced spontaneous curvature is common in many important biological phenomena such as endocytosis
(Buser and Drubin, 2013; Kukulski et al., 2012; Peter et al., 2004), cell motility (Keren, 2011)
and vesicle formation (Gruenberg and Stenmark, 2004). For example, in endocytosis, a clathrin
protein coat is formed on the membrane which is incompatible with the membrane curvature,
thus driving the out-of-plane shape changes, finally leading to fission. Moreover, another set of
curvature generating proteins, the so-called BAR proteins (Peter et al., 2004; Kukulski et al.,
2012), play an important role in modulating shape changes leading to fission in the later stages
of endocytosis.
In this example, we use the proposed finite element formulation to study the shape changes in
membranes due to protein-induced spontaneous curvature. This spontaneous curvature usually
depends on the concentration of proteins adsorbed onto the membrane. In obtaining the shapes,
we assume that any proteins that are bound to the membrane do not diffuse and are concentrated
in a specific region (Walther et al., 2006; Agrawal and Steigmann, 2009; Karotki et al., 2011;
Kishimoto et al., 2011; Rangamani et al., 2014). Such processes are common in all endocytosis
related phenomena, where it is known that the clathrin protein coat does not diffuse in the
membrane once adsorbed. Therefore, our example aims at helping to understand the shapes
arising in the early stages of endocytosis.
5.4.2 Computational setup
For our example, we consider a hemi-spherical cell with initial radius R; the cell surface is
clamped at the boundary, but free to expand radially as is shown in Fig. 11. On the tip
of the cell, within the circular region of radius 0.2R, a constant spontaneous curvature H̄0
is prescribed. Unless otherwise specified, model (27) is used with the material parameters
k̄ ∗ = −0.7 and K̄ = 10,000, while k and R are used for normalization according to Sec. 4.6
and remain unspecified. Further, stabilization scheme ‘A-s’ is used with µ̄ = 0.01. The FE
Figure 11: Cell budding: initial configuration, FE discretization and boundary conditions. The
boundary normal is fixed and the boundary nodes are only free to move in the radial direction.
discretization shown in Fig. 11, consisting of five NURBS patches, is used. Where the patches
meet, constraint (79) is added to ensure rotational continuity and moment transfer. Constraint
(79) is also used to fix the boundary normal. The actual FE mesh is much finer than in Fig. 11
and uses 12228 elements (64 times more than in the figure). The penalty parameter of the
rotational constraint is ε̄ = 6,400. Gaussian quadrature is considered, using 3 × 3 points for
surface integrals and 4 points for line integrals.
5.4.3 Bud shapes
In past numerical studies, axisymmetric bud shapes have been reported, e.g. Walani et al. (2015).
These shapes should be a natural solution due to the axisymmetry of the problem. However, as
is shown below, non-axisymmetric solutions are also possible, and in fact energetically favorable,
indicating that axisymmetric solutions can become unfavorable. This is illustrated by considering the five different physical test cases shown in Tab. 3 and discussed in the following:

case   bud shape      H0 region   stabilization   µ̄      in-plane stress state
1      axisymmetric   circle      A-s             0.01    hydro-static
2      general        circle      A-s             0.01    hydro-static
3      general        ellipse     A-s             0.01    hydro-static
4      general        ellipse     A-st            10      elastic shear
5      general        ellipse     a-st            1250    viscous shear

Table 3: Cell budding: different physical test cases considered.
Case 1: The deformation is constrained to remain axisymmetric (the FE nodes are only allowed
to move in radial direction5 ). The resulting deformation for H̄0 = −25 is shown in Fig. 12 and
in the supplemental movie file bud1.mpg.
Case 2: The deformation is not constrained. Consequently, the non-axisymmetric bud shape
shown in Fig. 13 appears. Viewed from the top, the bud takes the shape of a ‘+’. Essentially,
the initially circular region, where H0 is prescribed, tries to flow away in order to lower the
elastic energy. The flow leads to large mesh distortions and consequently the simulation crashes
at H̄0 = −23.17. It is reasonable to suspect that the ‘+’ shape is a consequence of the particular
discretization. To confirm this, the following case is considered:
⁵ For the given mesh, this does not enforce axisymmetry exactly, as a close inspection of the results shows.
Figure 12: Cell budding: Axisymmetric case at H̄0 = −5, −10, −15, −20, −25 (left to right):
3D and side view of deformation and curvature H̄. Here H̄ ∈ [−15.0, 0.31].
Figure 13: Cell budding: Unconstrained, perfect case at H̄0 = −5, −10, −15, −20, −23.17 (left
to right): 3D and side view of deformation and curvature H̄. Here H̄ ∈ [−24.5, 6.12].
Case 3: Instead of a circle, H0 is prescribed within an imperfect circle, i.e. an ellipse with
half-axes a = 0.22R and b = 0.18R. The bud now flows into a ‘−’ shape, as is shown in Fig. 14.
The distortion of the mesh seen on the right clearly shows how the material flows outward in
horizontal direction and back inward in vertical direction. The mesh distortion becomes so
large that the simulation crashes at H̄0 = −11.98. In the current case the flow is not resisted
mechanically (since µ is so low). Resistance is provided either by shear stiffness (e.g. due to an
underlying cytoskeleton) or by viscosity. This is considered in the remaining two cases.
Case 4: Fig. 15 shows the deformation for the same case as before (a = 0.22R and b = 0.18R)
considering now a shear resisting membrane. Model ‘A-st’6 is now considered with µ̄ = 10. The
shear resistance prevents the unbounded spreading of the bud observed earlier. Now the bud
remains localized in the center. The bud starts growing in an almost circular fashion, but then
degenerates into the shape shown in Fig. 15. The process suggests that the initial imperfection is
only a trigger for the final shape, but does not affect it in magnitude. The evolving deformation
for case 4 is also shown in the supplemental movie file bud4.mpg.
Case 5: The final case also considers the imperfect circle from before (a = 0.22R and b =
⁶ The shear stresses are now physical and need to be applied both in-plane and out-of-plane.
Figure 14: Cell budding: Unconstrained, imperfect case at H̄0 = −4, −8, −11.98 (left to right):
3D and side view of deformation and curvature H̄. Here H̄ ∈ [−12.5, −0.27].
Figure 15: Cell budding: Shear resistant, imperfect case at H̄0 = −5, −10, −15, −20, −25 (left
to right): 3D and side view of deformation and curvature H̄. Here H̄ ∈ [−22.8, 4.61].
0.18R), but now provides significant shear stress through physical viscosity. This is captured
through model ‘a-st’ using relation (56) with µ̄ = 1250 and a load stepping increment for H0
of ∆H̄0 = 0.02 (such that ν = 25k/L3 /Ḣ0 , where Ḣ0 is the rate with which the spontaneous
curvature is prescribed). The evolution of the bud with H0 is shown in Fig. 16 and in the
supplemental movie file bud5.mpg. Again, the bud starts growing in an almost circular fashion,
but then degenerates into a plate-like shape. If H0 is kept fixed over time, the solution of case
5 will relax to the solution of case 3.
5.4.4 Surface energy
By examining the surface energy

Π := ∫_{S0} W dA ,   (104)

it can be seen that the non-axisymmetric shapes are preferred. As Fig. 17 shows, cases 2 and
3 have much lower surface energy than case 1. The difference becomes especially large below
H0 = 4/R, when the deformations become large. As the system tries to minimize energy, this
Figure 16: Cell budding: Viscous, imperfect case at H̄0 = −5, −10, −15, −20, −25 (left to right):
3D and side view of deformation and curvature H̄. Here H̄ ∈ [−24.4, 4.29].
Figure 17: Cell budding: elastic surface energy vs. spontaneous curvature for the three cases
shown in Figs. 12, 13 and 14: a. bending energy; b. total energy (bending + areal part).
shows that the axisymmetric bud shape of case 1 is not a preferred solution. As the figure
shows, almost all of the energy goes into bending (contribution Jw in (27)), as the areal part
(contribution Kg²/2 in (27)) becomes negligible for near incompressibility.
5.4.5 Surface tension
One of the advantages of the proposed finite element formulation is that the surface tension γ
can be studied. This is done here for cases 1, 4 and 5 listed above. It was shown before, through FE simulations utilizing Monge patches, that the surface tension is not uniform under protein-induced spontaneous curvature (Rangamani et al., 2014). The current simulations confirm
this result and in addition yield further understanding for large deformations.
First, the axisymmetric case (case 1) is examined in Fig. 18. As can be seen, γ becomes maximal
within the protein patch. This maximum is constant across the patch for low H0 . This changes
for increasing H0 , where a distinct maximum appears in the center.7 At a certain level of H0
⁷ It can be seen that the distribution of γ is not exactly axisymmetric. This is due to the inexact enforcement of axisymmetry noted in footnote 5.
Figure 18: Cell budding: Axisymmetric case at H̄0 = −5, −10, −15, −20, −25 (left to right):
3D and side view of deformation and surface tension γ̄.
rupture would occur here, depending on the strength of the lipid bilayer.
Second, the shear stiff case (case 4) is examined in Fig. 19. As the plate-like bud appears, the
Figure 19: Cell budding: Shear resistant, imperfect case at H̄0 = −5, −10, −15, −20, −25 (left
to right): 3D and side view of deformation and surface tension γ̄.
maximum of γ moves away from the center and is concentrated at the end of the plate. The
behavior is similar for the viscous case (case 5) shown in Fig. 20. Both cases 4 and 5 show
that the extrema of γ are concentrated in very small regions associated with large curvatures.
However, these peak values are still much lower than the more distributed peak values of case 1.
The lower values are not surprising, as the system has much lower energy for non-axisymmetric
shapes. For the non-axisymmetric cases with shear stiffness (case 4) and viscosity (case 5), the
resulting surface tensions are of similar magnitude. Only the shapes are different.
5.4.6 Effective shear stiffness
Fig. 21 shows the sign of the effective shear stiffness µeff for cases 1 and 4.8 It can be seen that for
⁸ For case 4, the physical shear stiffness µ̄ = 10 needs to be added to Eq. (39).
Figure 20: Cell budding: Viscous, imperfect case at H̄0 = −5, −10, −15, −20, −25 (left to right):
3D and side view of deformation and surface tension γ̄.
Figure 21: Cell budding: sign of µ̄eff for cases 1 (top) and 4 (bottom) at H̄0 =
−5, −10, −15, −20, −25 (left to right).
both cases, µeff is negative in some regions. The minimum value of µ̄eff reaches down to −150 in case 1 and to −50 in case 4. Since the simulations run stably, even though µeff < 0, there must be another stabilizing effect. It is probable that this is related to the geometry: the figure shows
that the regions of negative µeff are all convex. The condition µeff < 0 is therefore not sufficient
for unstable behavior. Since stabilization may be provided naturally, the computations can
sometimes be performed with no stabilization scheme. In practice it is however recommended
to add one of the stabilization schemes proposed in Sec. 3. For example, when both H and κ
are zero neither the Helfrich model nor the geometry can be expected to stabilize the structure.
The stabilization parameters (µ and nt ) can be picked such that the stabilization scheme does
not affect the physical behavior. This is the case for the stabilization chosen here.
5.4.7 Influence of the area-compressibility
As a final study, we investigate the influence of the area-compressibility (parameter K) and compare the behavior of model (27), which depends on K, with (29), where K is infinite. For this purpose, model (29) is discretized according to Sec. 4.3. In theory, the behavior of model (27) should approach that of model (29) as K → ∞.
For case 1, there is no significant difference between the two models as Fig. 22 shows. Only
Figure 22: Cell budding: influence of K for the axisymmetric case (case 1) for H̄0 = −15 (top) and H̄0 = −25 (bottom) considering K̄ = 10³, 10⁴, 10⁵, 10⁶, ∞ (left to right). The color shows γ̄.
for the lowest K, differences appear. For all other values of K, the bud shape only changes
minimally. However, increasing K in formulation (27) leads to oscillations in γ.
For case 5, a strong dependency on K appears, as Fig. 23 shows. Now, the deformation clearly
Figure 23: Cell budding: influence of K for the viscous case (case 5) for H̄0 = −15 (top) and H̄0 = −25 (bottom) considering K̄ = 10³, 10⁴, 10⁵, 10⁶, ∞ (left to right). The color shows γ̄.
does not converge with increasing K in (27). Instead, the solution for K̄ = 10³ is closest to
the solution from model (29). The figure also shows that oscillations appear in the solution
from model (27) as K increases. Essentially, the problem of model (27) is that even though
J converges to 1, as Fig. 24 shows, the pressure q does not converge. This can be seen in
Fig. 25 for the pure bending problem of Sec. 5.1, which has an analytical solution for q. As the
figure shows, there is an optimal value of K, where the pressure error is minimal and beyond
which it diverges. This is different to examples in computational contact mechanics, where
both kinematic and kinetic variables can converge with increasing penalty parameter(Sauer and
De Lorenzis, 2015). So even though model (27) is simpler and more efficient, it has to be used
Figure 24: Cell budding: influence of K for case 5 for H̄0 = −25 considering K̄ = 10³, 10⁴, 10⁶, ∞ (left to right). The color shows the relative error in J given by |J − 1|.
Figure 25: Pure bending: accuracy of the penalty regularization (27) compared to the Lagrange
multiplier formulation (29): a. error in stretch J; b. error in pressure q. As seen, J can be more
accurate, while q does not converge with K. There is rather an optimum K that minimizes the
pressure error.
with care, as the solution can become wrong. If an implementation of (29) is not available, the
mesh convergence of model (27) should be examined.
6 Conclusion
This paper presents a general computational formulation for lipid bilayers based on general thin-shell kinematics, the Helfrich bending model, and 3D, LBB-conforming, C¹-continuous NURBS-based finite elements. The rotational continuity of the formulation is ensured by using a rotational constraint across the patch boundaries in the FE mesh. Two cases are considered in order
to model the in-plane membrane response: area-compressibility and area-incompressibility, and
for both suitable FE formulations are presented. Since the formulation lacks shear stiffness,
several shear stabilization schemes are proposed for quasi-static computations. They are based
on adding either numerical stiffness (class A), numerical viscosity (class a) or performing a
projection of the solution onto the shell manifold (class P). The numerical viscosity scheme
can also be used to model physical viscosity as the example in Fig. 16 illustrates, while the
numerical stiffness scheme can also be used to model physical stiffness as the example in Fig. 15
illustrates. It is further shown that the Helfrich bending model provides intrinsic shear stiffness
as long as the surface curvature is non-zero. Altogether, four different computational examples
are considered in order to verify the formulation and to study its physical behavior. The last example shows that the 3D budding behavior of lipid bilayers – as described by the Helfrich model –
– can become very complex, even though the model is purely mechanical and does not account
for other effects, such as diffusion or temperature. Computational challenges arise especially for
area-incompressible, non-axisymmetric bilayer shapes. For these a penalty regularization can
give misleading results.
The paper shows that the proposed computational model is not only capable of capturing
complex deformations, but is also a suitable tool to analyse and understand the stresses and
energetics of lipid bilayers. Still, more work is needed in order to further advance the computational modeling of lipid bilayers. For example, the modeling of surface flow, protein diffusion,
multicomponent lipids and thermal fluctuations would be useful extensions of the present work.
Acknowledgements
The authors are grateful to the German Research Foundation (DFG) for supporting this research
under grants GSC 111 and SA1822/5-1. They acknowledge support from the University of
California at Berkeley and from Director, Office of Science, Office of Basic Energy Sciences,
Chemical Sciences Division, of the U.S. Department of Energy under contract No. DE-AC02-05CH11231. Further, they thank the graduate student Yannick Omar for checking the theory.
A Linearization of f^e_into
Alternatively to Eq. (69.2), f^e_into can also be written as

f^e_into = − ∫_{Ω^e_0} τ^αβ Nᵀ (n ⊗ n) N,αβ dA xe .   (105)

The linearization of f^e_into thus becomes

f^e_into(xe + ∆xe) ≈ f^e_into(xe) + ∆f^e_into ,   (106)

with

∆f^e_into = − ∫_{Ω^e_0} τ^αβ Nᵀ (n ⊗ n) N,αβ dA ∆xe − ∫_{Ω^e_0} ∆τ^αβ b_αβ Nᵀ n dA
            − ∫_{Ω^e_0} τ^αβ Nᵀ (n ⊗ ∆n) a_α,β dA − ∫_{Ω^e_0} τ^αβ b_αβ Nᵀ ∆n dA .   (107)

Inserting

∆τ^αβ = c^αβγδ a_γ · N,δ ∆xe   (108)

(Sauer et al., 2014) and

∆n = −a^γ (n · ∆a_γ)   (109)

(Wriggers, 2006), we get

∆f^e_into = k^e_into ∆xe ,   (110)

with the tangent matrix

k^e_into = − ∫_{Ω^e_0} τ^αβ Nᵀ (n ⊗ n) N,αβ dA − ∫_{Ω^e_0} c^αβγδ b_αβ Nᵀ (n ⊗ a_γ) N,δ dA
           + ∫_{Ω^e_0} τ^αβ Γ^γ_αβ Nᵀ (n ⊗ n) N,γ dA + ∫_{Ω^e_0} τ^αβ b_αβ Nᵀ (a^γ ⊗ n) N,γ dA .   (111)

Here, Γ^γ_αβ := a^γ · a_α,β defines the Christoffel symbol of the second kind. The tensor components c^αβγδ are given in Sauer and Duong (2015) for various material models. For an efficient implementation, the contractions appearing above should be worked out analytically.
B Linearization of g^e
The vector g^e is independent of the Lagrange multiplier and thus only depends on xe. The linearization of g^e thus is

g^e(xe + ∆xe) ≈ g^e(xe) + ∆g^e ,   (112)

with

∆g^e = ∫_{Ω^e_0} Lᵀ ∆g dA ,   (113)

and (Sauer and Duong, 2015)

∆g = ∆J = (J/2) a^αβ ∆a_αβ .   (114)

Inserting the discretisation of ∆a_αβ (Duong et al., 2015), and exploiting the symmetry of a^αβ, we get the approximation

∆g ≈ J a^α · N,α ∆xe ,   (115)

such that

∆g^e = k^e_g ∆xe ,   (116)

with the tangent matrix

k^e_g := ∫_{Ω^e_0} Lᵀ a^α · N,α J dA .   (117)
References
Agrawal, A. and Steigmann, D. (2009). Modeling protein-mediated morphology in biomembranes. Biomech. Model. Mechanobiol., 8(5):371–379.
Aris, R. (1989). Vectors, tensors and the basic equations of fluid mechanics. Dover, Mineola.
Arroyo, M. and DeSimone, A. (2009). Relaxation dynamics of fluid membranes. Phys. Rev. E,
79:031915.
Babuška, I. (1973). The finite element method with Lagrangian multipliers. Num. Math.,
20:179–192.
Baesu, E., Rudd, R. E., Belak, J., and McElfresh, M. (2004). Continuum modeling of cell
membranes. Int. J. Non-lin. Mech., 39:369–377.
Bathe, K.-J. (1996). Finite Element Procedures. Prentice-Hall, New Jersey.
Borden, M. J., Scott, M. A., Evans, J. A., and Hughes, T. J. R. (2011). Isogeometric finite
element data structures based on Bezier extraction of NURBS. Int. J. Numer. Meth. Engng.,
87:15–47.
Budin, I., Bruckner, R. J., and Szostak, J. W. (2009). Formation of protocell-like vesicles in a
thermal diffusion column. Journal of the American Chemical Society, 131(28):9628–9629.
Buser, C. and Drubin, D. G. (2013). Ultrastructural imaging of endocytic sites in Saccharomyces cerevisiae by transmission electron microscopy and immunolabeling. Microscopy and
Microanalysis, 19(02):381–392.
Canham, P. B. (1970). The minimum energy of bending as a possible explanation of the
biconcave shape of the human red blood cell. J. Theoret. Biol., 26:61–81.
Cirak, F. and Ortiz, M. (2001). Fully C1-conforming subdivision elements for finite-deformation thin-shell analysis. Int. J. Numer. Meth. Engng., 51:813–833.
Cottrell, J. A., Hughes, T. J. R., and Bazilevs, Y. (2009). Isogeometric Analysis. Wiley.
Dao, M., Lim, C. T., and Suresh, S. (2003). Mechanics of the human red blood cell deformed
by optical tweezers. J. Mech. Phys. Solids, 51:2259–2280.
Derényi, I., Jülicher, F., and Prost, J. (2002). Formation and interaction of membrane tubes.
Phy. Rev. Lett., 88(23):238101.
Du, Q. and Wang, X. Q. (2007). Convergence of numerical approximations to a phase field
bending elasticity model of membrane deformations. Int. J. Numer. Anal. Model., 4(3-4):441–
459.
Duong, T. X., Roohbakhshan, F., and Sauer, R. A. (2015). A new rotation-free isogeometric thin
shell formulation and a corresponding continuity constraint for patch boundaries. submitted.
Elliott, C. M. and Stinner, B. (2010). Modeling and computation of two phase geometric
biomembranes using surface finite elements. J. Comp. Phys., 229(18):6585–6612.
Fawcett, D. W. (1981). The Cell. Saunders, Philadelphia.
Feng, F. and Klug, W. S. (2006). Finite element modeling of lipid bilayer membranes. J.
Comput. Phys., 220:394–408.
Griparic, L. and van der Bliek, A. M. (2001). The many shapes of mitochondrial membranes.
Traffic, 2(4):235–244.
Gruenberg, J. and Stenmark, H. (2004). The biogenesis of multivesicular endosomes. Nature
Reviews Molecular Cell Biology, 5(4):317–323.
Gu, R., Wang, X., and Gunzburger, M. (2014). Simulating vesicle-substrate adhesion using two
phase field functions. J. Comp. Phys., 275:626–641.
Helfrich, W. (1973). Elastic properties of lipid bilayers: Theory and possible experiments. Z.
Naturforsch., 28c:693–703.
Hu, J., Shibata, Y., Zhu, P.-P., Voss, C., Rismanchi, N., Prinz, W. A., Rapoport, T. A., and
Blackstone, C. (2009). A class of dynamin-like GTPases involved in the generation of the tubular ER network. Cell, 138(3):549–561.
Itoh, T., Erdmann, K. S., Roux, A., Habermann, B., Werner, H., and De Camilli, P. (2005).
Dynamin and the actin cytoskeleton cooperatively regulate plasma membrane invagination
by bar and f-bar proteins. Developmental cell, 9(6):791–804.
Jarić, M., Seifert, U., Wirtz, W., and Wortis, M. (1995). Vesicular instabilities: The prolateto-oblate transition and other shape instabilities of fluid bilayer membranes. Phys. Rev. E,
52(6):6623–6634.
39
Jie, Y., Quanhui, L., Jixing, L., and Zhong-Can, O.-Y. (1998). Numerical observation of nonaxisymmetric vesicles in fluid membranes. Phys. Rev. E, 58(4):4730–4736.
Kahraman, O., Stoop, N., and Müller, M. M. (2012). Fluid membrane vesicles in confinement.
New J. Phys., 14:095021.
Karotki, L., Huiskonen, J. T., Stefanand, C. J., Ziólkowska, N. E., Roth, R., Surma, M. A.,
Krogan, N. J., Emr, S. D., Heuser, J., Grünewald, K., and Walther, T. C. (2011). Eisosome
proteins assemble into a membrane scaffold. J. Cell Biol., 195(5):889–902.
Keren, K. (2011). Cell motility: the integrating role of the plasma membrane. European
Biophysics Journal, 40(9):1013–1027.
Kishimoto, T., Sun, Y., Buser, C., Liu, J., Michelot, A., and Drubin, D. G. (2011). Determinants
of endocytic membrane geometry, stability, and scission. Proc. Natl. Acad. Sci., 108:E979–
988.
Kloeppel, T. and Wall, W. A. (2011). A novel two-layer, coupled finite element approach for
modeling the nonlinear elastic and viscoelastic behavior of human erythrocytes. Biomech.
Model. Mechanobiol., 10(4):445–459.
Koster, G., VanDuijn, M., Hofs, B., and Dogterom, M. (2003). Membrane tube formation from
giant vesicles by dynamic association of motor proteins. Proceedings of the National Academy
of Sciences, 100(26):15583–15588.
Kozlov, M. M., Campelo, F., Liska, N., Chernomordik, L. V., Marrink, S. J., and McMahon,
H. T. (2014). Mechanisms shaping cell membranes. Current opinion in cell biology, 29:53–60.
Kukulski, W., Schorb, M., Kaksonen, M., and Briggs, J. A. G. (2012). Plasma membrane reshaping during endocytosis is revealed by time-resolved electron tomography. Cell, 150(3):508–
520.
Lau, C., Brownell, W. E., and Spector, A. A. (2012). Internal forces, tension and energy density
in tethered cellular membranes. J. Biomech., 45(7):1328–1331.
Li, H. and Lykotrafitis, G. (2012). Two-component coarse-grained molecular-dynamics model
for the human erythrocyte membrane. Biophys. J., 102(1):75–84.
Ma, L. and Klug, W. S. (2008). Viscous regularization and r-adaptive meshing for finite element
analysis of lipid membrane mechanics. J. Comput. Phys., 227:5816–5835.
McMahon, H. T. and Gallop, J. L. (2005). Membrane curvature and mechanisms of dynamic
cell membrane remodelling. Nature, 438(7068):590–596.
Peng, Z., Asaro, R. J., and Zhu, Q. (2010). Multiscale simulation of erythrocyte membranes.
Phys. Rev. E, 81:031904.
Peter, B. J., Kent, H. M., Mills, I. G., Vallis, Y., Butler, P. J. G., Evans, P. R., and McMahon,
H. T. (2004). Bar domains as sensors of membrane curvature: the amphiphysin bar structure.
Science, 303(5657):495–499.
Rahimi, M. and Arroyo, M. (2012). Shape dynamics, lipid hydrodynamics, and the complex
viscoelasticity of bilayer membranes. Phys. Rev. E, 86:011932.
Ramakrishnan, N., Kumar, P. B. S., and Ipsen, J. H. (2010). Monte carlo simulations of fluid
vesicles with in-plane orientational ordering. Phys. Rev. E, 81:041922.
40
Rangamani, P., Agrawal, A., Mandadapu, K. K., Oster, G., and Steigmann, D. J. (2013). Interaction between surface shape and intra-surface viscous flow on lipid membranes. Biomech.
Model. Mechanobiol., 12(4):833–845.
Rangamani, P., Mandadapu, K. K., and Oster, G. (2014). Protein-induced membrane curvature
alters local membrane tension. Biophysical journal, 107(3):751–762.
Rangarajan, R. and Gao, H. (2015). A finite element method to compute three-dimensional
equilibrium configurations of fluid membranes: Optimal parameterization, variational formulation and applications. J. Comp. Phys., 297:266–294.
Rim, J. E., Purohit, P. K., and Klug, W. S. (2014). Mechanical collapse of confined fluid
membrane vesicles. Biomech. Model. Mechanobio., 13(6):1277–1288.
Rosolen, A., Peco, C., and Arroyo, M. (2013). An adaptive meshfree method for phase-field
models of biomembranes. Part I: Approximation with maximum-entropy basis functions. J.
Comput. Phys., 249:303–319.
Salac, D. and Miksis, M. (2011). A level set projection model of lipid vesicles in general flows.
J. Comput. Phys., 230(22):8192–8215.
Sauer, R. A. (2014). Stabilized finite element formulations for liquid membranes and their
application to droplet contact. Int. J. Numer. Meth. Fluids, 75(7):519–545.
Sauer, R. A. and De Lorenzis, L. (2015). An unbiased computational contact formulation for
3D friction. Int. J. Numer. Meth. Engrg., 101(4):251–280.
Sauer, R. A. and Duong, T. X. (2015). On the theoretical foundations of solid and liquid shells.
Math. Mech. Solids, published online, DOI: 10.1177/1081286515594656.
Sauer, R. A., Duong, T. X., and Corbett, C. J. (2014). A computational formulation for solid and
liquid membranes based on curvilinear coordinates and isogeometric finite elements. Comput.
Methods Appl. Mech. Engrg., 271:48–68.
Shemesh, T., Klemm, R. W., Romano, F. B., Wang, S., Vaughan, J., Zhuang, X., Tukachinsky,
H., Kozlov, M. M., and Rapoport, T. A. (2014). A model for the generation and interconversion of er morphologies. Proceedings of the National Academy of Sciences, 111(49):E5243–
E5251.
Shi, Z. and Baumgart, T. (2015). Membrane tension and peripheral protein density mediate
membrane shape transitions. Nature communications, 6:5974.
Shibata, Y., Hu, J., Kozlov, M. M., and Rapoport, T. A. (2009). Mechanisms shaping the
membranes of cellular organelles. Annual Review of Cell and Developmental Biology, 25:329–
354.
Shibata, Y., Voeltz, G. K., and Rapoport, T. A. (2006). Rough sheets and smooth tubules.
Cell, 126(3):435–439.
Steigmann, D. J. (1999). Fluid films with curvature elasticity. Arch. Rat. Mech. Anal., 150:127–
152.
Tasso, I. V. and Buscaglia, G. C. (2013). A finite element method for viscous membranes.
Comput. Meth. Appl. Mech. Engrg., 255:226–237.
Terasaki, M., Chen, L. B., and Fujiwara, K. (1986). Microtubules and the endoplasmic reticulum
are highly interdependent structures. The Journal of Cell Biology, 103(4):1557–1568.
41
Walani, N., Torres, J., and Agrawal, A. (2015). Endocytic proteins drive vesicle growth via
instability in high membrane tension environment. Proceedings of the National Academy of
Sciences, 112(12):E1423–E1432.
Walther, T. C., Brickner, J. H., Aguilara, P. S., Bernales, S., Pantoja, C., and Walter, P. (2006).
Eisosomes mark static sites of endocytosis. Nature, 439:998–1003.
Waterman-Storer, C. M. and Salmon, E. D. (1998). Endoplasmic reticulum membrane tubules
are distributed by microtubules in living cells using three distinct mechanisms. Current
Biology, 8(14):798–807.
Wriggers, P. (2006). Computational Contact Mechanics. Springer, 2nd edition.
Wriggers, P. (2008). Nonlinear Finite Element Methods. Springer.
Zimmerberg, J. and Kozlov, M. M. (2006). How proteins produce cellular membrane curvature.
Nature Reviews Molecular Cell Biology, 7(1):9–19.
42
| 5 |
Uplift Modeling with Multiple Treatments and General Response Types
Yan Zhao∗        Xiao Fang†        David Simchi-Levi‡
∗ Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology. [email protected]
† Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology. [email protected]
‡ Institute for Data, Systems, and Society, Department of Civil and Environmental Engineering, Operations Research Center, Massachusetts Institute of Technology. [email protected]
arXiv:1705.08492v1 [] 23 May 2017
Abstract
Randomized experiments have been used to assist decision-making in many areas. They help people select the optimal
treatment for the test population with certain statistical guarantee. However, subjects can show significant
heterogeneity in response to treatments. The problem
of customizing treatment assignment based on subject
characteristics is known as uplift modeling, differential
response analysis, or personalized treatment learning in
literature. A key feature for uplift modeling is that the data
is unlabeled. It is impossible to know whether the chosen
treatment is optimal for an individual subject because
response under alternative treatments is unobserved. This
presents a challenge to both the training and the evaluation
of uplift models. In this paper we describe how to obtain an
unbiased estimate of the key performance metric of an uplift
model, the expected response. We present a new uplift
algorithm which creates a forest of randomized trees. The
trees are built with a splitting criterion designed to directly
optimize their uplift performance based on the proposed
evaluation method. Both the evaluation method and the
algorithm apply to arbitrary number of treatments and
general response types. Experimental results on synthetic
data and industry-provided data show that our algorithm
leads to significant performance improvement over other
applicable methods.
Accepted: 2017 SIAM International Conference on Data
Mining (SDM2017)
1
Introduction
We often face the situation where we need to identify
from a set of alternatives the candidate that leads to
the most desirable outcome. For example, doctors want
to know which treatment plan is the most effective for
a certain disease. In an email marketing campaign, a
company needs to select the message that yields the
highest click through rate. Randomized experiments
are frequently conducted to answer these questions. In
such an experiment, subjects are randomly assigned to
a treatment and their responses are recorded. Then
by some statistical criteria, one treatment is selected as
the best. While randomized experiments (also known
as A/B testing in online settings) have been helpful in many areas, they have one major shortcoming: disregard
for subject heterogeneity. A medical treatment that is
the most effective over the entire patient population
may be ineffective or even detrimental for patients
with certain conditions. An email message that leads
to the highest click-though-rate overall might tend to
put off customers in some subpopulation. Therefore,
it is of great interest to develop models that can
correctly predict the optimal treatment based on given
subject characteristics. This has been referred to as
the personalized treatment selection problem or Uplift
Modeling in the literature.
While appearing similar to classification problem at
the first sight, uplift modeling poses unique challenges.
In a randomized experiment, it is impossible to know
whether the chosen treatment is optimal for any individual subject because response under alternative treatments is unobserved. As a result, data collected from
a randomized experiment is unlabeled in the classification perspective because the true value of the quantity
that we are trying to predict (the optimal treatment) is
unknown even on the training data.
Perhaps the most obvious approach to Uplift Modeling is what we call the Separate Model Approach
(SMA). We first create one predictive model for each
treatment. Given a new data, we can obtain the predicted response under each treatment with the corresponding model, and then select the treatment with the
best predicted response. The main advantage of this
approach is that it does not require development of new
algorithms and software. Any conventional classification/regression method can be employed to serve the
purpose. Applications of SMA include direct marketing
[1] and customer retention [2]. However, the Separate
Model Approach, while simple and correct in principle, does not always perform well in real-world situations [3][4]. One reason for this is the mismatch between
the objective used for training the models and the actual purpose of the models. When the uplift (difference
in response between treatments) follows a distinct pattern from the response, SMA will focus on predicting
the response rather than the uplift signal. See [4] for
an illustrative example. The situation is exacerbated
when data is noisy and insufficient or when the uplift
is much weaker than the response. Unfortunately, these
are usually the cases for practical uplift applications.
Seeing the shortcomings of the Separate Model Approach, researchers have proposed a number of algorithms that aim at directly modeling the uplift effect.
However, almost all of them are designed for the situation with a single treatment. A logistic regression
formulation is proposed which explicitly includes interaction terms between features and the treatment [3].
Support Vector Machine is adapted for uplift modeling to predict whether a subject will be positively, neutrally, or negatively affected by the treatment [5]. The
adaption of K-Nearest Neighbors for uplift modeling is
briefly mentioned in both [6] and [7]. A new subject
is simply assigned to the empirically best treatment as
measured on the K training data that are closest to it.
Several tree-based algorithms have been proposed for
uplift modeling, each with a different splitting criterion
[8] [1] [9] [4] [10]. In [8], the authors modify the standard decision tree construction procedure [11] by forcing
a split on the treatment at each leaf node. In [1] a splitting criterion is employed that maximizes the difference
between the difference between the treatment and control probabilities in the left and right child nodes. In
[9], splitting points are chosen that maximize the distributional difference between two child nodes as measured by a weighted Kullback-Leibler divergence and a
weighted squared Euclidean distance. In [4], a linear
model is fitted to each candidate split and the significance of the interaction term is used as the measure
of the split quality. In [10], the variable that has the
smallest p-value in the hypothesis test on the interaction between the response and itself is selected as the
splitting variable. Then the splitting point is chosen to
maximize the interaction effect. It is demonstrated experimentally that the use of Bagging or Random Forest
on uplift trees often results in significant improvement
in performance [12].
Despite its wide application, literature on the more
general multi-treatment uplift problem is limited. Rare
exceptions include [13] and [14]. In [13], the tree-based
algorithm described in [9] is extended to multiple treatment cases by using a weighted sum of pairwise distributional divergence as the splitting criterion. In [14],
a multinomial logit formulation is proposed in which
treatments are incorporated as binary features. They
also explicitly include the interaction terms between
treatments and features. Moreover, finite sample convergence guarantees are established for model parameters and out-of-sample performance guarantee. Both
methods can handle binary as well as discrete response
type. It is worth mentioning that the causal K-nearest
neighbors originally intended for single treatment can
be naturally generalized to multiple treatments [6] [7].
This algorithm is implemented in an R package called
uplift by Leo Guelman [15].
One of challenges facing uplift modeling is how to
accurately evaluate model performance off-line using
the randomized experiment data. For single treatment
cases, qini curves and uplift curves have been used to
serve the purpose [16] [9]. The problem with them
as performance metrics is that they do not measure
the key quantity of interest - the expected response.
What they measure is some surrogate quantity which
is hopefully close to the increase in expected response
relative to only applying the control. In [9], the authors
explained that they use uplift curves because there does
not seem to be a better option at the time.
We now describe the contribution of our paper. In
Section 2, we discuss how to obtain an unbiased estimate
of the expected response under an uplift model. The
method applies to arbitrary number of treatments and
general response types (binary, discrete, continuous).
It works even when the treatments and the control
are not evenly distributed which is often the case
in practice. The method also allows us to compute
the confidence interval of an estimate of the expected
response. Furthermore, we introduce the modified uplift
curve which plots the expected response as a function
of the percentage of population subject to treatments.
As we discuss more in section 4.2, the modified uplift
curve provides a fair way to compare uplift models.
Based on the new evaluation method, we propose
a tree-construction procedure with a splitting criterion that explicitly optimizes the performance of the
tree as measured on the training data. This idea is
in line with the machine learning philosophy of loss
minimization on the training set. We use an ensemble of trees to mitigate the overfitting problem that
commonly happens with a single tree. We refer to
our algorithm as the CTS algorithm where the name
stands for Contextual Treatment Selection. The performance of CTS is tested on three benchmark data
sets. The first is a 50-dimensional synthetic data set.
The latter two are randomized experiment data provided by our industry collaborators. On all of the data
sets, CTS demonstrates superior performance compared
to other applicable methods which include Separate
Model Approach with Random Forest/Support Vector
Regression/K-Nearest Neighbors/AdaBoost, and Uplift
Random Forest (upliftRF) as implemented in the R uplift package [15].
The remainder of the paper is organized as follows.
In Section 2, we first introduce the formulation of the
multi-treatment uplift modeling and then present the
unbiased estimate of the expected response for an uplift
model. Section 3 describes the CTS algorithm in detail.
In Section 4 we present the setup and the results of the
experimental evaluation. The modified uplift curve is
also introduced in this Section. Section 5 ends the paper
with a brief summary and ideas for future research.
2 Evaluation of Uplift Models

Before introducing the evaluation method, we first describe the mathematical formulation of uplift problems and the notation used throughout this paper.

2.1 Problem Formulation and Notation  We use upper case letters to denote random variables and lower case letters their realizations. We use boldface for vectors and normal typeface for scalars.

• X represents the feature vector and x its realization. Subscripts are used to indicate specific features. For example, Xj is the jth feature in the vector and xj its realization. Let X^d denote the d-dimensional feature space.

• T represents the treatment. We assume there are K different treatments encoded as {1, . . . , K}. The control group is indicated by T = 0.

• Let Y denote the response and y its realization. Throughout this paper we assume that the larger the value of Y, the more desirable the outcome.

In the email marketing campaign example mentioned in the Introduction, where the company wants to customize the messages to maximize the click-through rate, X would be the characterizing information of customers such as the browsing history, the purchase pattern, demographics, etc., T would be the different versions of the email message, and the response Y would be the 1/0 variable indicating whether a customer clicks the message or not.

Suppose we have a data set of size N containing the joint realizations of (X, T, Y) collected from a randomized experiment. We use superscript (i) to index the samples as below:

    s_N = { (x^(i), t^(i), y^(i)), i = 1, . . . , N }.

An uplift model h is a mapping from the feature space to the space of treatments, or h(·) : X^d → {0, 1, . . . , K}. The key performance metric of an uplift model is the expected value of the response if the model is used to assign the treatment,

(2.1)    E[ Y | T = h(X) ].

As with classification and regression problems, the optimal expected response achievable in an uplift problem is determined by the underlying data generation process. The optimal expected response is achieved by a model h* that satisfies the point-wise optimality condition, i.e., for all x ∈ X^d,

(2.2)    h*(x) ∈ arg max_{t = 0, 1, ..., K} E[ Y | X = x, T = t ].

2.2 Model Evaluation  One way of looking at an uplift model is that it partitions the entire feature space into disjoint subspaces and assigns each subspace to one treatment. With data from a randomized experiment, it is possible to estimate the probability of a sample falling into any subspace as well as the expected response in that subspace under the assigned treatment. Then, by the law of total expectation, we can estimate the expected response in the entire feature space.

In a randomized experiment, treatments are assigned randomly and independently from the features. However, treatments are not necessarily evenly distributed. Let p_t denote the probability that a treatment is equal to t. In any meaningful situation we will have p_t > 0 for t = 0, ..., K.

Lemma 2.1. Given an uplift model h, define a new random variable

(2.3)    Z = Σ_{t=0}^{K} (1/p_t) · Y · I{h(X) = t} · I{T = t},

where I{·} is the 0/1 indicator function. Then E[Z] = E[ Y | T = h(X) ].

Proof. The proof is straightforward using the law of total expectation:

    E[Z] = Σ_{t=0}^{K} (1/p_t) · E[ Y · I{h(X) = t} | T = t ] · P{T = t}
         = Σ_{t=0}^{K} E[ Y | h(X) = t, T = t ] · P{h(X) = t}
         = E[ Y | h(X) = T ].

Given a set of randomized experiment data s_N = { (x^(i), t^(i), y^(i)), i = 1, . . . , N }, computing the value of z^(i) is simple. If for the ith sample the predicted treatment matches the actual treatment, then z^(i) is equal to y^(i)/p_{t^(i)}, the actual response scaled by the inverse of the treatment probability. Otherwise, z^(i) equals zero. It is well known that the sample average is an unbiased estimate of the expected value. Therefore we have the following theorem.

Theorem 2.1. The sample average

(2.4)    z̄ = (1/N) Σ_{i=1}^{N} z^(i)

is an unbiased estimate of E[ Y | T = h(X) ].

Moreover, we can compute the confidence interval of z̄, which also helps to estimate the possible range of E[ Y | T = h(X) ].
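The estimator is straightforward to compute from logged experiment data. The following is a minimal Python/NumPy sketch (our own illustration, not the authors' code) of z̄ from Eq. (2.4); the array names and the normal-approximation confidence interval are assumptions made here for readability:

    import numpy as np

    def expected_response_estimate(y, t, pred, p):
        # y    : (N,) observed responses
        # t    : (N,) treatments actually assigned, coded 0..K (0 = control)
        # pred : (N,) treatments chosen by the uplift model h for each sample
        # p    : (K+1,) treatment probabilities, p[k] = P(T = k) > 0
        y, t, pred, p = np.asarray(y, float), np.asarray(t), np.asarray(pred), np.asarray(p)
        # z_i = y_i / p_{t_i} if the model's choice matches the actual treatment, else 0  (Eq. 2.3)
        z = np.where(pred == t, y / p[t], 0.0)
        z_bar = z.mean()                                   # Eq. (2.4)
        half_width = 1.96 * z.std(ddof=1) / np.sqrt(len(z))  # approximate 95% CI
        return z_bar, (z_bar - half_width, z_bar + half_width)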
3 The CTS Algorithm

Tree-based methods are time-tested tools in Machine Learning [11]. When combined into ensembles, they prove to be among the most powerful algorithms for general classification and regression problems [17]. Even for the relatively new uplift modeling problem, there have been some reports on the excellent performance of tree ensembles [12].

The algorithm we present in this section also generates a tree ensemble. We refer to it as the CTS algorithm, which stands for Contextual Treatment Selection. What is unique about CTS is its splitting criterion, which directly maximizes the expected response from the tree as measured on the training set.

3.1 Splitting Criterion  We take the recursive binary splitting approach. Each split creates two new branches further down the tree. Let φ be the feature space associated with a leaf node. The best we can do for subjects falling into φ is to assign the subspace-wise optimal treatment. Suppose s is a candidate split that divides φ into the left child-subspace φl and the right child-subspace φr. Because the two child subspaces can have different treatments, the added flexibility leads to increased expected response for subjects in φ overall. The increase is denoted as ∆µ(s) as below:

(3.5)    ∆µ(s) = P{X ∈ φl | X ∈ φ} · max_{tl=0,...,K} E[Y | X ∈ φl, T = tl]
                + P{X ∈ φr | X ∈ φ} · max_{tr=0,...,K} E[Y | X ∈ φr, T = tr]
                − max_{t=0,...,K} E[Y | X ∈ φ, T = t].

So the idea is straightforward. At each step in the tree-building process, we want to perform the split that brings about the greatest increase in expected response ∆µ. The question is how to estimate ∆µ with training data. Let φ′ stand for one of the child subspaces φl or φr. We use p̂(φ′|φ) to denote the estimate for the conditional probability of a subject being in φ′ given that it is in φ, and ŷt(φ′) the estimate for the conditional expected response in subspace φ′ under treatment t.

For the conditional probability, we will simply use the sample fraction

(3.6)    p̂(φ′|φ) = Σ_{i=1}^{N} I{x^(i) ∈ φ′} / Σ_{i=1}^{N} I{x^(i) ∈ φ}.

Estimating the conditional expectation requires more effort. First, the estimation is performed by treatments, therefore less data is available. Second, treatments may not be evenly distributed. It is common to have only a small percentage of the population subject to treatments in a randomized experiment. Let nt(φ′) be the number of samples in φ′ with treatment equal to t. With two user-defined parameters min_split and n_reg, the conditional expectation is estimated as follows. If nt(φ′) < min_split,

(3.7)    ŷt(φ′) = ŷt(φ),

otherwise,

(3.8)    ŷt(φ′) = [ Σ_{i=1}^{N} y^(i) I{x^(i) ∈ φ′} I{t^(i) = t} + ŷt(φ) · n_reg ] / [ Σ_{i=1}^{N} I{x^(i) ∈ φ′} I{t^(i) = t} + n_reg ].

Note that ŷt(φ′) is defined recursively: the value of ŷt(φ′) depends on the corresponding estimate for the parent node ŷt(φ). To initialize the definition, the estimated expectation for the root node ŷt(X^d) is set to the sample average. We assume there are at least enough samples to estimate the expected response accurately in the root node, otherwise customizing treatment selection is impractical. The rationale behind the estimation formula is twofold. First, if there are not enough samples for some treatment, we simply inherit the estimation from the parent node. This mechanism, combined with the termination rules in Section 3.2, allows the trees to grow to a full extent while ensuring a reliable estimate of the expected response. Second, to avoid being misled by outliers, we add a regularity term to the sample average calculation. The larger the value of n_reg, the more samples it takes to shift the estimate from the parent-estimate ŷt(φ) to the actual sample average. Based on our experiments, it is usually helpful to set n_reg to a small positive integer.

To summarize, we estimate the increase in the expected response from a candidate split s as below,

(3.9)    ∆µ̂(s) = p̂(φl|φ) · max_{t=0,...,K} ŷt(φl) + p̂(φr|φ) · max_{t=0,...,K} ŷt(φr) − max_{t=0,...,K} ŷt(φ).

At each step of the tree-growing process, the split that leads to the highest estimated increase in expectation is performed.
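As an illustration of Eqs. (3.6)-(3.9), the following Python sketch evaluates the estimated gain of one candidate split. The data layout (NumPy arrays, a dict of parent-node estimates) and the default parameter values are assumptions made here for readability, not part of the paper:

    import numpy as np

    def split_gain(y, t, left_mask, parent_y_hat, min_split=100, n_reg=3):
        # y, t         : responses and treatments of the samples in the parent node phi
        # left_mask    : boolean array, True for samples falling into the left child phi_l
        # parent_y_hat : dict {treatment: y_hat_t(phi)} for the parent node
        y, t = np.asarray(y, float), np.asarray(t)
        n = len(y)

        def child_estimates(mask):
            # Eqs. (3.7)/(3.8): regularized per-treatment means, falling back to the parent
            est = {}
            for k, parent_val in parent_y_hat.items():
                in_child = mask & (t == k)
                n_tk = in_child.sum()
                if n_tk < min_split:
                    est[k] = parent_val
                else:
                    est[k] = (y[in_child].sum() + parent_val * n_reg) / (n_tk + n_reg)
            return est

        p_left = left_mask.sum() / n                       # Eq. (3.6)
        left, right = child_estimates(left_mask), child_estimates(~left_mask)
        return (p_left * max(left.values())                # Eq. (3.9)
                + (1 - p_left) * max(right.values())
                - max(parent_y_hat.values()))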
3.2 Termination Rules  Another important component of a tree-based algorithm is the termination rules. In CTS, a node is a terminal node if any of the following conditions is satisfied. The tree growing process terminates when no more splits can be made.

1. The number of samples is less than the user-defined parameter min_split for all treatments.

2. There does not exist a split that leads to non-negative gain in the estimated expected response.

3. All the samples in the node have the same response value.

The first condition allows us to split a node as long as there is at least one treatment containing enough samples. The second condition states that a split should not be executed if it damages the performance of the current tree. We allow a split with zero gain to be performed because a nonprofitable split for the current step may lead to profitable splits in future steps. The third condition is to avoid the split of pure nodes. Without condition 3), a split would be selected randomly when all samples have the same response value, because all possible splits lead to zero gain.

3.3 The Algorithm  To mitigate the overfitting problem commonly associated with a single tree, we formulate CTS in a form similar to Random Forest [18]. A group of trees is constructed based on the splitting criterion and termination rules described previously. Each tree is built on a different bootstrapped training data set. At each step of the learning process, only a random subset of features is considered for splitting. A terminal node of a tree contains the estimation of the expected response under each treatment for that node. Given a point in the feature space and a treatment, the predicted expected response from the forest is the average of the predictions from all the trees. The complete procedure is summarized in Algorithm 1.

Algorithm 1 CTS - Contextual Treatment Selection
Input: training data s_N, number of samples in each tree B (B ≤ N), number of trees ntree, number of variables to be considered for a split mtry (1 ≤ mtry ≤ d), the minimum number of samples required for a split min_split, the regularity factor n_reg.
Training: For n = 1 : ntree
  1. Draw B samples from s_N with replacement to create s^n_B. Samples are drawn proportionally from each treatment.
  2. Build a tree from s^n_B. At each node, we draw mtry coordinates at random, then find the split with the largest increase in expected response among the mtry coordinates as measured by the splitting criterion defined in Eq. (3.9).
  3. The output of each tree is a partition of the feature space as represented by the terminal nodes, and, for each terminal node, the estimation of the expected response under each treatment.
Prediction: Given a new point in the feature space, the predicted expected response under a treatment is the average of the predictions from all the trees. The optimal treatment is the one with the largest predicted expected response.
Experimental Evaluation
4.1 Synthetic Data The feature space is the fiftydimensional hyper-cube of length 10, or X50 = [0, 10]50 .
Features are uniformly distributed in the feature space,
i.e., Xd ∼ U[ 0, 10 ], for d = 1, ..., 50. There are four
different treatments, T = 1, 2, 3, 4. The response under
each treatment is defined as below.
f (X) + U[0, αX1 ] + if T = 1,
f (X) + U[0, αX2 ] + if T = 2,
(4.10)
Y =
f (X) + U[0, αX3 ] + if T = 3,
f (X) + U[0, αX4 ] + if T = 4.
1 Exact values of data model parameters and datasets can
In this section, we present an experimental comparison between the proposed CTS algorithm and other
applicable uplift modeling methods on several benchmark datasets. The first dataset is generated from
a 50-dimensional artificial data model. Knowing the
true data model allows us to compare methods without worrying about model evaluation accuracy1 . Next,
we compare the methods on two large-scale industry
be found at this Dropbox link https://www.dropbox.com/sh/
sf7nu2uw8tcwreu/AAAhqQnaUpR5vCfxSsYsM4Tda?dl=0
The response is the sum of three components.

• The first term f(X) defines the systematic dependence of the response on the features and is identical for all treatments. Specifically, f is chosen to be a mixture of 50 exponential functions so that it is complex enough to reflect real-world scenarios:

(4.11)    f(x1, ..., x50) = Σ_{i=1}^{50} a^i · exp{ −b^i_1 |x1 − c^i_1| − · · · − b^i_50 |x50 − c^i_50| },

where a^i, b^i_j and c^i_j are chosen randomly.

• The second term U[0, αXt] is the treatment effect and is unique for each treatment t. In many applications we would expect the treatment effect to be of a lower order of magnitude than the main effect, so we set α to be 0.4, which is roughly 5% of E[|f(X)|].

• The third term ε is the zero-mean Gaussian noise, i.e. ε ∼ N(0, σ²). Note that the standard deviation σ of the noise term is identical for all treatments. σ is set to 0.8, which is twice the amplitude of the treatment effect α.

Under this particular data model, the expected response is the same for all treatments, i.e., E[Y | T = t] = 5.18 for t = 1, 2, 3, 4. The expected response under the optimal treatment rule E[Y | T = h*(X)] is 5.79.

[Figure 1 plots the Average Expected Response against the Training Data Size (samples per treatment) for the curves Optimal, Single Trt, CTS, SMA-SVR, SMA-Ada, SMA-RF and SMA-KNN.]
Figure 1: Averaged expected response of different algorithms on the synthetic data. The 95% margin of error is computed with results from 10 different training datasets. For each data size, all algorithms are tested on the same 10 datasets.
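A possible Python implementation of this data model is sketched below. The paper only states that a^i, b^i_j and c^i_j are chosen randomly, so the uniform ranges used here, as well as the uniform treatment assignment, are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    D, M, ALPHA, SIGMA = 50, 50, 0.4, 0.8     # dimensions, mixture size, effect scale, noise sd

    # random mixture-of-exponentials parameters for f, Eq. (4.11); ranges are assumed
    a = rng.uniform(-1, 1, size=M)
    b = rng.uniform(0, 1, size=(M, D))
    c = rng.uniform(0, 10, size=(M, D))

    def f(x):
        # f(x) = sum_i a_i * exp(-sum_j b_ij |x_j - c_ij|)
        return np.sum(a * np.exp(-np.sum(b * np.abs(x - c), axis=1)))

    def draw_sample():
        # one synthetic observation (x, t, y) following Eq. (4.10)
        x = rng.uniform(0, 10, size=D)
        t = rng.integers(1, 5)                             # treatments 1..4, equally likely
        treatment_effect = rng.uniform(0, ALPHA * x[t - 1])
        y = f(x) + treatment_effect + rng.normal(0, SIGMA)
        return x, t, y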
We compare the performance of 5 different methods that are applicable to multi-treatment uplift problems with continuous response. They are CTS, the Separate Model Approach with Random Forest (SMA-RF), K-Nearest Neighbors (SMA-KNN), Support Vector Regressor with Radial Basis Kernel (SMA-SVR), and AdaBoost (SMA-Ada). CTS is implemented in R by the authors. For the other algorithms, we use the implementation in scikit-learn, a popular machine learning library in Python. These algorithms are tested under increasing training data size, specifically 500, 2000, 4000, 8000, 16000, and 32000 samples per treatment. For each size, 10 training data sets are generated so that we can compute the margin of error of the results. The performance of a model is evaluated using Monte Carlo simulation and the true data model. All models are tuned carefully with validation or cross-validation. Detail of the parameter selection procedure specific to each algorithm can be found in the Appendix.

The performance of the 5 methods is plotted in Fig. 1. For reference, we also plot the single treatment expected response (short dash line without markers) and the optimal expected response (long dash line without markers). The vertical bars are the 95% margin of error computed with results from 10 training datasets. To ensure consistency in comparison, for each data size, all the methods are tested with the same 10 datasets. From the figure we can see that CTS surpasses the separate model approach when the data size is 2000, and the advantage continues to grow as the training size increases. At 32000 samples per treatment, the performance of CTS is very close to the oracle performance. Among the algorithms for the separate model approach, the support vector regressor with radial basis kernel performs the best. This is not surprising considering the true data model is basically a mixture of exponentials. If the model for each treatment is accurate enough, the separate model approach can also create uplift. It is worth emphasizing the performance difference between CTS and Random Forest. The only essential difference between the two algorithms is the splitting criterion, and yet their performance is far from similar. Even with the largest training size, SMA-RF (dash line with triangle markers) does only slightly better than assigning a fixed treatment. This example, again, shows the importance of developing specialized algorithms for uplift modeling.

4.2 Priority Boarding Data  One application for randomized experiments is the online pricing problem, where a treatment is a candidate price of the product. Customers are randomly assigned to different prices and the one that leads to the highest profit per customer is selected. A major European airline applied this method to select the price for the priority boarding of flights.

Fig. 2 shows the modified uplift curves for the 6 methods under comparison on the priority boarding data. CTS performs the best at all population percentages. The upliftRF algorithm ranks second and outperforms the separate model approach. SMA-RF is very accurate in terms of identifying the subpopulation for which the treatment is highly beneficial (the sharp rise at the beginning of the curve) or extremely harmful (the sharp decline at the end). Yet it fails to predict the treatment effect for the vast majority, which is demonstrated by the (almost) straight line for the middle part. SMA-SVM and SMA-KNN perform poorly on this data set, which we think is partly due to their limitations in handling categorical variables.
CTS
SMA-Ada
UpliftRF
SMA-SVM
SMA-RF
SMA-KNN
0.54
0.52
0.5
0.48
0.46
0.44
0.42
0.4
0
0.2
0.4
0.6
0.8
1
Percentage Population Subject to Treatment
Figure 2: Modified uplift curves of different algorithms for
the priority boarding data.
Expected Response (Revenue Per Passenger)
to select the price for the priority boarding of flights.
The control is the default price e 5 and the treatment
is e 7 . Interestingly, these two prices lead to the same
revenue per passenger overall - e 0.42.
With the help of our industry collaborators, we
are able to access the randomized experiment data.
After initial analysis, we confirm that the purchasing
behavior of passengers varies significantly and it can be
beneficial to customize price offering based on certain
characteristics. A total of 9 features are derived based
on the information of flight and of the reservation.
These are the origin station, the origin-destination pair,
the departure weekday, the arrival weekday, the number
of days between flight booking and departure, flight fare,
flight fare per passenger, flight fare per passenger per
mile, and the group size.
The data is randomly split into the training set
(225,000 samples per treatment) and the test set (75,000
samples per treatment). Six methods are tested. They
are the separate model approach with Random Forest
(SMA-RF), Support Vector Machine (SMA-SVM), Adaboost (SMA-Ada), K-Nearest Neighbors (SMA-KNN),
as well as the uplift Random Forest method implemented in [15], and CTS. For the first 5 methods, customer decision is modeled as binary response, 1 for purchase and 0 for non-purchase. Expected revenue is then
calculated as the product of the purchase probability
and the corresponding price. With CTS, we directly
model the revenue as the (discrete) response. All the algorithms are carefully tuned with cross-validation. During cross-validation, the performance of a model is estimated on the hold-out set as measured by Eq. (2.4).
See Appendix for detail on parameter tuning.
In many applications, exposing subjects to treatments involves a certain level of risk, such as disruption
to customer experience, unexpected side effects, etc. As
a result, we may want to limit the percentage of population exposed to treatment while still obtaining as
much benefit from customization as possible. To measure the performance of an uplift model in this respect,
we introduce the modified uplift curve, in which the
horizontal axis is the percentage of population subject
to treatments and the vertical axis is the expected response. Given an uplift model, we can compute, for
each test subject, the difference in expected response
under the predicted optimal treatment and the control.
Then we rank the test subjects by the difference from
high to low. For a given percentage p, we assign the
top p percent of the test subjects to their corresponding
optimal treatment as predicted by the model, and the
rest to the control. The expected response under this
assignment is then estimated with Eq. (2.4).
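A sketch of how the points of the modified uplift curve can be computed from test data is given below (Python/NumPy). The array layout and ranking score are assumptions consistent with the description above, and the expected response is estimated exactly as in Eq. (2.4):

    import numpy as np

    def modified_uplift_curve(y, t, pred, p, uplift_scores, percentages):
        # uplift_scores: predicted response under the chosen treatment minus predicted
        #                response under the control, used to rank test subjects
        y, t, pred, p = np.asarray(y, float), np.asarray(t), np.asarray(pred), np.asarray(p)
        order = np.argsort(-np.asarray(uplift_scores))     # best candidates first
        n, curve = len(order), []
        for q in percentages:
            assign = np.zeros(n, dtype=int)                # control by default
            top = order[: int(round(q * n))]
            assign[top] = pred[top]                        # model's choice for the top q fraction
            z = np.where(assign == t, y / p[t], 0.0)       # Eq. (2.4)
            curve.append((q, z.mean()))
        return curve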
Figure 3: Expected revenue per passenger from priority
boarding based on different models.
4.3 Seat Reservation Data Another airline experiments with seat reservation price. Treatments are price
levels - low (L), medium low (ML), medium high (MH)
and high (H). The response is the revenue from each
transaction. Because the same price level may correspond to different prices on different routes and one
transaction may include the purchase of multiple seats,
we need to model the response as a continuous variable. The features we use include the booking hour,
the booking weekday, the travel hour, the travel weekday, the number of days between the last login and the
next flight, the fare class, the zone code (all flight routes
are divided into 3 zones, and prices are set for different
zones), whether the passenger returns to the website after flight ticket purchase, the journey travel time, the
segment travel time, the number of passengers, and the
quantity available.
The number of samples for the four treatments are
213,488, 176,637, 160,576, 214,515. We use 50% for
training, 30% for validation, and 20% for test. We
compare the performance of CTS and SMA-RF in this
test. We choose SMA-RF because it is the best among
the Separate Model Approach on the priority boarding
data. UpliftRF is not included because it can not be
applied to continuous response models.
The average revenue per customer with different
pricing models is shown in Fig. 4. The optimal single
price level is H with an expected revenue of $1.87 per
passenger. By personalizing treatment assignment, we
can achieve $2.37 with SMA-RF and $3.35 with CTS.
Fig. 5 shows the modified uplift curves of SMA-RF
and CTS. We can see that CTS outperforms SMARF at every population percentage. By employing a
specialized algorithm for uplift modeling the airline can
significantly improve its profit margin.
Figure 4: Expected revenue per passenger from seat
reservation when applying different pricing models.
Fig. 3 plots the expected revenue per passenger if all the test subjects are assigned the predicted optimal
treatment. As can be seen from the figure, customized
pricing models can significantly increase the revenue.
The increase in the revenue per passenger from e0.42
to e0.52 could lead to a remarkable gain in profit for
an airline with tens of millions of scheduled passengers
per year. This test case demonstrates the benefit of
designing and applying specialized algorithms for uplift
modeling.
Percentage Population Subject to Treatment
Figure 5: Modified uplift curves of SMA-RF and CTS on
the seat reservation data.
we described in Section 4 how to apply it to customized
pricing.
The contribution of this paper to Uplift Modeling is
threefold. First, we present a way to obtain an unbiased
estimate of the expected response under an uplift model
which has not been available in the literature. Second,
we design a tree ensemble algorithm with a splitting criterion based on the new estimation method. Both the
5 Conclusion
unbiased estimate and the algorithm apply naturally to
Uplift modeling initially gathered attention with its multiple treatments and continuous response, which sigsuccessful application in marketing and insurance. But nificantly extends the current focus of uplift algorithms
it does not need to be restricted to these domains. on single-treatment binary-response cases. Lastly, we
Any situation where personalized treatment selection is showed that our algorithm lead to 15% - 40% more revdesired and randomized experiment is possible can be a enue than non-uplift algorithms with the priority boardpotential use case for uplift modeling. As an example, ing and seat reservation data, which demonstrated the
impact of uplift modeling on customized pricing.
Acknowledgment
This work was supported in part by Accenture through
the Accenture-MIT Alliance in Business Analytics.
References
[1] B. Hansotia and B. Rukstales, Incremental value modeling, Journal of Interactive Marketing, 16.3 (2002),
pp. 35.
[2] C. Manahan, A proportional hazards approach to campaign list selection. In SAS User Group International
(SUGI) 30 Proceedings, 2005.
[3] V. S. Y. Lo, The true lift model - A novel data mining approach to response modeling in database management, ACM SIGKDD Explorations Newsletter, 4.2
(2002), pp. 78-86.
[4] N. J. Radcliffe and P. D. Surry, Real-world uplift
modelling with significance-based uplift trees, White
Paper TR-2011-1, Stochastic Solutions (2011).
[5] L. Zaniewicz and S. Jaroszewicz, Support vector
machines for uplift modeling, 2013 IEEE 13th International Conference on Data Mining Workshops
(ICDMW), IEEE, 2013.
[6] F. Alemi et al, Improved Statistical Methods Are Needed
to Advance Personalized Medicine, The open translational medicine journal 1 (2009): 1620. PMC. Web. 7
May 2015.
[7] X. Su, J. Kang, J. Fan, R. A. Levine, and X. Yan,
Facilitating score and causal inference trees for large
observational studies, The Journal of Machine Learning
Research, 13.1 (2012), pp. 2955-2994.
[8] D. M. Chickering and D. Heckerman, A decision theoretic approach to targeted advertising, Proceedings of
the Sixteenth conference on Uncertainty in artificial intelligence (pp. 82-88). Morgan Kaufmann Publishers
Inc., 2000.
[9] P. Rzepakowski and S. Jaroszewicz, Decision trees
for uplift modeling, Proceedings of the 10th IEEE
International Conference on Data Mining (ICDM),
Sydney, Australia, Dec. 2010, pp. 441-450.
[10] L. Guelman, M. Guillen and A. M. Perez-Marin, A
survey of personalized treatment models for pricing
strategies in insurance, Insurance: Mathematics and
Economics, 58 (2014), pp. 68-76.
[11] L. Breiman, J. Friedman, R. Olshen, and C. Stone,
Classification and regression trees, Wadsworth Inc,
1984.
[12] M. Sotys, S. Jaroszewicz, and P. Rzepakowski, Ensemble methods for uplift modeling. Data mining and
knowledge discovery 29.6 (2015), pp. 1531-1559.
[13] P. Rzepakowski and S. Jaroszewicz, Decision trees for
uplift modeling with single and multiple treatments,
Knowledge and Information Systems 32.2 (2011), pp.
303-327.
[14] X. Chen, Z. Owen, C. Pixton, and D. Simchi-Levi,
A Statistical Learning Approach to Personalization in
Revenue Management.
[15] Leo Guelman (2014). uplift:
Uplift Modeling. R package version 0.3.5. http://CRAN.R-project.org/package=uplift
[16] N. J. Radcliffe, Using control groups to target on
predicted lift: Building and assessing uplift models,
Direct Market J Direct Market Assoc Anal Council 1
(2007), pp. 14-21.
[17] M. Fernandez-Delgado, E. Cernadas, S. Barro, and
D.Amorim, Do we Need Hundreds of Classifiers to
Solve Real World Classification Problems?, Journal of
Machine Learning Research 15 (2014), pp 3133-3181.
[18] L. Breiman. ”Random forests.” Machine learning 45.1
(2001): 5-32.
[19] L. Guelman, M. Guillen and A. M. Prez-Marn, Optimal personalized treatment rules for marketing interventions: A review of methods, a new proposal, and an
insurance case study (No. 2014-06).
[20] V. Cherkassky and Y. Ma, Practical selection of SVM
parameters and noise estimation for SVM regression.
Neural networks, 17.1 (2004), pp. 113-126.
Appendix
Synthetic Data Here we describe the details of parameter tuning in Section 4.
CTS: Fixed parameters are ntree=100, mtry=25
and n reg=3.
The value of min split is selected among [25, 50, 100, 200, 400, 800, 1600,
3200, 6400] (Large values are omitted when they exceeds dataset size). min split is selected by 5-fold
cross-validation when training data size below 8000
sample-per-treatment, otherwise by validation (half
training/half test) on one data set and kept the same
for other nine data sets.
RF: Fixed parameters are n estimators=100 and
max features=25. Parameter nodesize is tuned with
5-fold cross-validation among [1,5,10,20].
KNN: Parameter n neighbors is tuned with 5-fold
cross-validation among [5,10,20,40].
SVR: The regularization parameter C and the
value of the insensitive-zone are determined analytically using the method proposed in [20]. The spread
parameter of the radial basis kernel γ is selected among
[10−4 , 10−3 , 10−2 , 10−1 ] using 5-fold cross-validation.
Ada: Square loss with n estimators=100.
Priority Boarding/Seat Reservation Data Models
are tuned similarly as with synthetic data except the
following. 1. For CTS, upliftRF, and SMA-RF, mtry=3.
2. For CTS, min split is selected among [25, 50,
100, 200]. For upliftRF, min split is selected among
[5, 10, 20, 40]. Parameter selection is conducted by
validation because of the time constraint.
| 2 |
arXiv:1303.6145v2 [] 8 Aug 2013
Particles Prefer Walking Along the Axes:
Experimental Insights into the Behavior of a Particle
Swarm
Manuel Schmitt
Rolf Wanka
Department of Computer Science
University of Erlangen-Nuremberg, Germany
{manuel.schmitt, rolf.wanka}@cs.fau.de
Abstract
Particle swarm optimization (PSO) is a widely used nature-inspired meta-heuristic for solving
continuous optimization problems. However, when running the PSO algorithm, one encounters
the phenomenon of so-called stagnation, that means in our context, the whole swarm starts to
converge to a solution that is not (even a local) optimum. The goal of this work is to point
out possible reasons why the swarm stagnates at these non-optimal points. To achieve our
results, we use the newly defined potential of a swarm. The total potential has a portion for
every dimension of the search space, and it drops when the swarm approaches the point of
convergence. As it turns out experimentally, the swarm is very likely to come sometimes into
“unbalanced” states, i. e., almost all potential belongs to one axis. Therefore, the swarm becomes
blind for improvements still possible in any other direction. Finally, we show how in the light
of the potential and these observations, a slightly adapted PSO rebalances the potential and
therefore increases the quality of the solution. Note that this is an extended version of [SW13b].
1
Introduction
Background. Particle swarm optimization (PSO), introduced by Kennedy and Eberhart [KE95,
EK95], is a very popular meta-heuristic for solving continuous optimization problems. It is inspired
by the social interaction of individuals living together in groups and supporting and cooperating
with each other. Fields of very successful application are, among many others, in Biomedical Image
Processing [WSZ+ 04], Geosciences [OD10], Mechanical Engineering [GWHK09], and Materials Science [RPPN09], to name just a few, where the continuous objective function on a multi-dimensional
domain is not given in a closed form, but by a “black box.” The popularity of the PSO framework
in these scientific communities is due to the fact that it on the one hand can be realized and,
if necessary, adapted to further needs easily, but on the other hand shows in experiments good
performance results with respect to the quality of the obtained solution and the speed needed to
obtain it. By adapting its parameters, users may in real-world applications easily and successfully
control the swarm’s behavior with respect to “exploration” (“searching where no one has searched
before”) and “exploitation” (“searching around a good position”). A thorough discussion of PSO
can be found in [PSL11].
To be precise, let an objective function f : RD → R on a D-dimensional domain be given
that (w. l. o. g.) has to be minimized. A population of particles, each consisting of a position (the
1
candidate for a solution), a velocity and a local attractor, moves through the search space RD . The
local attractor of a particle is the best position with respect to f this particle has encountered
so far. The population in motion is the swarm. In contrast to other evolutionary algorithms,
the individuals of the swarm cooperate by sharing information about the search space via the
global attractor, which is the best position any particle has found so far. The particles move in
time-discrete iterations. The movement of a particle is governed by both its velocity and the two
attractors and by some additional fixed parameters (for details, see Sec. 2).
PSO is widely used in real-world applications. It is usually desired that the swarm converges to a
single point in the search space, and that this point is at least a local optimum. It is well investigated
how the fixed parameters mentioned above should be chosen to let the swarm converge at all [Tre03,
JLY07]1 . However, experiments sometimes show the phenomenon of stagnation, meaning here that
the particle swarm sometimes converges to a single search point and gets stuck, although this point
is not even a local optimum. In [LW11], Lehre/Witt have for a certain situation formally proven that
the probability for a swarm to converge to a non-optimal point is positive. Several approaches to
deal with this stagnation phenomenon have been developed. Clerc [Cle06] examines the distribution
of the particles during the stagnation phase, derives properties of these distributions, and provides
several possibilities to adapt the algorithm for the case that the number of iterations without
an improvement reaches a certain threshold. Van den Bergh/Engelbrecht [vdBE02] substantially
modify the movement equations, enabling the particles to count the number of times they improve
the global attractor and use this information. Empirical evidence for the capability of their method
to find local optima on common benchmarks is given. Closest to our work, in [LW11] the movement
equations are modified by adding in every iteration a small random perturbation to the velocity.
New results. In this paper, we focus on the swarm’s behavior right before the convergence starts, in
order to find out about possible causes that let the swarm converge, i. e., why the global attractor
starts to stagnate. Although one would like the output of the PSO algorithm to be at least a
local optimum, we point out two possible reasons for a swarm to converge far away from any
local optimum. In order to state the causes for this premature convergence, we define a potential
that reflects the capability of the swarm to move. That means the swarm converges iff the total
potential approaches the zero vector. The swarm’s total potential has a portion for every dimension of the
D-dimensional search space. The experiments carried out suggest that unwanted stagnation can
indeed be explained in terms of the potential.
The first possible cause of a non-optimal limit of convergence is that, even though the global
attractor is updated frequently, the potential drops, so the swarm has not sufficient momentum to
find significantly improved points. In Sec. 3, we present experiments that demonstrate that this
phenomenon can in fact be observed for some parameter selections. Fortunately, it also turns out
that common parameter choices and a reasonable swarm size already avoid this problem.
The second and more important reason is that the potential tends to become imbalanced
among the dimensions, so dimensions in which only small improvement is possible may nevertheless
have the by far highest potential. That means that every step in such a dimension results in a
worsening strong enough to void possible improvements in other dimensions. So, the global attractor of the swarm stagnates and the swarm starts to converge. We demonstrate that indeed the
swarm tends to reach a state where the potentials are unbalanced, i. e., one dimension gets the by
far highest portion of the total potential while all other portions are about equal. Then, we present
1
In the companion paper [SW13a] to the paper at hand, the quality of the best point found by the swarm (the
global attractor) is formally analyzed.
2
experimental evidence showing that this phenomenon makes the particles converge at non-optimal
search points. So the experiments suggest that first the swarm starts to prefer a promising direction
that is parallel to one of the axes and increases the potential in this dimension far above the potentials in the other dimensions. As soon as the chosen direction does no longer yield improvements,
its potential stays much larger than in the remaining dimensions where improvements would still
be possible. From that point on, improvements become rare and the swarm starts to stagnate,
although no local optimum is found yet.
Since the cause of this premature convergence is an imbalance of the potentials, we show how
a small, simple and easy to implement modification of the algorithm enables it to handle such
situation. Namely, if the potential is sufficiently small, we let the particles make pure random steps,
which do not favor any direction. We conclude with showing that the modification does not totally
overwrite the PSO algorithm by replacing it by some greedy random search procedure. Instead, our
experiments show that the modification is only applied in case of (premature) convergence. As long
as there is still room for improvements left in the search space, the unmodified behavior prevails.
2
Definitions
First we present the underlying model of the PSO algorithm. The model describes the positions of
the particles, the velocities and the global and local attractors as real-valued stochastic processes.
Furthermore, we define in Def. 2 the potential of a swarm that will be a measure for its movement.
A swarm with high potential is more likely to reach search points far away from the current global
attractor, while a swarm with potential approaching 0 is converging.
Definition 1 (Classical PSO Algorithm). A swarm S of N particles moves through the D-dimensional
search space RD . Each particle n consists of the following components:
• position X n ∈ RD , describing the current location of the particle in the search space,
• velocity V n ∈ RD , describing the vector of the particle’s current velocity,
• local attractor Ln ∈ RD , describing the best point particle n has visited so far.
Additionally, the swarm shares information via the global attractor G ∈ RD , describing the best
point any particle has visited so far. After some arbitrary initialization of the X n and V n (usually,
one assumes them to be initialized u. a. r. over some domain), the actual movement of the swarm
is governed by the procedure described in Algorithm 1 where f : RD → R denotes the objective
function.
Here, χ, c1 and c2 are some positive constants called the fixed parameters of the swarm, and
rand() returns values that are uniformly distributed over [0, 1] and all independent.
Note that in case of a tie between the previous attractor and the new point X n , we use the new
value, i. e., whenever a particle finds a search point with value equal to the one of its local attractor,
this point becomes the new local attractor. If additionally the function value is equal to the one
of the global attractor, this one is also updated. Also note that the global attractor is updated as
soon as a new better solution has been found.
Now we want to define a potential for measuring how close the swarm is to convergence. A
meaningful potential should of course involve the velocities of the particles. These considerations
lead to the following definition:
Algorithm 1: Classical PSO
output: G ∈ R^D
1  repeat
2      for n = 1 → N do
3          for d = 1 → D do
4              V_d^n := χ · V_d^n + c1 · rand() · (L_d^n − X_d^n) + c2 · rand() · (G_d − X_d^n);
5              X_d^n := X_d^n + V_d^n;
6          end
7          if f(X^n) ≤ f(L^n) then
8              L^n := X^n;
9          end
10         if f(X^n) ≤ f(G) then
11             G := X^n;
12         end
13     end
14 until termination criterion met;
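For concreteness, the update of Algorithm 1 can be written as a short Python sketch; the function name pso_step and the array layout are our own choices and not part of the original pseudocode, and any objective f can be plugged in.

```python
import numpy as np

def pso_step(f, X, V, L, G, chi=0.729, c1=1.49, c2=1.49, rng=np.random):
    """One iteration of the classical PSO from Algorithm 1.

    X, V, L are (N, D) arrays holding positions, velocities and local attractors;
    G is a (D,) array holding the global attractor. All arrays are updated in place.
    """
    N, D = X.shape
    for n in range(N):
        V[n] = (chi * V[n]
                + c1 * rng.random(D) * (L[n] - X[n])
                + c2 * rng.random(D) * (G - X[n]))
        X[n] = X[n] + V[n]
        fx = f(X[n])
        if fx <= f(L[n]):      # ties resolved in favor of the new point (line 7)
            L[n] = X[n]
        if fx <= f(G):         # global attractor updated as soon as a point is at least as good (line 10)
            G[:] = X[n]
    return X, V, L, G
```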
Definition 2 (Potential). For d ∈ {1, . . . , D}, the potential of swarm S in dimension d is

Φ_d := ∑_{n=1}^{N} ( |V_d^n| + |G_d − X_d^n| ),

and the total potential of S is Φ = (Φ_1, . . . , Φ_D).
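As a small helper for later experiments, the potential from Definition 2 can be computed directly from the swarm state; the function name is our own choice.

```python
import numpy as np

def potential(X, V, G):
    """Per-dimension potential Φ_d = Σ_n (|V_d^n| + |G_d − X_d^n|); returns a (D,) vector."""
    return np.sum(np.abs(V) + np.abs(G - X), axis=0)
```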
Note that we slightly deviate from the notion of the potential found in [SW13a] since the version
in the definition above is simpler and sufficient for the present work. However, the underlying idea
is the same for both versions of the potential.
So the current total potential of a swarm has a portion in every dimension. Between two different dimensions, the potential may differ considerably, and “moving” potential from one dimension to another is not possible. On the other hand, along the same dimension the particles influence each other and can transfer potential from one to another. This is the reason why we do not define a potential of an individual particle.
3 1-Dimensional PSO
In this section, we examine the behavior of a 1-dimensional PSO with respect to the potential.
If the swarm is close to a local optimum and there is no second local optimum within range,
the attractors converge and it is well-known that with appropriate choices for the parameters of
the PSO, convergence of the attractors implies convergence of the whole swarm. Such parameter
selection guidelines can be found, e. g., in [JLY07].
On the other hand, if the swarm is far away from the next local optimum and the function is
monotone on an area that is large compared to the current potential of the swarm, the preferred
behavior of the swarm is to increase the potential and move in the direction that yields the improvement until a local optimum is surpassed and the monotonicity of the function changes. In [LW11],
the authors show that there are non-trivial choices of parameters for which the swarm converges
even on a monotone function. In particular, if N = 1, every parameter choice either allows convergence to an arbitrary point in the search space, or it generally prevents the one-particle-swarm
from converging, even if the global attractor is already at the global optimum.
We ran the particle swarm algorithm on a monotone function to measure the development of the potential over time. For our experiment, we chose the 1-dimensional function f(x) = −x as objective function, so that the swarm is always “running down the hill.” Note that this choice is not a restriction, since the particles compare points only qualitatively and the behavior is exactly the same on any monotonically decreasing function: due to the rules for updating the attractors in lines 7 and 11, resp., of Algorithm 1, the new attractors are the points with the greater x-coordinate. Therefore, we used only one function in our experiment. The parameters for the movement equations are common choices obtained from the literature. We let the particles make 1000 iterations and stored the potential at every iteration. We performed a total of 1000 runs for each set of parameters and calculated both average and standard deviation. The averages are shown in Fig. 1; the standard deviations are of the same order and therefore omitted. In cases (a), (c), (d) and (e), the particles showed the expected behavior, namely an exponential increase of the potential. So the swarm keeps running down the hill, which is what we want it to do.
Figure 1: Development of the potential (log scale) over the iterations of the 1-dimensional PSO on f(x) = −x for the parameter settings
(a) χ = 0.729, c1 = c2 = 1.49, N = 2 [CK02]
(b) χ = 0.729, c1 = 2.8 · χ, c2 = 1.3 · χ, N = 2 [CD01]
(c) χ = 0.729, c1 = 2.8 · χ, c2 = 1.3 · χ, N = 3 [CD01]
(d) χ = 0.6, c1 = c2 = 1.7, N = 2 [Tre03]
(e) χ = 0.6, c1 = c2 = 1.7, N = 3 [Tre03]
However, in case (b), where only two particles are involved, we see the potential decreasing exponentially because the number of particles is presumably too small. In this case, the swarm will eventually stop, i. e., stagnate. But we also see in case (c) that with one additional particle and the remaining parameters unchanged, the swarm keeps its motion.
In all cases, already for the small swarm size of N ≥ 3, the common parameter choices avoid the problem mentioned in [LW11].
4 D-Dimensional PSO
In the D-dimensional case, the situation is more complicated, as now the relations between distinct dimensions become important. A new problem arises: Assume that the whole swarm is close to a point x ∈ R^D such that every change of the first coordinate leads to a significantly worse value of the objective function, while in the other dimensions there is still room for improvement. Furthermore, let the swarm have high potential in the first and low potential in every other dimension. Then an improvement of the global attractor is still possible, but it is very unlikely, and there are many steps between two consecutive updates. The reason is that any improvement in some of the dimensions 2, ..., D is voided by the much larger worsening in dimension 1. It follows that the attractors stay constant for long times between two updates, so the swarm tends to converge and therefore loses potential. As long as the global attractor stays constant, the situation is symmetric in every dimension, so while converging, the imbalance is still maintained.
First, we want to examine if and how such imbalances arise. Assume that the fitness function is (on some area) monotone in every dimension. One of our main observations is that indeed in such a situation the swarm tends to pick one dimension and favor it over all the others. As a consequence, the movement of the swarm becomes more and more parallel to one of the axes.
We used the fitness functions f(x) = −∑_{i=1}^{D} x_i and g(x) = −∑_{i=1}^{D} i · x_i, which are both
monotonically decreasing in every dimension and set D to 10. Initially, we distribute the particles
randomly over [−100; 100]D and the velocities over [−50; 50]D and let the swarm make 500 iterations.
The swarm size N was 10 and the parameters were set to χ = 0.729, c1 = c2 = 1.49 as found
in [CK02]. After each iteration, we calculated the potential for each dimension. We made 1000 runs and, for each run, the dimensions were sorted according to the final value of Φ, i. e., we relabeled the dimensions such that after the last iteration dimension 1 always had the highest potential, dimension 2 the second highest, and so on. We calculated the mean of the potentials over the 1000 runs for each of the sorted dimensions. The results are shown in Fig. 2. One can see that the dimension with the greatest potential has, for both functions, a value far higher than the others, while the other dimensions do not show such a significant difference between each other. In other words: particles like to move parallel to an axis.
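A compact way to reproduce this measurement, assuming the pso_step and potential helpers sketched above, is the following; the setup values mirror the ones described in the experiment.

```python
import numpy as np

D, N, iters = 10, 10, 500
f = lambda x: -np.sum(x)                         # monotonically decreasing in every dimension
rng = np.random.default_rng(0)

X = rng.uniform(-100, 100, size=(N, D))          # initial positions
V = rng.uniform(-50, 50, size=(N, D))            # initial velocities
L = X.copy()                                     # local attractors
G = X[np.argmin([f(x) for x in X])].copy()       # global attractor

history = []
for _ in range(iters):
    pso_step(f, X, V, L, G)                      # χ = 0.729, c1 = c2 = 1.49 defaults
    history.append(potential(X, V, G))

# relabel dimensions so that index 0 holds the largest final potential
order = np.argsort(-history[-1])
sorted_history = np.array(history)[:, order]
```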
An explanation for this behavior is the following: Assume that at some time, one dimension d0
has more potential than the others. Further assume that the advance is great enough such that for
some number of steps the particle with the largest value in dimension d0 is the one that determines
the global attractor. In a companion paper, we call a swarm in this situation “running”. Since
randomness is involved and this situation has a positive probability to occur, it will actually occur
after sufficiently many iterations. Then, each update of the global attractor increases the potential
in d0 considerably because it increases the distance of every single particle to the global attractor
except for the one particle that updated it. In any other dimension d 6= d0 , the situation is different.
Here, the decision which particle updates the global attractor is stochastically independent of the
value xd in dimension d. In other words: If one looks only on the dimension d, the global attractor
is chosen uniformly at random from the set of all particles. As a consequence, after some iterations,
the d0 ’th coordinate of the velocity becomes positive for every particle, so the attraction towards
the global attractor always goes in the same direction as the velocity, while in the remaining dimensions, the velocities may as well point away from the global attractor, meaning that the particles will be slowed down by the force of attraction. An overview of the situation is given in Fig. 3.

Figure 2: Growth of the potential (log scale, per dimension) when processing (a) the fitness function f(x) = −∑_{i=1}^{D} x_i and (b) the fitness function g(x) = −∑_{i=1}^{D} i · x_i.
So, roughly speaking, most of the time the global attractor is somewhere in the middle of the different x_d values, giving less potential increase than in dimension d0, where it has a border position. That means that the balanced situation is not stable, in the sense that after the imbalance has reached a certain critical value, it will grow unbounded.
If at some point no more improvements can be made in dimension d0, the swarm is in the situation described above, where it starts to converge while the convergence is, unlike the acceleration phase, balanced. That means that after the same time the potential of every dimension is decreased by approximately the same factor, so dimension d0 still has far more potential than any other dimension and the swarm stays blind to possible improvements in dimensions other than d0.
To supplement the results about the behavior of the PSO in that “artificial” setting, we ran it on two well-known benchmark functions to show that the problems described above really occur on actual instances. Since the described scenario may happen with positive but, depending on the situation, small probability, we chose the number of particles N small compared to the number of dimensions D in order to be able to observe the phenomenon in as pure a form as possible. Table 1 lists our results on the sphere function with optimal solution z∗ = (0, ..., 0), where we distributed the particles randomly over [−100; 100]^D and the velocities over [−50; 50]^D, and another common benchmark, the Rosenbrock function with optimal solution z∗ = (1, ..., 1) (introduced in [Ros60]), where the initial population was randomly distributed over [−5; 10]^D and the initial velocities over [−2.5; 5]^D. The results obtained on this function are stated in Table 2. We repeated
each experiment 1000 times and calculated the means. Additionally, we calculated for each repetition the dimension with the minimal and the one with the maximal value of the potential Φ after the last iteration (see the columns labeled Φ) and the difference between the global attractor and the optimal solution in the dimension with the lowest resp. highest remaining potential. One can see that the dimension with the highest value of Φ is usually much closer to its optimal value than the dimension with the lower value. In particular, in the 2-dimensional case the potential became 0 in one dimension, preventing the swarm from any movement in this direction and consequently from finding the minimum.

Figure 3: Particles running in direction d0. In dimension d0, the difference between the coordinates of the particles and the global attractor is on average larger than in dimension d1, and the velocities in dimension d0 point in the direction of the global attractor. (Schematic; legend: particle, global attractor, velocity, attraction.)
Table 1: Sphere function

                                     min. Φ                          max. Φ
 D     N    tmax      Value      Φ              dist. opt.       Φ              dist. opt.
 4     2    10000     51.04      0*             1.58             3.75 · 10−8    1.16 · 10−8
 60    10   100000    12.18      5.84 · 10−62   1.32             7.53 · 10−8    1.91 · 10−9
 150   20   100000    11.97      4.15 · 10−37   1.30             1.59 · 10−7    2.28 · 10−9

* Due to double precision.
We calculated the standard deviation for each unit and obtained values that were of the same order but usually higher than the mean values. The reason for these high deviations is that the examined phenomenon occurs randomly and therefore one cannot predict the potential level of the swarm when it occurs, i. e., the level of imbalance at the point when the swarm starts to converge is unpredictable.
Now that we know that the swarm tends to optimize functions dimensionwise, it is interesting to see what happens if we try it on a function that is in some area increasing in every dimension but still decreasing in some direction not parallel to one of the axes.
Table 2: Rosenbrock function

                                     min. Φ                          max. Φ
 D     N    tmax      Value      Φ              dist. opt.       Φ              dist. opt.
 4     2    10000     126.54     0*             1.1075           4.72 · 10−5    2.59
 60    10   100000    34.57      6.27 · 10−5    0.37             0.93           0.11
 150   20   100000    28.88      2.32 · 10−4    0.12             8.07           0.06

* Due to double precision.
Fix some b > 1 and define the D-dimensional function f as follows:

f(x) = ∑_{i=1}^{D} x_i²,  if ∃ i, j : x_i ≥ b · x_j ∨ x_j ≥ b · x_i,
f(x) = ∑_{i=1}^{D} x_i² · ( (2 · max_{i≠j} (x_i / x_j) − b − 1) / (b − 1) ),  otherwise.

Figure 4: The continuous but not differentiable function f
In Fig. 4 one can see a plot of f for b = 1.1 and D = 2. For y not between x/b and x · b, this
function behaves like the well-known sphere-function, leading the particles close to the origin. For
x = y, f (x, y) = −2 · x2 and from y = x/b (y = x · b) to y = x, the function falls into a valley. One
can easily verify that this function is, though not differentiable, at least continuous. One would
want the particles to be able to move through the valley.
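A direct implementation of f (and of the rotated variant frot used below) might look as follows. The orthogonal map used to send the first axis onto the diagonal is built here as a Householder reflection, which is one possible choice of implementation and not taken from the paper.

```python
import numpy as np

def f_valley(x, b=1.1):
    """Sketch of the valley function f from the text."""
    x = np.asarray(x, dtype=float)
    d = len(x)
    sq = np.sum(x ** 2)
    if any(x[i] >= b * x[j] for i in range(d) for j in range(d) if i != j):
        return sq                                   # sphere-like branch
    m = max(x[i] / x[j] for i in range(d) for j in range(d) if i != j)
    return sq * (2.0 * m - b - 1.0) / (b - 1.0)     # valley branch; equals -sum(x**2) on the diagonal

def make_f_rot(d, b=1.1):
    """Orthogonal map sending the first axis onto the diagonal, composed with f_valley."""
    u = np.zeros(d); u[0] = 1.0
    v = np.ones(d) / np.sqrt(d)
    w = u - v
    w /= np.linalg.norm(w)
    H = np.eye(d) - 2.0 * np.outer(w, w)            # maps u to v and fixes their orthogonal complement
    return lambda x: f_valley(H @ np.asarray(x, dtype=float), b)
```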
9
As in our previous experiment, we set χ = 0.729, c1 = c2 = 1.49, N = 10 and initialized the particles uniformly at random over [−100; 100]^D (except for the first particle, which was initialized at (1, ..., 1) such that the swarm could see the direction in which improvements are possible) and the velocities over [−50; 50]^D, with the value D = 3. We let the swarm do 1000 runs with 5000 iterations each. The potential of the dimension with the highest potential after the last iteration was determined, and the mean and standard deviation of the respective dimensions were calculated over the 1000 repetitions. This was done for two different swarm sizes, namely N = 10 and N = 50. We repeated the experiment with 10 particles and only 100 iterations, using the function frot, which is obtained by first rotating the input vector and then applying f such that the valley now leads the particles along the x1-axis. Formally speaking, the rotation maps the vector (√D, 0, . . . , 0) to (1, 1, . . . , 1) and keeps every vector that is orthogonal to these two invariant. The results of the three
experiments can be seen in Fig. 5. In all three cases, for about the first 20 iterations, the swarm behaves as on the sphere function and reduces its potential. Then, it discovers the valley and tries to move through it. However, in the unrotated case with 10 particles (Fig. 5a), the swarm fails to accelerate and instead converges towards a non-optimal point. With much more effort, the swarm consisting of 50 particles (Fig. 5b) is able to accelerate, but the acceleration rate and therefore the speed are comparatively poor. Finally, Fig. 5c shows that the swarm handles the rotated version much better than the original function f before. Here, after only 100 iterations, the potential increased to a value of about 10^45. The reason for this large difference between the behavior on f and on frot is that the swarm is capable of favoring one direction only if this direction is parallel to one of the axes.
In particular, this experiment shows that PSO is not invariant under rotations of the search space.
(a) f with b = 1.1, 10 particles
(b) f with b = 1.1, 50 particles
(c) frot with b = 1.1, 10 particles
Figure 5: Behavior of the particles on the functions f and frot
5 Modified PSO
In the previous section, we have seen that the particle swarm might get stuck if its potential is too
high in dimensions that are already optimized and too low in dimensions that could still be improved.
Then the global attractor stagnates and the swarm starts to converge. Since the convergence
10
happens in a symmetric manner along the different dimensions, the imbalance is maintained. A
small and simple modification of the PSO algorithm avoids that problem by enabling the swarm to
rebalance the potentials in the different dimensions:
Definition 3 (Modified PSO). For some arbitrary small but fixed δ > 0, we define the modified
PSO via the same equations as the classic PSO in Def. 1, only modifying the velocity update in line
4 of Algorithm 1 to
V_d^n := (2 · rand() − 1) · δ,  if ∀ d′ ∈ {1, . . . , D} : |V_{d′}^n| + |G_{d′} − X_{d′}^n| < δ,
V_d^n := χ · V_d^n + c1 · rand() · (L_d^n − X_d^n) + c2 · rand() · (G_d − X_d^n),  otherwise.
In words: As soon as the sum of the velocity and the distance between the position and the
global attractor of a particle are below the bound of δ in every single dimension, the updated velocity
of this particular particle is drawn u. a. r. from the interval [−δ, δ]. Note the similarity between this condition and the definition of the potential. Indeed, we could have used the condition Φd < a (with some fixed a) instead, but we decided to keep the modification as simple and as independent from the terms occurring in the analysis as possible. Now the potential can no longer converge to
0 while staying unbalanced because if it decreases below a certain bound, we randomly assign a
value to the velocity which on expectation has an absolute value of δ/2. If a velocity is assigned
that way, we call the step forced.
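A sketch of the modified update from Definition 3, again using the array conventions of the earlier snippets (the name pso_step_modified is ours):

```python
import numpy as np

def pso_step_modified(f, X, V, L, G, delta=1e-7, chi=0.729, c1=1.49, c2=1.49, rng=np.random):
    """One iteration of the modified PSO: a particle whose velocity and distance to the
    global attractor are below delta in every dimension performs a forced step drawn
    u.a.r. from [-delta, delta]; otherwise the classical update of Algorithm 1 applies."""
    N, D = X.shape
    for n in range(N):
        if np.all(np.abs(V[n]) + np.abs(G - X[n]) < delta):
            V[n] = (2.0 * rng.random(D) - 1.0) * delta       # forced step
        else:
            V[n] = (chi * V[n]
                    + c1 * rng.random(D) * (L[n] - X[n])
                    + c2 * rng.random(D) * (G - X[n]))        # classical update
        X[n] = X[n] + V[n]
        fx = f(X[n])
        if fx <= f(L[n]):
            L[n] = X[n]
        if fx <= f(G):
            G[:] = X[n]
    return X, V, L, G
```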
This modified PSO is similar to the Noisy PSO proposed by Lehre and Witt in [LW11] where
they generally add a random perturbation drawn u. a. r. from [−δ/2, δ/2] for some small δ and
prove that their swarm is able to find a local optimum. However, their analysis is restricted to one
specific 1-dimensional fitness function.
The modification does not prevent an imbalance between the potentials of different dimensions from emerging. But the imbalanced convergence phenomenon described above is no longer possible. When the global attractor of the modified PSO gets stuck and the potential decreases, there will be a point when both the velocities and the distances to the global attractor in every dimension get lower than δ. From that moment on, the relationship between the different dimensions is balanced by the forced steps, which in expectation give every dimension the same amount of potential. So, the potentials of the different dimensions are made equal.
We repeated the experiment from the previous section in the same setting as before, but using the modified PSO. The results can be seen in Table 3. It turns out that the modified PSO algorithm actually leads to a better solution than the unmodified one.
We also calculated the standard deviation for each unit and obtained values that were of the same order but usually higher than the mean values. The reason for these high deviations is that the examined phenomenon occurs randomly and therefore one cannot predict the potential level of the swarm when it occurs.
To make sure that the modification does not fully take over, we plotted the forced points for δ = 10−7 with the 2-dimensional sphere function as objective function in Fig. 6. As can be seen in the figure, the particles get forced near (−2 · 10−5, 0), but their movement does not stay forced. Instead, the swarm becomes running again until the particles approach the optimum at (0, 0). This implies that for sufficiently smooth functions, the modification does not take over, replacing the PSO by some random search routine. Instead, the modification just helps to overcome “corners”. As soon as there is a direction parallel to an axis with decreasing function value, the swarm becomes “running” again and the unmodified movement equations apply.
Table 3: Comparison between the classic and the modified PSO algorithm

 Function     D     N    tmax      δ         Value
 Sphere       4     2    10000     10−12     43.34
 Sphere       4     2    10000     –         51.04
 Sphere       60    10   100000    10−12     4.07
 Sphere       60    10   100000    –         12.18
 Sphere       150   20   100000    10−12     6.41
 Sphere       150   20   100000    –         11.97
 Rosenbrock   4     2    10000     10−7      8.80
 Rosenbrock   4     2    10000     –         126.54
 Rosenbrock   60    10   100000    10−7      2.02
 Rosenbrock   60    10   100000    –         34.57
 Rosenbrock   150   20   100000    10−3      2.25
 Rosenbrock   150   20   100000    –         28.88

Rows without a δ value (–) are runs of the classic, unmodified PSO. Due to double precision.

6 Conclusion
This paper focuses on the behavior of a particle swarm while it searches for good regions of the search space. We found that the potentials of the different dimensions are likely to become unbalanced and that this imbalance can cause the particle swarm to get stuck at non-optimal search points. We suggest a modification of the algorithm that randomly assigns a small velocity if the potential of a particle falls below a certain bound. Additionally, we show that the modification does not take over the swarm; it just corrects the direction before the classic movement equations are applied again.
References
[CD01]
A. Carlisle and G. Dozier. An off-the-shelf PSO. In Proc. Particle Swarm Optimization
Workshop, pages 1–6, 2001.
[CK02]
Maurice Clerc and James Kennedy. The particle swarm – explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary
Computation, 6:58–73, 2002. doi:10.1109/4235.985692.
[Cle06]
Maurice Clerc. Stagnation analysis in particle swarm optimization or what happens
when nothing happens. http://hal.archives-ouvertes.fr/hal-00122031, 2006. Technical
report.
[EK95]
Russell C. Eberhart and James Kennedy. A new optimizer using particle swarm theory.
In Proc. 6th International Symposium on Micro Machine and Human Science, pages
39–43, 1995. doi:10.1109/MHS.1995.494215.
[GWHK09] A. Gnezdilov, S. Wittmann, S. Helwig, and G. Kókai. Acceleration of a relative positioning framework. International Journal of Computational Intelligence Research,
5:130–140, 2009.
[JLY07]
M. Jiang, Y. P. Luo, and S. Y. Yang. Particle swarm optimization – stochastic trajectory analysis and parameter selection. In Felix T. S. Chan and Manoj Kumar Tiwari,
editors, Swarm Intelligence – Focus on Ant and Particle Swarm Optimization, chapter 17, pages 179–198. I-TECH Education and Publishing, Vienna, 2007.

Figure 6: Behavior of the modified PSO on the sphere function (scatter of forced points; the x and y axes range from −3 to 3 in units of 10−5).
[KE95]
James Kennedy and Russell C. Eberhart. Particle swarm optimization. In Proc.
IEEE International Conference on Neural Networks, volume 4, pages 1942–1948, 1995.
doi:10.1109/ICNN.1995.488968.
[LW11]
Per Kristian Lehre and Carsten Witt. Finite first hitting time versus stochastic convergence in particle swarm optimisation. arXiv:1105.5540, 2011.
[OD10]
J. E. Onwunalu and L. J. Durlofsky. Application of a particle swarm optimization
algorithm for determining optimum well location and type. Computational Geosciences,
14:183–198, 2010. doi:10.1007/s10596-009-9142-1.
[PSL11]
Bijaya Ketan Panigrahi, Yuhui Shi, and Meng-Hiot Lim, editors. Handbook of
Swarm Intelligence — Concepts, Principles and Applications.
Springer, 2011.
doi:10.1007/978-3-642-17390-5.
[Ros60]
H. H. Rosenbrock. An automatic method for finding the greatest or least value of a
function. The Computer Journal, 3:175–184, 1960. doi:10.1093/comjnl/3.3.175.
[RPPN09]
K. Ramanathan, V. M. Periasamy, M. Pushpavanam, and U. Natarajan. Particle
swarm optimisation of hardness in nickel diamond electro composites. Archives of
Computational Materials Science and Surface Engineering, 1:232–236, 2009.
[SW13a]
Manuel Schmitt and Rolf Wanka. Particle swarm optimization almost surely finds local
optima. In Proc. Genetic and Evolutionary Computation Conference (GECCO), 2013.
doi:10.1145/2463372.2463563.
[SW13b]
Manuel Schmitt and Rolf Wanka. Particles prefer walking along the axes: Experimental
insights into the behavior of a particle swarm. In Proc. Genetic and Evolutionary
Computation Conference (GECCO), 2013. doi:10.1145/2464576.2464583.
[Tre03]
Ioan Cristian Trelea. The particle swarm optimization algorithm: Convergence
analysis and parameter selection. Information Processing Letters, 85:317–325, 2003.
doi:10.1016/S0020-0190(02)00447-7.
[vdBE02]
F. van den Bergh and A. P. Engelbrecht. A new locally convergent particle swarm optimiser. In Proc. IEEE Int. Conf. on Systems, Man and Cybernetics (SMC), volume 3,
pages 94–99, 2002. doi:10.1109/ICSMC.2002.1176018.
[WSZ+ 04]
Mark P. Wachowiak, Renata Smolı́ková, Yufeng Zheng, Jacek M. Zurada, and Adel S.
Elmaghraby. An approach to multimodal biomedical image registration utilizing particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8:289–301,
2004. doi:10.1109/TEVC.2004.826068.
EddyNet: A Deep Neural Network For Pixel-Wise
Classification of Oceanic Eddies
arXiv:1711.03954v1 [] 10 Nov 2017
Redouane Lguensat, Member, IEEE, Miao Sun, Ronan Fablet, Senior Member, IEEE, Evan Mason, Pierre Tandeo,
and Ge Chen
Abstract—This work presents EddyNet, a deep learning based
architecture for automated eddy detection and classification from
Sea Surface Height (SSH) maps provided by the Copernicus Marine and Environment Monitoring Service (CMEMS). EddyNet
consists of a convolutional encoder-decoder followed by a pixel-wise classification layer. The output is a map of the same size as the input, where pixels have the following labels: {’0’: Non eddy, ’1’: anticyclonic eddy, ’2’: cyclonic eddy}. Keras Python code,
the training datasets and EddyNet weights files are open-source
and freely available on https://github.com/redouanelg/EddyNet.
Index Terms—Mesoscale eddy, Segmentation, Classification,
Deep learning, Convolutional Neural Networks.
I. INTRODUCTION
Going “deeper” with artificial neural networks (ANNs)
by using more than the original three layers (input,
hidden, output) started the so-called deep learning era. The
developments and discoveries which are still ongoing are
producing impressive results and reaching state-of-the-art performances in various fields. The reader is invited to read
[1] for a general introduction to deep learning. In particular,
Convolutional Neural Networks (CNNs) sparked off the deep learning revolution in the image processing community and are now ubiquitous in computer vision applications. This has led numerous researchers from the remote sensing community to investigate the use of this powerful tool for tasks like object recognition, scene classification, etc. More applications of
deep learning for remote sensing data can be found in [2], [3]
and references therein.
By standing on the shoulders of recent achievements in deep
learning for image segmentation we present ”EddyNet”, a deep
neural network for automated eddy detection and classification
from Sea Surface Height (SSH) maps provided by the Copernicus Marine and Environment Monitoring Service (hereinafter
denoted by AVISO-SSH). EddyNet is inspired by ideas from
widely used image segmentation architectures, in particular Ushaped architectures such as U-Net [4]. We investigate the use
of Scaled Exponential Linear Units (SELU) [5], [6] instead of
the classical ReLU + Batch Normalization (R+BN) and show
that we greatly speed up the training process while reaching
R. Lguensat and R. Fablet and P. Tandeo are with IMT Atlantique;
UBL; Lab-STICC; 29200 Brest, France. E-mail: [email protected].
M. Sun is with the National Marine Data and Information Service; Key
Laboratory of Digital Ocean; 300171 Tianjing, China.
G. Chen is with Department of Marine Information Technology; College
of Information Science and Engineering; Ocean University of China; 266000
Qingdao, China.
E. Mason is with the Mediterranean Institute for Advanced Studies
(IMEDEA/CSIC-UIB), 07190 Esporles - Balearic Islands, Spain.
comparable results. We adopt a loss function based on the Dice coefficient (also known as the F1 measure) and illustrate that we reach better scores for the two most relevant classes (cyclonic and anticyclonic) than when using the categorical cross-entropy loss. We also add dropout layers to our architecture, which prevents EddyNet from overfitting.
Our work joins the emerging cross-fertilization between
the remote sensing and machine learning communities that
is leading to significant contributions in addressing the segmentation of remote sensing images [7]–[9]. To the best of
our knowledge, the present work is the first to propose a
deep learning based architecture for pixel-wise classification
of eddies, dealing with the challenges of this particular type
of data.
This letter is organized as follows: Section II presents the
eddy detection and classification problem and related work.
Section III describes the data preparation process. Section IV
presents the architecture of EddyNet and details the training
process. Section V reports the different experiments considered in this work and discusses the results. Our conclusion
and future work directions are finally stated in Section VI.
II. PROBLEM STATEMENT AND RELATED WORK
Ocean mesoscale eddies can be defined as rotating water masses; they are omnipresent in the ocean and carry critical
information about large-scale ocean circulation [10], [11].
Eddies transport different relevant physical quantities such as
carbon, heat, phytoplankton, salt, etc. This movement helps in
regulating the weather and mixing the ocean [12]. Detecting
and studying eddies helps also considering their effects in
ocean climate models [13]. With the development of altimeter
missions and since the availability of two or more altimeters
at the same time, merged products of Sea Surface Height
(SSH) reached a sufficient resolution to allow the detection of
mesoscale eddies [14], [15]. SSH maps allow us to distinguish
two classes of eddies: i) anticyclonic eddies that are recognized
by their positive SLA (Sea Level Anomaly which is SSH
anomaly with regard to a given mean) and ii) cyclonic eddies
that are characterized by their negative SLA.
In recent years, several studies were conducted with the aim
of detecting and classifying eddies in an automated fashion
[16]. Two major families of methods prevail in the literature,
namely, physical parameter-based methods and geometrical
contour-based methods. The most popular representative of
physical parameter-based methods is the Okubo-Weiss parameter method [17], [18]. The Okubo-Weiss parameter method
is however criticized for its expert-based and region-specific
Fig. 1: A snapshot of a SSH map from the Southern Atlantic
Ocean with the detected eddies by PET14 algorithm, red
shapes represent anticyclonic eddies while green shapes are
cyclonic eddies
parameters and also for its sensitivity to noisy SSH maps
[19]. Other methods were since then developed using other
techniques such as wavelet decomposition [20], winding angle
[21], etc. Geometric-based methods rely on considering the
eddies as elliptic shapes and use closed contour techniques,
the most popular method remains Chelton et al. method
[11] (hereinafter called CSS11). Methods that combine ideas from both worlds are called hybrid methods (e.g. [22], [23]). Machine learning methods were also used in the past to propose solutions to the problem [24], [25]; recently, they have again been receiving increasing attention [26], [27].
We propose in this work to benefit from the advances in
deep learning to address ocean eddy detection and classification. Our proposed deep learning based method requires
a training database consisting of SSH maps and their corresponding eddy detection and classification results. In this
work, we train our deep learning methods from the results of
the py-eddy-tracker SSH-based approach (hereinafter PET14)
[28], the algorithm developed by Mason et al. is closely related
to CSS11 but has some significant differences such as not
allowing multiple local extremum in an eddy. An example
of a PET14 result is given in Figure 1 which shows eddies
identified in the southwest Atlantic (see [29]). The outputs
of the eddy tracker algorithm provide the center coordinates
of each classified eddy along with its speed and effective
contours. Since we aim for a pixelwise classification, i.e., each
pixel is classified, we transform the outputs into segmentation
maps such as the example shown in Figure 2. We consider here
the speed contour which corresponds to the closed contour that
has the highest mean geostrophic rotational current. The speed
contour can be seen as the most energetic part of the eddy and
is usually smaller than the effective radius. The next section
describes further the data preparation process that yields the
training database of pixelwise classification maps.
III. DATA PREPARATION
As stated in the previous section, we consider PET14 outputs as a training database for our deep-neural-network based
algorithms. We use 15 years (1998-2012) of daily detected
and classified eddies. The corresponding SSH maps (AVISOSSH) are provided by the Copernicus Marine Environment
Fig. 2: Example of a SSH-Segmentation training couple,
anticyclonic (green), cyclonic (brown), non eddy (blue)
Monitoring Service (CMEMS). The resolution of the SSH
maps is 0.25◦ .
Due to memory constraints, the input image of our architectures is 128 × 128 pixels. The first 14 years are used as a
training dataset and the last year (2012) is left aside for testing
our architecture. We consider the Southern Atlantic Ocean
region depicted in Figure 1 and cut the top region where no
eddies were detected. Then we randomly sample one 128×128
patch from each SSH map, which leaves us with 5100 training
samples. A significant property of this type of data is that
its dynamics are slow, a single eddy can live for several
days or even more than a year. In addition to the fact that
a 128 × 128 patch can comprise several examples of cyclonic
and anticyclonic eddies, we believe that data augmentation
(adding rotated versions of the patches to the training database
for example) is not needed; we observed experiments (not
shown here) that even resulted in performance degradation.
The next step consists of extracting the SSH 128×128 patches
from AVISO-SSH. For land pixels or regions with no data
we replaced the standard fill value by a zero; this helps to
avoid outliers and does not affect detection since eddies are
located in regions with non zero SSH. The final and essential
step is the creation of the segmentation masks of the training
patches. This is done by creating polygon shapes using the
speed contour coordinates mapped onto the nearest lattices
in the AVISO-SSH 0.25◦ grid. Pixels inside each polygon
are then labeled with the class of the polygon representing
the eddy {’0’: Non eddy/land/no data, ’1’: anticyclonic eddy,
’2’: cyclonic eddy}. An example of the coupled {SSH map,
segmentation map} from the training dataset is given in Figure
2.
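The mask-creation step described above can be sketched as follows; the use of matplotlib's Path to rasterize the speed contours onto the 0.25° grid is our own implementation choice, not necessarily the one used by the authors.

```python
import numpy as np
from matplotlib.path import Path

def make_segmentation_mask(lon_grid, lat_grid, eddies):
    """Build a pixel-wise label map from eddy speed contours.

    lon_grid, lat_grid : 2-D arrays of the SSH grid coordinates (0.25° lattice).
    eddies : iterable of (contour_lonlat, label) pairs, where contour_lonlat is an
             (M, 2) array of speed-contour vertices and label is 1 (anticyclonic)
             or 2 (cyclonic).
    Returns an integer array of the same shape as lon_grid, with 0 for non eddy.
    """
    mask = np.zeros(lon_grid.shape, dtype=np.uint8)
    points = np.column_stack([lon_grid.ravel(), lat_grid.ravel()])
    for contour, label in eddies:
        inside = Path(contour).contains_points(points).reshape(lon_grid.shape)
        mask[inside] = label
    return mask
```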
IV. OUR PROPOSED METHOD
A. EddyNet architecture
The EddyNet architecture is based on the U-net architecture
[4]. It starts with an encoding (downsampling) path with 3
stages, where each stage consists of two 3 × 3 convolutional
layers followed by either a Scaled Exponential Linear Unit
(SELU) activation function [5] (referred to as EddyNet S)
or by the classical ReLU activation + Batch Normalization
(referred to as EddyNet), then a 2 × 2 max pooling layer that
halves the resolution of the input. The decoding (upsampling)
path uses transposed convolutions (also called deconvolutions)
[30] to return to the original resolution. Like U-net, EddyNet
benefits from skip connections from the contracting path to
the expanding path to account for information originating
from early stages. Preliminary experiments with the original
architecture of U-Net showed a severe overfitting given the
low number of training samples compared to the capacity
of the architecture. Numerous attempts and hyperparameter
tuning led us to finally settle on a 3-stage all-32-filter architecture as shown in Figure 3. EddyNet has the benefit of
having a small number of parameters compared to widely used architectures, thus resulting in low memory consumption.
Our neural network can still overfit the data which shows
that it can capture the nonlinear inverse problem of eddy
detection and classification. Hence, we add dropout layers
before each max pooling layer and before each transposed
convolutional layer; we chose these positions since they are
the ones involved in the concatenations where the highest
number of filters (64) is present. Dropout layers helped to regularize the network and improved the validation loss. Regarding EddyNet S, we mention three essential
considerations: i) The weight initialization is different than
with EddyNet, we detail this aspect in the experiment section.
ii) The theory behind the SELU activation function stands on
the self-normalizing property which aims to keep the inputs
close to a zero mean and unit variance through the network
layers. Classical dropout that randomly sets units to zero
could harm this property; [5] propose therefore a new dropout
technique called AlphaDropout that addresses this problem by randomly setting activations to the negative saturation value. iii) SELU theory is originally derived for Feed Forward
Networks, applying them to CNNs needs careful setting. In
preliminary experiments, using our U-net-like architecture with SELU activations resulted in a very noisy loss that sometimes even exploded. We think this could be caused by the
skip connections that can violate the self-normalizing property
desired by the SELU, and hence decided to keep Batch
Normalization in EddyNet S after each of the maxpooling,
transposed convolution and concatenation layers.
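Based on the description above (3 stages, 32 filters of size 3 × 3 throughout, max pooling, transposed convolutions, skip connections, and dropout before each pooling and transposed-convolution layer), a Keras sketch of the ReLU + batch-normalization variant could look as follows; the dropout rate and the bottleneck block are assumptions, since the letter does not specify them.

```python
from keras.layers import (Input, Conv2D, MaxPooling2D, Conv2DTranspose,
                          BatchNormalization, Activation, Dropout, concatenate)
from keras.models import Model

def conv_block(x, filters=32):
    # two 3x3 convolutions, each followed by ReLU + batch normalization
    for _ in range(2):
        x = Conv2D(filters, (3, 3), padding='same')(x)
        x = Activation('relu')(x)
        x = BatchNormalization()(x)
    return x

def build_eddynet(input_shape=(128, 128, 1), n_classes=3, drop=0.3):
    inp = Input(input_shape)
    # encoding path (3 stages)
    c1 = conv_block(inp); p1 = MaxPooling2D((2, 2))(Dropout(drop)(c1))
    c2 = conv_block(p1);  p2 = MaxPooling2D((2, 2))(Dropout(drop)(c2))
    c3 = conv_block(p2);  p3 = MaxPooling2D((2, 2))(Dropout(drop)(c3))
    b  = conv_block(p3)
    # decoding path with skip connections to the corresponding encoder stages
    u3 = Conv2DTranspose(32, (3, 3), strides=(2, 2), padding='same')(Dropout(drop)(b))
    c4 = conv_block(concatenate([u3, c3]))
    u2 = Conv2DTranspose(32, (3, 3), strides=(2, 2), padding='same')(Dropout(drop)(c4))
    c5 = conv_block(concatenate([u2, c2]))
    u1 = Conv2DTranspose(32, (3, 3), strides=(2, 2), padding='same')(Dropout(drop)(c5))
    c6 = conv_block(concatenate([u1, c1]))
    # pixel-wise 3-class classification layer
    out = Conv2D(n_classes, (1, 1), activation='softmax')(c6)
    return Model(inp, out)
```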
B. Loss metric
While multiclass classification problems in deep learning
are generally trained using the categorical cross-entropy cost
function, segmentation problems favor the use of overlap based
metrics. The dice coefficient is a popular and largely used cost
function in segmentation problems. Considering the predicted
region P and the groundtruth region G, and by denoting |P |
and |G| the sum of elements in each area, the dice coefficient
is twice the ratio of the intersection over the sum of areas:
DiceCoef(P, G) = 2|P ∩ G| / (|P| + |G|).    (1)
A perfect segmentation result is given by a dice coefficient of
1, while a dice coefficient of 0 refers to a completely mistaken
segmentation. Seeing it from a F1-measure perspective, the
dice coefficient is the harmonic mean of the precision and
recall metrics.
Fig. 3: EddyNet architecture

The implementation uses one-hot encoding vectors. An essential detail is that the loss function of EddyNet uses a soft and differentiable version of the dice coefficient, which considers the output of the softmax layer as it is, without binarization:
softDiceCoef(P, G) = 2 ∑_i p_i · g_i / ( ∑_i p_i + ∑_i g_i ),    (2)
where the p_i are the probabilities given by the softmax layer (0 ≤ p_i ≤ 1), and the g_i are 1 for the correct class and 0 otherwise. We found later that a recent study used another version
of a soft dice loss [31]; a comparison of both versions is out
of the scope of this work.
Since we are in the context of a multiclass classification
problem, we try to maximize the performance of our network
using the mean of three one-vs-all soft dice coefficients of
each class. The loss function that our neural network aims to
minimize is then simply:
Dice Loss = 1 − softMeanDiceCoef.    (3)
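A Keras/TensorFlow-style implementation of this mean soft dice loss could be written as below; the smoothing constant is an assumption (the letter does not specify one), added only to avoid division by zero on empty classes.

```python
from keras import backend as K

def soft_mean_dice_coef(y_true, y_pred, smooth=1e-6):
    """Mean of the three one-vs-all soft dice coefficients (Eq. 2), computed on
    one-hot ground truth y_true and softmax probabilities y_pred."""
    axes = [0, 1, 2]  # sum over batch and spatial dimensions, keep the class axis
    intersection = K.sum(y_true * y_pred, axis=axes)
    denominator = K.sum(y_true, axis=axes) + K.sum(y_pred, axis=axes)
    return K.mean((2.0 * intersection + smooth) / (denominator + smooth))

def dice_loss(y_true, y_pred):
    # Eq. 3: the quantity EddyNet minimizes
    return 1.0 - soft_mean_dice_coef(y_true, y_pred)
```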
V. EXPERIMENTS
A. Assessment of the performance
The Keras framework [32] with a TensorFlow backend is used in this work. EddyNet is trained on an Nvidia K80 GPU card using the ADAM optimizer [33] and mini-batches of 16 maps. The weights were initialized using truncated Gaussian distributed weights of zero mean and {2/number of input units} variance [34] for EddyNet, while we use weights drawn from a truncated Gaussian distribution of zero mean and {1/number of input units} variance for EddyNet S. The training dataset is split into 4080 images for training and 1020 for validation. We also use an early-stopping strategy to stop the learning process when the validation loss has not improved for five consecutive epochs. The EddyNet weights are then the ones resulting in the lowest validation loss value. EddyNet and EddyNet S are then compared regarding the use of the classical ReLU+BN and the use of SELU. We also compare the use of an overlap-based metric, represented by the
TABLE I: Metrics calculated from the results of 50 random sets of 360 SSH patches from the test dataset; we report the mean value and put the standard deviation between parentheses.

 Model       Train loss   #Param    Epoch time   Anticyclonic Dice   Cyclonic Dice    Non Eddy Dice    Mean Dice Coef   Global Accuracy
 EddyNet     Dice Loss    177,571   ~12 min      0.708 (0.002)       0.677 (0.001)    0.929 (0.001)    0.772 (0.001)    88.60% (0.10%)
 EddyNet     CCE                                 0.695 (0.003)       0.651 (0.001)    0.940 (0.001)    0.762 (0.001)    89.92% (0.07%)
 EddyNet S   Dice Loss              ~7 min       0.694 (0.003)       0.665 (0.001)    0.933 (0.001)    0.764 (0.001)    88.98% (0.09%)
 EddyNet S   CCE                                 0.682 (0.002)       0.653 (0.002)    0.939 (0.001)    0.758 (0.001)    89.83% (0.08%)
Dice Loss (Equation 3), with the classical Categorical Cross-Entropy (CCE). Table I compares the four combinations in terms of global accuracy and mean dice coefficient (original, not soft) averaged over 50 random sets of 360 SSH 120 × 120 maps from 2012. Training EddyNet S takes nearly half the time needed for training EddyNet. Comparison regarding the training loss function shows that training with the dice loss results in a higher dice coefficient for our two classes of interest (cyclonic and anticyclonic) in both EddyNet and EddyNet S; the dice loss also yields a better overall mean dice coefficient than training with the CCE loss. Regarding the effect of the activation function, we obtained better metrics with EddyNet, at the cost of a longer training procedure. Visually, EddyNet and EddyNet S give close outputs, as can be seen in Figure 4.

Fig. 4: Examples of the eddy segmentation results, (a) and (b), using EddyNet and EddyNet S: anticyclonic eddies (green), cyclonic (brown), non eddy (blue)

B. Ghost eddies

The presence of ghost eddies is a frequent problem encountered in eddy detection and tracking algorithms [14]. Ghost eddies are eddies that are found by the detection algorithm, then disappear between consecutive maps before reappearing again. To point out the position of the missed ghost eddies, PET14 uses linear temporal interpolation between the centers of detected eddies and stores the positions of the centers of ghost eddies. Using EddyNet, we check whether the pixels at ghost eddy centers correspond to actual eddy detections. We found that EddyNet assigns the centers of ghost eddies to the correct eddy class 55% of the time for anticyclonic eddies, and 45% of the time for cyclonic eddies. EddyNet could be a relevant method to detect ghost eddies that are missed by conventional methods. Figure 5 illustrates two examples of ghost eddy detection.

VI. CONCLUSION
This work investigates the use of recent developments in
deep learning based image segmentation for an ocean remote
sensing problem, namely, eddy detection and classification
from Sea Surface Height (SSH) maps. We propose EddyNet,
a deep neural network architecture inspired from architectures
and ideas widely adopted in the computer vision community.
We transfer successfully the knowledge gained to the problem
of eddy classification by dealing with various challenges. Future work involves investigating the use of temporal volumes
of SSH and deriving a 3D version inspired by the works of
[31]. Adding other surface information such as Sea Surface
Temperature might also help improving the detection. Another
extension would be the application of EddyNet over the globe,
and assessing its general capacity over other regions. Postprocessing by constraining the eddies to verify additional
criteria and tracking the eddies was omitted in this work and
could also be developed in future work.
Beyond the illustrative aspect of this contribution, we offer
to the oceanic remote sensing community an easy and powerful tool that can save handcrafting model efforts. Any user can
employ his own eddy segmentation ”ground truth” and train
the model from scratch if he/she has the necessary memory
and computing resources, or simply use the provided EddyNet weights as an initialization and then perform fine-tuning using his/her dataset. One can also think of averaging results from
classical contour-based methods and EddyNet. In the spirit of
reproducibility, Python code is available at https://github.com/
redouanelg/eddynet, and we also share the training and testing
data used for this work to encourage competing methods and,
especially, other deep learning architectures.
Fig. 5: Detection of ghost eddies, two examples (a) and (b): [left] SSH map where ghost eddy centers are marked: anticyclonic (red dots), cyclonic (blue dots); [center] PET14 segmentation; [right] EddyNet segmentation: anticyclonic (green), cyclonic (brown), non eddy (blue)
ACKNOWLEDGMENT
The authors would like to thank Antoine Delepoulle,
Bertrand Chapron and Julien Le Sommer for their constructive comments. This work was supported by ANR (Agence
Nationale de la Recherche, grant ANR-13-MONU-0014) and
Labex Cominlabs (grant SEACS). Evan Mason is supported
by the Copernicus Marine Environment Monitoring Service
(CMEMS) MedSUB project.
REFERENCES
[1] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. MIT Press,
2016.
[2] L. Zhang, L. Zhang, and B. Du, “Deep learning for remote sensing
data: A technical tutorial on the state of the art,” IEEE Geoscience and
Remote Sensing Magazine, vol. 4, no. 2, pp. 22–40, 2016.
[3] X. X. Zhu, D. Tuia, L. Mou, G.-S. Xia, L. Zhang, F. Xu, and
F. Fraundorfer, “Deep learning in remote sensing: a review,” ArXiv eprints, Oct. 2017.
[4] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference
on Medical Image Computing and Computer-Assisted Intervention.
Springer, 2015, pp. 234–241.
[5] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-normalizing neural networks,” arXiv preprint arXiv:1706.02515, 2017.
[6] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate
deep network learning by exponential linear units (elus),” arXiv preprint
arXiv:1511.07289, 2015.
[7] E. Maggiori, Y. Tarabalka, G. Charpiat, and P. Alliez, “Convolutional
neural networks for large-scale remote-sensing image classification,”
IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 2,
pp. 645–657, Feb 2017.
[8] N. Audebert, B. L. Saux, and S. Lefèvre, “Semantic segmentation of
earth observation data using multimodal and multi-scale deep networks,”
arXiv preprint arXiv:1609.06846, 2016.
[9] M. Volpi and D. Tuia, “Dense semantic labeling of subdecimeter resolution images with convolutional neural networks,” IEEE Transactions
on Geoscience and Remote Sensing, vol. 55, no. 2, pp. 881–893, 2017.
[10] W. R. Holland, “The role of mesoscale eddies in the general circulation of the ocean – numerical experiments using a wind-driven quasi-geostrophic model,” Journal of Physical Oceanography, vol. 8, no. 3,
pp. 363–392, 1978.
[11] D. B. Chelton, M. G. Schlax, and R. M. Samelson, “Global observations
of nonlinear mesoscale eddies,” Progress in Oceanography, vol. 91,
no. 2, pp. 167–216, 2011.
[12] J. C. McWilliams, “The nature and consequences of oceanic eddies,”
Ocean Modeling in an Eddying Regime, pp. 5–15, 2008.
[13] J. Le Sommer, F. d'Ovidio, and G. Madec, “Parameterization of subgrid
stirring in eddy resolving ocean models. part 1: Theory and diagnostics,”
Ocean Modelling, vol. 39, no. 1, pp. 154–169, 2011.
[14] J. H. Faghmous, I. Frenger, Y. Yao, R. Warmka, A. Lindell, and
V. Kumar, “A daily global mesoscale ocean eddy dataset from satellite
altimetry,” Scientific data, vol. 2, 2015.
[15] A. Pascual, Y. Faugère, G. Larnicol, and P.-Y. Le Traon, “Improved
description of the ocean mesoscale variability by combining four satellite
altimeters,” Geophysical Research Letters, vol. 33, no. 2, 2006.
[16] J. H. Faghmous, L. Styles, V. Mithal, S. Boriah, S. Liess, V. Kumar,
F. Vikebø, and M. dos Santos Mesquita, “Eddyscan: A physically
consistent ocean eddy monitoring application,” in Intelligent Data Understanding (CIDU), 2012 Conference on. IEEE, 2012, pp. 96–103.
[17] A. Okubo, “Horizontal dispersion of floatable particles in the vicinity of
velocity singularities such as convergences,” in Deep sea research and
oceanographic abstracts, vol. 17, no. 3. Elsevier, 1970, pp. 445–454.
[18] J. Weiss, “The dynamics of enstrophy transfer in two-dimensional
hydrodynamics,” Physica D: Nonlinear Phenomena, vol. 48, no. 2-3,
pp. 273–294, 1991.
[19] D. B. Chelton, M. G. Schlax, R. M. Samelson, and R. A. de Szoeke,
“Global observations of large oceanic eddies,” Geophysical Research
Letters, vol. 34, no. 15, 2007.
[20] A. Turiel, J. Isern-Fontanet, and E. Garcı́a-Ladona, “Wavelet filtering to
extract coherent vortices from altimetric data,” Journal of Atmospheric
and Oceanic Technology, vol. 24, no. 12, pp. 2103–2119, 2007.
[21] I. A. Sadarjoen and F. H. Post, “Geometric methods for vortex extraction,” in Data Visualization99. Springer, 1999, pp. 53–62.
[22] J. Yi, Y. Du, Z. He, and C. Zhou, “Enhancing the accuracy of automatic
eddy detection and the capability of recognizing the multi-core structures
from maps of sea level anomaly,” Ocean Science, vol. 10, no. 1, pp. 39–
48, 2014.
[23] J. Isern-Fontanet, E. Garcı́a-Ladona, and J. Font, “Identification of
marine eddies from altimetric maps,” Journal of Atmospheric and
Oceanic Technology, vol. 20, no. 5, pp. 772–778, 2003.
[24] M. Castellani, “Identification of eddies from sea surface temperature
maps with neural networks,” International journal of remote sensing,
vol. 27, no. 8, pp. 1601–1618, 2006.
[25] J. Hai, Y. Xiaomei, G. Jianming, and G. Zhenyu, “Automatic eddy
extraction from sst imagery using artificial neural network,” The international archives of the photogrammetry, remote sensing and spatial
information science, pp. 279–282, 2008.
[26] M. D. Ashkezari, C. N. Hill, C. N. Follett, G. Forget, and M. J. Follows,
“Oceanic eddy detection and lifetime forecast using machine learning
methods,” Geophysical Research Letters, vol. 43, no. 23, 2016.
[27] D. Huang, Y. Du, Q. He, W. Song, and A. Liotta, “Deepeddy: A simple
deep architecture for mesoscale oceanic eddy detection in sar images,”
in 2017 IEEE 14th International Conference on Networking, Sensing
and Control (ICNSC), May 2017, pp. 673–678.
[28] E. Mason, A. Pascual, and J. C. McWilliams, “A new sea surface
height–based code for oceanic mesoscale eddy tracking,” Journal of
Atmospheric and Oceanic Technology, vol. 31, no. 5, pp. 1181–1188,
2014.
[29] E. Mason, A. Pascual, P. Gaube, S. Ruiz, J. L. Pelegr, and A. Delepoulle,
“Subregional characterization of mesoscale eddies across the brazilmalvinas confluence,” Journal of Geophysical Research: Oceans, vol.
122, no. 4, pp. 3329–3357, 2017.
[30] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, “Deconvolutional networks,” in Computer Vision and Pattern Recognition (CVPR),
2010 IEEE Conference on. IEEE, 2010, pp. 2528–2535.
[31] F. Milletari, N. Navab, and S.-A. Ahmadi, “V-net: Fully convolutional
neural networks for volumetric medical image segmentation,” in 3D
Vision (3DV), 2016 Fourth International Conference on. IEEE, 2016,
pp. 565–571.
[32] F. Chollet et al., “Keras,” https://github.com/fchollet/keras, 2015.
[33] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
arXiv preprint arXiv:1412.6980, 2014.
[34] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers:
Surpassing human-level performance on imagenet classification,” in
Proceedings of the IEEE international conference on computer vision,
2015, pp. 1026–1034.
INTRODUCING ENRICHED CONCRETE SYNTAX TREE
Gordana Rakić, Zoran Budimac
Dept. of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad
Trg Dositeja Obradovića 4, 21000 Novi Sad, Serbia
+381 21 4852877, +381 21 458888
[email protected], [email protected]
ABSTRACT

In our earlier research [9], the area of consistent and systematic application of software metrics was explored. The strong dependency of the applicability of software metrics on the input programming language was recognized as one of the main weaknesses in this field. Introducing the enriched Concrete Syntax Tree (eCST) for internal and intermediate representation of the source code resulted in a step forward in overcoming this weakness. In this paper we explain the innovation made by introducing eCST and provide ideas for broader applicability of eCST in some other fields of software engineering.

1 INTRODUCTION
We introduce a new type of syntax tree to be used as an intermediate representation of the source code. We call this type of tree “enriched Concrete Syntax Tree” (eCST), because it is enriched by so-called “universal” nodes. Universal nodes are common for all programming languages and add additional value to these trees. In this way the tree becomes broadly usable for different software metrics algorithms and independent of the input programming language.
Besides the original application field of building software metrics tools and other static analyzers, eCSTs can be further manipulated and transformed and thus applied in other fields as well (e.g. software testing and source code translation). In this way eCST can be used for numerous purposes related to the development, maintenance and analysis of software systems. The major benefit of the usage of eCST is its universality and independence of the programming language.
2 BACKGROUND
Syntax trees are usually a secondary product of compiler and parser generators. These generators usually have embedded mechanisms to generate syntax trees as internal structures. Additionally, these mechanisms can be extended with mechanisms for enriching the syntax trees with additional information about the language or the input source code. This opportunity is our key instrument.
Parser generators take the language grammar as their input and return a parser for that language as output. This grammar is provided in some default form (e.g. EBNF – Extended Backus-Naur Form) or in some generator-specific syntax. In this project a parser generator is used to generate a scanner and
Simplified syntax trees representing the given statements are illustrated in Figure 1.
4 INTRODUCING eCST
Related research shows that there is no fully consistent support for software development and maintenance. All tools used for these purposes have some limitations, e.g. limited programming language support, weak and inconsistent usage of metrics and/or testing techniques, etc. In the field of software evolution, which relies on techniques such as advising, recommending and automating refactoring and reengineering, solutions based on a common intermediate structure could be a key supporting element. This support could be based on metrics, testing and deeper static analysis. Development of such support would introduce new value into the software engineering field. For all these reasons, the proposed universal tree could be an appropriate internal representation applicable toward all the stated goals. Universality of the internal structure is important for achieving consistency in all fields.
By realizing this idea, a key benefit could be gained from the language independence of eCST and from its universality and broad applicability.
Figure 1: Simplified syntax trees for REPEAT-UNTIL (left) and do-while (right) statements
4.1 Motivation
The motivation for introducing eCST as a new intermediate representation of the source code lies in the intention to fill gaps in the field of systematic application of software metrics by improving the characteristics of software metric tools. One of the important weaknesses of available metric tools is the lack of support for calculating metric values independently of the input programming language.
Originally, a Concrete Syntax Tree (CST) is used for the representation of source code. A CST represents concrete source code elements attached to the corresponding constructions in the language syntax. Although this tree is quite rich, it is still unaware of sophisticated details about the meaning of syntax elements and their role in certain problems (e.g. algorithms for the calculation of software metrics). We enrich the CST by adding universal nodes that mark elements so that they become recognizable independently of the input programming language.
To illustrate the technique used to achieve independence of the programming language, we provide a simple example. It illustrates the problems in calculating the Cyclomatic Complexity (CC) metric by the predicate counting method.
A simple loop statement written in Modula-2 is stated as follows:
REPEAT
…Some statements…
UNTIL (i > j);
The equivalent loop in Java would look like:
do{
…Some statements…
}while (i <= j);
Although the given statements have different syntax, they express the same functionality: "some statements" in the code will be repeated until the parameter "i" becomes greater than the parameter "j". Besides the different syntax, the condition for leaving the loop is stated in opposite ways: the first condition expresses what must hold to leave the loop, while the second one states the condition to continue looping.
Figure 2: eCSTs for REPEAT-UNTIL (left) and do-while (right) statements
For the implementation of the CC algorithm we should recognize "REPEAT" and "while" as loops and increment the current CC value by 1. It is clear that by using a CST to represent the source code we would need two implementations, or at least two conditions, to recognize these loops in the tree. By adding the universal node "LOOP_STATEMENT" as the parent of the sub-trees that represent these two segments of source code, we meet our goal with only one condition in the implementation of the CC algorithm. We also add the universal node CONDITION to mark the condition for leaving the loop repetition (Figure 2).
Additional enrichment for other purposes could include information about the logical value that the condition should have in order to leave the loop.
By adding all of the needed universal nodes we implemented the CC metric algorithm independently of the programming language. All we need is the language grammar, which we modify and use to generate the appropriate parser that produces the eCST.
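To make the idea concrete, the following sketch (in Python, not taken from the SMIILE implementation) counts predicate-bearing universal nodes in an eCST; the Node class and the exact set of universal node names are our own illustrative assumptions.

class Node:
    def __init__(self, kind, children=None):
        self.kind = kind              # e.g. "LOOP_STATEMENT", "CONDITION", or a concrete token
        self.children = children or []

PREDICATE_NODES = {"LOOP_STATEMENT", "BRANCH_STATEMENT"}   # assumed universal node names

def cyclomatic_complexity(root):
    """CC = 1 + number of predicate nodes, independent of the input language."""
    count = 1
    stack = [root]
    while stack:
        node = stack.pop()
        if node.kind in PREDICATE_NODES:
            count += 1
        stack.extend(node.children)
    return count

# The Modula-2 REPEAT and the Java do-while of the example both map to a single
# LOOP_STATEMENT node, so both yield CC = 2 with the same condition in the code above.
repeat_until = Node("COMPILATION_UNIT", [Node("LOOP_STATEMENT", [Node("CONDITION")])])
print(cyclomatic_complexity(repeat_until))   # -> 2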
A new prototype of the SMIILE tool that uses eCST in metric calculation is described in [10]. This related paper describes a language-independent implementation of the CC software metric based on universal nodes. It concentrates on CC as a characteristic example to demonstrate the usefulness of eCST for the language independence of the described tool. The paper provides a table of the universal nodes used in this prototype and shows their usage for three characteristic languages – object-oriented Java, procedural Modula-2 and legacy COBOL.
4.2 Possible broader applicability
eCST was originally used in the development of a language-independent software metrics tool (SMIILE) [9]. The current prototype is implemented to support two software metrics and three languages.
However, eCST has a limitation: it represents only separate compilation units. By translating all compilation units we get a set of autonomous trees. For the implementation of, e.g., design metrics these trees should be connected by creating a directed graph.
The idea for connecting compilation units is based on information about function (procedure, method, etc.) calls contained in other functions. These calls could be placed either in the same or in some other compilation units.
Figure 4: Simplified view of eCST to eCFG transformation
The generated eCFG could be used in software testing [4] (e.g. for the development of an automatic test case generator), in dead and duplicated code discovery, code-clone analysis [1],[2],[5], etc., but also as a basis for connecting compilation units instead of the original eCSTs. In this case a language-independent call graph would be created by connecting the eCFG components that represent compilation units.
Furthermore, we note that eCST could be used for automatic source code translation between programming languages. If we again consider the given example, we can conclude that the given statement could be translated from Java to Modula-2 or from Modula-2 to Java. For automatic translation using eCST we would have a reflection table with rules for translation. In this example we should have rules about:
- How to translate the loop
- How to translate the condition
- How to translate the inner statements
In this concept for translation we will not get perfectly written source code, but by defining proper translation rules we could manage to get correct source code. This limitation could be corrected by several cycles of code transformation [8].
The SMIILE tool, which is based on eCST, had language independence as its short-term goal, but as a long-term objective we stated a smart software metrics tool that would recommend to developers how to improve their source code. It is planned to develop an input-language-independent, metric-based advising system which would communicate with its user not only about metric values, but through concrete advice for corrections and refactoring of the source code based on the calculated software metrics. For this purpose metric values should be stored in data storage. This storage could be a well-organized XML file system, as primarily proposed by the SMIILE team, but an external software metrics repository could also be used. The integration of the SMIILE prototype with the particular software metrics repository described in [11] is the basis for further work in this direction.
The opportunity to improve the refactoring process gives additional value to the described potential application of eCST
Figure 3: Connecting compilation units into a call graph
If function A contains a call of function B, then a directed edge would lead from the node representing function B to the node representing function A (Figure 3). Universal nodes (FUNCTION_DECL, FUNCTION_DEF and FUNCTION_CALL) would be used to locate the fragment of source code that contains a function declaration, definition or call, respectively. Information about the function is placed in the sub-tree of the corresponding universal node.
The generated graph is a specific call graph. We could even use a complex network (e.g. the one in [12]), but by creating the network by connecting eCSTs it becomes language independent.
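A minimal sketch of this connection step is given below (Python, illustrative only); it reuses the Node class from the previous sketch and assumes a helper name_of that reads the function name stored in the sub-tree of a FUNCTION_DEF or FUNCTION_CALL node.

from collections import defaultdict

def collect(node, kind, found):
    """Collect all nodes of a given universal kind in a sub-tree."""
    if node.kind == kind:
        found.append(node)
    for child in node.children:
        collect(child, kind, found)
    return found

def build_call_graph(ecst_units, name_of):
    graph = defaultdict(set)                     # callee name -> set of caller names
    for unit in ecst_units:
        for definition in collect(unit, "FUNCTION_DEF", []):
            caller = name_of(definition)
            for call in collect(definition, "FUNCTION_CALL", []):
                graph[name_of(call)].add(caller)  # edge from callee to caller, as in Figure 3
    return graph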
Additional possibility is to transform eCST to language
independent enriched Control Flow Graph (eCFG), by
inserting branches that represent possible execution paths
through program (Figure 4).
in code translation, because the needed after-translation refactoring could be automatically suggested or even applied.
A tool that integrates all the described functionalities, including the ones planned for the SMIILE tool, would provide important features for the consistent development of heterogeneous software systems consisting of different components implemented in different programming languages.
Furthermore, the SMIILE tool has the possibility of keeping the history of the source code and the corresponding software metrics. For keeping the history of the source code, the eCST is stored in an XML file created according to the structure of the eCST. By adding code-change analysis to the planned features, it would become an important support in the software reengineering process [6].
[6] Neamtiu I., Foster J.S, Hicks M. Understanding Source
Code Evolution Using Abstract Syntax Tree Matching,
In Proceeding of the International Conference on
Software Engineering 2005, international workshop on
Mining software repositories, ISBN:1-59593-123-6, pp
1–5
[7] Parr T., The Definitive ANTLR Reference - Building
Domain-Specific Languages, The Pragmatic Bookshelf,
USA, 2007, ISBN: 0-9787392-5-6
[8] Pracner D., Budimac Z, Restructuring Assembly Code
Using Formal Transformations, In Proceedings Of
International Conference of Numerical Analysis and
Applied Mathematics ICNAAM2011, Symposium on
Computer Languages, Implementations and Tools
(SCLIT), September 19-25, 2011, Greece (in print)
[9] Rakic G., Budimac Z., Problems In Systematic
Application Of Software Metrics And Possible
Solution, In Proceedings Of The 5th International
Conference on Information Technology ICIT 2011,
Jordan
[10] Rakic G., Budimac Z., SMIILE Prototype, In
Proceedings Of International Conference of Numerical
Analysis and Applied Mathematics ICNAAM2011,
Symposium on Computer Languages, Implementations
and Tools (SCLIT), September 19-25, 2011, Greece (in
print)
[11] Rakic G., Gerlec Č., Novak J., Budimac Z., XML-Based Integration of the SMIILE Tool Prototype and Software Metrics Repository, In Proceedings Of
International Conference of Numerical Analysis and
Applied Mathematics ICNAAM2011, Symposium on
Computer Languages, Implementations and Tools
(SCLIT), September 19-25, 2011, Greece (in print)
[12] Savić M., Ivanović M., Radovanović M., Characteristics of Class Collaboration Networks in Large Java Software Projects, Information Technology
and Control Journal, Vol.40, No.1, 2011, pp. 48-58.
5 CONCLUSION
In this paper we introduce eCST and propose its usage for source code and model representation in the development of a universal tool to support different software engineering techniques and processes.
The idea of introducing eCST is supported by the example of the successful development of a prototype of a language-independent software metrics tool.
As this paper presents a still fresh idea, it is clear that numerous open questions exist, and further work in the proposed directions is planned.
ACKNOWLEDGMENTS
The authors acknowledge the support of this work by the Serbian Ministry of Education and Science through the project "Intelligent Techniques and Their Integration into Wide-Spectrum Decision Support," no. OI174023.
References
[1] Baxter I.D., Yahin A., Moura L., Sant'Anna M., Bier L., Clone Detection Using Abstract Syntax Trees, Proceedings of the International Conference on Software Maintenance, 1998, pp. 368-377
[2] Ducasse S., Rieger M., Demeyer S., A Language Independent Approach for Detecting Duplicated Code, Proceedings of the IEEE International Conference on Software Maintenance (ICSM '99), 1999, pp. 109-118
[3] Fischer G., Lusiardi J., Wolff von Gudenberg J., Abstract Syntax Trees – and their Role in Model Driven Software Development, In Proceedings of the International Conference on Software Engineering Advances (ICSEA 2007), 2007
[4] Guangmei Z., Rui C., Xiaowei L., Congying L., The Automatic Generation of Basis Set of Path for Path Testing, In Proceedings of the 14th Asian Test Symposium (ATS '05), 2005
[5] Koschke R., Falke R., Frenzel P., Clone Detection Using Abstract Syntax Suffix Trees, Proceedings of the 13th Working Conference on Reverse Engineering (WCRE'06), 2006
| 6 |
Neural Question Answering at BioASQ 5B
Georg Wiese1,2 , Dirk Weissenborn2 and Mariana Neves1
1
Hasso Plattner Institute, August Bebel Strasse 88, Potsdam 14482 Germany
2
Language Technology Lab, DFKI, Alt-Moabit 91c, Berlin, Germany
[email protected],
[email protected], [email protected]
arXiv:1706.08568v1 [cs.CL] 26 Jun 2017
Abstract
This paper describes our submission to the 2017 BioASQ challenge. We participated in Task B, Phase B, which is concerned with biomedical question answering (QA). We focus on factoid and list questions, using an extractive QA model, that is, we restrict our system to output substrings of the provided text snippets. At the core of our system, we use FastQA, a state-of-the-art neural QA system. We extended it with biomedical word embeddings and changed its answer layer to be able to answer list questions in addition to factoid questions. We pre-trained the model on a large-scale open-domain QA dataset, SQuAD, and then fine-tuned the parameters on the BioASQ training set. With our approach, we achieve state-of-the-art results on factoid questions and competitive results on list questions.
1 Introduction
BioASQ is a semantic indexing, question answering (QA) and information extraction challenge
(Tsatsaronis et al., 2015). We participated in
Task B of the challenge which is concerned with
biomedical QA. More specifically, our system participated in Task B, Phase B: Given a question
and gold-standard snippets (i.e., pieces of text that
contain the answer(s) to the question), the system
is asked to return a list of answer candidates.
The fifth BioASQ challenge is taking place at
the time of writing. Five batches of 100 questions
each were released every two weeks. Participating
systems have 24 hours to submit their results. At
the time of writing, all batches had been released.
The questions are categorized into different question types: factoid, list, summary and yes/no. Our work concentrates on answering factoid and list questions. For factoid questions, the system's responses are interpreted as a ranked list of answer candidates. They are evaluated using mean reciprocal rank (MRR). For list questions, the system's responses are interpreted as a set of answers to the list question. Precision and recall are computed by comparing the given answers to the gold-standard answers. The F1 score, i.e., the harmonic mean of precision and recall, is used as the official evaluation measure¹.
Most existing biomedical QA systems employ
a traditional QA pipeline, similar in structure to
the baseline system by Weissenborn et al. (2013).
They consist of several discrete steps, e.g., named-entity recognition, question classification, and
candidate answer scoring. These systems require a
large amount of resources and feature engineering
that is specific to the biomedical domain. For example, OAQA (Zi et al., 2016), which has been
very successful in last year’s challenge, uses a
biomedical parser, entity tagger and a thesaurus to
retrieve synonyms.
Our system, on the other hand, is based on a neural network QA architecture that is trained end-to-end on the target task. We build upon FastQA (Weissenborn et al., 2017), an extractive factoid QA system which achieves state-of-the-art results on QA benchmarks that provide large amounts of training data. For example, SQuAD (Rajpurkar et al., 2016) provides a dataset of ≈100,000 questions on Wikipedia articles. Our approach is to train FastQA (with some extensions) on the SQuAD dataset and then fine-tune the model parameters on the BioASQ training set.
Note that by using an extractive QA network as
our central component, we restrict our system’s
¹ The details of the evaluation can be found at http://participants-area.bioasq.org/Tasks/b/eval_meas/
Figure 1: Neural architecture of our system. Question and context (i.e., the snippets) are mapped directly to start and end probabilities for each context token. We use FastQA (Weissenborn et al.,
2017) with modified input vectors and an output
layer that supports list answers in addition to factoid answers.
responses to substrings in the provided snippets.
This also implies that the network will not be able
to answer yes/no questions. We do, however, generalize the FastQA output layer in order to be able
to answer list questions in addition to factoid questions.
• Character embedding: This embedding is
computed by a 1-dimensional convolutional
neural network from the characters of the
words, as introduced by Seo et al. (2016).
• Biomedical Word2Vec embeddings: We
use the biomedical word embeddings provided by Pavlopoulos et al. (2014). These
are 200-dimensional Word2Vec embeddings
(Mikolov et al., 2013) which were trained on
≈ 10 million PubMed abstracts.
To the embedding vectors, we concatenate a
one-hot encoding of the question type (list or factoid). Note that these features are identical for all
tokens.
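The following NumPy sketch illustrates this input layer; apart from the 300-dimensional GloVe and 200-dimensional biomedical vectors, the dimensions, lookup tables and character-CNN function are placeholders, not the actual implementation.

import numpy as np

GLOVE_DIM, CHAR_DIM, BIO_DIM = 300, 100, 200   # 100-d char-CNN output is an assumption

def embed_tokens(tokens, glove, biomedical, char_cnn, question_type):
    """One row per token: [GloVe; char-CNN; biomedical Word2Vec; one-hot question type]."""
    qtype = np.array([1.0, 0.0]) if question_type == "factoid" else np.array([0.0, 1.0])
    rows = []
    for tok in tokens:
        g = glove.get(tok, np.zeros(GLOVE_DIM))
        b = biomedical.get(tok, np.zeros(BIO_DIM))
        c = char_cnn(tok)                         # fixed-size vector computed from characters
        rows.append(np.concatenate([g, c, b, qtype]))   # same question-type features for all tokens
    return np.stack(rows)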
Following our embedding layer, we invoke FastQA in order to compute start and end scores for all context tokens. Because end scores are conditioned on the chosen start, there are O(n²) end scores, where n is the number of context tokens. We denote the start index by i ∈ [1, n], the end index by j ∈ [i, n], the start scores by y^i_start, and the end scores by y^{i,j}_end.
In our output layer, the start, end, and span probabilities are computed as:
2 Model
Our system is a neural network which takes as input a question and a context (i.e., the snippets) and
outputs start and end pointers to tokens in the context. At its core, we use FastQA (Weissenborn
et al., 2017), a state-of-the-art neural QA system.
In the following, we describe our changes to the
architecture and how the network is trained.
2.1 Network architecture
In the input layer, the context and question tokens are mapped to high-dimensional word vectors. Our word vectors consist of three components, which are concatenated to form a single vector:
• GloVe embedding: We use 300-dimensional GloVe embeddings² (Pennington et al., 2014), which have been trained on a large collection of web documents.
² We use the 840B embeddings available here: https://nlp.stanford.edu/projects/glove/
p^{i}_{start} = \sigma(y^{i}_{start})   (1)
p^{i,\cdot}_{end} = \mathrm{softmax}(y^{i,\cdot}_{end})   (2)
p^{i,j}_{span} = p^{i}_{start} \cdot p^{i,j}_{end}   (3)
where σ denotes the sigmoid function. By computing the start probability via the sigmoid rather
than softmax function (as used in FastQA), we enable the model to output multiple spans as likely
answer spans. This generalizes the factoid QA network to list questions.
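A minimal NumPy sketch of Equations 1-3 (not the TensorFlow implementation) is:

import numpy as np

def span_probabilities(y_start, y_end):
    """y_start has shape (n,), y_end has shape (n, n); returns p_span[i, j]."""
    p_start = 1.0 / (1.0 + np.exp(-y_start))                              # Eq. (1), sigmoid
    shifted = y_end - y_end.max(axis=1, keepdims=True)                    # numerically stable softmax
    p_end = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)  # Eq. (2)
    return p_start[:, None] * p_end                                       # Eq. (3); a real system would also mask j < i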
2.2 Training & decoding
Loss We define our loss as the cross-entropy of
the correct start and end indices. In the case of
multiple occurrences of the same answer, we only
minimize the span of the lowest loss.
Batch    | Factoid MRR Single | Factoid MRR Ensemble | List F1 Single | List F1 Ensemble
1        | 52.0% (2/10)       | 57.1% (1/10)         | 33.6% (1/11)   | 33.5% (2/11)
2        | 38.3% (3/15)       | 42.6% (2/15)         | 29.0% (8/15)   | 26.2% (9/15)
3        | 43.1% (1/16)       | 42.1% (2/16)         | 41.5% (2/17)   | 49.5% (1/17)
4        | 30.0% (3/20)       | 36.1% (1/20)         | 24.2% (5/20)   | 29.3% (4/20)
5        | 39.2% (3/17)       | 35.1% (4/17)         | 36.1% (4/20)   | 39.1% (2/20)
Average  | 40.5%              | 42.6%                | 32.9%          | 35.1%
Table 1: Preliminary results for factoid and list questions for all five batches and for our single and
ensemble systems. We report MRR and F1 scores for factoid and list questions, respectively. In parentheses, we report the rank of the respective systems relative to all other systems in the challenge. The
last row averages the performance numbers of the respective system and question type across the five
batches.
Optimization We train the network in two steps: First, the network is trained on SQuAD, following the procedure by Weissenborn et al. (2017) (pre-training phase). Second, we fine-tune the network parameters on BioASQ (fine-tuning phase). For both phases, we use the Adam optimizer (Kingma and Ba, 2014) with an exponentially decaying learning rate. We start with learning rates of 10^-3 and 10^-4 for the pre-training and fine-tuning phases, respectively.
Ensemble In order to further tweak the performance of our systems, we built a model ensemble.
For this, we trained five single models using 5-fold
cross-validation on the entire training set. These
models are combined by averaging their start and
end scores before computing the span probabilities (Equations 1-3). As a result, we submit two
systems to the challenge: The best single model
(according to its development set) and the model
ensemble.
BioASQ dataset preparation During fine-tuning, we extract answer spans from the BioASQ training data by looking for occurrences of the gold-standard answer in the provided snippets. Note that this approach is not perfect, as it can produce false positives (e.g., the answer is mentioned in a sentence which does not answer the question) and false negatives (e.g., a sentence answers the question, but the exact string used is not in the synonym list).
Because BioASQ usually contains multiple snippets for a given question, we process all snippets independently and then aggregate the answer spans, sorting globally according to their probability p^{i,j}_span.
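The following sketch illustrates this preparation and aggregation step; the per-snippet scoring function is assumed to exist, and the string matching is deliberately simple.

def extract_spans(snippet, gold_answers):
    """Distant supervision: locate gold answers in a snippet as (start, end) character spans."""
    spans, lowered = [], snippet.lower()
    for answer in gold_answers:
        start = lowered.find(answer.lower())
        if start != -1:
            spans.append((start, start + len(answer)))
    return spans

def aggregate(snippets, score_spans):
    """Score each snippet independently, then sort all candidates globally by span probability."""
    candidates = []
    for snippet in snippets:
        candidates.extend(score_spans(snippet))    # yields (probability, answer_string) pairs
    return sorted(candidates, key=lambda c: c[0], reverse=True)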
Implementation We implemented our system using TensorFlow (Abadi et al., 2016). It was trained on an NVIDIA GeForce Titan X GPU.
Decoding During the inference phase, we retrieve the top 20 answer spans via beam search with beam size 20. From this sorted list of answer strings, we remove all duplicate strings. For factoid questions, we output the top five answer strings as our ranked list of answer candidates. For list questions, we use a probability cutoff threshold t, such that {(i, j) | p^{i,j}_span ≥ t} is the set of answers. We set t to be the threshold for which the list F1 score on the development set is optimized.
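A sketch of these decoding rules (with a placeholder value for t) is:

def decode(ranked_candidates, question_type, t=0.1):
    """ranked_candidates: (probability, answer_string) pairs already sorted by probability."""
    seen, unique = set(), []
    for prob, answer in ranked_candidates:
        if answer not in seen:                     # remove duplicate answer strings
            seen.add(answer)
            unique.append((prob, answer))
    if question_type == "factoid":
        return [a for _, a in unique[:5]]          # top five ranked candidates
    return [a for p, a in unique if p >= t]        # list questions: probability cutoff t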
3 Results & discussion
We report the results for all five test batches of
BioASQ 5 (Task 5b, Phase B) in Table 1. Note
that the performance numbers are not final, as the
provided synonyms in the gold-standard answers
will be updated as a manual step, in order to reflect
valid responses by the participating systems. This
has not been done by the time of writing3 . Note
also that – in contrast to previous BioASQ challenges – systems are no longer allowed to provide
an own list of synonyms in this year’s challenge.
In general, the single and ensemble system are
performing very similar relative to the rest of field:
Their ranks are almost always right next to each
other. Between the two, the ensemble model performed slightly better on average.
On factoid questions, our system has been very
successful, winning three out of five batches. On
³ The final results will be published at http://participants-area.bioasq.org/results/5b/phaseB/
list questions, however, the relative performance
varies significantly. We expect our system to perform better on factoid questions than list questions, because our pre-training dataset (SQuAD)
does not contain any list questions.
Starting with batch 3, we also submitted responses to yes/no questions by always answering
yes. Because of a very skewed class distribution
in the BioASQ dataset, this is a strong baseline.
Because this is done merely to have baseline performance for this question type and because of the
naivety of the method, we do not list or discuss the
results here.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and
Hannaneh Hajishirzi. 2016. Bidirectional attention
flow for machine comprehension. arXiv preprint
arXiv:1611.01603 .
Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. arXiv preprint arXiv:1703.04816.
4 Conclusion
In this paper, we summarized the system design of our BioASQ 5B submission for factoid and list questions. We use a neural architecture which is trained end-to-end on the QA task. This approach has not been applied to BioASQ questions in previous challenges. Our results show that our approach achieves state-of-the-art results on factoid questions and competitive results on list questions.
References
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene
Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado,
Andy Davis, Jeffrey Dean, Matthieu Devin, et al.
2016. Tensorflow: Large-scale machine learning on
heterogeneous distributed systems. arXiv preprint
arXiv:1603.04467 .
Diederik Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint
arXiv:1412.6980 .
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing
systems. pages 3111–3119.
Ioannis Pavlopoulos, Aris Kosmopoulos, and Ion Androutsopoulos. 2014. Continuous space word vectors obtained by applying word2vec to abstracts of biomedical articles. http://bioasq.lip6.fr/info/BioASQword2vec/.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for
word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532–
1543. http://www.aclweb.org/anthology/D14-1162.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and
Percy Liang. 2016. Squad: 100,000+ questions
for machine comprehension of text. arXiv preprint
arXiv:1606.05250 .
George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, et al. 2015. An overview of the BioASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics 16(1):1.
Dirk Weissenborn, George Tsatsaronis, and Michael
Schroeder. 2013. Answering factoid questions in the
biomedical domain. BioASQ@ CLEF 1094.
Yang Zi, Zhou Yue, and Eric Nyberg. 2016. Learning
to answer biomedical questions: Oaqa at bioasq 4b.
ACL 2016 page 23.
| 9 |
IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, VOL. #, NO. #, ??? 2016
Reference Governor Strategies for Vehicle Rollover
Avoidance
arXiv:1608.02266v1 [] 7 Aug 2016
Ricardo Bencatel, Anouck Girard, Senior Member, IEEE, and Ilya Kolmanovsky, Fellow, IEEE
Abstract—The paper addresses the problem of vehicle rollover
avoidance using reference governors applied to modify the
driver steering input in vehicles with an active steering system.
Several reference governor designs are presented and tested with
a detailed nonlinear simulation model. The vehicle dynamics
are highly nonlinear for large steering angles, including the
conditions where the vehicle approaches a rollover onset, which
necessitates reference governor design changes. Simulation results
show that reference governor designs are effective in avoiding
rollover. The results also demonstrate that the controllers are
not overly conservative, adjusting the driver steering input only
for very high steering angles.
Index Terms—Rollover Protection, Rollover Avoidance, Active
Steering, Reference Governor, Command Governor, Nonlinear
Control, Constraint Enforcement.
ACRONYMS
CG     Command Governor
CM     Center of Mass
ECG    Extended Command Governor
ESP    Electronic Stability Program
LRG    Linear Reference Governor
LTR    Load Transfer Ratio
MPC    Model Predictive Controller
MPL    Multi-Point Linearization
NHTSA  National Highway Traffic Safety Administration
NRG    Nonlinear Reference Governor
QP     Quadratic Programming
RG     Reference Governor
I. INTRODUCTION
ROLLOVER is an event where a vehicle's roll angle increases abnormally. In most cases, this results from a loss of control of the vehicle by its driver.
This work focuses on the design of a supervisory controller
that intervenes to avoid such extreme cases of loss of control.
Conversely, in operating conditions considered normal the
controller should not intervene, letting the driver commands
pass through unaltered.
A. Problem Statement
This paper treats the following problem for a vehicle
equipped with an active front steering system. Given vehicle
dynamics, a control model, and a set of predefined rollover
avoidance constraints, find a control law for the steering angle
such that the defined constraints are always enforced and the
R. Bencatel (corresponding author), A. Girard, and I. Kolmanovsky are
with the Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI, 48109 USA; e-mail: [email protected],
{anouck,ilya}@umich.edu.
Manuscript received ??? #, 2016; revised ??? #, 2016.
applied steering is as close as possible to that requested by
the vehicle’s driver.
B. Motivation
Between 1991 and 2001 there was an increase in the number
of vehicle rollover fatalities when compared to fatalities from
general motor vehicle crashes [1]. This has triggered the
development of safety test standards and vehicle dynamics
control algorithms to decrease vehicle rollover propensity.
Rollover remains one of the major vehicle design and control
considerations, in particular, for larger vehicles such as SUVs.
C. Background and Notation
A variety of technologies, including differential braking,
active steering, active differentials, active suspension and many
others, are already in use or have been proposed to assist the
driver in maintaining control of a vehicle. Since the introduction of the Electronic Stability Program (ESP) [2], much
research has been undertaken on the use of active steering,
see e.g., [3], to further enhance vehicle driving dynamics.
According to [4] and [5], there is a need to develop driver
assistance technologies that are transparent to the driver during
normal driving conditions, but would act when needed to
recover handling of the vehicle during extreme maneuvers.
The active steering system has been introduced in production
by ZF and BMW in 2003 [6].
For the rollover protection problem, Solmaz et al. [5], [7],
[8] develop robust controllers which reduce the Load Transfer
Ratio’s magnitude excursions. The presented controllers are
effective at keeping the Load Transfer Ratio (LTR) within
the desired constraints. Their potential drawbacks are that
the controller is always active, interfering with the nominal
steering input, or is governed by an ad hoc activation method.
Furthermore, the controllers were tested with a linear model,
whose dynamics may differ significantly from more realistic
nonlinear models, in particular for larger steering angles, at
which rollover is probable.
Constrained control methods have evolved significantly in
recent years to a stage where they can be applied to vehicle
control to enforce pointwise-in-time constraints on vehicle
variables, thereby assisting the driver in maintaining control
of the vehicle. Reference [9] is an indication of this trend.
We employ mostly standard notations throughout. We use
[a, b] to denote an interval (subset of real numbers between a
and b) for either a < b or a > b.
D. Original Contributions
This paper illustrates the development and the application of
reference and extended command governors to LTR constraint
enforcement and vehicle rollover avoidance. The reference
governor (see [10] and references therein) is a predictive
control scheme that supervises and modifies commands to
well-designed closed-loop systems to protect against constraint
violation. In the paper, we consider the application of both
linear and nonlinear reference governor design techniques to
vehicle rollover protection. While in the sequel we refer to the
reference governors modifying the driver steering input, we
note that they can be applied to modify the nominal steering
angle generated by another nominal steering controller.
Our linear reference governor design exploits a family of
linearizations of the nonlinear vehicle model for different
steering angles. The linearized models are used to predict the
vehicle response and to determine admissible steering angles
that do not lead to constraint violation. In an earlier conference
paper [11], reference and extended command governor designs
for steering angle modification were proposed and validated on
a linear vehicle model. This paper is distinguished by extending the design methodology to include compensation for nonlinear behavior, and validating the design on a comprehensive
nonlinear vehicle model that includes nonlinear tire, steering,
braking and suspension effects. The strong nonlinearity of
the vehicle dynamics for high steering angles, the conditions
where the vehicle is at risk of rolling over, caused the simpler
controller presented in [11] to produce steering commands
that were too conservative. The Linear Reference Governor
(LRG) and Extended Command Governor (ECG) presented
in Section III-G compensate for the strong nonlinearities with
low interference over the driver input, while maintaining a low
computational burden and a high effectiveness in avoiding car
wheel lift.
For comparison, a Nonlinear Reference Governor (NRG)
is also developed, which uses a nonlinear vehicle model
for prediction of constraint violation and for the onboard
optimization. This online reference governor approach is more
computationally demanding but is able to take into account
nonlinear model characteristics in the prediction.
The original contributions of this work are summarized as
follows:
E. Paper Structure
The paper is organized as follows. Section II describes the
vehicle models as well as the control constraints and the
performance metrics used to evaluate the vehicle dynamic
response under different designs. Section III-G describes reference governors considered in this paper, and includes preliminary performance evaluation to support the controller design
decisions. Section IV illustrates the simulation results obtained
with the different Reference Governors (RGs) and comments
on their comparative performance. Section V describes the
conclusions of the current work, the current complementary
research, and the follow-up work envisioned.
1) This paper demonstrates how Reference and Extended
Command Governors [10], both linear and nonlinear,
can improve the output adherence to the driver reference, while maintaining the vehicle within the desired
constraints.
2) Conservatism, effectiveness, and turning response metrics are defined to evaluate, respectively, the controller
command adherence to the driver reference, the constraints enforcement success, and the adherence of the
vehicle trajectory in the horizontal plane to that desired
by the driver.
3) Several methodological extensions to reference governor
techniques are developed that can be useful in other
applications.
II. VEHICLE AND CONSTRAINT MODELING
A. Nonlinear Car Model
The nonlinear vehicle dynamics model is developed following [5], [12]–[14]. The model includes a nonlinear model for
the tires’ friction and the suspension. The suspension model
includes parameters to allow the simulation of a differential
active suspension system.
1) Vehicle Body Equations of Motion: Assuming that the
sprung mass rotates about the Center of Mass (CM) of the
undercarriage, that the car inertia matrix is diagonal, and that
all the car wheels touch the ground, the car nonlinear model
is defined by:
F_{x,T} = m(\dot{u} - vr) + m_{SM} h_{SM}\, p\, r\cos\phi,   (1a)
F_{y,T} = m(\dot{v} + ur) - m_{SM} h_{SM}\left(\dot{p}\cos\phi - p^{2}\sin\phi\right),   (1b)
L_{T} = -K_{s}\left(1-\Delta k_{ss}\right)^{2}\tan\phi - D_{s}\left(1-\Delta d_{ss}\right)^{2} p\cos\phi - mg\left(\Delta k_{ss} + \Delta d_{ss}\right),   (1c)
N_{T} = I_{zz}\,\dot{r},   (1d)
\dot{p} = \frac{h_{SM} m_{SM}\left[\dfrac{F_{y,T}}{m} + \sin\phi\left(g + h_{SM}\dfrac{m_{UC}}{m}\,p^{2}\right)\right] + L_{T}}{I_{xx,SM} + h_{SM}^{2} m_{SM}\dfrac{m_{UC}}{m}\cos\phi}.
Most of the model parameters are illustrated in Figure 1 and
their values are given in Table II for the simulation model
used. The model states are the vehicle velocity components
in the horizontal plane (u, v), roll (φ), roll rate (p), and turn
rate (r). Fx,T , Fy,T , LT , and NT are the forces and moments
acting on the car through the tires. mSM , mUC , m, Ixx,SM
and Izz are the sprung mass, the undercarriage, and the overall
vehicle mass and inertia moments, respectively. Ks and Ds
are the suspension roll stiffness and damping coefficients, and
∆k ss and ∆dss are the suspension roll differential stiffness
and damping factors.
The simulation results (sec. IV) include some instants where
the wheels on one of the sides lift from the road. In such conditions, the car dynamics are similar to a two segment inverted
pendulum [15]. For the sake of readability and because it is not
relevant for the design of the reference governors that maintain
vehicle operation away from this condition, the extension of
the vehicle equations of motion for the wheel lift condition is
not presented here.
Fig. 4. Tire lateral force variation with the vertical load (lateral force vs. slip angle α, in degrees). The solid red line is the lateral force generated at nominal vertical load.
(a) Rear view.
(b) Top view.
Fig. 1. Vehicle forces diagram.
2) Magic Formula Tire Model: The main source of nonlinearities in the equations of motion is the tire forces’
dependence on slip. In this work, we use the Magic Formula
tire model [13], [16]–[18]. To compute the tire forces, the slip
ratio (λ) and the tire slip angle (α) are defined as (fig. 2):
\lambda = \begin{cases} \dfrac{R_{w}\omega_{w} - u_{w}}{u_{w}}, & R_{w}\omega_{w} < u_{w}\\[4pt] \dfrac{R_{w}\omega_{w} - u_{w}}{R_{w}\omega_{w}}, & R_{w}\omega_{w} \ge u_{w}\end{cases}   (2a)
\alpha_{f} = \delta_{f} - \tan^{-1}\left(\dfrac{v + l_{f} r}{u}\right),   (2b)
\alpha_{r} = \tan^{-1}\left(\dfrac{-v + l_{r} r}{u}\right).   (2c)
The tire slip ratio (λ) characterizes the longitudinal slip. It is
defined as the normalized difference between wheel hub speed
(uw ) and wheel circumferential speed (Rw ωw ). The tire slip
angle (α) is the angle between the tire heading direction and
the velocity vector at the contact patch (fig. 2). Equations (2b)
and (2c) define the tire slip angle for the forward and rear
wheels, respectively. For combined longitudinal and lateral
slip, the Magic Formula takes the following form [12] (fig. 3):
\begin{bmatrix} F_{x}^{T}\\ F_{y}^{T}\end{bmatrix} = F_{P}\, P(s_{c}, C, E)\, \hat{s},   (3)
P(s_{c}, C, E) = \sin\!\left[C\tan^{-1}\!\left(\frac{s_{c}}{C}(1-E) + E\tan^{-1}\frac{s_{c}}{C}\right)\right],
s_{c} = \frac{C_{\alpha}\,\|s\|}{F_{P}}, \qquad C_{\alpha} = c_{1}\, m g\left(1 - e^{-\frac{c_{2} F_{z}}{mg}}\right), \qquad c_{1} = \frac{BCD}{4\left(1 - e^{-\frac{c_{2}}{4}}\right)},
F_{P} = \frac{1.0527\, D\, F_{z}}{1 + \left(\frac{1.5 F_{z}}{mg}\right)^{3}}, \qquad s = \begin{bmatrix} s_{x}\\ s_{y}\end{bmatrix} = \begin{bmatrix} \lambda\\ \tan\alpha\end{bmatrix}, \qquad \hat{s} = \frac{s}{\|s\|},
where FxT and FyT are the forces along the tire longitudinal and
lateral axis, x and y (fig. 2), respectively. Cα is the cornering
stiffness, FP is the horizontal (or slip) peak force, ŝ is the
total slip, and B, C, D, E, and c2 are tire parameters that
depend on the tire characteristics and the road conditions (see
Table I).
Fig. 2. Tire slip angle (α) at the contact patch. V is the total tire speed.
TABLE I: Tires Magic Formula model parameters.
Conditions | B    | C    | D    | E    | c2
Dry        | 7.15 | 2.30 | 0.87 | 1.00 | 1.54
Wet        | 9.00 | 2.50 | 0.72 | 1.00 | 1.54
Snow       | 5.00 | 2.00 | 0.30 | 1.00 | 1.54
Ice        | 4.00 | 2.00 | 0.10 | 1.00 | 1.54
Fig. 3. Tire lateral force variation with the side slip for several levels of slip ratio.
Figure 4 illustrates the variation of the lateral force with the vertical load. The illustrated model presents almost the same side slip angle for all vertical load cases, and an initial slope decreasing for lower vertical loads. This behavior is governed by the parameter c2.
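A Python sketch of the combined-slip computation, following the reconstructed form of (3) and the dry-road parameters of Table I, is given below; the exact grouping of terms in the original typeset equations may differ, so this is illustrative only.

import numpy as np

B, C, D, E, c2 = 7.15, 2.30, 0.87, 1.00, 1.54      # Table I, dry road

def tire_forces(slip_ratio, slip_angle, Fz, m, g=9.81):
    s = np.array([slip_ratio, np.tan(slip_angle)])  # total slip vector [s_x, s_y]
    norm = np.linalg.norm(s) + 1e-12
    c1 = B * C * D / (4.0 * (1.0 - np.exp(-c2 / 4.0)))
    C_alpha = c1 * m * g * (1.0 - np.exp(-c2 * Fz / (m * g)))      # cornering stiffness
    F_P = 1.0527 * D * Fz / (1.0 + (1.5 * Fz / (m * g)) ** 3)      # peak horizontal force
    sc = C_alpha * norm / F_P
    P = np.sin(C * np.arctan(sc / C * (1.0 - E) + E * np.arctan(sc / C)))
    return F_P * P * s / norm                        # [Fx, Fy] at the contact patch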
B. Constraints
The vehicle constraints reflect the system’s physical limits
or requirements to keep the vehicle state in a safe region.
There are several options to define constraints that protect the
vehicle from rollover. One possibility is to define the rollover
constraints as the states where the car roll is so large that
there is no possible recovery. This approach would require the
treatment of complex, hybrid vehicle dynamics, which involve
the vehicle continuous dynamics states plus two discrete states
(all wheels on the road state and wheel liftoff state), and in
the case of wheel liftoff, multiple-body inverted pendulum-like
dynamics.
In this paper, following also [5], [7], [8], more conservative
rollover avoidance constraints are treated, which are, however,
simpler to enforce. These constraints are defined through the
Load Transfer Ratio (LTR). The LTR measures how much of
the vehicle vertical load is concentrated on one of the vehicle
sides:
LTR := \frac{F_{z,R} - F_{z,L}}{mg}.   (4)
The wheel liftoff happens when the LTR increases above 1
or decreases below −1, i.e., when the right or left wheels,
respectively, bear all the car’s weight. Hence, the rollover
avoidance constraints are imposed as:
LT R ∈ [−LT Rlim , LT Rlim ] , 0 < LT Rlim < 1.
(5)
Remark 1. Note that the absolute value of the LTR may
exceed 1, even if wheel liftoff does not occur. This can
happen, in particular, due to the suspension roll moment, e.g.,
generated by the spring stored energy, and by the CM vertical
acceleration during wheel liftoff.
The steering input is also considered to be constrained:
δSW ∈ [−δSW,lim , δSW,lim ] .
(6)
C. Linearized Car Model
The vehicle linear model has the following form,
ẋ = Ax + BδS kδSW ∆δSW ,
(7a)
y = Cx + DδSW ∆δSW ,
(7b)
where x = [∆v, ∆p, ∆r, ∆φ] correspond to the relevant lateral
state variables. ∆δSW is the steering control input deviation.
The ratio between the steering wheel angle and the forward
tires steering angles is kδSW .
Linearizing (1), the vehicle linearized model is obtained,
with the dynamics matrix of the form,
A := \begin{bmatrix} \frac{\partial F_{y,T}}{\partial v}\frac{1}{m'} & h'_{SM}\frac{\partial \dot{p}}{\partial p} & \frac{\partial F_{y,T}}{\partial r}\frac{1}{m'} - u & h'_{SM}\frac{\partial \dot{p}}{\partial \phi}\\[4pt] \frac{\partial F_{y,T}}{\partial v}\frac{h_{SM}}{I'_{xx}} & \frac{\partial \dot{p}}{\partial p} & \frac{\partial F_{y,T}}{\partial r}\frac{h_{SM}}{I'_{xx}} & \frac{\partial \dot{p}}{\partial \phi}\\[4pt] \frac{\partial N_{T}}{\partial v}\frac{1}{I_{zz}} & 0 & \frac{\partial N_{T}}{\partial r}\frac{1}{I_{zz}} & 0\\[4pt] 0 & 1 & 0 & 0 \end{bmatrix},   (8)
where
m' = \frac{m^{2} I'_{xx}}{m I'_{xx} + h_{SM}^{2} m_{SM}^{2} \cos\phi_{0}},   (9a)
I'_{xx} = I_{xx,SM} + h_{SM}^{2} m_{SM}\frac{m_{UC}}{m}\cos\phi_{0},   (9b)
h'_{SM} = \frac{h_{SM} m_{SM}}{m},   (9c)
\frac{\partial \dot{p}}{\partial p} = \frac{2 h_{SM}^{2} m_{SM} p_{0} \frac{m_{UC}}{m}\sin\phi_{0} - D_{s}\left(1-\Delta d_{ss,0}\right)^{2}\cos\phi_{0}}{I'_{xx}},   (9d)
\frac{\partial \dot{p}}{\partial \phi} = \left[m_{SM}\, g\, h_{SM} - K_{s}\left(1-\Delta k_{ss,0}\right)^{2}\left(1 + \tan^{2}\phi_{0}\right) - D_{s}\left(1-\Delta d_{ss,0}\right)^{2} p_{0}\sin\phi_{0}\right] / I'_{xx}.   (9e)
The steering control matrix is defined as:
B_{\delta_{S}} := \begin{bmatrix} \frac{\partial F_{y,T}}{\partial \delta_{f}}\frac{1}{m'} & \frac{\partial F_{y,T}}{\partial \delta_{f}}\frac{h_{SM}}{I'_{xx}} & \frac{\partial N_{T}}{\partial \delta_{f}}\frac{1}{I_{zz}} & 0 \end{bmatrix}^{\intercal},   (10)
The system output matrices are defined by the operation
constraints (sec. II-B):
C := \begin{bmatrix} 0 & \frac{2 D_{s}}{mgT} & 0 & \frac{2 K_{s}}{mgT}\\ 0 & 0 & 0 & 0 \end{bmatrix},   (11a)
D_{\delta_{S}} := \begin{bmatrix} 0 & 1 \end{bmatrix}^{\intercal}.   (11b)
D. Performance Metrics
This study uses four performance metrics: the step computation time, the effectiveness, the conservatism, and the turning
response. We have chosen not to use a metric that compares
the vehicle positions between the reference trajectory and
the trajectory affected by the controllers as there are several
reference trajectories that end in a full rollover.
The step computation time is the time it takes the controller
to perform the constraint enforcement verification and compute a command. The effectiveness metric is the success rate
in avoiding wheel lift up to a wheel lift limit. For each test:
\eta_{lift} := 1 - \frac{\max z_{wheel}}{z_{wheel,lim}},   (12)
where max zwheel is the maximum wheel lift attained during
a test, and zwheel,lim is the wheel lift limit considered.
The conservatism metric indicates how much the controller
is over-constraining the steering command when compared
with a safe steering command:
\chi := \frac{\int_{0}^{T}\left(|\delta_{SW,ref} - \delta_{SW}| - |\delta_{SW,ref} - \delta_{SW,safe}|\right) dt}{\int_{0}^{T} |\delta_{SW,ref}|\, dt},   (13)
where δSW,ref is the driver reference command during the
maneuver, δSW is the controller command, δSW,saf e is a
reference safe command, and T is the test duration. Two
options for the reference safe command are the driver steering
input scaled down to avoid any wheel lift or the driver
steering input scaled down to produce a maximum wheel lift
of zwheel,lim .
Considering (15), (16), x_{O_\infty} := \begin{bmatrix} u & x \end{bmatrix}^{\intercal}, and a linear model (7), we define an inner approximation of the maximum output admissible set as
Fig. 5. Command governor control architecture for a steering command.
The turning response metric indicates how much the controller is limiting the vehicle turn rate relative to the driver
desired turning rate when compared with the turn rate achieved
with a safe steering command:
\eta_{\psi} := \frac{\int_{0}^{T}\left(|\psi_{des} - \psi_{safe}| - |\psi_{des} - \psi|\right) dt}{\int_{0}^{T} |\psi_{SW,ref}|\, dt},   (14)
where the driver desired turning rate is inferred from the reference steering command and the steering-to-turn-rate stability derivative: \psi_{des} = \left.\frac{d\psi}{d\delta_{SW}}\right|_{\delta_{SW}=0} \delta_{SW,ref}. ψ and ψ_safe are the turn rates caused by the controller command and the reference safe command, respectively.
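The following sketch computes the effectiveness (12) and conservatism (13) metrics from sampled time histories; trapezoidal integration is an implementation choice, not part of the definitions.

import numpy as np

def effectiveness(z_wheel, z_wheel_lim):
    return 1.0 - np.max(z_wheel) / z_wheel_lim                      # Eq. (12)

def conservatism(t, delta_ref, delta_ctrl, delta_safe):
    num = np.trapz(np.abs(delta_ref - delta_ctrl)
                   - np.abs(delta_ref - delta_safe), t)
    return num / np.trapz(np.abs(delta_ref), t)                     # Eq. (13)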
O_\infty := \{x_{O_\infty} \mid A_{O_\infty} x_{O_\infty} \le b_{O_\infty}\} \subset \mathbb{R}^{4+n},   (17a)
A_{O_\infty} := \begin{bmatrix} A_{y} D & A_{y} C\\ A_{y}\left(C(I-A)^{-1}(I-A)B + D\right) & A_{y} C A\\ \vdots & \vdots\\ A_{y}\left(C(I-A)^{-1}(I-A^{N})B + D\right) & A_{y} C A^{N}\\ A_{y} H & 0 \end{bmatrix},   (17b)
b_{O_\infty} := \begin{bmatrix} b_{y}\\ b_{y}\\ \vdots\\ b_{y}\\ b_{y}(1-\epsilon) \end{bmatrix},   (17c)
H = C(I-A)^{-1}B + D,   (17d)
where N is the selected prediction horizon and ǫ > 0. Under
mild assumptions [19], O∞ is the same for all N sufficiently
large and is positively invariant (for constant u) and satisfies
constraints pointwise. Generally, such an N is comparable to
the settling time of the system.
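A sketch of this construction for a generic discrete-time linear model (A, B, C, D) with output constraints A_y y ≤ b_y is given below; it is illustrative and not the authors' code.

import numpy as np

def build_O_inf(A, B, C, D, Ay, by, N, eps=0.01):
    nx = A.shape[0]
    I = np.eye(nx)
    H = C @ np.linalg.solve(I - A, B) + D                     # steady-state gain
    rows_u, rows_x, rhs = [], [], []
    Ak = np.eye(nx)
    for k in range(N + 1):
        Gu = C @ np.linalg.solve(I - A, (I - Ak) @ B) + D     # input-to-output map after k steps
        rows_u.append(Ay @ Gu)
        rows_x.append(Ay @ (C @ Ak))
        rhs.append(by)
        Ak = Ak @ A
    rows_u.append(Ay @ H)                                      # tightened steady-state row
    rows_x.append(np.zeros_like(Ay @ C))
    rhs.append((1.0 - eps) * by)
    A_O = np.hstack([np.vstack(rows_u), np.vstack(rows_x)])    # acts on the stacked vector [u; x]
    return A_O, np.concatenate(rhs)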
B. Linear Reference Governor (LRG)
III. REFERENCE AND COMMAND GOVERNORS
This work implements different versions of Reference and
Command Governor (CG) controllers, collectively referred to
by their common name as reference governors. Our reference
governors modify the reference command, the steering angle
(fig 5), if this reference command is predicted to induce a
violation of the LTR constraints (sec. II-B).
The reference governors solutions studied for this application are based on both linear and nonlinear models. The
following sections describe the various solutions in more
detail.
A. Linear Reference Governors and Command Governors
The LRG computes a single command value on every
update using the above O∞ set which we re-write as
O_\infty := \left\{(u, x) \,\middle|\, y_{k} = \left[C(I-A)^{-1}(I-A^{k})B + D\right]u + CA^{k} x \in \mathcal{Y},\ k = 0, \ldots, N\right\} \cap \Gamma \subset \mathbb{R}^{4+n},   (18)
\Gamma = \{(u, x) \mid Hu \in (1-\epsilon)\,\mathcal{Y}\}.
By applying the O∞ to the current state, the controller checks
if the reference or an alternative command are safe. If the
reference command is deemed safe, it is passed to the actuator.
If not, the controller selects an alternative safe command that
minimizes the difference to the reference:
vk = argmax {kRG |vk = vk−1 + kRG (uk − vk−1 ) ,
vk
Both the Linear Reference Governors and the Command
Governors rely on a maximum output admissible set O∞ (or
its subsets) to check if a reference command is safe, i.e., if it
does not lead to constraint violation, and to compute a safe
alternative command, if necessary. The O∞ set characterizes
the combinations of constant commands and vehicle states that
do not lead to constraint violating responses,
(15)
O∞ := (u, x) |yk ∈ Y, ∀k ∈ Z0+ ⊂ R4+n ,
where n = 1 is the number of control variables, x and u are
the state and command at the present moment and yk , k ∈
Z0+ , is the predicted evolution of the system output. The set
Y represents the constraints and delineates safe outputs
Y := {y|Ay y ≤ by } ⊂ Rl ,
(16)
where l is the number of system outputs on which constraints
are imposed.
where uk is the current reference command, xk is the current
state, and vk−1 is the previous command used.
Remark 2. Because the reference governor checks at each update if a command is safe for the future steps, vk−1 is assured
to be safe for the current step, provided an accurate model of
the system is used. As such, in the optimization process (19),
one only needs to analyze the interval u ∈ [vk−1 , uk ].
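A sketch of the resulting scalar update, using bisection on k_RG over the half-space description of O∞, is given below (illustrative, not the authors' implementation).

import numpy as np

def lrg_update(A_O, b_O, x, u, v_prev, iters=20):
    def admissible(v):
        z = np.concatenate([np.atleast_1d(v), x])   # stacked [v; x]
        return np.all(A_O @ z <= b_O)
    if admissible(u):                               # reference itself is safe
        return u
    lo, hi = 0.0, 1.0                               # v_prev is assumed safe (Remark 2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if admissible(v_prev + mid * (u - v_prev)):
            lo = mid
        else:
            hi = mid
    return v_prev + lo * (u - v_prev)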
C. Extended Command Governor (ECG)
The ECG [20] is similar to the LRG, but, when it detects
an unsafe reference command, it computes a sequence of safe
commands governed by:
v = \bar{C}\bar{x} + \rho,   (20a)
\bar{x}_{k+1} = \bar{A}\bar{x},   (20b)
where v is the safe command (output of the ECG) with
dynamics governed by (20b), x̄ is the virtual state vector of
the command dynamics, and ρ is the steady state command
to which the sequence converges. The matrices Ā and C̄ are
two of the design elements of the ECG. They are defined at
the end of this section. The key requirement is that the matrix
Ā must be Schur.
To detect if a reference command is safe, the ECG uses the
LRG O∞ set (18). If the reference command is unsafe, the
ECG uses an augmented Ǒ∞ set that takes into account the
computed command dynamics:
\check{O}_\infty := \left\{(\rho, x_{aug}) \,\middle|\, y_{k} = \left[C_{aug}(I-A_{aug})^{-1}(I-A_{aug}^{k})B_{aug} + D_{aug}\right]\rho + C_{aug}A_{aug}^{k} x_{aug} \in \mathcal{Y},\ k = 0,\ldots,N\right\} \cap \Gamma' \subset \mathbb{R}^{4+n+m},   (21)
\Gamma' = \{(\rho, x_{aug}) \mid H\rho \in (1-\epsilon)\,\mathcal{Y}\},
where m is the size of x̄, x⊺aug := [x⊺ , x̄⊺ ]⊺ is an augmented
state vector, and the matrices A_aug, B_aug, C_aug, D_aug correspond to the augmented system. We note that this event-triggered execution of the ECG is quite effective in reducing
average chronometric loading and to the authors’ knowledge,
has not been previously proposed elsewhere.
In the ECG, both the steady state command and the initial
virtual state are optimized. This optimization is performed by
solving the following quadratic programming problem:
x' = \operatorname*{argmin}_{x'}\left\{\tfrac{1}{2}\,x'^{\intercal} H x' + f\rho \,\middle|\, (\rho, x_{aug}) \in \check{O}_\infty\right\},   (22)
x' := \begin{bmatrix} \bar{x}\\ \rho \end{bmatrix}, \qquad H := \begin{bmatrix} P & 0\\ 0 & Q \end{bmatrix}, \qquad f := -u_{k}^{\intercal} Q,
where P and Q are symmetric positive-definite matrices. In
this work Q := kL I > 0 is the tuning matrix, while P is
computed by solving the discrete-time Lyapunov equation:
Ā⊺ PĀ − P + Q = 0.
(23)
The safe command is then computed by (20a).
Several possibilities for the matrices Ā and C̄ exist [10].
These matrices can define a shift register sequence, making
the ECG behave like a Model Predictive Controller (MPC), or
a Laguerre sequence, as follows:
\bar{A} = \begin{bmatrix} \alpha I_{m} & \mu I_{m} & -\alpha\mu I_{m} & \ldots & (-\alpha)^{N-2}\mu I_{m}\\ 0 & \alpha I_{m} & \mu I_{m} & \ldots & (-\alpha)^{N-3}\mu I_{m}\\ & & \ddots & & \vdots\\ 0 & 0 & 0 & \alpha I_{m} & \mu I_{m}\\ 0 & 0 & 0 & \ldots & \alpha I_{m} \end{bmatrix},   (24a)
\bar{C} = \begin{bmatrix} I_{m} & -\alpha I_{m} & \alpha^{2} I_{m} & \ldots & (-\alpha)^{N-1} I_{m} \end{bmatrix},   (24b)
where µ = 1 − α and 0 ≤ α ≤ 1 is a tuning parameter
that corresponds to the time constant of the command virtual
dynamics. If α = 0, the ECG becomes a shift register.
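For a scalar command (m = 1), the reconstructed pattern of (24) can be built as follows (illustrative sketch only):

import numpy as np

def laguerre_matrices(alpha, N):
    mu = 1.0 - alpha
    A_bar = np.zeros((N, N))
    for i in range(N):
        A_bar[i, i] = alpha                              # alpha on the diagonal
        for j in range(i + 1, N):
            A_bar[i, j] = mu * (-alpha) ** (j - i - 1)   # mu, -alpha*mu, ... above the diagonal
    C_bar = np.array([(-alpha) ** i for i in range(N)])  # [1, -alpha, alpha^2, ...]
    return A_bar, C_bar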
D. Steering Control
The output and the constraints of the RG are defined as follows:
y := \begin{bmatrix} LTR & \delta_{SW} \end{bmatrix}^{\intercal},   (25a)
A_{y} := \begin{bmatrix} 1 & 0\\ -1 & 0\\ 0 & 1\\ 0 & -1 \end{bmatrix}, \qquad b_{y} := \begin{bmatrix} LTR_{lim}\\ LTR_{lim}\\ \delta_{SW,lim}\\ \delta_{SW,lim} \end{bmatrix}.   (25b)
The discrete time step for the dynamics matrices is ∆t = 0.01 s and the prediction horizon is N = 100. For the ECG, the optimization gain is k_L = 1, the virtual state vector size is 4, and α = 1 − ∆t/τ_car, so that the virtual dynamics match the vehicle dynamics time constant, which, as we have determined empirically, appears to provide the best response properties.
E. Nonlinear Compensation
Both the Linear Reference Governor and the Extended
Command Governor rely on linear model predictions to avoid
breaching the defined constraints. In reality, the controller is
acting on a vehicle with nonlinear dynamics. This results in
deviations between the predicted and the real vehicle response.
1) Nonlinear Difference: For the same state, there is a
difference between the linear model output prediction and
the vehicle’s real output variables, which we refer to as the
nonlinear difference:
d = f (x, u) − Cx − Du − y0 .
(26)
This difference, further exacerbated by the error in the state
prediction by the linear model, can either cause the vehicle to
breach the constraints when the controller does not predict
so, or cause the controller generated command to be too
conservative. This effect can be mitigated if the controller
takes into account the current nonlinear difference for the
current command computation, assuming that it is persisting
over the prediction horizon. As an example, the nonlinear
difference can be compensated in the LRG by including it
in the O∞ set:
\check{O}_\infty := \left\{(v, x, d) \,\middle|\, y_{k} = \left[C(I-A)^{-1}(I-A^{k})B + D\right]v + CA^{k} x + d \in \mathcal{Y},\ k = 0,\ldots,N\right\} \cap \Gamma'' \subset \mathbb{R}^{4+n+l},   (27)
\Gamma'' = \{(v, x, d) \mid Hv + d \in (1-\epsilon)\,\mathcal{Y}\}.
To use the nonlinear difference in the controller, the xO∞
vector and the AO∞ matrix are extended to account for d:
\breve{x}_{O_\infty}^{\intercal} := \begin{bmatrix} u^{\intercal} & x^{\intercal} & d^{\intercal} \end{bmatrix},   (28a)
\breve{A}_{O_\infty} := \begin{bmatrix} A_{O_\infty} & \begin{matrix} A_{y}\\ A_{y}\\ \vdots\\ A_{y} \end{matrix} \end{bmatrix}, \qquad \breve{b}_{O_\infty} := b_{O_\infty}.   (28b)
2) Multi-Point Linearization (MPL): The nonlinear difference compensation in the previous subsection reduces the nonlinear effects in the vicinity of the system’s current state. However, in this work and in particular for large input commands,
that are likely to cause a rollover, this alone is insufficient.
Figure 6 shows how much the vehicle dynamics’ poles can
change for a range of steering angles. If the controller uses
a model linearized around the no actuation point (δSW = 0),
the controller becomes too conservative. The use of multiple
linearization points to define multiple O∞ sets has proved to
be an appropriate compensation. The multi-point linearization
results in a less conservative controller, when compared to
a controller just with the nonlinear difference compensation
(fig. 7a).
The control strategy is the same as described in the previous
sections. The difference is that several linearization points
are selected (fig. 7b) and, for each one, an O∞ set (17) is
computed. The controller then selects the closest linearization
point and corresponding O∞ set, based on the current steering
angle. Note that in Figure 7a and subsequent figures we report
LTR in percent.
F. Command Computation Feasibility
In practical applications, the LRG and ECG optimization
problems, (19) and (22), may become infeasible due to unmodeled uncertainties, in particular, due to the approximation of
the nonlinear vehicle dynamics by a linear model in prediction.
Figure 8a shows the computation feasibility classification
for an example vehicle simulation. This section describes
the different approaches to deal with infeasibility and the
outcomes of their evaluation.
1) Last Successful Command Sequence: The simplest approach, in the event of infeasibility, is to maintain the last
successfully computed command, for the LRG:
vk = vk−1 ,
(29)
or command sequence, for the ECG:
\bar{x}_{k} = \bar{A}\bar{x}_{k-1},   (30a)
\rho_{k} = \rho_{k-1},   (30b)
v_{k} = \bar{C}\bar{x}_{k} + \rho_{k}.   (30c)
(a) Vehicle steering command, LTR response, and
roll response. The dashed blue line is the reference command. The dot-and-dashed red and the
solid black lines are the LRG commands with a
single- and multi-point linearization, respectively,
both with the nonlinear difference compensation.
Fig. 6. Vehicle dynamics' poles for a range of steering angles from 0 degrees (cross) to 150 degrees (circle).
(b) Linearization point selection in the LRG with multi-point linearization.
Fig. 7. LRG with multi-point linearization. Here the linearization points used corresponded to steering angles of δSW = 0, 20, 40, 80 & 130 deg.
2) Command Contraction: In its standard form, the LRG
computation limits the LRG gain to be kLRG ∈ [0, 1], i.e., the
computed command is limited to the interval vk ∈ [vk−1 , uk ].
With the LRG, there are conditions in which maintaining the
last successfully computed command might be problematic. If
the solution infeasibility is caused by the differences between
the linear model and vehicle nonlinear dynamics, that usually
means that the controller allowed the command to be too large,
allowing the vehicle state to go too close to the constraints,
and maybe breaching them in the near future (fig. 8b). In this
case, modifying (19) to allow the command to contract even
if the reference command is not contracting produces safer
solutions. The computations are modified as follows:
[0, max {vk−1 , uk }] vk−1 > 0 and uk > 0,
S = [min {vk−1 , uk } , 0] vk−1 < 0 and uk < 0, (31)
[vk−1 , uk ]
otherwise,
v = argmin {|v − uk | | (v, x) ∈ O∞ and v ∈ S} .
(32)
v
Figure 8b shows that with the last successful command (RG) the vehicle momentarily breaches the imposed constraints, LTR < −100% close to t = 1.9s, while with the contracted command (RGCC) the LTR is kept within the desired bounds and the command converges to the reference value sooner (@t ≈ 1.9s).
3) O∞ Set Constraints Temporal Removal: This method
is based on the premise that the controller might not be able
to avoid the constraint violation in the near term, but will be
able to bring the system into the desired operation envelope in
the future. The method removes as many initial rows from the
representation of the O∞ set as required to make the command
computation feasible (alg. 1). In Algorithm 1, AO∞^(2il+1:2(Nl+2)) is the matrix composed by all rows between the (2il + 1)-th row and the last, (2(Nl + 2))-th, row of the AO∞ matrix.

i = 0
while v computation fails do
    i = i + 1
    A′O∞ = AO∞^(2il+1:2(Nl+2))
    b′O∞ = bO∞^(2il+1:2(Nl+2))
end
Algorithm 1: Constraint removal from the O∞ set.

Figure 9a indicates the number of rows that had to be removed at each computation step to make the command computation feasible. As shown by Figure 9b, this approach (RGRR) is prone to failure, limiting the steering angle only very slightly (@t ≈ 1.35s) and allowing the car to roll over.

Fig. 8. Feasibility recovery - LRG with command contraction allowed. (a) Classification of the solution computation feasibility for the LRG. Level 1 means the controller is able to compute a solution. Level 0 means that there is no viable solution, because the current output is already breaching the constraints and the current reference is the same as the last used command. The levels −1 and −2 indicate that for some of the points in the prediction horizon a solution would only exist if the gain kLRG was set to more than 1 or less than 0, respectively. Level −3 indicates that for different points in the prediction horizon a solution would only exist if the gain kLRG was set to more than 1 and less than 0, simultaneously. The levels −4, −5 and −6 are set when the conditions for level 0 are verified at the same time as those necessary for the levels −1, −2 and −3, respectively. The dashed blue line for the method with command contraction allowed shows when a solution becomes viable due to the contraction. (b) Vehicle steering command, LTR response, and roll response. The dashed blue line is the reference command. The dot-and-dashed red and the solid black lines are the LRG commands with kLRG ∈ [0, 1] and allowing command contraction, respectively.

Fig. 9. Feasibility recovery - O∞ set rows removal. (a) Number of rows removed from the LRG O∞ set. (b) Vehicle steering command, LTR response, and roll response. The dashed blue line is the reference command. The dot-and-dashed red and the solid black lines are the LRG commands blocking and allowing the O∞ set rows removal, respectively.

Fig. 10. Feasibility recovery - O∞ constraints relaxation. (a) LRG constraint relaxation magnitude. (b) Vehicle steering command, LTR response, and roll response. The dashed blue line is the reference command. The dot-and-dashed red and the solid black lines are the LRG commands without and with constraint relaxation, respectively.

4) O∞ Set Constraints Relaxation: The logic behind this approach is that by allowing the controller to find a feasible solution through constraint relaxation, a solution may be found that unfreezes the command computation. This allows the controller to find a solution that may lead to a behavior closer to the intended one than a locked command. The method expands and contracts the O∞ set by modifying the bO∞ vector:

b′O∞ = (1 + ε) bO∞,   (33)

where ε is the expansion factor. This factor is doubled until the command computation is successful and then the bisection method is used to find the minimum ε for which the command computation is feasible. Figure 10a shows the constraint expansion factor that had to be used at each computation step to make the command computation feasible. Figure 10b shows that this method (RGRC) provides only a small improvement over the last successful command (RG) by allowing the command to converge to the reference value sooner (@t ≈ 2.0s). It still allows the LTR to breach the imposed constraints (@t ≈ 1.9s).
We note that various heuristic and sensitivity-based modifications can be proposed where only some of the constraints are
relaxed but this entails additional computing effort which is
undesirable for this application. Note also that a soft constraint
version of the RG has been proposed in [21]; this strategy has
not been formally evaluated as it is similar to our O∞ set
constraints relaxation approach.
We also note that for the ECG a similar constraint relaxation
method could be implemented, where a relaxation variable is
included as part of the Quadratic Programming (QP) and the
constraints that are most violated are relaxed first.
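As a sketch of the relaxation logic in (33) (our illustration; `solve_lrg` is an assumed stand-in for the constrained command computation, returning None on infeasibility), the expansion factor is doubled until a solution exists and then refined by bisection toward the smallest feasible value.

```python
def relaxed_solve(solve_lrg, b_Oinf, eps0=1e-3, tol=1e-4, max_doublings=20):
    """Approximate the smallest expansion factor eps such that the command
    computation with b' = (1 + eps) * b_Oinf is feasible; see (33)."""
    eps = eps0
    sol = solve_lrg((1.0 + eps) * b_Oinf)
    doublings = 0
    while sol is None and doublings < max_doublings:
        eps *= 2.0                          # double until feasible
        sol = solve_lrg((1.0 + eps) * b_Oinf)
        doublings += 1
    if sol is None:
        return None, None                   # give up; caller keeps the last command
    lo, hi = 0.0, eps                       # bisection for the minimum feasible eps
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        trial = solve_lrg((1.0 + mid) * b_Oinf)
        if trial is None:
            lo = mid
        else:
            hi, sol = mid, trial
    return hi, sol
```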
5) Selected Feasibility Recovery Method: From the different methods tested with the LRG, the command contraction
method was the best performing (sec. IV-C and fig. 19). In the
simulations section, except for the results comparing directly
the infeasibility recovery methods, the LRG is tested with the
command contraction method.
G. Nonlinear Reference Governor (NRG)
The NRG relies on a nonlinear model in prediction to
check if a command is safe or, if otherwise, compute a safe
command. Instead of the O∞ set used by the LRGs, the NRG
uses a nonlinear model (sec. II-A1) to predict the vehicle
response to a constant command for the specified time horizon,
usually comparable to the settling time. If the predicted vehicle
response stays within the imposed constraints, the command
is deemed safe and is passed on. Otherwise, the NRG uses
bisections to find a safe command in the range between the
last passed (safe) command and the reference command. Each
iteration, including the first with the reference command,
involves a nonlinear prediction of the bisection-modified command, checks whether it respects the constraints, and classifies
the command as safe or unsafe. These bisection iterations
numerically minimize the reference-command difference, i.e.,
the difference between the reference command and the used
safe command. The number of iterations is a configuration
parameter, and governs the balance between the computation
time and the solution accuracy. Parametric uncertainties can be
taken into account following the approach in [22], but these
developments are left to future work.
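A compact sketch of this bisection loop is given below (our illustration; `predict_safe` is an assumed stand-in for the nonlinear prediction and constraint check over the horizon).

```python
def nrg_command(u_ref, v_last_safe, predict_safe, n_iter=4):
    """Nonlinear Reference Governor: check the reference command first; if it is
    unsafe, bisect between the last safe command and the reference command."""
    if predict_safe(u_ref):
        return u_ref                      # iteration 1: the reference is already safe
    lo, hi = v_last_safe, u_ref           # lo is safe, hi is unsafe
    for _ in range(n_iter - 1):           # remaining bisection iterations
        mid = 0.5 * (lo + hi)
        if predict_safe(mid):
            lo = mid                      # keep the safe command closest to the reference
        else:
            hi = mid
    return lo
```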
Figure 11 shows that the NRG with a single iteration
(NRG1), i.e., a simple verification of the trajectory constraints
with the reference command, produces very similar results
when compared with the NRG with three extra bisections
(NRG4). The NRG4 initially allows a slightly less constrained
command and then has slightly smoother convergence with
the reference command (@t = 0.8s & 1.7s). This behavior
is obtained at the expense of the computational load, taking
about 4 times longer to compute a safe command when the
reference is unsafe. For the illustrated example (fig. 11), the
NRG1 and the NRG4 take an average of 0.16s and 0.31s,
respectively, to compute a safe command.
IV. SIMULATION RESULTS
A. Simulation Setup
1) Simulation Model: The nonlinear simulation model was set up to have a behavior similar to a North American SUV.
The model parameters are listed in Table II.
TABLE II
VEHICLE SIMULATION PARAMETERS.

Parameter    Value
lf           1.160 m
lr           1.750 m
T            1.260 m
hSM          0.780 m
hUC          0.000 m
m            2000 kg
mSM          1700 kg
mUC          300 kg
Ixx,SM       1280 kg·m2
Ixx,UC       202 kg·m2
Iyy,SM       2800 kg·m2
Izz          2800 kg·m2
Ixz,SM       0 kg·m2
kδSM         17.5
KS           73991 N·m
DS           5993 N·m·s/rad
∆kss         0.0%
∆dss         0.0%
Fig. 11. Nonlinear Reference Governor command with 1 and 4 nonlinear iterations per step. (a) Steering command. (b) Vehicle lateral response.
We used a CarSim® simulation model to check the realism of the nonlinear model presented in section II-A, implemented in MATLAB®. Figure 12 illustrates the simulation results
from both models in terms of their lateral dynamics. The
lateral dynamics match very well. All variables except for
the roll angle match both in trend and amplitude. The roll
angle matches in trend, but shows a larger amplitude in our MATLAB® model. More importantly, both simulations match very well in terms of the main rollover metric, the LTR. The simulations diverge more in the last moments, when the roll angle is increasing and the car is in the process of rolling over.

Fig. 12. Vehicle dynamics in a Sine with Dwell maneuver. Comparison between the trajectories simulated by a CarSim model and the nonlinear model (sec. II-A).

Fig. 13. Vehicle dynamics in a Sine with Dwell maneuver. Comparison between the trajectories generated with the intervention of the proposed reference governors.
2) Test Maneuvers: National Highway Traffic Safety Administration (NHTSA) defines several test maneuvers: Sine
with Dwell, J-Turn, and FishHook [1]. In this work, we chose
to test the controllers and demonstrate rollover avoidance for
Sine with Dwell maneuvers (fig. 12). Figure 14b illustrates the
variation of the vehicle roll (sprung mass and undercarriage) and of the maximum wheel lift (hWL,max = T sin|φuc,max|)
with respect to the maximum value of the Sine with Dwell
reference steering angle, showing ≈ 20deg of sprung mass
roll and ≈ 240mm of wheel lift for δSW,lim = 150deg.
B. Trajectories with the Reference Governors (RGs)
Figures 13 and 15 illustrate the effect of the various RGs
on the vehicle trajectory. The simulations have been performed
on our nonlinear vehicle dynamics model in MATLAB®. The
Linear Reference Governor (LRG) used in this comparison
uses the nonlinear difference compensation (sec. III-E1), the
Multi-Point Linearization (MPL) (sec. III-E2), and allows
command contraction (sec. III-F2). The Extended Command
Governor (ECG) uses the MPL (sec. III-E2) and maintains the
last successfully computed command sequence (sec. III-F1)
when an infeasible command computation is found. The
Nonlinear Reference Governor (NRG) performs 4 iterations
to find a suitable command, when the reference command is
deemed unsafe.
It is clear from Figure 13 that the RGs' steering adjustments are different in shape, but not so much in their effect on the LTR, side-slip (β), roll (φ), and lateral acceleration (gy). The amplitude of the turn rate (r) is more limited by the NRG than by the other RGs. The trajectories with the RGs do not diverge much from the trajectory with the reference steering command, up to the moment when the vehicle starts to roll over with the reference command (fig. 15). The RefLift, LRGLift, and NRG4Lift shades over the x-y trajectory illustrate where the LTR breaches the imposed constraints.

Figure 16 highlights with background shades when the RGs are active, adjusting the steering command. The darker shades indicate that the controller was unable to avoid breaching the LTR constraints. The LTR plots in Figures 13 and 16 show that in this simulation instance the LRG allowed the LTR to slightly exceed the constraints (@t ≈ 1s & 1.8s), but was able to maintain the roll angle well under control.

Fig. 14. Variation of the roll propensity for Sine with Dwell maneuvers. The Sine with Dwell maneuvers vary in terms of steering amplitude, defined by the limit steering angle (δSW,lim). (a) Sprung mass and undercarriage maximum roll angles. (b) Maximum wheel lift.

Fig. 15. Vehicle trajectory in a Sine with Dwell maneuver.

Fig. 16. Reference Governors activation, i.e., steering command modification. (a) Steering command. (b) Vehicle trajectory on the X-Y plane.

Fig. 17. Reference safe trajectories used for the reference governors performance evaluation. The Ref trajectory is the unmodified sine-with-dwell maneuver with the maximum steering angle defined for a specific test, in this case 160 degrees. The LimLift trajectory is the sine-with-dwell maneuver that produces the maximum allowable wheel lift (5 cm, 2"). The NoLift trajectory is the sine-with-dwell maneuver with the maximum steering angle that produces no wheel lift. The NRG4 trajectory illustrates the safe maneuver, resulting from the Nonlinear Reference Governor intervention, that is considered to interfere the least with the original trajectory (Ref), while avoiding almost completely wheel lift conditions. The LRG trajectory illustrates a maneuver resulting from the application of the Linear Reference Governor.

C. Reference Governors Performance Comparison

The results presented next characterize the performance of the controllers in terms of constraint enforcement effectiveness, the adherence to the driver reference (conservatism), the adherence to the desired turning rate (turning response), and the controllers' computation time. The results were obtained from simulation runs with a range of Sine-with-Dwell maneuvers' amplitudes between 10 and 160 deg.

Figure 17 illustrates the trajectories that serve as reference in the RGs' performance evaluation and an example of a trajectory with a controller intervention (LRG). The reference safe trajectories used in (13) and (14) are: the reference trajectory, produced by the original command; the limit lift trajectory (LimLift), with the maximum allowable wheel lift (5 cm, 2"); the no lift trajectory (NoLift), that produces no wheel lift; and the quasi-optimal safe trajectory (NRG4), produced by an NRG with 4 iterations. Each reference safe trajectory has its own merits in the evaluation of the reference governors' conservatism. The trajectory produced by
the reference governors will be the least conservative when
compared with the NoLift trajectory. This shows how much
the controller reduces the conservatism when compared to
a simplistic safe trajectory. The comparison with the NRG4
trajectory produces a middle range conservatism evaluation,
allowing us to compare the reference governors’ command
with an almost optimal constraint enforcement strategy. The
reference governors’ trajectory will be the most conservative
when compared with the LimLift trajectory. This shows how
much leeway exists between the reference governors’ commands and the commands that produce the limit lift condition.
The two bottom plots in Figure 18 illustrate the comparison
between different LRG options and the NoLift, NRG4, and
LimLift reference safe trajectories. On all the figures that
illustrate the conservatism and turning response for the same
controller configuration, there are three comparison branches,
where the NoLift and the LimLift trajectories offer the most
advantageous and most disadvantageous comparison trajectories, respectively. That means that the NoLift branch is the
bottom branch in the conservatism plots and the top one in
the turning response plots (fig. 18). The middle branch, the
comparison with a trajectory considered to be NRG4, is the
most interesting, as it shows a comparison with one of the
least conservative trajectories that produce almost no wheel
lift.
Figure 18 shows how the LRG performance improves with
Multi-Point Linearization (MPL) and how it changes with the
selection of linearization points:
• RGMPL1: 0, 20, 40, and 100 deg;
• RGMPL2: 0, 80, 110, and 150 deg;
• RGMPL3: 0, 20, 40, 60, 80, 100, 120, 130, 140, and 150
deg.
Note that fewer linearization points might lead to less conservatism and a better turning response, but may also result in worse effectiveness (fig. 18). With a dense selection of linearization points (the RGMPL3 case), the LRG is 100% effective for all amplitudes of the reference command.
The feasibility recovery methods are compared with the
linearization points of the RGMPL3 case (fig. 19). From the
various feasibility recovery methods presented in Section III-F,
the constraint temporal removal (RGRR), where rows from the O∞ set are removed, is the worst performing. The other three
methods, last successful command (RG), command contraction
(RGCC), and constraints relaxation (RGRC), are similar in
terms of the conservatism metric. The last successful command
(RG) method requires the lowest computational overhead.
Nevertheless, the command contraction (RGCC) method is
preferred, because it provides better effectiveness than the
other three methods when the state estimation includes some
noise (sec. IV-D).
Most of the RGs’ tests shown were run with an LTR constraint of 0.99 or 99%. The LTR constraint upper bound can be
relaxed, as shown in Figure 20, to reduce the controllers' conservatism. For LTRmax = 99, 102, & 105%, the effectiveness is only slightly degraded and the conservatism is reduced by about 10% from LTRmax = 99% to LTRmax = 105%. Note that
the relaxation of the LTR constraint beyond 100% is an ad hoc
tuning method, without any guarantees on the effectiveness
performance.
As expected, the NRG for a single iteration (NRG1), i.e.,
with a check of the actual reference command, runs faster
than the NRG setup for four iterations (NRG4) (bottom plot
of fig. 21). The unexpected result is that the NRG1 setup is
less conservative and has higher effectiveness (top plots of
fig. 21). This happens with Sine with Dwell maneuver, because
the NRG1 is slightly more conservative during the increment
of the reference command, i.e., the moment at which the
NRG command diverges from the reference command, but that
produces an earlier convergence with the reference command
(fig. 11), while the NRG4 takes longer to converge, with a much larger difference between the reference command and the NRG command.

Figure 22 illustrates the performance of the LRG, ECG, and NRG, for the selected configurations. The effectiveness is very similar and over 99% for all the RGs (top plot of fig. 22). That means that the RGs keep the wheels from lifting more than 0.5mm (0.02") from the ground, even when the reference steering would lead to rollover. The LRG and the NRG are less conservative and provide better turn response than the ECG (bottom plots of fig. 22). For the most demanding conditions (larger reference command amplitudes), both the LRG and the ECG have a mean computation time of about 0.005s. In the same conditions, the maximum time for the command computation was lower than 0.01s for the LRG and about 0.02s for the ECG. The NRG has a larger computation time, which is about 0.16s per command update step, when the simulation and control update step is 0.01s. This means that the NRG setup tested is not able to compute the control solution in real time in MATLAB® on the computer used for these tests (64-bit; CPU: Intel® Core™ i7-4600U @ 2.70 GHz; RAM: 8 GB). In C++, the NRG would be about 10 times faster. Also, it may be possible to use slower update rates or shorter prediction horizons with NRGs to reduce the computation times, provided this does not cause performance degradation or increase in constraint violation. We note that the explicit reference governor [23] cannot be used for this application as we are lacking a Lyapunov function.

Fig. 18. Variation in performance of the LRG with a single linearization point (RGNC - LRG with just the nonlinear difference compensation) and different selections of MPLs (RGMPL#).

Fig. 19. Variation in performance of the MPL LRG (RGMPL3's linearization points) with the different feasibility recovery methods: last successful command (RG), command contraction (RGCC), constraints' relaxation (RGRC), and constraint temporal removal (RGRR).

Fig. 20. Variation in performance of the MPL LRG allowing contraction for different choices of LTRmax. LTRmax = 99, 102, & 105% for RG1, RG2, and RG3, respectively.

Fig. 21. Variation in performance of the LRG, NRG1, and NRG4.

Fig. 22. RGs' performance without estimation errors.

D. Performance in the Presence of Estimation Error

In this section, we illustrate the variation of the control performance for 3 controllers: a LRG with command contraction, an ECG, and a NRG with a single iteration (NRG1). The controllers are evaluated through Monte Carlo sampling with a range of estimation error conditions in all states used by the controllers: side-slip, turn rate, roll angle, and roll rate.

Figures 23, 24, 25, and 26 illustrate how the errors in the roll angle affect the controllers' performance. Figure 23 shows that even with roll estimation errors up to 10%, the effectiveness of all the RGs is almost unaffected. Figures 24 and 25 show that the effectiveness of the ECG and the NRG is almost unaffected even for extremely high roll estimation errors. The exception is the LRG controller. Its average effectiveness approaches the limit of 0% for larger reference steering angles, meaning that the wheel could lift an average of 45mm (1.8"). Figure 25 indicates that this only happens in the presence of very high roll estimation errors and extremely high steering amplitudes (> 140 deg).

Figure 26 shows that the conservatism is also affected by the estimation errors. Nevertheless, even with 20% of estimation error in roll, the RGs' conservatism is quite acceptable, and in most test runs it is below 50% for the LRG and NRG and below 60% for the ECG. Most importantly, at lower steering angles, even in the most extreme test runs, the controllers' conservatism is kept below 12% for steering angles that would not cause any wheel lift (δSW < 48 deg) and is kept below 35% for steering angles that would take the wheels to reach the limit wheel lift. Among the controllers compared, the ECG controller seems to be the most sensitive to the roll estimation errors in terms of conservatism. The bottom plots of Figures 23 and 24 show that the controllers' turning response is largely unaffected by the estimation errors. Unlike in the conservatism metric, the ECG seems to be the least affected controller.

Fig. 23. RGs' performance with estimation errors of σ = 10% about the true roll angle.

Fig. 24. RGs' performance with estimation errors of σ = 20% about the true roll angle.

Fig. 25. RGs' effectiveness performance with estimation errors of σ = 20% about the true roll angle.

Fig. 26. RGs' conservatism performance with estimation errors of σ = 20% about the true roll angle.

Figures 27, 28, and 29 show that the controllers' performance is only slightly affected by roll rate estimation errors and is not visibly affected by the side-slip and turn rate errors, even for the high estimation errors (σ = 20%). Figure 30 shows that the effect on the controllers' performance from a combination of roll and roll rate estimation errors is very similar to that of just the roll estimation errors. These results show that the controllers are most sensitive to the errors in the roll angle. Nevertheless, the controllers' average response is adequate, even with estimation errors with a standard deviation of 20%, which is an extremely poor estimation performance.
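A minimal sketch of how such an estimation-error study can be set up is shown below (our illustration; the state names and the controller/simulation interfaces are assumptions). The states fed to the governor are perturbed with zero-mean Gaussian noise of the chosen relative standard deviation.

```python
import numpy as np

def perturb_states(states, sigma_rel, rng):
    """Apply multiplicative Gaussian estimation error (e.g., sigma_rel = 0.2 for 20%)
    to the states used by the governor: side-slip, turn rate, roll angle, roll rate."""
    return {name: value * (1.0 + sigma_rel * rng.standard_normal())
            for name, value in states.items()}

# Example Monte Carlo loop skeleton (assumed interfaces):
# rng = np.random.default_rng(0)
# for run in range(n_runs):
#     ...simulate, feeding perturb_states(true_states, 0.2, rng) to the governor...
```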
Fig. 27. RGs' performance with estimation errors of σ = 20% about the true roll rate.

Fig. 28. RGs' performance with estimation errors of σ = 20% about the true side-slip angle.

Fig. 29. RGs' performance with estimation errors of σ = 20% about the true turn rate.

Fig. 30. RGs' performance with estimation errors of σ = 20% about the true roll angle and true roll rate.

V. CONCLUSIONS AND FUTURE WORK

A. Conclusions

We have presented several Reference Governor (RG) designs for vehicle rollover avoidance using active steering commands. We implemented three types of Reference Governors: a Linear Reference Governor (LRG), an Extended Command Governor (ECG), and a Nonlinear Reference Governor (NRG). The goal of the Reference Governors is to enforce Load Transfer Ratio (LTR) constraints. The Reference Governors predict the vehicle trajectory in response to a reference steering command, from the driver, to check if it respects the Load Transfer Ratio (LTR) constraints. The LRG and the ECG use a linear model to check the safety of the steering command. The NRG uses a nonlinear model to achieve the same goal. The
controllers were tested with a nonlinear simulation model to
check their performance with realistic vehicle dynamics, which are highly nonlinear for large steering angles. The nonlinearity
causes the standard versions of the LRG and the ECG to
be too conservative. We have presented several methods to
compensate for such nonlinearities.
To evaluate the controllers we have defined three performance metrics: effectiveness, conservatism, and turning
response. The effectiveness characterizes how well the constraints are enforced by the RG. The conservatism and the turning response characterize if the controller is too intrusive or
not, by measuring how well the RG command and respective
vehicle trajectory adhere to the driver command and desired
trajectory. An additional evaluation metric is the command
computation time, which characterizes the controller computational load. The NRG provides the best performance in terms
of effectiveness, conservatism, and turning response, but it is
slower to compute. The simulation results show that LRG with
the nonlinear compensations (nonlinear difference and MPL)
provides the best balance between all the metrics. It has a
low computational load, while showing very high constraint
enforcement effectiveness and generating commands with a
conservatism almost as low as the NRG. The simulation results
also show that the Reference Governors’ performance is most
sensitive to the roll estimation errors, but that even with very
high estimation errors (σ = 20%) the Reference Governors
can still enforce the constraints effectively.
VI. FURTHER EXTENSIONS
Currently we are working to extend the current approach
to incorporate differential braking and active suspension commands. It is important to understand the performance limits
in rollover avoidance for each individual command. It is also
important to understand how the different commands can be
combined to provide the most effective and least intrusive
rollover avoidance intervention.
Research is also needed to determine the slowest acceptable
control update rate, i.e., to characterize how slower update
rates degrade the control performance and the driver handling
perception.
This research shows that the presented Reference Governors
can cope with a great deal of estimation errors. Further
research should address the state estimation methodology,
given the limited sensing capabilities in standard cars. With
such methodology, a better estimation error model should be
integrated with the vehicle simulation to verify the Reference
Governors performance with a realistic estimation error model.
The effects of vehicle and road uncertainties on the estimation
and control performance also need to be studied, in order
to understand how the system will perform in the range of
operating conditions in which the real vehicles will operate.
ACKNOWLEDGMENT
The authors would like to thank ZF-TRW that partially
funded this work, and in particular Dan Milot, Mark Elwell
and Chuck Bartlett for their technical support and guidance in
the controllers’ development process.
REFERENCES
[1] NHTSA, “49 cfr part 575 - consumer information - new car assessment
program - rollover resistance,” Oct 2003.
[2] A. van Zanten, “Bosch esp systems: 5 years of experience,” in SAE
2000 Automotive Dynamics & Stability Conference, Troy, Michigan,
May 2000.
[3] J. Ackermann and D. Odenthal, “Damping of vehicle roll dynamics
by gain scheduled active steering,” in European Control Conference,
Karlsruhe, Germany, 1999.
[4] C. R. Carlson and J. C. Gerdes, “Optimal rollover prevention with
steer by wire and differential braking,” in ASME 2003 International
Mechanical Engineering Congress and Exposition, Washington, D.C.,
Nov 15-21 2003, pp. 345–354.
[5] S. Solmaz, M. Corless, and R. Shorten, “A methodology for the design
of robust rollover prevention controllers for automotive vehicles with
active steering,” International Journal of Control, vol. 80, no. 11, pp.
1763–1779, Nov 2007.
16
[6] P. Koehn and M. Eckrich, "Active Steering - The BMW Approach Towards Modern Steering Technology," SAE Technical Paper 2004-01-1105, 2004.
[7] S. Solmaz, M. Corless, and R. Shorten, “A methodology for the design
of robust rollover prevention controllers for automotive vehicles - part
1-differential braking,” in 45th IEEE Conference on Decision & Control.
San Diego, CA, USA: IEEE, Dec. 13-15 2006, pp. 1739 – 1744.
[8] ——, “A methodology for the design of robust rollover prevention
controllers for automotive vehicles: Part 2-active steering,” in American
Control Conference. New York, NY, USA: IEEE, July 9-13 2007, pp.
1606–1611.
[9] P. Falcone, F. Borrelli, H. E. Tseng, J. Asgari, and D. Hrovat, “Linear
time-varying model predictive control and its application to active
steering systems: Stability analysis and experimental validation,” International Journal of Robust and Nonlinear Control, vol. 18, no. 8, pp.
862–875, 2008.
[10] I. V. Kolmanovsky, E. Garone, and S. Di Cairano, “Reference and command governors: A tutorial on their theory and automotive applications,”
in American Control Conference (ACC), 2014, Portland, USA, June
2014, pp. 226–241.
[11] I. V. Kolmanovsky, E. G. Gilbert, and H. Tseng, “Constrained control
of vehicle steering,” in Control Applications, (CCA) Intelligent Control,
(ISIC), 2009 IEEE, Saint Petersburg, Russia, July 2009, pp. 576–581.
[12] J. Zhou, "Active safety measures for vehicles involved in light vehicle-to-vehicle impacts," Ph.D. dissertation, University of Michigan, Ann Arbor,
Michigan, USA, 2009.
[13] A. G. Ulsoy, H. Peng, and M. Çakmakçı, Automotive Control Systems.
Cambridge University Press, June 2012.
[14] M. Abe, Vehicle handling dynamics - Theory and applications, 2nd ed.
Butterworth-Heinemann, Apr. 22 2015.
[15] R. Bencatel, I. V. Kolmanovsky, and A. Girard, “Arc-lab technical
report 2015-008 - car undercarriage dynamics - wheel lift condition,”
University of Michigan - Aerospace Robotics and Control Laboratory,
Tech. Rep., 2015.
[16] D. Karnopp, Vehicle Dynamics, Stability, and Control, ser. Mechanical
Engineering. CRC Press, Jan. 22 2013.
[17] S. Hong, G. Erdogan, J. K. Hedrick, and F. Borrelli, "Tyre-road friction
coefficient estimation based on tyre sensors and lateral tyre deflection:
modelling, simulations and experiments,” Vehicle system dynamics,
vol. 51, no. 5, pp. 627–647, Sep. 24 2013.
[18] J. L. Stein and J. K. Hedrick, “Influence of fifth wheel location on truck
ride quality,” Transportation Research Record, no. 774, pp. 31–39, 1980.
[19] E. G. Gilbert, I. Kolmanovsky, and K. T. Tan, “Discrete-time reference
governors and the nonlinear control of systems with state and control
constraints,” International Journal of Robust and Nonlinear Control,
vol. 5, pp. 487–504, 1995.
[20] E. G. Gilbert and C.-J. Ong, “Constrained linear systems with hard
constraints and disturbances: An extended command governor with large
domain of attraction,” Automatica, vol. 47, no. 2, pp. 334–340, 2011.
[21] U. Kalabić, Y. Chitalia, J. Buckland, and I. Kolmanovsky, "Prioritization schemes for reference and command governors," in European Control Conference. Zurich, Switzerland: IEEE, July 17-19 2013, pp. 2734-2739.
[22] J. Sun and I. V. Kolmanovsky, “Load governor for fuel cell oxygen
starvation protection: a robust nonlinear reference governor approach,”
IEEE Transactions on Control Systems Technology, vol. 13, no. 6, pp.
911–920, Nov. 2005.
[23] E. Garone, S. Di Cairano, and I. V. Kolmanovsky, “Reference and
Command Governors for Systems with Constraints: A Survey of Their
Theory and Applications,” Automatica, to appear.
REAL DIFFERENCE GALOIS THEORY.
THOMAS DREYFUS
arXiv:1507.02192v3 [] 13 Feb 2017
Abstract. In this paper, we develop a difference Galois theory in the setting of real
fields. After proving the existence and uniqueness of the real Picard-Vessiot extension,
we define the real difference Galois group and prove a Galois correspondence.
Contents
Introduction 1
1. Reminders of difference algebra 2
2. Existence and uniqueness of Picard-Vessiot extensions over real fields 3
3. Real difference Galois group 10
References 12
Introduction
Let us consider an equation of the form:
(1)   φY = AY,
where A is an invertible matrix having coefficients in a convenient field k∗ and φ is an
automorphism of k. A typical example is k := C(x) and φy(x) := y(x + 1). The aim of
the difference Galois theory is to study (1) from an algebraic point of view. See [vdPS97]
for details on this theory. See also [BB62, Fra63, HS08, Mor09, MU09]. The classical
framework for difference Galois theory is to assume that C, the subfield of k of elements
invariant under φ, is algebraically closed. The goal of the present paper is to present a
descent result. We explain what happens if we take instead a smaller field k, such that k
is a real field and C is real closed, see §2 for the definitions.
Assume that C is algebraically closed and let us make a brief summary of the difference
Galois theory. An important object attached to (1) is the Picard-Vessiot extension.
Roughly speaking, a Picard-Vessiot extension is a ring extension of k containing a basis
of solutions of (1). The Picard-Vessiot extension always exists, but the uniqueness is
proved in [vdPS97] only in the case where C is algebraically closed. To the Picard-Vessiot
extension, we attach a group, the difference Galois group, that measures the algebraic
relations between solutions belonging to the Picard-Vessiot extension. This group may
be seen as a linear algebraic subgroup of invertible matrices in coefficients in C. We
also have a Galois correspondence. Note that several definitions of the difference Galois
group have been made and the comparison between different Galois groups can be found
in [CHS08].
Date: February 14, 2018.
2010 Mathematics Subject Classification. 12D15,39A05.
Work supported by the labex CIMI. This project has received funding from the European Research
Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under the
Grant Agreement No 648132.
∗In all the paper, all fields are of characteristic zero.
From now, we drop the assumption that C is algebraically closed, and we make
the assumptions that k is a real field and C is real closed. Our approach will follow
[CHS13, CHvdP16], which prove similar results in the framework of differential Galois
theory. Let us present a rough statement of our main result, Theorem 7. We prove
that in this setting, a real Picard-Vessiot exists, i.e., a Picard-Vessiot extension that is
additionally a real ring. Then, we also show a uniqueness result: given R1 and R2 two real
Picard-Vessiot extensions, then R1 and R2 are isomorphic over k if and only if R1 ⊗k R2
has no elements x satisfying x2 + 1 = 0. We define a real difference Galois group, which
may be seen as a linear algebraic subgroup of invertible matrices in coefficients in the
algebraic closure of C, and that is defined over C. See Proposition 11. This allows us
to prove a Galois correspondence, see Theorem 12. See also [CH15, Dyc05] for similar
results in the framework of differential Galois theory.
The paper is presented as follows. In §1, we make some reminders of difference algebra.
In §2, we state and prove our main result, Theorem 7, about the existence and uniqueness
of real Picard-Vessiot extensions. In §3, we define the real difference Galois group, and
prove a Galois correspondence.
Acknowledgments. The author would like to thank the anonymous referee for permitting him to correct some mistakes that were originally made in the paper.
1. Reminders of difference algebra
For more details on what follows, we refer to [Coh65]. A difference ring (R, φ) is a ring
R together with a ring automorphism φ : R → R. An ideal of R stabilized by φ is called
a difference ideal of (R, φ). A simple difference ring (R, φ) is a difference ring with only
difference ideals (0) and R. If R is a field then (R, φ) is called a difference field.
Let (R, φ) be a difference ring and m ∈ N∗ . The difference ring R{X1 , . . . , Xm }φ of
difference polynomials in m indeterminacies over R is the usual polynomial ring in the
infinite set of variables
{φν (Xj )}ν∈Z, j≤m ,
and with automorphism extending the one on R defined by:
φ (φν (Xj )) = φν+1 (Xj ).
The ring of constants Rφ of the difference ring (R, φ) is defined by
Rφ := {f ∈ R | φ(f ) = f }.
If Rφ is a field, the ring of constants will be called field of constants.
A difference ring morphism from the difference ring (R, φ) to the difference ring (R̃, φ̃) is a ring morphism ϕ : R → R̃ such that ϕ ◦ φ = φ̃ ◦ ϕ.
A difference ring (R̃, φ̃) is a difference ring extension of a difference ring (R, φ) if R̃ is a ring extension of R and φ̃|R = φ; in this case, we will often denote φ̃ by φ. Two difference ring extensions (R̃1, φ̃1) and (R̃2, φ̃2) of a difference ring (R, φ) are isomorphic over (R, φ) if there exists a difference ring isomorphism ϕ from (R̃1, φ̃1) to (R̃2, φ̃2) such that ϕ|R = Id.
Let (R, φ) be a difference ring such that X 2 + 1 ∈ R[X] is irreducible, i.e., there is no
x ∈ R such that x2 + 1 = 0. We define, R[i], to be the ring R[i] := R[X]/(X 2 + 1). We
equip R[i] with a structure of difference ring with φ(i) = i. If (R, φ) is a difference ring
with an element x ∈ R satisfying x2 + 1 = 0, we make the convention that R[i] = R.
2. Existence and uniqueness of Picard-Vessiot extensions over real fields
Let (k, φ) be a difference field of characteristic zero. Consider a linear difference system
(2)   φY = AY with A ∈ GLn (k),
where GLn denotes the group of invertible n × n square matrices with entries in k.
Definition 1. A Picard-Vessiot extension for (2) over (k, φ) is a difference ring extension
(R, φ) of (k, φ) such that
(1) there exists U ∈ GLn (R) such that φ(U ) = AU (such a U is called a fundamental
matrix of solutions of (2));
(2) R is generated, as a k-algebra, by the entries of U and det(U )−1 ;
(3) (R, φ) is a simple difference ring.
We may always construct a Picard-Vessiot extension as follows. Take an indeterminate
n × n square matrix X := Xj,k and consider k{X, det(X)−1 }φ which is equipped with a
structure of difference ring with φX = AX. Then, for any I, maximal difference ideal of
k{X, det(X)−1 }φ , the ring k{X, det(X)−1 }φ /I is a simple difference ring and therefore, is
a Picard-Vessiot extension.
According to [vdPS97, §1.1], when the field of constants C := kφ is algebraically closed,
we also have the uniqueness of the Picard-Vessiot extension, up to a difference ring isomorphism. Furthermore, in this case we have C = Rφ and, see [vdPS97, Corollary 1.16],
there exist an idempotent e ∈ R, and t ∈ N∗, such that φt (e) = e, R = ⊕_{j=0}^{t−1} φj (e)R, and, for all 0 ≤ j ≤ t − 1, φj (e)R is an integral domain.
In [CHS08], the notion of weak Picard-Vessiot extension, which we will need in the next section, is defined.
Definition 2. A weak Picard-Vessiot extension for (2) over (k, φ) is a difference ring
extension (R, φ) of (k, φ) such that
(1) there exists U ∈ GLn (R) such that φ(U ) = AU ;
(2) R is generated, as a k-algebra, by the entries of U and det(U )−1 ;
(3) Rφ = kφ = C.
From what is above, we deduce that when the field of constants is algebraically closed,
a Picard-Vessiot extension is a weak Picard-Vessiot extension. Note that the converse is
not true as shows [vdPS97, Example 1.25].
We say that a field k is real when 0 is not a sum of squares in k\{0}. We say that a field k is real closed when k is real and does not admit a proper algebraic extension that is real. In particular, k is real closed if and only if k[i] is algebraically closed and satisfies k[i] ≠ k.
Example 3. The field R((x)) of formal Laurent series with real coefficients is real. The
field Q(x) is real. The field of real numbers is real closed.
From now we assume that k is a real field and its field of constants C := kφ is real closed.
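For instance (an illustrative example of this setting, not taken from the paper), one may take the field of real rational functions with the shift operator:

```latex
% Illustrative example (our assumption, for exposition): a real base field
% whose field of constants is real closed.
\[
  k = \mathbb{R}(x), \qquad \phi\big(f(x)\big) = f(x+1), \qquad
  C = k^{\phi} = \mathbb{R}.
\]
% Here $k$ is a real field, $C=\mathbb{R}$ is real closed, and any system
% $\phi Y = AY$ with $A\in\mathrm{GL}_n(k)$ fits the assumptions of this section.
```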
Recall that we have seen above the existence of a Picard-Vessiot extension (R, φ) for (2) over (k, φ).
Lemma 4. Let (R, φ), be a Picard-Vessiot extension for (2) over (k, φ) and assume that
R 6= R[i]. Then, (R[i], φ), is a Picard-Vessiot extension for (2) over (k[i], φ).
Proof. Let (0) 6= I be a difference ideal of (R[i], φ). Note that I ∩ R is a difference ideal of
(R, φ). We claim that I ∩ R 6= (0). Let a, b ∈ R with 0 6= a + ib ∈ I. Then, φ(a) + iφ(b) ∈ I
and for all c ∈ R, ac + ibc ∈ I. Let J be the smallest difference ideal of R that contains
a. From what is above, we may deduce that for all a1 ∈ J, there exists b1 ∈ R such that
a1 + ib1 ∈ I. Since (R, φ) is a simple difference ring, we have two possibilities: J = (0)
and J = R. We are going to treat separately the two cases. Assume that J = (0).
Then a = 0 and ib ∈ I. But ib × (−i) = b ∈ I ∩ R \ {0} which proves our claim when
J = (0). Assume that J = R. Then, there exists b1 ∈ R such that 1 + ib1 ∈ I. But
(1 + ib1 )(1 − ib1 ) = 1 + b21 ∈ I ∩ R. Since R 6= R[i] we find that 1 + b21 6= 0 which proves
our claim when J = R.
Since I ∩ R 6= (0) and (R, φ) is a simple difference ring, I ∩ R = R. We now remark
that I is stable by multiplication by k[i], which shows that I = R[i]. This proves the
lemma.
Proposition 5. Let (R, φ) be a Picard-Vessiot extension for (2) over (k, φ). Then, there exist an idempotent e ∈ R, and t ∈ N∗, such that φt (e) = e, R = ⊕_{j=0}^{t−1} φj (e)R, and, for all 0 ≤ j ≤ t − 1, φj (e)R is an integral domain.
Proof. Let us treat separately two cases. Assume that R 6= R[i]. Due to Lemma 4,
(R[i], φ), is a Picard-Vessiot extension for (2) over (k[i], φ). We remind that by definition,
if R 6= R[i], we extend φ to R[i] by φ(i) = i. Then, the field of constants of R[i] is C[i],
which is algebraically closed. From [vdPS97, Corollary 1.16], we obtain that there exist
a, b ∈ R, with a + ib is idempotent, t ∈ N∗ , such that φt (a + ib) = a + ib,
(3)   R[i] = ⊕_{j=0}^{t−1} φj (a + ib)R[i],
and for all 0 ≤ j ≤ t − 1, φj (a + ib)R[i] is an integral domain. Let e := a2 + b2 ∈ R. A
straightforward computation shows that a − ib is idempotent. Since e = (a + ib)(a − ib) is
the product of two idempotent elements it is also idempotent. Using φt (a − ib) = a − ib,
we find φt (e) = e.
Let us prove that for all 0 ≤ j ≤ t − 1, φj (a − ib)R[i] is an integral domain. Let
0 ≤ j ≤ t − 1, c + id ∈ R[i] with c, d ∈ R, such that φj (a − ib)(c + id) = 0. It follows
that φj (a + ib)(c − id) = 0 and therefore, c − id = 0 = c + id since for all 0 ≤ j ≤ t − 1,
φj (a+ib)R[i] is an integral domain. We have proved that for all 0 ≤ j ≤ t−1, φj (a−ib)R[i]
is an integral domain. Let us prove that for all 0 ≤ j ≤ t − 1, φj (e)R[i] is an integral
domain. Let 0 ≤ j ≤ t − 1, c ∈ R[i], such that cφj (e) = cφj (a + ib)φj (a − ib) = 0. We
use successively the fact that φj (a + ib)R[i] and φj (a − ib)R[i] are integral domains to
deduce that c = 0, which shows that φj (e)R[i] is an integral domain. Therefore, for all
0 ≤ j ≤ t − 1, φj (e)R is an integral domain.
We claim that {φj (e), 0 ≤ j ≤ t − 1} are linearly independent over R[i]. Let us consider
c0 , . . . , ct−1 ∈ R[i] such that Σ_{j=0}^{t−1} cj φj (e) = 0. We have Σ_{j=0}^{t−1} cj φj (a − ib)φj (a + ib) = 0. We
use (3) to deduce that for all 0 ≤ j ≤ t − 1, cj φj (a − ib) = 0. We remind that for all
0 ≤ j ≤ t − 1, φj (a − ib)R[i] is an integral domain. This shows that for all 0 ≤ j ≤ t − 1,
cj = 0. This proves our claim.
Using (3), to prove the proposition, it is now sufficient to prove the equality
(4)   ⊕_{j=0}^{t−1} φj (a + ib)R[i] = ⊕_{j=0}^{t−1} φj (e)R[i].
The inclusion ⊕_{j=0}^{t−1} φj (e)R[i] ⊂ ⊕_{j=0}^{t−1} φj (a + ib)R[i] is a direct consequence of the fact that e = (a − ib)(a + ib) ∈ (a + ib)R[i]. Let us prove the other inclusion. Let α ∈ ⊕_{j=0}^{t−1} φj (a + ib)R[i], and define f := Π_{j=0}^{t−1} φj (e), which is invariant under φ. Therefore,
f R[i] is a difference ideal of R[i]. We use e = (a + ib)(a − ib) and the fact that (a + ib)R[i]
is an integral domain to obtain that f ≠ 0 and f R[i] ≠ (0). Since R[i] is a simple difference ring, the difference ideal f R[i] equals R[i]. This means that the ideal of R[i] generated by the φj (f ), j ∈ Z, is R[i]. Since φ(f ) = f , there exists β ∈ R[i] such that f β = α. We again use (3) to find that α = f Σ_{j=0}^{t−1} cj φj (a + ib) for some cj ∈ R[i]. Since e = (a−ib)(a+ib), we may define, for all 0 ≤ j ≤ t−1, dj := f /φj (a−ib) ∈ R[i]. A straightforward computation shows that α = Σ_{j=0}^{t−1} cj dj φj (e), which implies α ∈ ⊕_{j=0}^{t−1} φj (e)R[i]. We have proved ⊕_{j=0}^{t−1} φj (a + ib)R[i] ⊂ ⊕_{j=0}^{t−1} φj (e)R[i]. If we combine with the other inclusion, we obtain (4). This completes the proof in the case R ≠ R[i].
Assume that R = R[i]. Since i2 = −1, we have φ(i) = ±i and then φ2 (i) = i. Hence,
(R, φ2 ) is a ring extension of (k[i], φ2 ), whose field of constants is C[i], which is algebraically closed. Furthermore, by construction, it is also a Picard-Vessiot extension for
φ2 Y = φ(A)AY over (k[i], φ2 ). From [vdPS97, Corollary 1.16], we obtain that there exist
an idempotent e ∈ R, t ∈ N∗ , such that φ2t (e) = e,
R=
t−1
M
φ2j (e)R,
j=0
and for all 0 ≤ j ≤ t − 1, φ2j (e)R is an integral domain. If t = 1, R = eR is an
integral domain, and we may take e = 1 to have the desired decomposition of R. Assume that t > 1. Using the fact that φ is an automorphism we find that for all j ∈ Z,
φj (e)R is an integral domain and φj (e) is idempotent. Let t′ ∈ N∗ maximal such that
eR, . . . , φt′−1 (e)R are in direct sum. This implies that there exists r ∈ R with rφt′ (e) ≠ 0, such that rφt′ (e) ∈ ⊕_{j=0}^{t′−1} φj (e)R. We claim that rφt′ (e) ∈ eR. If t′ = 1 the claim is clear. Assume that t′ > 1. Then, for all 0 < j < t′, we have eφj (e) = 0 and, since φ is an automorphism, φt′ (e)φj (e) = 0. It follows that ⊕_{j=0}^{t′−1} φj (e)Rφt′ (e) ⊂ eR and therefore, rφt′ (e) = r(φt′ (e))2 ∈ eR, which proves the claim in the t′ > 1 case. In particular, there exists r′ ∈ R such that rφt′ (e) = r′e ≠ 0. We use the fact that φt′ is an automorphism, to find φt′ (r)φ2t′ (e) = φt′ (r′)φt′ (e) ≠ 0. Since φt′ (e)R is an integral domain and φt′ (e) is idempotent, we have φt′ (r′)rφt′ (e) = φt′ (r′)φt′ (e)rφt′ (e) ≠ 0. But the latter inequality implies φt′ (r)φ2t′ (e)r′e ≠ 0. This shows that φ2t′ (e)e ≠ 0. Since R = ⊕_{j=0}^{t−1} φ2j (e)R, φ2t (e) = e, and eR, . . . , φt′−1 (e)R are in direct sum, we find t = t′. With R = ⊕_{j=0}^{t−1} φ2j (e)R, we obtain φt (e) ∈ ⊕_{j=0}^{t−1} φ2j (e)R. We remind that for all 0 < j < t, we have eφj (e) = 0. Using the fact that φ is an automorphism, we obtain that for all 0 < j < t, we have φt (e)φ2j (e) = 0. It follows that ⊕_{j=0}^{t−1} φ2j (e)Rφt (e) ⊂ eR and therefore, φt (e) = (φt (e))2 ∈ eR. So there exists r′ ∈ R such that φt (e) = er′. But an integral domain may have only one non zero idempotent element. Since e, φt (e) are non zero idempotents and eR is an integral domain, we find that e = φt (e). In particular, ⊕_{j=0}^{t−1} φj (e)R is a difference ideal of the simple difference ring (R, φ). Since e ≠ 0, we find that the difference ideal is not (0), proving that R = ⊕_{j=0}^{t−1} φj (e)R. This completes the proof in the case R = R[i].
Let R be a difference ring that is the direct sum of integral domains R := ⊕_{j=0}^{t−1} Rj. We define K, the total ring of fractions of R, by K := ⊕_{j=0}^{t−1} Kj, where for all 0 ≤ j ≤ t − 1, Kj is the fraction field of Rj.
We say that R is a real ring if for all 0 ≤ j ≤ t − 1, Kj is a real field. Note that by
[Lam84, Theorem 2.8], this is equivalent to the usual definition of a real ring, that is that
0 is not a sum of squares in R \ {0}, see [Lam84, Definition 2.1].
The notion of Picard-Vessiot extension is not well suited in the real case. Following
[CHvdP16], let us define:
Definition 6. A real Picard-Vessiot extension for (2) over (k, φ) is a difference ring extension (R, φ) of (k, φ) such that
(1) (R, φ) is a Picard-Vessiot extension for (2) over (k, φ);
(2) (R, φ) is a real difference ring.
Let us remind that if (R, φ) is a difference ring such that X 2 + 1 ∈ R[X] is irreducible,
then R[i] is the ring R[i] := R[X]/(X 2 + 1). If (R, φ) is a difference ring with x ∈ R
satisfying x2 + 1 = 0, we make the convention that R[i] = R.
We are now able to state our main result:
Theorem 7. Let us consider the equation (2) which has coefficients in (k, φ).
(1) There exists a real Picard-Vessiot extension for (2) over (k, φ).
(2) Let (R, φ) be a real Picard-Vessiot extension for (2) over (k, φ). Then, (R, φ) is a
weak Picard-Vessiot extension for (2) over (k, φ), i.e., the ring of constants of R
is C.
(3) Let (R1 , φ1 ) and (R2 , φ2 ) be two real Picard-Vessiot extensions for (2) over (k, φ).
Let us equip the ring R1 ⊗k R2 with a structure of difference ring as follows:
φ(r1 ⊗k r2 ) = φ1 (r1 ) ⊗k φ2 (r2 ) for rj ∈ Rj . Then, (R1 , φ1 ) is isomorphic to
(R2 , φ2 ) over (k, φ) if and only if R1 ⊗k R2 ≠ R1 ⊗k R2 [i].
Before proving the theorem, we are going to state and prove a lemma which is inspired
by a lemma of [Sei58].
Lemma 8. Consider a difference field (K, φ) of characteristic zero that is finitely generated
over Q by the elements u1 , . . . , um and let (KR , φ) be a real difference subfield of (K, φ).
Then, there exists h : K → C, injective morphism of fields that induces an injective
morphism from KR to R.
For every 1 ≤ j ≤ m, k ∈ Z, let us write cj,k := h(φk (uj )) ∈ C. Then, the assignment
uj ↦ ũj := (cj,k )k∈Z defines (resp. induces) an injective morphism of difference fields
between (K, φ) (resp. (KR , φ)) and (CZ , φs ) (resp. (RZ , φs )), where φs denotes the shift.
Proof of Lemma 8. Let us prove that there exists h : K → C, injective morphism of fields.
Let tj be a transcendental basis of K|Q. Since K|Q is generated as a field by a countable
number of elements, the number of elements in the transcendental basis is countable.
Using the fact that R is not countable, we find that there exists h : Q(tj ) → R, injective
morphism of fields. Let us extend h to K. The elements of K|Q(tj ), satisfy a list of
algebraic equations, which have a solution in an extension of C. Since C is algebraically
closed, we find that the equations have a solution in C. In other words, we have the
existence of an embedding of K into C.
Let us prove that KR := h(KR ) ⊂ R. Let tj be a transcendental basis of KR |Q. We
have h(Q(tj )) ⊂ KR ⊂ C. Since h is an injective morphism of fields and KR is a real
field, we find that KR is a real field. Then, we obtain that the real closure of h(Q(tj ))
contains KR . Since by construction h(Q(tj )) ⊂ R we find that the real closure of h(Q(tj ))
is contained in R. Then, we conclude that KR ⊂ R ⊂ C.
Let P ∈ Q{X1 , . . . , Xm }φ. We have the following equality P (ũ1 , . . . , ũm ) = P ((c1,k )k∈Z , . . . , (cm,k )k∈Z ). Therefore, P (u1 , . . . , um ) = 0 if and only if P (ũ1 , . . . , ũm ) = 0. This shows that the assignment uj ↦ ũj := (cj,k )k∈Z defines (resp. induces) an injective morphism of difference fields between (K, φ) (resp. (KR , φ)) and (CZ , φs ) (resp. (RZ , φs )).
Proof of Theorem 7. (1) Let us prove the existence of a real Picard-Vessiot extension.
We have seen how to construct (R, φ), Picard-Vessiot extension for (2) over (k, φ). Let
U ∈ GLn (R) be a fundamental solution. As we can see in Proposition 5, R is a direct
sum of integral domains and we may define K, the total ring of fractions of R. The ring
K is a direct sum of fields K := ⊕_{j=0}^{t−1} Kj satisfying φ(Kj) = Kj+1, Kt := K0. Therefore,
for all 0 ≤ j ≤ t − 1, (Kj , φt ) is a difference field. Let (K, φ) be the difference subring of
(K, φ) generated over Q by the components on the Kj , 0 ≤ j ≤ t − 1, of the entries of
U , det(U )−1 , and the elements in k involved in the algebraic difference relations between
the entries of U and det(U )−1 . In particular, the entries of the matrix A of (2) belong to
K. As we can see from Lemma 8, for all 0 ≤ j ≤ t − 1 there exists h̃j, an embedding of (K ∩ Kj, φt) into (CZ, φs). If t > 1, without loss of generality, we may assume that for all 0 ≤ j ≤ t − 2 (resp. for j = t − 1), for all u ∈ K ∩ Kj, h̃j(u) = h̃j+1(φ(u)) (resp. φs h̃0(φ(u)) = h̃t−1(u)). We may define h̃, an embedding of the difference ring (K, φ) into the difference ring (CZ, φs), as follows. Let k = ∑_{j=0}^{t−1} kj with k ∈ K, kj ∈ Kj, and let us define h̃(k) ∈ CZ as the sequence whose term number c + dt, with 0 ≤ c ≤ t − 1, d ∈ Z, equals the term number d of h̃c(kc). Furthermore, since k is a real field, we find, see Lemma 8, that for all k ∈ (K ∩ k, φ), h̃(k) ∈ RZ.
Let Cr,1, . . . , Cr,n (resp. Ci,1, . . . , Ci,n) be the real parts (resp. the imaginary parts) of the columns of the term number zero of h̃(U). We remind that U is invertible. Therefore, the term number zero of h̃(U) is invertible. Then, we may extract n columns C1, . . . , Cn ∈ {Cr,1, . . . , Cr,n, Ci,1, . . . , Ci,n} that are linearly independent. Therefore, there exists B, a matrix with entries in Q[i], such that the term number zero of h̃(U)B has columns C1, . . . , Cn, and is consequently real and invertible. Then, the term number zero of h̃(U)B is invertible. Since the term number zero of h̃(U) is also invertible, we find that B ∈ GLn(Q[i]). Let V := UB be a fundamental solution, which belongs to GLn(K[i]). The map h̃ extends to a morphism of difference rings between (K[i], φ) and (CZ, φs). Consequently, we have Ṽ := h̃(V) = h̃(U)h̃(B).
Let (Q̃, φs) be the difference subring of (QZ, φs) of constant sequences. Note that (Q̃, φs) is a difference field. Let (k̃, φs) be the difference subring of (RZ, φs) generated over (Q̃, φs) by the elements h̃(k), k ∈ (K ∩ k, φ). Note that (k̃, φs) is a difference field. We remind that, since k is a real field, Lemma 8 tells us that h̃(A) ∈ (GLn(R))Z. Since the term number zero of Ṽ belongs to GLn(R), and φs(Ṽ) = h̃(A)Ṽ, we obtain that Ṽ ∈ (GLn(R))Z. Let (R̃, φs) be the difference subring of (RZ, φs) generated over (k̃, φs) by the entries of Ṽ and det(Ṽ)−1.
We claim that (R̃, φs) is a simple difference ring. To the contrary, assume that there exists I, a difference ideal of R̃ different from (0) and R̃. It follows that I(R̃[i]) is different from (0) and R̃[i]. We have a natural embedding from (R̃, φs) into (R[i], φ). Then, I(R̃[i]) induces a difference ideal of (R[i], φ), which is different from (0) and R[i]. Let us treat separately two cases. If R = R[i], then we use the fact that the Picard-Vessiot extension (R[i], φ) is a simple difference ring to conclude that we have a contradiction and (R̃, φs) is a simple difference ring. If R ≠ R[i], we use Lemma 4 to deduce that (R[i], φ) is a Picard-Vessiot extension for (2) over (k[i], φ) and therefore is a simple difference ring. We find a contradiction and we have proved our claim, that is, that (R̃, φs) is a simple difference ring. We additionally use the fact that by construction R̃ is a real ring to prove that (R̃, φs) is a real Picard-Vessiot extension for φỸ = h̃(A)Ỹ over (k̃, φs).
Let R1 be the difference ring generated over K ∩ k by the entries of the fundamental solution V and det(V)−1. Using the fact that (k̃, φs) is isomorphic to (K ∩ k, φ), and that (R̃, φs) is a real Picard-Vessiot extension for φỸ = h̃(A)Ỹ over (k̃, φs), we obtain that R1 is a real Picard-Vessiot extension for (2) over (K ∩ k, φ).
Let us prove that R2 := R1 ⊗K∩k k is a real Picard-Vessiot extension for (2) over (k, φ).
By construction, R2 is generated over k by the entries of V and det(V )−1 . It is sufficient
to prove that R2 is a simple difference ring which is real.
We claim that R2 is a real ring. Let aj ∈ R2 be such that ∑j (aj)² = 0. Let us write aj = ∑ℓ rj,ℓ ⊗K∩k kj,ℓ, with rj,ℓ ∈ R1, kj,ℓ ∈ k. With ∑j (aj)² = 0, we obtain an algebraic relation over Q between the rj,ℓ and the kj,ℓ. Note that R1 ⊂ R2 ⊂ K[i]. Then, as a
consequence of the definition of K, we find that K[i] is the difference subring of (K[i], φ)
generated over Q by i, the components on the Kj [i], 0 ≤ j ≤ t − 1, of the entries of V ,
det(V )−1 , and the elements in k involved in the algebraic difference relations between the
entries of V and det(V )−1 . Furthermore, since i in an algebraic number that does not
belong to the real field k, and V = U B, with U ∈ GLn (K), B ∈ GLn (Q[i]), we find that
the elements in k involved in the latter relations are in fact involved in algebraic difference
relations between the entries of U and det(U )−1 , proving that they belong to K. Hence,
we find that for all j, ℓ, rj,ℓ ∈ R1 ∩ K[i] and kj,ℓ ∈ k ∩ K. Therefore, for all j, aj ∈ R1 .
Since R1 is a real ring, we find that for all j, aj = 0, proving that R2 is a real ring.
It is now sufficient to prove that (R2 , φ) is a simple difference ring. Let I 6= (0) be
a difference ideal of (R2 , φ). Since V = U B, with V ∈ GLn (R2 ), U ∈ GLn (R), and
B ∈ GLn (Q[i]), we find that R2 [i] = R[i]. The difference ideal I induces the difference
ideal I[i] of (R[i], φ). Let us treat separately two cases. If R[i] = R, then (R[i], φ) is a
simple difference ring since it is a Picard-Vessiot extension for (2) over (k, φ), proving
that I[i] = R[i] and I = R2 . Therefore, (R2 , φ) is a simple difference ring and a real
Picard-Vessiot extension for (2) over (k, φ). Assume that R[i] 6= R. With Lemma 4,
(R[i], φ) is a Picard-Vessiot extension for (2) over (k[i], φ). Then, (R[i], φ) is a simple
difference ring and I[i] = R2 [i], proving that I = R2 . This shows that (R2 , φ) is a simple
difference ring and a real Picard-Vessiot extension for (2) over (k, φ).
(2) With Lemma 4 we find that (R[i], φ) is a Picard-Vessiot extension for (2) over (k[i], φ).
Remind that by assumption, C[i] is algebraically closed. As we can deduce from [vdPS97,
Lemma 1.8], R[i]φ = C[i]. It follows that Rφ ⊂ C[i]. By assumption, R is a real ring.
This implies that i ∉ R. Therefore, C = kφ ⊂ Rφ. Hence, the field of constants of R is C.
(3) Let us assume that R1 ⊗k R2 ≠ R1 ⊗k R2[i] and let us prove that (R1, φ1) is isomorphic to (R2, φ2) over (k, φ). We remind, see Lemma 4, that for j ∈ {1, 2}, (Rj[i], φj) is a Picard-Vessiot extension for (2) over (k[i], φ). We also remind that the field of constants of k[i] is
C[i]. Due to [vdPS97, Proposition 1.9], we find that (R1 [i], φ1 ) is isomorphic to (R2 [i], φ2 )
over (k[i], φ). Let ϕ : R1 → R2 [i] be the restriction of the morphism. Then, we may define
a morphism of difference rings
Ψ : R1 ⊗k R2 → R2[i],  x ⊗ y ↦ ϕ(x)y.
The morphism Ψ is an R2-linear map, and the image of R1 ⊗k R2 under Ψ is an R2-submodule of R2[i], called V.
The assumption R1 ⊗k R2 ≠ R1 ⊗k R2[i] implies that there is no f ∈ R1 ⊗k R2 such that f² + 1 = 0. Since Ψ is a morphism of difference rings, there is no g ∈ V such that g² + 1 = 0, which proves i ∉ V. Combining this fact with the inclusion R2 ⊂ V, we obtain that V = R2 (we remind that V is an R2-submodule of R2[i]). In other words, the image of R1 under ϕ is included in R2. This implies that (R1, φ1) is isomorphic to (R2, φ2) over (k, φ).
Conversely, if (R1 , φ1 ) is isomorphic to (R2 , φ2 ) over (k, φ), then there exists a morphism
of difference rings ϕ : R1 → R2 . As above, let us define Ψ, morphism of difference
rings between R1 ⊗k R2 and R2 defined by Ψ(x ⊗ y) = ϕ(x)y. Since R2 is a real ring,
we find that R2 ≠ R2[i]. Since Ψ is a morphism of difference rings, we obtain that R1 ⊗k R2 ≠ R1 ⊗k R2[i].
The following example, which is inspired by [CHS13], illustrates a situation where two
Picard-Vessiot extensions are not isomorphic.
Example 9. Let φ := f(z) ↦ f(2z) and consider φY = √2 Y, which has coefficients in R(x). Let us consider the following fundamental solutions (√x) and (i√x). Consider the corresponding difference ring extensions R1|R(x) := R[√x, (√x)−1]|R(x) and R2|R(x) := R[i√x, (i√x)−1]|R(x). Let us prove that (R1, φ) is a simple difference ring. The proof for (R2, φ) is similar. Let I ≠ (0) be a difference ideal of R1 and let P ∈ R[X] be of minimal degree such that P(√x) ∈ I. Let k ∈ N be the degree of P. Assume that k ≠ 0. We have φ(P(√x)) = P(√2 √x) ∈ I, which shows that φ(P(√x)) − (√2)^k P(√x) = Q(√x) ∈ I, where Q ∈ R[X] has degree less than k. This is in contradiction with the minimality of k, and shows that k = 0. This implies that I = R1, which proves that (R1, φ) is a simple difference ring. Since R1 and R2 are real rings, R1|R(x) and R2|R(x) are two real Picard-Vessiot extensions for φY = √2 Y over (R(x), φ). Note that there is no difference ring isomorphism between (R1, φ) and (R2, φ) over R(x), because X² = x has a solution in R1 and no solution in R2. This is not in contradiction with Theorem 7 since R1 ⊗R(x) R2 = R1 ⊗R(x) R2[i], because

( √x ⊗R(x) 1/(i√x) )² = −1.
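As a quick cross-check of the two elementary computations used in Example 9 (that √x and i√x indeed solve the difference equation, and that their quotient squares to −1), one may run the following small Python/SymPy sketch; it is only an illustration and plays no role in the Galois-theoretic argument.

import sympy as sp

x = sp.symbols('x', positive=True)
y1 = sp.sqrt(x)            # fundamental solution generating R1
y2 = sp.I * sp.sqrt(x)     # fundamental solution generating R2

# phi: f(x) -> f(2x); both solutions satisfy phi(Y) = sqrt(2) * Y
assert sp.simplify(y1.subs(x, 2*x) - sp.sqrt(2)*y1) == 0
assert sp.simplify(y2.subs(x, 2*x) - sp.sqrt(2)*y2) == 0

# the element used at the end of Example 9 squares to -1
assert sp.simplify((y1 / y2)**2) == -1
print("checks passed")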
3. Real difference Galois group
In this section, we still consider (2). Let (R, φ) be a real Picard-Vessiot extension for (2)
over (k, φ) with fundamental solution U ∈ GLn (R). Consider the difference ring (R[i], φ),
which is different from (R, φ), since R is a real ring. Inspired by [CHS13], let us define
the real difference Galois group as follows:
Definition 10. We define GR[i] as the group of difference ring automorphisms of R[i] leaving k[i] invariant. We define G, the real difference Galois group of (2), as the group {ϕ|R, ϕ ∈ GR[i]}.
Note that elements of G are maps from R to R[i]. Due to Theorem 7, (2), we have an
injective group morphism
ρU : G → GLn(C[i]),  ϕ ↦ U−1ϕ(U),
which depends on the choice of the fundamental solution U in R. Another choice of a
fundamental solution in R will give a representation that is conjugate to the first one.
Remind, see Proposition 5, that there exist an idempotent e ∈ R and t ∈ N∗ such that φt(e) = e, R = ⊕_{j=0}^{t−1} φj(e)R, and for all 0 ≤ j ≤ t − 1, φj(e)R is an integral domain. Due to Lemma 4, (R[i], φ) is a Picard-Vessiot extension for (2) over (k[i], φ). Furthermore, R[i] = ⊕_{j=0}^{t−1} φj(e)R[i] and the total ring of fractions of R[i] equals K[i], where K is the total
ring of fractions of R. Then, we call GK[i], the classical difference Galois group of (2), the group of difference ring automorphisms of K[i] leaving k[i] invariant. See [vdPS97] for more
details. The difference Galois group of (2) may also be seen as a subgroup of GLn (C[i]).
Furthermore, its image in GLn (C[i]) is a linear algebraic subgroup of GLn (C[i]). We have
the following result in the real case.
Proposition 11. Let (R, φ) be a real Picard-Vessiot extension for (2) over (k, φ) with fundamental solution U ∈ GLn(R). Let G be the real difference Galois group of (2) and GK[i] be the difference Galois group of (2). We have the following equality:

Im ρU = {U−1ϕ(U), ϕ ∈ G} = {U−1ϕ(U), ϕ ∈ GK[i]}.
Furthermore, Im ρU is a linear algebraic subgroup of GLn (C[i]) defined over C. We will
identify G with a linear algebraic subgroup of GLn (C[i]) defined over C for a chosen
fundamental solution.
Proof. Let us prove the equality {U−1ϕ(U), ϕ ∈ G} = {U−1ϕ(U), ϕ ∈ GK[i]}. Remind that U ∈ GLn(R). Since an element of GK[i] induces an element of G, we obtain the inclusion {U−1ϕ(U), ϕ ∈ GK[i]} ⊂ {U−1ϕ(U), ϕ ∈ G}. Let ϕ ∈ G. We may extend ϕ as an element ϕK[i] ∈ GK[i] by putting ϕK[i](i) = i and, for all 0 ≤ j ≤ t − 1 and a, b ∈ φj(e)R[i], ϕK[i](a/b) = ϕK[i](a)/ϕK[i](b). Since U ∈ GLn(R), we find that U−1ϕ(U) = U−1ϕK[i](U). Therefore, we obtain the other inclusion {U−1ϕ(U), ϕ ∈ G} ⊂ {U−1ϕ(U), ϕ ∈ GK[i]} and the equality {U−1ϕ(U), ϕ ∈ G} = {U−1ϕ(U), ϕ ∈ GK[i]}. With a similar reasoning to what is above, we obtain the equalities

{U−1ϕ(U), ϕ ∈ G} = {U−1ϕ(U), ϕ ∈ GK[i]} = {U−1ϕ(U), ϕ ∈ GR[i]}.
We define GR as the group of difference ring automorphisms of R leaving k invariant. Due to Theorem 7, (2), (R, φ) is a weak Picard-Vessiot extension for (2) over (k, φ). Applying [CHS08, Proposition 2.2], we find that {U−1ϕ(U), ϕ ∈ GR} is a linear algebraic subgroup of GLn(C). Then, we may use [CHS08, Corollary 2.5] to find that the latter group, viewed as a linear algebraic subgroup of GLn(C[i]), equals {U−1ϕ(U), ϕ ∈ GR[i]}. We conclude the proof using the equality

{U−1ϕ(U), ϕ ∈ G} = {U−1ϕ(U), ϕ ∈ GR[i]}.
We finish this section by giving the Galois correspondence. See [vdPS97, Theorem 1.29]
for the analogous statement in the case where C is algebraically closed.
Theorem 12. Let (R, φ) be a real Picard-Vessiot extension for (2) over (k, φ) with total ring of fractions K, let F be the set of difference rings F with k ⊂ F ⊂ K such that every non-zero divisor of F is a unit of F, let G be the real difference Galois group of (2), and let G be the set of linear algebraic subgroups of G.
(1) For any F ∈ F, the group G(K/F) of elements of G leaving F invariant belongs to G.
(2) For any H ∈ G, the ring K^H := {k ∈ K | ∀ϕ ∈ H, ϕ(k) = k} belongs to F.
(3) Let α : F → G and β : G → F denote the maps F ↦ G(K/F) and H ↦ K^H. Then, α and β are each other's inverses.
Remark 13. If we replace G by GR (see the proof of Proposition 11), which is a more natural candidate for the definition of the real difference Galois group, we lose the Galois correspondence. Take for example φY(x) := Y(x + 1) = exp(1)Y(x), which has solution exp(x). A real Picard-Vessiot extension for Y(x + 1) = exp(1)Y(x) over (R, φ) is (R[exp(x), exp(−x)], φ). Let K := R(exp(x)) be the total ring of fractions. We have G ≃ C∗ and GR ≃ R∗. Note that GR ⊂ GL1(R), viewed as a linear algebraic subgroup of GL1(C), equals G. On the other hand, we have no bijection between the linear algebraic subgroups of GR, which are {1}, Z/2Z, R∗, and the difference subfields of K, which are R(exp(kx)), k ∈ N.
Proof of Theorem 12. Let F[i] be the set of difference rings k[i] ⊂ F ⊂ K[i], such that
every non zero divisor is a unit of F . Let G[i] be the set of linear algebraic subgroups of
GK[i] . Remind that the field of constants of k[i] is algebraically closed. In virtue of the
Galois correspondence in difference Galois theory, see [vdPS97, Theorem 1.29], we find
that
(a) For any F ∈ F[i] , the group GK[i] (K[i]/F ) of elements of GK[i] letting F invariant
belongs to G[i] .
(b) For any H ∈ G[i] , the ring K[i]H belongs to F[i] .
(c) Let α[i] : F[i] → G[i] and β[i] : G[i] → F[i] denote the maps F[i] 7→ GK[i] (K[i]/F ) and
H 7→ K[i]H . Then, α[i] and β[i] are each other’s inverses.
We use Proposition 5 to find that we have a bijection γ : F → F[i] given by γ(F ) := F [i].
The inverse is γ −1 (F ) = F ∩ K. Now, let us remark that since the fundamental solution
has coefficients in R, for all F ∈ F, G(K/F ) = GK[i] (K[i]/γ(F )). If we combine this fact
with (a) and Proposition 11, we find (1).
Proposition 11 tells us that we may identify the groups in G with the corresponding
groups in G[i] . To prove the point (2), we remark that for all H ∈ G, K[i]H = K H [i].
Combined with (b), this shows the point (2) since K H = γ −1 (K[i]H ) ∈ F .
The point (3) follows from (c) and the fact that for all F ∈ F (resp. H ∈ G) we have
G(K/F ) = GK[i] (K[i]/γ(F )) (resp. K[i]H = γ(K H )).
References
[BB62] A. Bialynicki-Birula. On Galois theory of fields with operators. Amer. J. Math., 84:89–109, 1962.
[CH15] Teresa Crespo and Zbigniew Hajto. Real Liouville extensions. Comm. Algebra, 43(5):2089–2093, 2015.
[CHS08] Zoé Chatzidakis, Charlotte Hardouin, and Michael F. Singer. On the definitions of difference Galois groups. In Model theory with applications to algebra and analysis. Vol. 1, volume 349 of London Math. Soc. Lecture Note Ser., pages 73–109. Cambridge Univ. Press, Cambridge, 2008.
[CHS13] Teresa Crespo, Zbigniew Hajto, and Elżbieta Sowa. Picard-Vessiot theory for real fields. Israel J. Math., 198(1):75–89, 2013.
[CHvdP16] Teresa Crespo, Zbigniew Hajto, and Marius van der Put. Real and p-adic Picard-Vessiot fields. Math. Ann., 365(1-2):93–103, 2016.
[Coh65] Richard M. Cohn. Difference algebra. Interscience Publishers John Wiley & Sons, New York-London-Sydney, 1965.
[Dyc05] Tobias Dyckerhoff. Picard-Vessiot extensions over number fields. Diplomarbeit, Fakultät für Mathematik und Informatik der Universität Heidelberg, 2005.
[Fra63] Charles H. Franke. Picard-Vessiot theory of linear homogeneous difference equations. Trans. Amer. Math. Soc., 108:491–515, 1963.
[HS08] Charlotte Hardouin and Michael F. Singer. Differential Galois theory of linear difference equations. Math. Ann., 342(2):333–377, 2008.
[Lam84] T. Y. Lam. An introduction to real algebra. Rocky Mountain J. Math., 14(4):767–814, 1984. Ordered fields and real algebraic geometry (Boulder, Colo., 1983).
[Mor09] Shuji Morikawa. On a general difference Galois theory. I. Ann. Inst. Fourier (Grenoble), 59(7):2709–2732, 2009.
[MU09] Shuji Morikawa and Hiroshi Umemura. On a general difference Galois theory. II. Ann. Inst. Fourier (Grenoble), 59(7):2733–2771, 2009.
[Sei58] A. Seidenberg. Abstract differential algebra and the analytic case. Proc. Amer. Math. Soc., 9:159–164, 1958.
[vdPS97] Marius van der Put and Michael F. Singer. Galois theory of difference equations, volume 1666 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1997.
Université Claude Bernard Lyon 1, Institut Camille Jordan, 43 boulevard du 11 novembre
1918, 69622 Villeurbanne, France.
E-mail address: [email protected]
The Fitness Level Method with Tail Bounds
Carsten Witt
DTU Compute
Technical University of Denmark
2800 Kgs. Lyngby
arXiv:1307.4274v1 [] 16 Jul 2013
Denmark
July 17, 2013
Abstract
The fitness-level method, also called the method of f -based partitions,
is an intuitive and widely used technique for the running time analysis of
randomized search heuristics. It was originally defined to prove upper and
lower bounds on the expected running time. Recently, upper tail bounds
were added to the technique; however, these tail bounds only apply to running times that are at least twice as large as the expectation.
We remove this restriction and supplement the fitness-level method with
sharp tail bounds, including lower tails. As an exemplary application, we
prove that the running time of randomized local search on OneMax is
sharply concentrated around n ln n − 0.1159n.
1 Introduction
The running time analysis of randomized search heuristics, including evolutionary
algorithms, ant colony optimization and particle swarm optimization, is a vivid
research area where many results have been obtained in the last 15 years. Different
methods for the analysis were developed as the research area grew. For an overview
of the state of the art in the area see the books by Auger and Doerr (2011),
Neumann and Witt (2010) and Jansen (2013).
The fitness-level method, also called the method of fitness-based partitions,
is a classical and intuitive method for running time analysis, first formalized by
Wegener (2001). It applies to the case that the total running time of a search
heuristic can be represented as (or bounded by) a sum of geometrically distributed
waiting times, where the waiting times account for the number of steps spent on
certain levels of the search space. Wegener (2001) presented both upper and lower
bounds on the running time of randomized search heuristics using the fitness-level
method. The lower bounds relied on the assumption that no level was allowed to
be skipped. Sudholt (2013) significantly relaxed this assumption and presented a
very general lower-bound version of the fitness-level method that allows levels to
be skipped with some probability.
Only recently, the focus in running time analysis turned to tail bounds, also
called concentration inequalities. Zhou, Luo, Lu, and Han (2012) were the first
to add tail bounds to the fitness-level method. Roughly speaking, they prove
w. r. t. the running time T that Pr(T > 2E(T) + 2δh) ≤ e^{−δ} holds, where h is
the worst-case expected waiting time over all fitness levels and δ > 0 is arbitrary.
An obvious open question was whether the factor 2 in front of the expected value
could be “removed” from the tail bound, i. e., replaced with 1; Zhou et al. (2012)
only remark that the factor 2 can be replaced with 1.883.
In this article, we give a positive answer to this question and supplement the
fitness-level method also with lower tail bounds. Roughly speaking, we prove
in Section 2 that Pr(T < E(T) − δ) ≤ e^{−δ²/(2s)} and Pr(T > E(T) + δ) ≤ e^{−min{δ²/(4s), δh/4}}, where s is the sum of the squares of the waiting times over all
fitness levels. We apply the technique to a classical benchmark problem, more precisely to the running time analysis of randomized local search (RLS) on OneMax
in Section 3, and prove a very sharp concentration of the running time around
n ln n − 0.1159n. We finish with some conclusions and a pointer to related work.
2 New Tail Bounds for Fitness Levels
Miscellaneous authors (2011) on the internet discussed tail bounds for a special case
of our problem, namely the coupon collector problem (Motwani and Raghavan,
1995, Chapter 3.6). Inspired by this discussion, we present our main result in
Theorem 1 below. It applies to the scenario that a random variable (e. g., a
running time) is given as a sum of geometrically distributed independent random
variables (e. g., waiting times on fitness levels). A concrete application will be
presented in Section 3.
Theorem 1. Let Xi, 1 ≤ i ≤ n, be independent random variables following the geometric distribution with success probability pi, and let X := ∑_{i=1}^{n} Xi. If ∑_{i=1}^{n} (1/pi²) ≤ s < ∞ then for any δ > 0

Pr(X < E(X) − δ) ≤ e^{−δ²/(2s)}.

For h := min{pi | i = 1, . . . , n},

Pr(X > E(X) + δ) ≤ e^{−(δ/4)·min{δ/s, h}}.
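Although not part of the proof, both bounds of Theorem 1 are easy to sanity-check empirically. The following Python/NumPy sketch draws sums of independent geometric random variables and compares the empirical tail probabilities with the two bounds; all parameter choices (n, the pi, the sample size) are purely illustrative assumptions.

import numpy as np

rng = np.random.default_rng(42)
n = 100
p = 1.0 / np.arange(1, n + 1)        # success probabilities p_i = 1/i
mean = np.sum(1.0 / p)                # E(X) = sum of 1/p_i
s = np.sum(1.0 / p**2)                # s >= sum of 1/p_i^2
h = p.min()

runs = 100_000
# numpy's geometric counts the number of trials until the first success
X = rng.geometric(p, size=(runs, n)).sum(axis=1)

for delta in (0.5 * np.sqrt(s), np.sqrt(s), 2 * np.sqrt(s)):
    lower_emp = np.mean(X < mean - delta)
    upper_emp = np.mean(X > mean + delta)
    lower_bound = np.exp(-delta**2 / (2 * s))
    upper_bound = np.exp(-(delta / 4) * min(delta / s, h))
    print(f"delta={delta:9.1f}  lower: {lower_emp:.4f} <= {lower_bound:.4f}"
          f"  upper: {upper_emp:.4f} <= {upper_bound:.4f}")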
For the proof, the following two simple inequalities will be used.
Lemma 1.
1. For x ≥ 0 it holds e^x/(1 + x) ≤ e^{x²/2}.
2. For 0 ≤ x ≤ 1 it holds e^{−x}/(1 − x) ≤ e^{x²/(2−2x)}.
Proof. We start with the first inequality. The series representation of the exponential function yields

e^x = ∑_{i=0}^{∞} x^i/i! ≤ (1 + x) ∑_{i=0}^{∞} x^{2i}/(2i)!

since x ≥ 0. Hence,

e^x/(1 + x) ≤ ∑_{i=0}^{∞} x^{2i}/(2i)!.

Since (2i)! ≥ 2^i i!, we get

e^x/(1 + x) ≤ ∑_{i=0}^{∞} x^{2i}/(2^i i!) = e^{x²/2}.

To prove the second inequality, we omit all negative terms except for −x from the series representation of e^{−x} to get

e^{−x}/(1 − x) ≤ ( 1 − x + ∑_{i=1}^{∞} x^{2i}/(2i)! ) / (1 − x) = 1 + ∑_{i=1}^{∞} x^{2i}/((1 − x)·(2i)!).

For comparison,

e^{x²/(2−2x)} = 1 + ∑_{i=1}^{∞} x^{2i}/(2^i (1 − x)^i i!),

which, as x ≤ 1, is clearly not less than our estimate for e^{−x}/(1 − x).
Proof of Theorem 1. Both the lower and upper tail are analyzed similarly, using the exponential method (see, e. g., the proof of the Chernoff bound in Motwani and Raghavan, 1995, Chapter 3.6). We start with the lower tail. Let d := E(X) − δ = ∑_{i=1}^{n} (1/pi) − δ. Since for any t ≥ 0

X < d ⟺ −X > −d ⟺ e^{−tX} > e^{−td},

Markov's inequality and the independence of the Xi yield that

Pr(X < d) ≤ E(e^{−tX}) / e^{−td} = e^{td} · ∏_{i=1}^{n} E(e^{−tXi}).

Note that the last product involves the moment-generating functions (mgf's) of the Xi. Given a geometrically distributed random variable Y with parameter p, its moment-generating function at r ∈ R equals E(e^{rY}) = p e^r / (1 − e^r (1 − p)) = 1 / (1 − (1 − e^{−r})/p) for r < − ln(1 − p). We will only use negative values for r, which guarantees existence of the mgf's used in the following. Hence,

Pr(X < d) ≤ e^{td} · ∏_{i=1}^{n} 1/(1 − (1 − e^{t})/pi) ≤ e^{td} · ∏_{i=1}^{n} 1/(1 + t/pi),

where we have used e^x ≥ 1 + x for x ∈ R. Now, by writing the numerators as e^{t/pi} · e^{−t/pi}, using e^x/(1 + x) ≤ e^{x²/2} for x ≥ 0 (Lemma 1) and finally plugging in d, we get

Pr(X < d) ≤ e^{td} · ∏_{i=1}^{n} ( e^{t²/(2pi²)} e^{−t/pi} ) = e^{td} e^{(t²/2) ∑_{i=1}^{n} (1/pi²)} e^{−tE(X)} ≤ e^{−tδ + (t²/2)s}.

The last exponent is minimized for t = δ/s, which yields

Pr(X < d) ≤ e^{−δ²/(2s)}

and proves the lower tail inequality.

For the upper tail, we redefine d := E(X) + δ and obtain

Pr(X > d) ≤ E(e^{tX}) / e^{td} = e^{−td} · ∏_{i=1}^{n} E(e^{tXi}).

Estimating the moment-generating functions similarly as above, we get

Pr(X > d) ≤ e^{−td} · ∏_{i=1}^{n} ( e^{−t/pi}/(1 − t/pi) · e^{t/pi} ).

Since now positive arguments are used for the moment-generating functions, we limit ourselves to t ≤ min{pi | i = 1, . . . , n}/2 = h/2 to ensure convergence. Using e^{−x}/(1 − x) ≤ e^{x²/(2−2x)} for 0 ≤ x ≤ 1 (Lemma 1), we get

Pr(X > d) ≤ e^{−td} · ∏_{i=1}^{n} ( e^{t²/(pi²(2−2t/pi))} · e^{t/pi} ) ≤ e^{−td} · ∏_{i=1}^{n} ( e^{t²/pi²} · e^{t/pi} ) ≤ e^{−tδ + t²s},

which is minimized for t = δ/(2s). If δ ≤ sh, this choice satisfies t ≤ h/2. Then −tδ + t²s = −δ²/(4s) and we get

Pr(X > d) ≤ e^{−δ²/(4s)}.

Otherwise, i. e. if δ > sh, we set t = h/2 to obtain −tδ + t²s = −δh/2 + s(h/2)² ≤ −δh/2 + δh/4 = −δh/4. Then

Pr(X > d) ≤ e^{−δh/4}.

Joining the two cases in a minimum leads to the upper tail inequality.
Based on Theorem 1, we formulate the fitness-level theorem with tail bounds for
general optimization algorithms A instead of a specific randomized search heuristic
(see also Sudholt, 2013, who uses a similar approach).
Theorem 2 (Fitness Levels with Tail Bounds). Consider an algorithm A maximizing some function f and a partition of the search space into non-empty sets A1, . . . , Am. Assume that the sets form an f-based partition, i. e., for 1 ≤ i < j ≤ m and all x ∈ Ai, y ∈ Aj it holds f(x) < f(y). We say that A is in Ai or on level i if the best search point created so far is in Ai.

1. If pi is a lower bound on the probability that a step of A leads from level i to some higher level, independently of previous steps, then the first hitting time of Am, starting from level k, is at most

∑_{i=k}^{m−1} 1/pi + δ

with probability at least 1 − e^{−(δ/4)·min{δ/s, h}}, for any finite s ≥ ∑_{i=k}^{m−1} 1/pi² and h = min{pi | i = k, . . . , m − 1}.

2. If pi is an upper bound on the probability that a step of A leads from level i to level i + 1, independently of previous steps, and the algorithm cannot increase its level by more than 1, then the first hitting time of Am, starting from level k, is at least

∑_{i=k}^{m−1} 1/pi − δ

with probability at least 1 − e^{−δ²/(2s)}.
Proof. By definition, the algorithm cannot go down on fitness levels. Estimate the
time to leave level i (from above resp. from below) by a geometrically distributed
random variable with parameter pi and apply Theorem 1.
3 Application to RLS on OneMax
We apply Theorem 2 to a classical benchmark problem in the analysis of randomized search heuristics, more precisely the running time of RLS on OneMax. RLS
is a well-studied randomized search heuristic, defined in Algorithm 1. The function OneMax : {0, 1}n → R is defined by OneMax(x1 , . . . , xn ) = x1 + · · · + xn ,
and the running time is understood as the first hitting time of the all-ones string
(plus 1 to count the initialization step).
t := 0.
choose an initial bit string x0 ∈ {0, 1}^n uniformly at random.
repeat
    create x′ by flipping a uniformly chosen bit in xt.
    xt+1 := x′ if f(x′) ≥ f(xt), and xt+1 := xt otherwise.
    t := t + 1.
forever.
Algorithm 1: RLS for the maximization of f : {0, 1}^n → R
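For readers who prefer executable form, the following short Python sketch implements Algorithm 1 on OneMax and compares the empirical mean running time with the leading-order term n ln n − 0.1159n of Theorem 3; the problem size and number of repetitions are arbitrary illustrative choices.

import math
import random

def onemax(x):
    return sum(x)

def rls_runtime(n, rng=random):
    """Iterations of RLS until the all-ones string is reached (plus 1 for initialization)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    t = 0
    while onemax(x) < n:
        i = rng.randrange(n)
        y = x.copy()
        y[i] ^= 1                       # flip one uniformly chosen bit
        if onemax(y) >= onemax(x):      # accept if not worse
            x = y
        t += 1
    return t + 1

n = 100
runs = [rls_runtime(n) for _ in range(200)]
print("empirical mean running time:", sum(runs) / len(runs))
print("n ln n - 0.1159 n          :", n * math.log(n) - 0.1159 * n)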
Theorem 3. Let T be the running time of RLS on OneMax. Then
1. n ln n − 0.11594n − o(n) ≤ E(T) ≤ n ln n − 0.11593n + o(n).
2. Pr(T ≤ E(T) − rn) ≤ e^{−3r²/π²} for any r > 0.
3. Pr(T ≥ E(T) + rn) ≤ e^{−3r²/(2π²)} if 0 < r ≤ π²/6, and Pr(T ≥ E(T) + rn) ≤ e^{−r/4} otherwise.
Proof. We start with Statement 1, i. e., the bounds on the expected running time. Let the fitness levels A0, . . . , An be defined by Ai = {x ∈ {0, 1}^n | OneMax(x) = i} for 0 ≤ i ≤ n. By definition of RLS, the probability pi of leaving level i equals pi = (n − i)/n for 0 ≤ i ≤ n − 1. Therefore, the expected running time from starting level k is

∑_{i=k}^{n−1} n/(n − i) = n ∑_{i=1}^{n−k} 1/i,

which leads to the weak upper bound E(T) ≤ n ln n + n in the first place. Due to the uniform initialization in RLS, Chernoff bounds yield Pr(n/2 − n^{2/3} ≤ k ≤ n/2 + n^{2/3}) = 1 − e^{−Ω(n^{1/3})}. We obtain

E(T) ≤ n ∑_{i=1}^{n/2+n^{2/3}} 1/i + e^{−Ω(n^{1/3})} · (n ln n + n) = n ∑_{i=1}^{n/2+n^{2/3}} 1/i + o(n).
We can now estimate the Harmonic number by ln(n/2 + n2/3 ) + γ + o(1) = ln n +
γ − ln 2 + o(1), where γ = 0.57721 . . . is the Euler-Mascheroni constant. Plugging
in numerical values for γ − ln 2 proves the upper bound on E(T ). The lower one
is proven symmetrically.
For Statement 2, the lower tail bound, we use Theorem 2. Now, ∑_{i=1}^{n−k} 1/pi² ≤ ∑_{i=1}^{n} n²/i² ≤ n²π²/6 =: s. Plugging δ := rn and s in the second part of the theorem yields Pr(T ≤ E(T) − rn) ≤ e^{−r²n²/(2s)} = e^{−3r²/π²}.
For Statement 3, the upper tail bound, we argue similarly but have to determine when δ/s ≤ h. Note that h = min{pi} = 1/n. Hence, it suffices to determine when 6rn/(n²π²) ≤ 1/n, which is equivalent to r ≤ π²/6. Now the two cases of the upper tail bound follow by appropriately plugging δ/s or h in the first part of Theorem 2.
The stochastic process induced by RLS on OneMax equals the classical and
well-studied coupon collector problem (started with k full bins). Despite this fact,
the lower tail bound from Theorem 3 could not be found in the literature (see
also the comment introducing Theorem 1.24 in Doerr, 2011, which describes a
simple but weaker lower tail). There is an easy-to-prove upper tail bound for the
coupon collector of the kind Pr(T ≥ E(T ) + rn) ≤ e−r , which is stronger than
our result but not obvious to generalize. Finally, Scheideler (2000, Theorem 3.38)
suggests upper and lower tail bounds for sums of geometrically distributed random
variables, which could also be tried out in our example; however, it then turns out that these bounds are only useful if r = Ω(√(ln n)).
4 Conclusions
We have supplemented upper and lower tail bounds to the fitness-level method.
The lower tails are novel contributions and the upper tails improve an existing
result from the literature significantly. As a proof of concept, we have applied the
fitness levels with tail bounds to the analysis of RLS on OneMax and obtained a
very sharp concentration result.
If the stochastic process under consideration is allowed to skip fitness levels,
which is often the case with globally searching algorithms such as evolutionary
algorithms, our upper tail bound may become arbitrarily loose and the lower tail
is even unusable. To prove tail bounds in such cases, drift analysis may be used,
which is another powerful and in fact somewhat related method for the running
time analysis of randomized search heuristics. See, e. g., Lehre and Witt (2013)
and references therein for further reading.
Acknowledgement. The author thanks Per Kristian Lehre for useful discussions.
References
Auger, A. and B. Doerr (Eds.) (2011). Theory of Randomized Search Heuristics:
Foundations and Recent Developments. World Scientific Publishing.
Doerr, B. (2011). Analyzing randomized search heuristics: Tools from probability theory. In A. Auger and B. Doerr (Eds.), Theory of Randomized Search
Heuristics: Foundations and Recent Developments, Chapter 1. World Scientific
Publishing.
Jansen, T. (2013). Analyzing Evolutionary Algorithms - The Computer Science
Perspective. Natural Computing Series. Springer.
Lehre, P. K. and C. Witt (2013). General drift analysis with tail bounds. Technical
report, arXiv:1307.2559. http://arxiv.org/abs/1307.2559.
Miscellaneous authors (2011).
What is a tight lower bound on the
coupon collector time. http://stats.stackexchange.com/questions/7774/
what-is-a-tight-lower-bound-on-the-coupon-collector-time.
Motwani, R. and P. Raghavan (1995). Randomized algorithms. Cambridge University Press.
Neumann, F. and C. Witt (2010). Bioinspired Computation in Combinatorial Optimization – Algorithms and Their Computational Complexity. Natural Computing Series. Springer.
Scheideler, C. (2000). Probabilistic Methods for Coordination Problems, Volume 78
of HNI-Verlagsschriftenreihe. University of Paderborn. Habilitation thesis.
Available at: http://www.cs.jhu.edu/%7Escheideler/papers/habil.ps.gz.
Sudholt, D. (2013). A new method for lower bounds on the running time of evolutionary algorithms. IEEE Transactions on Evolutionary Computation 17 (3),
418–435.
Wegener, I. (2001). Theoretical aspects of evolutionary algorithms. In Proceedings
of the 28th International Colloquium on Automata, Languages and Programming
(ICALP 2001), Volume 2076 of Lecture Notes in Computer Science, pp. 64–78.
Springer.
Zhou, D., D. Luo, R. Lu, and Z. Han (2012). The use of tail inequalities on
the probable computational time of randomized search heuristics. Theoretical
Computer Science 436, 106–117.
Julien Berger
Pontifical Catholic University of Paraná, Brazil
Denys Dutykh
CNRS–LAMA, Université Savoie Mont Blanc, France
Nathan Mendes
Pontifical Catholic University of Paraná, Brazil
On the optimal experimental design for heat and moisture parameter estimation
arXiv:1610.03688v1 [physics.comp-ph] 12 Oct 2016
arXiv.org / hal
Last modified: October 13, 2016
On the optimal experimental design for heat
and moisture parameter estimation
Julien Berger∗ , Denys Dutykh, and Nathan Mendes
Abstract. In the context of estimating material properties of porous walls based on
in-site measurements and identification method, this paper presents the concept of Optimal Experiment Design (OED). It aims at searching the best experimental conditions in
terms of quantity and position of sensors and boundary conditions imposed to the material.
These optimal conditions ensure to provide the maximum accuracy of the identification
method and thus the estimated parameters. The search of the OED is done by using the
Fisher information matrix and a priori knowledge of the parameters. The methodology is
applied for two case studies. The first one deals with purely conductive heat transfer. The
concept of optimal experiment design is detailed and verified with 100 inverse problems
for different experiment designs. The second case study combines a strong coupling between heat and moisture transfer through a porous building material. The methodology
presented is based on a scientific formalism for efficient planning of experimental work
that can be extended to the optimal design of experiments related to other problems in
thermal and fluid sciences.
Key words and phrases: inverse problem; parameter estimation; Optimal Experiment
Design (OED); heat and moisture transfer; sensitivity functions
MSC: [2010] 35R30 (primary), 35K05, 80A20, 65M32 (secondary)
PACS: [2010] 44.05.+e (primary), 44.10.+i, 02.60.Cb, 02.70.Bf (secondary)
∗
Corresponding author.
Contents
1
Introduction
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2
Optimal Experiment Design for non-linear heat transfer problem . . . . . . . 5
2.1 Physical problem and mathematical formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Optimal experiment design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Numerical example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Estimation of one single parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Verification of the ODE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3
Optimal Experiment Design for a non-linear coupled heat and moisture transfer
problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1 Physical problem and mathematical formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Optimal experiment design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Numerical example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Estimation of one single parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Estimation of several parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
A
Hygrothermal properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
References
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1. Introduction
Heating or cooling strategies for buildings are commonly based on numerical building
physics mathematical models, which can be calibrated using on-site measurements for
estimating the properties of the materials constituting the walls, reducing the discrepancies
between model predictions and real performance.
Several experimental works at the scale of the wall can be reported from the literature.
Instrumented test cells, as ones presented in [8, 9, 13, 14, 18, 22, 25, 27] provide measured
dataset as temperature and relative humidity at different points in the wall for given
boundary conditions. Some experiments at the scale of the whole-building are described
in [7, 17]. These data can be used to estimate the properties (transport and capacity
coefficients) of the materials as reported for instance in [19, 29] for heat transfer and in
[23, 34] for coupled heat and moisture transfer.
The estimation of the unknown parameters P, e.g. wall thermophysical properties, based
on observed data and identification methods is illustrated in Figure 1. Observed data are
experimentally obtained. The latter is established by a design defining the boundary
and initial conditions of the material, the type of sensors as well as their quantity and
location. Thus, the accuracy of the estimated parameters P strongly depends on the
experiment design. The choice of the measurement devices, of the boundary and initial
conditions have consequences on the estimation of the parameter. Furthermore, due to
the correlation between the parameters, multiple local solutions of the estimation problem
exist. Hence, one can address the following questions: what are the best experimental
conditions to provide the best conditioning of the identification method? In particular,
how many sensors are necessary? Where are their best locations? What boundary and
initial conditions should be imposed? Can we really choose them?
These issues deal with searching the Optimal Experiment Design (OED) that enables
to identify the parameters P of a model with a maximum precision. A first objective of
the OED is to adjust the conditions of the experiment design in order to maximize the
sensitivity of the output field u to parameters P. A second objective is to find the optimum
location and quantity of sensors. The achievement of these objectives enables to determine
conditions of the experiments under which the identification of the parameters will have
the maximum accuracy.
The search of the OED is based on quantifying the amount of information contained by
the observed field uexp . For this, Fisher information matrix is used [1, 2, 10, 11, 15, 26,
31, 32], considering both the model sensitivity, the measurement devices and the parameter
correlation. The sensitivity of the output field u with respect to the model parameters P is
calculated, corresponding to the sensitivity of the cost function of the parameter estimation
problem. The higher the sensitivity is, the more information is available in the measurement
data and the more accurate is the identification of the parameters. Generally speaking, the
methodology of searching the ODE is important before starting any experiment aiming at
solving parameter estimation problems. It allows choosing with a deterministic approach
the conditions of the experiments. Furthermore, it provides information on the accuracy
of parameter estimation.
Several works can be reported on the investigation of OED [1–3, 10, 11, 15, 26, 31, 32].
Among them, in [3], the OED is planned as a function of the quantity of sensors and their
location, for the estimation of boundary heat flux of non-linear heat transfer. In [15], the
OED is searched for the estimation of transport coefficient in convection-diffusion equation.
In [20], the OED is analysed as a function of the boundary heat flux for the identification
of the radiative properties of the material. In [30], the OED is investigated for chemical
reactions using a Bayesian approach. However, the application of OED theory for nonlinear heat and moisture transfer with application for the estimation of the properties of a
wall is relatively rare.
This article presents the methodology of searching the OED for experiments aiming
at solving parameter estimation problems. In the first Section, the concept of OED is
detailed and verified for an inverse problem of non-linear heat transfer. The computation
of the model sensitivity to the parameter is specified. The OED is sought as a function
of the quantity and location of sensors as well as the amplitude and the frequency of the
heat flux at the material boundaries. Then the OED for the estimation of hygrothermal
properties considering non-linear heat and moisture transfer is investigated. Finally, some
main conclusions and perspectives are outlined in the last Section.
2. Optimal Experiment Design for non-linear heat transfer
problem
First, the methodology of searching the OED is detailed for a problem of non-linear heat
transfer. A brief numerical example is given to illustrate the results.
2.1. Physical problem and mathematical formulation
The physical problem considers an experiment of a one-dimensional transient conduction
heat transfer in domains x ∈ Ω = [0, 1] and t ∈ [0, τ ]. The initial temperature in the
body is supposed uniform. The surface of the boundary ΓD = {x = 1} is maintained at
the temperature uD . A time-harmonic heat flux q, of amplitude A and frequency ω, is
imposed at the surface of the body denoted by Γq = {x = 0}. Therefore, the mathematical
Figure 1. Process of searching the Optimal Experiment Design. (The flowchart links the model — boundary and initial conditions, direct problem, sensitivity problem — to the search of the OED through the Fisher information matrix and the D-optimum criterion Ψ, yielding the optimal design π◦; the experiments — sensors location X, quantity N, errors σ² — feed the parameter estimation problem and the identification methods, which return the estimated parameter P◦.)
formulation of the heat conduction problem can be written as:
c⋆ ∂u/∂t⋆ − ∂/∂x⋆ ( k⋆(u) ∂u/∂x⋆ ) = 0,   x⋆ ∈ Ω, t⋆ ∈ ]0, τ],   (2.1a)
u = uD,   x⋆ ∈ ΓD, t > 0,   (2.1b)
k⋆(u) ∂u/∂x⋆ = A⋆ sin(2πω⋆t⋆),   x⋆ ∈ Γq, t > 0,   (2.1c)
u = u0(x⋆),   x⋆ ∈ Ω, t⋆ = 0,   (2.1d)
k⋆(u) = k0⋆ + k1⋆ u,   (2.1e)
where the following dimensionless quantities are introduced:

x⋆ = x/L,  u = T/Tref,  uD = TD/Tref,  u0 = T0/Tref,  c⋆ = c/cref,  k0⋆ = k0/kref,  k1⋆ = k1 Tref/kref,  tref = cref L²/kref,  A⋆ = AL/(kref Tref),  ω⋆ = ω tref,
where T is the temperature, c the volumetric heat capacity, k0 the thermal conductivity
and k1 its dependency on temperature, L the linear dimension of the material, A the
intensity and ω the frequency of the flux imposed on surface Γq . Subscripts ref accounts
for a characteristic reference value, D for the Dirichlet boundary conditions, zero (0) for
the initial condition of the problem and superscript ⋆ for dimensionless parameters.
The problem given by Eqs. (2.1)(a-e) is a direct problem when all the thermophysical
properties, initial and boundary conditions, as well as the body geometry are known. Here,
we focus on the inverse problem consisting in estimating one or more parameters of the
material properties (as c, k0 and/or k1 ) using the mathematical model Eqs. (2.1)(a-e)
and a measured temperature data uexp obtained by N sensors placed in the material at
X = [xn ] , n ∈ {1, . . . , N}. The M unknown parameters are here denoted as vector P =
[pm], m ∈ {1, . . . , M}. Here, the inverse problems are all non-linear and over-determined; they are solved by considering a least-squares single-objective function:

J[P] = ||uexp − T(u(x, t, P))||²,   (2.2)
where u is the solution of the transient heat conduction problem Eqs. (2.1)(a-e). uexp
are the data obtained by experiments. They are obtained by N sensors providing a time
discrete measure of the field at specified points within the material. T is the operator
allowing to compare the solution u at the same space-time points where observations uexp
are taken. The cost function can be defined in its matrix form, using the ordinary least
squares norm, as:
J[P] = ( Uexp − U(P) )⊤ ( Uexp − U(P) ),
where U is the vector containing the discrete point of the field u for the discrete set of
space-time points obtained by the experiments.
For this study, several experiments are distinguished. Ones aim at estimating one parameter: m = 1 and P = c, P = k0 or P = k1 . Others aim at estimating a group of m = 2
or m = 3 parameters. In this work, the optimal experiment design will be investigated for
both cases.
2.2. Optimal experiment design
Efficient computational algorithms for recovering parameters P have already been proposed. Readers may refer to [21] for a primary overview of different methods. They are
based on the minimisation of the cost function J [P]. For this, it is required to equate to
zero the derivatives of J [P] with respect to each of the unknown parameters pm . Associated to this necessary condition for the minimisation of J [P], the scaled dimensionless
local sensitivity function [12] is introduced:
Θm(x, t) = (σp/σu) ∂u/∂pm,   ∀m ∈ {1, . . . , M},   (2.3)
where σu is the variance of the error measuring uexp . The parameter scaling factor σp equals
1 as we consider that prior information on parameter pm has low accuracy. It is important
to note that all the algorithm have been developed considering the dimensionless problem
in order to compare only the order of variation of parameters and observation (and thus
avoid units and scales effects).
The sensitivity function Θm measures the sensitivity of the estimated field u with respect
to change in the parameter pm [3, 20, 21]. A small value of the magnitude of Θm indicates
that large changes in pm yield small changes in u. The estimation of parameter pm is
therefore difficult in such case. When the sensitivity coefficient Θm is small, the inverse
problem is ill-conditioned. If the sensitivity coefficients are linearly dependent, the inverse
problem is also ill-conditioned. Therefore, to get an optimal evaluation of parameters P,
it is desirable to have linearly-independent sensitivity functions Θm with large magnitude
for all parameters pm . These conditions ensure the best conditioning of the computational
algorithm to solve the inverse problem and thus the better accuracy of the estimated
parameter.
It is possible to define the experimental design in order to reach these conditions. The
issue is to find the optimal quantity of sensors N ◦ , their optimal location X◦ , the optimal
amplitude A◦ and the optimal frequency ω ◦ of the flux imposed at the surface Γq . To
search this optimal experiment design, we introduce the following measurement plan:
π = {N, X, A, ω}.   (2.4)
In analysis of optimal experiment for estimating the unknown parameter(s) P, a quality
index describing the accuracy of recovering is the D−optimum criterion [1–4, 10, 11, 15,
20, 26, 31, 32]:
Ψ = det [ F(π) ],   (2.5)
where F (π) is the normalized Fisher information matrix [15, 31]:
F(π) = [Φij],   ∀(i, j) ∈ {1, . . . , M}²,   (2.6a)
Φij = ∑_{n=1}^{N} ∫_0^τ Θi(xn, t) Θj(xn, t) dt.   (2.6b)
The matrix F (π) characterises the total sensitivity of the system as a function of measurement plan π Eqs. (2.4). The search of the OED aims at finding a measurement plan
π◦ for which the objective function Eq. (2.5) reaches the maximum value:

π◦ = {N◦, X◦, A◦, ω◦} = arg max_π Ψ.   (2.7)
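A minimal Python/NumPy sketch of Eqs. (2.5)–(2.7) is given below: it assembles the Fisher information matrix from sampled sensitivity functions Θm(xn, t) and evaluates the D-optimum criterion for a candidate measurement plan. The sensitivity values would normally come from solving the sensitivity problem; here the array theta is a synthetic placeholder, so all numerical values are illustrative assumptions.

import numpy as np

def fisher_matrix(theta, dt):
    """theta: array (M, N, T) with Theta_m(x_n, t_k) on a uniform time grid of step dt."""
    M = theta.shape[0]
    F = np.empty((M, M))
    for i in range(M):
        for j in range(M):
            # Phi_ij = sum_n integral_0^tau Theta_i(x_n, t) Theta_j(x_n, t) dt (rectangle rule)
            F[i, j] = np.sum(theta[i] * theta[j]) * dt
    return F

def d_criterion(theta, dt):
    return np.linalg.det(fisher_matrix(theta, dt))

# Toy illustration with made-up sensitivities: M = 2 parameters, N = 1 sensor.
t = np.linspace(0.0, 28.3, 500)
theta = np.stack([np.sin(t)[None, :], np.cos(t)[None, :]])   # shape (2, 1, 500)
print("D-optimum criterion:", d_criterion(theta, t[1] - t[0]))

# The OED (2.7) is then the candidate plan maximizing this criterion, e.g.
# best_plan = max(candidate_plans, key=lambda plan: d_criterion(theta_of(plan), dt)).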
To solve problem (2.7), a domain of variation Ω π is considered for the quantity of sensors
N, their location X, the amplitude A and the frequency of the flux. Then, the following
steps are done for each value of the measurement plan π = {N, X, A, ω} in domain Ω π .
The direct problem defined by Eqs. (2.1)(a-e) is computed. In this work, it is solved using a standard finite-difference discretization: an embedded, adaptive-in-time Runge–Kutta scheme combined with a central spatial discretization. It is adaptive and embedded in order to estimate the local error with little extra cost.
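The sketch below (not the authors' code) illustrates this solution strategy for the direct problem (2.1) in Python: central finite differences in space (method of lines) and SciPy's embedded adaptive Runge–Kutta integrator in time. The dimensionless parameter values follow Section 2.3, while the grid size, the flux sign convention and the illustrative frequency are assumptions.

import numpy as np
from scipy.integrate import solve_ivp

c, k0, k1 = 10.9, 1.0, 0.12            # c*, k0*, k1*
A, omega, tau = 1.0, 1.0 / 20.4, 28.3  # illustrative amplitude, frequency, final time
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]

def rhs(t, u):
    k = k0 + k1 * u                    # k*(u)
    dudt = np.empty_like(u)
    # interior nodes: d/dx(k du/dx) with central differences
    k_r = 0.5 * (k[1:-1] + k[2:])
    k_l = 0.5 * (k[:-2] + k[1:-1])
    dudt[1:-1] = (k_r * (u[2:] - u[1:-1]) - k_l * (u[1:-1] - u[:-2])) / dx**2 / c
    # x = 0: imposed flux k du/dx = A sin(2 pi omega t), handled with a ghost node
    flux = A * np.sin(2.0 * np.pi * omega * t)
    u_ghost = u[1] - 2.0 * dx * flux / k[0]
    dudt[0] = (0.5 * (k[0] + k[1]) * (u[1] - u[0]) - k[0] * (u[0] - u_ghost)) / dx**2 / c
    # x = 1: Dirichlet condition u = uD (kept fixed)
    dudt[-1] = 0.0
    return dudt

u0 = np.ones(nx)                        # uniform initial condition u = 1
sol = solve_ivp(rhs, (0.0, tau), u0, method="RK45", rtol=1e-6, atol=1e-8)
print("temperature at x = 0 at final time:", sol.y[0, -1])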
Given the solution u of the direct problem (2.1)(a-e) for a fixed value of the measurement plan, the next step consists in computing Θm = ∂u/∂pm by solving the sensitivity problem associated to parameter pm:
c ∂Θm/∂t − ∂/∂x ( k ∂Θm/∂x ) = − ∂c/∂pm ∂u/∂t + ∂/∂x ( ∂k/∂pm ∂u/∂x ),   x ∈ Ω, t > 0,   (2.8a)
Θm = 0,   x ∈ ΓD, t > 0,   (2.8b)
k ∂Θm/∂x = − ∂k/∂pm ∂u/∂x,   x ∈ Γq, t > 0,   (2.8c)
Θm = 0,   x ∈ Ω, t = 0,   (2.8d)
k = k0 + k1 u.   (2.8e)
The sensitivity problem Eqs. (2.8) is also solved using an embedded adaptive in time
Runge–Kutta scheme combined with central spatial discretization.
It is important to note that the solution of the direct problem (2.1) and of the sensitivity problem (2.8)(a-e) is computed for a given parameter P. The latter is chosen using the prior information, and the validity of the OED depends on this a priori knowledge. If there is no prior information, the OED methodology can be carried out with an outer loop on the parameter P, sampled using Latin hypercube or Halton quasi-random samplings for instance.
Then, given the sensitivity coefficients, the Fisher matrix (2.6)(a,b) and the D−optimum
criterion (2.5) are calculated.
2.3. Numerical example
The current section illustrates the search of the optimal experiment design for problem Eqs.
(2.1) considering u0 = uD = 1. The dimension-less properties of the material are equal
to c⋆ = 10.9, k0⋆ = 1, k1⋆ = 0.12. The final simulation time of the experiment is fixed to
τ = 28.3.
From a physical point of view, the numerical values correspond to a material of length
Lr = 0.1 m. The thermal properties were chosen from a wood fibre: c = 3.92 · 105 J/m3 /K,
k0 = 0.118 W/m/K and k1 = 5 · 10−4 W/m/K2 . The initial temperature of the material
is considered uniform at T0 = Tref = 293.15 K. The temperature at the boundary ΓD is
maintained at TD = Tref = 293.15 K. The characteristic time of the problem is tref = 3050
10 / 31
J. Berger, D. Dutykh & N. Mendes
(a)
(b)
Figure 2. Schematic view of experimental design (a) and the extreme values of
the heat flux (b).
s. Thus, the total time of the experiments corresponds to 24 h. A schematic view of the
problem is given in Figure 2a.
As mentioned in previous section, the OED is sought as a function of the quantity of
sensors N, their location X, the amplitude A and frequency ω of the flux imposed at the
surface Γq . For the current numerical application, we consider a maximum of N = 4 possible sensors. Their location varies from X = [0] for N = 1 and X = [0 0.25 0.5 0.75] for
N = 4 as shown in Figure 2a. The variance of the error measurement equals σT = 0.05◦C.
For the amplitude A, 5 values are considered in the interval [0.2, 1] with a regular mesh
of 0.2. The minimal value of A corresponds to a physical heat flux of 70 W/m2 . For the
frequency, 30 values have been taken in the interval [10−3, 1]. The extreme values of the
heat flux are illustrated in Figure 2b.
2.3.1 Estimation of one single parameter
Considering previous description of the numerical example, we first focus on the search
of the OED for designing an experiment aiming at estimating one parameter. Thus, P
equals to c, k0 or k1 . Figures 3a, 3b and 3c show the variation of the criterion Ψ as a
function of the amplitude A and the frequency ω of the heat flux. For each of the three
experiments, the criterion is maximum for a low frequency and for a large amplitude as
presented in Figure 6. Numerical values of the OED π ◦ are given in Table 1. In terms of
physical numerical values, the OED is reached for a heat flux of amplitude 350 W/m2 and
a period of 17.3 h, 60.6h and 53.5h for estimating parameter c, k0 or k1 , respectively.
The effect of the numbers of sensors and their positions is given in Figures 4a, 4b and
4c for a fixed amplitude. The criterion increases with the number N of sensor. Adding
new sensors yields to a more optimum design. However, it can be noticed the slope of the
increase has a small magnitude. For a single sensor N = 1, almost 95% of the maximal
criterion is reached. Therefore, if the amount of sensors for the experiments is limited, just
one sensor located at the boundary receiving the heat flux is a reasonable choice, enabling
to recover one parameter (c, k0 or k1 ). Indeed, the boundary Γq is where the sensitivity
coefficient of the parameter has the largest magnitude as shown in Figures 5a, 5b and 5c
correspondingly.
It can be noticed in Figure 5d that the sensitivity coefficients on the surface Γq ≡ {x = 0}
are linearly-independent. Therefore, the inverse problem will not be very sensitive to
measurement errors which enables an accurate estimation of the parameters P = (c, k0 , k1 ).
The OED is based on the solution of the direct problem 2.1 and the sensitivity problem 2.8, computed for a given value of the parameter P. The a-priori knowledge on
parameter P is crucial for the success of the OED. For some applications, there is no prior
information on the parameter. For these cases, it is possible to operate an outer loop on
the parameter P sampled in a given domain. For this numerical example, it is considered
that there is no prior information on parameters k0 and k1 . It is only known their values belong to the domains [0.1, 0.2] W/m/K and [1 10] 10−4 W/m/K2 . The a priori value of the
volumetric heat capacity is still fixed to c = 3.92 · 105 J/m3 /K. A Halton quasi-random
algorithm [16] is then used to generate a sampling of 100 couples of parameters (k0 , k1 )
in the domains, as illustrated in Figure 7a. For each couple (k0 , k1 ), the methodology described in Section 2.2 is performed to compute the OED. Figure 7b gives the variation of
optimal frequency ω ◦ with parameter k0 and k1 . The blue points correspond to the results
of the OED method for each couple of the discrete sampling. A surface of variation of
ω ◦ is then interpolated. It can be noted that the increase of ω ◦ is more important with
parameter k1 .
2.3.2 Verification of the OED
In order to illustrate the robustness of the OED, several inverse problems are solved, considering different measurement designs. We consider 30 inverse problems (2.1) of parameter
P associated to the 30 values of the frequency ω in the interval [10−3, 1]. For a fixed value
of ω, N e = 100 experiments were designed to recover the parameters P equals to c, k0
or k1 and to P = (c, k0, k1 ). For this purpose, the simulated discrete time observation is
obtained by numerical resolution of the direct problem and a uniform sampling with time
period ∆t = 10 min. A normal distribution with a standard deviation of σ = 0.01 was
assumed. The inverse problem is solved using Levenberg-Marquardt method [6, 21, 33].
After the solution of the N e × 30 inverse problems, the empirical mean square error is
computed:
EMSE(ω) = [ (1/Ne) ∑ (P − P◦(ω))² ]^{1/2},   (2.9)
Ne
12 / 31
J. Berger, D. Dutykh & N. Mendes
(a) IP(c)
(b) IP(k0 )
(c) IP(k1 )
(d) IP((c, k0 , k1 ))
Figure 3. D−optimum criterion Ψ as a function of A and ω for the 4 different
experiments (N = N ◦ = 1).
where P ◦ is the estimated parameter by the resolution of the inverse problem.
The procedure is then repeated for the quantity N and location X of the sensors of the
measurement plans and a fixed value of the frequency ω = ω ◦ . The empirical error is
computed with equation (2.9) as a function of N (as P now depends on N). This approach
is also applied for parameters P equals to c, k0 or k1 and P = (c, k0 , k1 ) .
The evolution of the empirical mean square error is illustrated in Figures 8a and 8b for
the parameter estimation problem corresponding to the four experiments.
Figure 4. D-optimum criterion Ψ as a function of N, X and ω for the 4 different experiments (A = A° = 1): (a) IP(c), (b) IP(k0), (c) IP(k1), (d) IP(c, k0, k1).
In Figure 8a, the variation of the error with the number of sensors is smooth for the
experiments estimating a single parameter. However, for the estimation of several
parameters P = (c, k0, k1), the EMSE is below 10−2 for three sensors. These results are
in accordance with the variation of the criterion Ψ in Figure 5d. An important variation
of the magnitude of Ψ can be observed when passing from X = 0 to X = [0 0.25].
In Figure 8b, the error is minimised for the optimum experiment design ω ◦ for the four
experiments. The estimation of the parameter tends to be inaccurate when the frequency
Figure 5. Sensitivity coefficient of parameters for the four different experiments (N = N° = 1): (a) IP(c), (b) IP(k0), (c) IP(k1), (d) IP(c, k0, k1).
tends to zero, as the heat flux then tends to zero and there is almost no excitation of the
material.
For the experiments estimating the parameter P = c, the error variation is smooth.
As illustrated in Figure 8b, the peak of the criterion Ψ has a large deviation. Therefore,
if the frequency ω of the experiments differs from that of the OED, ω°, the error in
estimating the parameter might still be acceptable. For the other experiments, we can
notice that the peak of the criterion Ψ has a small deviation in Figures 5b, 5c and 5d.
Figure 6. Optimal heat flux for the four different experiments (N = 1).
Figure 7. Halton quasi-random sampling of parameters (k0, k1) (a) and frequency optimal experiment design ω° (b).
Unknown parameter | max(Ψ)     | A° (-) | A° (W/m²) | 1/ω° (-) | 1/ω° (h) | N° | X°
P = c             | 1.4 × 10−2 | 1      | 350       | 20.4     | 17.3     | 1  | 0
P = k0            | 8.6        | 1      | 350       | 71.6     | 60.6     | 1  | 0
P = k1            | 3.23       | 1      | 350       | 63.2     | 53.5     | 1  | 0
P = (c, k0, k1)   | 9.1 × 10−3 | 1      | 350       | 29.7     | 25.2     | 3  | 0
Table 1. Value of the maximum D-optimum criterion for the four different experiments (optimal experimental design π°).
Figure 8. Evolution of the empirical mean square error with the number of sensors (a) and with the frequency ω (b).
The peak of the criterion Ψ has a small deviation in Figures 5b, 5c and 5d. Consequently,
the variation of the error in Figure 8b is larger when estimating the parameters P = k0,
P = k1 or P = (c, k0, k1). If the frequency of the experiment design differs from
that of the OED, it implies a loss of accuracy in the estimation of these parameters.
3. Optimal Experiment Design for a non-linear coupled
heat and moisture transfer problem
In the previous section, the concept of optimal experiment design was detailed for a
non-linear heat transfer problem. It has also been verified for 100 inverse problems for
different experiment designs by calculating the error as a function of the frequency and the
number of sensors of the measurement plan. In this section, the approach goes further
by studying the optimal experiment design to estimate the transport and storage coefficients of a
coupled heat and moisture transfer problem.
3.1. Physical problem and mathematical formulation
The physical problem concerns a 1-dimensional coupled heat and moisture transfer
through a wall based on [5, 23, 24, 28]:
c10 ∂T/∂t − ∂/∂x( d1 ∂T/∂x ) − (Lv − cl T) ∂/∂x( d2 ∂Pv/∂x ) = 0 ,    (3.1a)

∂w/∂t − ∂/∂x( d2 ∂Pv/∂x ) = 0 ,    (3.1b)
with w the water content, Pv the vapour pressure, T the temperature, d2 the vapour permeability, c10 the volumetric thermal capacity, d1 the thermal conductivity, cl the specific
heat capacity of water and Lv the latent heat of evaporation. As this study remains in the
hygroscopic range of the material, the liquid transport is not presently taken into account.
The following assumptions are adopted on the properties of the material. The volumetric
moisture content is assumed as a first-degree polynomial of the vapour pressure. The
vapour permeability and the thermal conductivity are taken as a first-degree polynomial
of the volumetric moisture content:
w/w0 = c20 + (c21/w0) Pv ,    (3.2a)

d2 = d20 + d21 (w/w0) ,    (3.2b)

d1 = d10 + d11 (w/w0) .    (3.2c)
Based on these equations, the experimental set-up considers a material with uniform
initial temperature and vapour pressure. At t > 0, sinusoidal heat and vapour fluxes are
imposed at the boundary Γq = {x = 0}, while the temperature and vapour pressure are maintained constant at the other boundary ΓD = {x = 1}. The dimensionless problem can be
formulated as:
c⋆10 ∂u/∂t⋆ − ∂/∂x⋆( d⋆1 ∂u/∂x⋆ ) − (Ko1 − Ko2) Lu ∂/∂x⋆( d⋆2 ∂v/∂x⋆ ) = 0 ,   x⋆ ∈ Ω, t⋆ ∈ ]0, τ]    (3.3a)

c⋆21 ∂v/∂t⋆ − Lu ∂/∂x⋆( d⋆2 ∂v/∂x⋆ ) = 0 ,   x⋆ ∈ Ω, t⋆ ∈ ]0, τ]    (3.3b)

− d⋆1 ∂u/∂x⋆ = A⋆1 sin(2π ω1⋆ t⋆) ,   x⋆ ∈ Γq, t⋆ ∈ ]0, τ]    (3.3c)

− d⋆2 ∂v/∂x⋆ = A⋆2 sin(2π ω2⋆ t⋆) ,   x⋆ ∈ Γq, t⋆ ∈ ]0, τ]    (3.3d)

u = uD ,  v = vD ,   x⋆ ∈ ΓD, t⋆ ∈ ]0, τ]    (3.3e)

u = u0(x) ,  v = v0(x) ,   x⋆ ∈ Ω, t⋆ = 0    (3.3f)

d⋆1 = d⋆10 + d⋆11 (c⋆20 + c⋆21 v)    (3.3g)

d⋆2 = d⋆20 + d⋆21 (c⋆20 + c⋆21 v)    (3.3h)
with the following dimension-less ratios:
u = T/Tref ,  v = Pv/Pref ,  u0 = T0/Tref ,  uD = TD/Tref ,  vD = Pv,D/Pref ,  v0 = Pv,0/Pref ,

Ko1 = Lv c2,ref Pref / (c1,ref Tref) ,  Ko2 = cL c2,ref Pref / c1,ref ,  Lu = d2,ref c1,ref / (c2,ref d1,ref) ,

d⋆10 = d10/d1,ref ,  d⋆11 = d11/d1,ref ,  d⋆20 = d20/d2,ref ,  d⋆21 = d21/d2,ref ,

c⋆10 = c10/c1,ref ,  c⋆21 = c21/c2,ref ,  c2,ref = w0/Pref ,

A⋆1 = A1 L / (d1,ref Tref) ,  A⋆2 = A2 L / (d2,ref Pref) ,

ω1⋆ = ω1 tref ,  ω2⋆ = ω2 tref ,  t⋆ = t/tref ,  x⋆ = x/L ,  tref = c1,ref L² / d1,ref ,
where Ko is the Kossovitch number, Lu stands for the Luikov number, L is the dimension of the material, tref the characteristic time of the problem, and A and ω the amplitude
and frequency of the heat and vapour fluxes. The subscript ref accounts for a reference value,
D for the Dirichlet boundary conditions, 0 for the initial condition of the problem, 1 for
the heat transfer, 2 for the vapour transfer and superscript ⋆ for dimensionless parameters.
3.2. Optimal experiment design
The OED is sought as a function of the quantity of sensors N and their locations X,
and as a function of the frequencies (ω1, ω2) of the heat and vapour fluxes. According
to the results of Section 2.3 and to our numerical investigations, a monotonic increase
of the sensitivity of the system was observed with the amplitudes (A1, A2) of the fluxes.
Therefore, these parameters were considered as fixed. Thus, the OED aims at finding the
measurement plan π° for which the criterion Eq. (2.5) reaches a maximum value:
π° = {N°, X°, ω1°, ω2°} = arg max_π Ψ .    (3.4)
Parameters Lv , cL are physical constants given for the problem. Therefore, considering
Eqs. (3.3), a number of 7 parameters can be estimated by the resolution of inverse problems:
(c⋆10 , d⋆10 , d⋆11 , c⋆20 , c⋆21 , d⋆20 , d⋆21 ). One can focus on the definition of an experiment for the
estimation of one single parameter or several parameters. It might be noted that parameters
(c⋆20 , c⋆21 , d⋆20 , d⋆21 ) can be identified by inverse problems considering field u, v or both (u, v)
as observation. The thermal properties (c⋆10 , d⋆10 , d⋆11 ) can only be estimated using the
observation of u.
All in all, 20 experiments can be defined as:
i. 15 for the estimation of single parameters among c⋆10, d⋆10, d⋆11, c⋆20, c⋆21, d⋆20 or d⋆21,
ii. 1 for the estimation of the thermal properties (c⋆10, d⋆10, d⋆11),
iii. 3 for the estimation of the moisture properties (c⋆20, c⋆21, d⋆20, d⋆21),
iv. 1 for the estimation of the hygrothermal properties (hg) (c⋆10, d⋆10, d⋆11, c⋆20, c⋆21, d⋆20, d⋆21).
The following notation is adopted: IP(p)[u] stands for an experiment defined for the estimation
of parameter p using the field u as observation. The 20 experiments are recalled in Table 2.
The same methodology as presented in Section 2 is used. The fifteen sensitivity functions
are computed for calculating the criterion (3.4).
3.3. Numerical example
The following numerical values are considered. The domain Ω
is defined as Ω = [0, 1], considering the wall thickness of the material as the characteristic
length of the problem, L = 0.1 m. The total simulation time of the experiments is τ =
6 × 10³, corresponding to a physical simulation of 40 days. The initial and prescribed
conditions are uD = u0 = 1 and vD = v0 = 0.5. The reference temperature and
vapour pressure are taken as Tref = 293.15 K and Pref = 2337 Pa, respectively. The
amplitudes of the heat and vapour fluxes are A⋆1 = 1.7 × 10−2 and A⋆2 = 1.7, equivalent to
600 W/m² and 1.2 × 10−7 kg/m³/s.
The dimension-less parameters are:
Lu = 2.5 × 10−4 Ko1 = 2.1 × 10−1 Ko2 = 2.5 × 10−2 d⋆10 = 5 × 10−2 d⋆11 = 5 × 10−3
d⋆20 = 1
d⋆21 = 0.4
c⋆10 = 1
c⋆20 = 2
c⋆21 = 6
The properties correspond to a wood fibre material [5, 23]. They are given in their physical
dimensions in Appendix A.
The OED is sought as a function of the number of sensors N. It varies from N = 1,
located at X = [0], to N = 3, located at X = [0 0.2 0.4]. The standard deviations of the
measurement error are σT = 0.05 °C and σP = 2 Pa. The OED is also investigated as a
function of the frequencies (ω1⋆, ω2⋆) of the fluxes. For each frequency, 20 values are taken in
the interval [1 × 10−5; 1.5 × 10−3]. The minimal and maximal values correspond to a flux
having a physical period of 495 days and 3.3 days, respectively.
As mentioned in Section 2.2, the computation of the solution of the optimal experiment
plan is done by successive iterations for the whole grid of the measurement plan π =
{N, X, ω1 , ω2 } .
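The grid iteration can be sketched as follows; the function sensitivities below is only a placeholder for the numerical solution of the sensitivity problems, and the normalized determinant of the Fisher matrix stands for the D-optimum criterion of Eq. (2.5) (values, shapes and grid are illustrative, not the authors' implementation):

```python
import numpy as np

def sensitivities(sensors_x, omega1, omega2, n_t=200):
    # Placeholder for the scaled sensitivity coefficients (n_obs x n_params);
    # in the actual method they are obtained by solving the sensitivity problems.
    t = np.linspace(0.0, 6.0e3, n_t)   # dimensionless simulation time tau = 6e3
    blocks = [np.column_stack([np.sin(2 * np.pi * omega1 * t) * np.exp(-x),
                               np.cos(2 * np.pi * omega1 * t) * np.exp(-x),
                               np.sin(2 * np.pi * omega2 * t) * np.exp(-x)])
              for x in sensors_x]
    return np.vstack(blocks)

def d_optimum(sensors_x, omega1, omega2, sigma=1.0):
    S = sensitivities(sensors_x, omega1, omega2)
    fisher = S.T @ S / sigma**2                 # Fisher information matrix
    return np.linalg.det(fisher / S.shape[0])   # normalized D-optimum criterion

sensor_sets = {1: [0.0], 2: [0.0, 0.2], 3: [0.0, 0.2, 0.4]}
frequencies = np.linspace(1e-5, 1.5e-3, 20)

best_plan = max(((N, w1, w2) for N in sensor_sets
                 for w1 in frequencies for w2 in frequencies),
                key=lambda p: d_optimum(sensor_sets[p[0]], p[1], p[2]))
print("optimal plan (N, w1, w2):", best_plan)
```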
3.3.1 Estimation of one single parameter
In the current section, the OED is sought for experiments to estimate one single parameter among c⋆10, d⋆10, d⋆11, c⋆20, c⋆21, d⋆20 or d⋆21. The results of the OED are given in Table 2
for the physical values and in Figure 11b for the dimensionless values.
The criterion Ψ varies strongly with the frequencies (ω1⋆, ω2⋆) for estimating parameter
c⋆10, as shown in Figure 9a. The OED is reached for a period of 27.2 days for both the heat
and vapour fluxes.
On the other hand, Figures 9b and 9c illustrate that the criterion varies mostly with the
frequency ω1⋆ when estimating parameters d⋆10 and d⋆11. The OED is reached for a heat flux period
of 78.1 days. Furthermore, the magnitude of Ψ is well above zero, ensuring a
good conditioning for solving the inverse problems. As observed in the previous section concerning
a non-linear heat transfer problem (Section 2.3), the OED period of the heat flux is shorter
for the thermal capacity than for the thermal conductivity.
In Figure 12, the sensitivity function of parameter d⋆10 is given for experimental conditions π where Ψ reaches its minimal value and for the OED conditions (where Ψ reaches
its maximal value). In Figure 12a, the magnitude of the sensitivity function is almost 50
times smaller than the one for the OED conditions (Figure 12b). Therefore, the estimation
of the parameters might be less accurate for these conditions than for the other ones. In Figure 12b, it can also be noticed that the sensitivity function is maximal at the boundary
Γq = {x = 0}. This emphasizes why the criterion Ψ is maximal for a single sensor placed
at this boundary.
For experiments estimating the vapour properties, c⋆20, c⋆21, d⋆20 or d⋆21, the OED is not
very sensitive to the frequency of the heat flux, as reported in Figures 9d, 9e, 9f and 10a.
It can be noted that the criterion Ψ is higher when dealing with experiments considering
fields (u, v) as observations. The computational algorithm to solve the inverse problem is then
better conditioned. The period of the OED vapour flux is 9.5 and 12.3 days for experiments
estimating d⋆20 and c⋆21, respectively. Experiments for parameters d⋆21 and c⋆20 have the same
OED vapour flux period (27.2 days).
It might be recalled that this analysis has been done for a fixed and constant measurement error, equal for the temperature and vapour pressure sensors. Indeed, this hypothesis
can be revisited in practical applications. Furthermore, if only one field is available to
estimate the vapour properties (u or v), it is required to use the field v as observation and
to prioritize the accuracy of those sensors. The criterion Ψ and the sensitivity are highest for
the field v, as shown in Figures 13a and 13b.
For all experiments, a single sensor located at x = 0 is sufficient to reach more than 95%
of the maximum criterion, as given in Table 2. The surface receiving the heat and vapour
fluxes is where the sensitivity of the parameters is the highest, as illustrated in Figures 13a
and 13b for the parameter c21.
3.3.2 Estimation of several parameters
The optimal experiment design is now sought for experiments to estimate several parameters: the thermophysical properties (c⋆10 , d⋆10 , d⋆11 ) , the moisture properties (c⋆20 , c⋆21 , d⋆20 , d⋆21 )
and the hygrothermal (hg) coefficients (c⋆10 , d⋆10 , d⋆11 , c⋆20 , c⋆21 , d⋆20 , d⋆21 ) . Five experiments
are considered for the estimation of these parameters as reported in Table 2.
For the estimation of the hygrothermal properties, Figure 10b shows that the criterion Ψ
varies mostly with the frequency ω1⋆. The criterion is very close to zero (of the order of 10−9).
The computational algorithm for the solution of the inverse problem might therefore be ill-conditioned, and the results of the inverse problem might not be accurate. A way to circumvent
the problem is to increase the precision of the sensors and the amplitude of the heat and
vapour fluxes.
According to Table 2, the moisture properties might be estimated using both fields (u, v) .
If it is not possible, the field v would give a more accurate estimation than field u. The
criterion varies mostly with the frequency ω1⋆ of the heat flux (Figure 10c). The OED is
reached for a period of 35.4 and 16 days for the heat and vapour fluxes, respectively.
For the thermal properties, the criterion varies with both heat and vapour flux frequencies, as reported in Figure 10d. A period of 16 days for both fluxes yields the OED.
The variation of the criterion with the quantity of sensors is given in Figure 14. As
expected, the criterion increases with the number of sensors for all experiments. For the
estimation of the hygrothermal properties, the OED is achieved for 3 sensors. As the vapour
properties might be estimated using both fields (u, v), the use of two sensors, located at
x = 0 and x = 0.2 m for each field, is a reasonable choice that enables reaching more
than 95% of the criterion Ψ. If, for any reason, the measurement of both fields is
not possible, three sensors measuring the field v are required for the OED. For the thermal
properties, the use of only two sensors is reasonable, as 95% of the maximum criterion is
reached. These results are synthesised in Table 2.
Unknown parameter            | max{Ψ}     | 2π/ω1° (days) | 2π/ω2° (days) | N°
IP(c10)[u]                   | 6.03       | 27.2          | 27.2          | 1
IP(d10)[u]                   | 288        | 78.1          | 20.9          | 1
IP(d11)[u]                   | 181        | 78.1          | 101.7         | 1
IP(d20)[u]                   | 0.53       | 7.3           | 7.3           | 1
IP(d20)[v]                   | 0.99       | 60            | 9.5           | 1
IP(d20)[u, v]                | 1.53       | 9.5           | 9.5           | 1
IP(d21)[u]                   | 133        | 78.1          | 27.2          | 1
IP(d21)[v]                   | 140        | 9.5           | 27.2          | 1
IP(d21)[u, v]                | 276        | 78.1          | 27.2          | 1
IP(c20)[u]                   | 0.02       | 20.9          | 20.9          | 1
IP(c20)[v]                   | 0.03       | 9.5           | 27.2          | 1
IP(c20)[u, v]                | 0.05       | 27.2          | 27.2          | 1
IP(c21)[u]                   | 0.004      | 35.4          | 20.9          | 1
IP(c21)[v]                   | 0.014      | 78.1          | 12.3          | 1
IP(c21)[u, v]                | 0.017      | 78.1          | 12.3          | 1
IP(hg)[u, v]                 | 4.5 × 10−9 | 27.2          | 12.3          | 3
IP(c20, c21, d20, d21)[u]    | 0.001      | 60.0          | 20.9          | 3
IP(c20, c21, d20, d21)[v]    | 0.15       | 12.3          | 27.2          | 3
IP(c20, c21, d20, d21)[u, v] | 0.2        | 35.4          | 16            | 2
IP(c10, d10, d11)[u]         | 137        | 16            | 16            | 2
Table 2. Value of the maximum D-optimum criterion for each experiment (optimal experimental design π°).
Figure 9. D-optimum criterion Ψ as a function of the frequencies (ω1, ω2): (a) IP(c10)[u], (b) IP(d10)[u], (c) IP(d11)[u], (d) IP(d20)[u,v], (e) IP(d21)[u,v], (f) IP(c20)[u,v] (N = 1).
Figure 10. D-optimum criterion Ψ as a function of the frequencies (ω1, ω2): (a) IP(c21)[u,v] (N = 1), (b) IP(hg)[u,v] (N = 2), (c) IP(c20, c21, d20, d21)[u,v] (N = 1), (d) IP(c10, d10, d11)[u] (N = 1).
4. Conclusions
In the context of estimating material properties using in-situ measurements of walls in
test cells or real buildings, combined with identification methods, this study explored the
concept of optimal experiment design (OED). It aimed at searching for the best experimental
conditions in terms of quantity and location of sensors and of the flux imposed on the material.
These conditions ensure the best accuracy of the identification method and thus of the estimated parameters.
Figure 11. Frequencies (ω1⋆°, ω2⋆°) of the OED.
Figure 12. Sensitivity coefficient of parameter k0 for experimental conditions π where Ψ reaches its minimal value (a) (2π/ω1 = 3.3 days, 2π/ω2 = 495 days, N = 3) and for the OED conditions (b) (2π/ω1° = 78.1 days, 2π/ω2° = 20.9 days, N° = 1).
The search for the OED was done using the Fisher information
matrix, which quantifies the amount of information contained in the observed field.
Figure 13. Sensitivity coefficient of parameter c21 for the OED conditions of IP(c21)[u, v].
Figure 14. D−optimum criterion Ψ as a function of the quantity of sensors N .
Two cases were illustrated. The first one dealt with an inverse problem of non-linear
heat transfer to estimate the thermal properties (storage and transport coefficients), considering a uniform initial temperature field and, as boundary conditions, a fixed prescribed
temperature on one side and a sinusoidal heat flux on the other one, for 24 hours. The
OED consists in using a single temperature sensor located at the surface receiving the
flux. The flux should have an intensity of 350 W/m² and experiment periods of 17 h, 61 h,
54 h and 25 h for the estimation of thermal capacity, thermal conductivity, temperature
dependent conductivity and all parameters, respectively. For this case study, the concept
of optimal experiment was verified by solving 100 inverse problems for different experiment
designs and calculating the error as a function of the frequency and the number of sensors
of the measurement plan. It has been demonstrated that the accuracy of the parameter estimation is
higher when the parameters are recovered with measurements carried out via the OED.
The second application concerned experiments for inverse problems of a coupled nonlinear heat and moisture transfer problem to estimate the hygrothermal properties of the
material. The experiment is similar to the first case study. Uniform initial distribution of
temperature and vapour pressure fields were considered, submitted to harmonic heat and
vapour fluxes at one side and prescribed temperature and vapour pressure values at the
other one. The experiment was done for a period of 40 days. The achievement of the OED
was explored for different experiments, aiming at estimating one or several parameters. As
the equations considered are weakly coupled, the thermal properties can only be determined
using the temperature. The accuracy of the identification method does not depend on the
vapour flux. For the vapour properties, results have shown that the estimation will be
more accurate using the temperature and vapour pressure as observations. Furthermore,
the accuracy strongly depends on the period of the vapour flux. A single sensor has to
be located at the side where the flux is imposed. For experiments to estimate all the
hygrothermal properties, two sensors are enough to improve the accuracy.
This contribution explored the concept of optimal experiment design for application in
building physics for estimating the hygrothermal properties of construction materials. The
methodology of searching the ODE is important before starting any experiment aiming at
solving parameter estimation problems. With a priori values of the unknown parameters,
the sensitivity functions and the optimum criterion can be computed. Results allow choosing, by means of a deterministic approach, the conditions of the experiments. A good design
of experiments avoids installing unnecessary sensors. In the case of coupled phenomena,
such as the combined heat and moisture transfer problem, and considering sensor accuracies, the OED
enables choosing the field that must be monitored. It also improves the accuracy
of the solution of the estimation problem.
Further work is expected to be carried out with different design strategies (OED and
others), estimating properties using real observations.
Properties        | Value
d10 (W/m/K)       | 0.5
d11 (W/m/K/Pa)    | 0.05
d20 (s)           | 2.5 × 10−11
d21 (s/Pa)        | 1 × 10−11
c11 (J/m3/K)      | 4 × 105
c20 (-)           | 2
c21 (s2/m2)       | 2.5 × 10−2

Physical constant | Value
Lv (J/kg)         | 2.5 × 106
cL (J/kg)         | 1000
Table 3. Hygrothermal properties of the material.
Acknowledgments
The authors acknowledge the Brazilian Agencies CAPES of the Ministry of Education and
the CNPQ of the Ministry of Science, Technology and Innovation for the financial support.
Dr. Dutykh also acknowledges the hospitality of PUCPR during his visit in April 2016.
A. Hygrothermal properties
The hygrothermal properties of the material used in Section 3.3 are given in Table 3.
References
[1] O. M. Alifanov, E. A. Artioukhine, and S. V. Rumyantsev. Extreme Methods for Solving
Ill-Posed Problems with Applications to Inverse Heat Transfer Problems. Begellhouse, New
York, 1995. 4, 5, 8
[2] M. L. Anderson, W. Bangerth, and G. F. Carey. Analysis of parameter sensitivity and
experimental design for a class of nonlinear partial differential equations. Int. J. Num. Meth.
Fluids, 48(6):583–605, jun 2005. 4
[3] E. A. Artyukhin and S. A. Budnik. Optimal planning of measurements in numerical experiment determination of the characteristics of a heat flux. Journal of Engineering Physics,
49(6):1453–1458, dec 1985. 5, 8
[4] J. V. Beck and K. J. Arnold. Parameter Estimation in Engineering and Science. John Wiley
& Sons, Inc., New York, 1977. 8
[5] J. Berger, M. Chhay, S. Guernouti, and M. Woloszyn. Proper generalized decomposition for
solving coupled heat and moisture transfer. Journal of Building Performance Simulation,
8(5):295–311, sep 2015. 17, 20
[6] J. Berger, S. Gasparin, M. Chhay, and N. Mendes. Estimation of temperature-dependent
thermal conductivity using proper generalised decomposition for building energy management. Journal of Building Physics, jun 2016. 11
[7] R. Cantin, J. Burgholzer, G. Guarracino, B. Moujalled, S. Tamelikecht, and B. Royet. Field
assessment of thermal behaviour of historical dwellings in France. Building and Environment,
45(2):473–484, feb 2010. 4
[8] T. Colinart, D. Lelievre, and P. Glouannec. Experimental and numerical analysis of the
transient hygrothermal behavior of multilayered hemp concrete wall. Energy and Buildings,
112:1–11, jan 2016. 4
[9] T. Z. Desta, J. Langmans, and S. Roels. Experimental data set for validation of heat, air
and moisture transport models of building envelopes. Building and Environment, 46(5):1038–
1046, may 2011. 4
[10] A. F. Emery and A. V. Nenarokomov. Optimal experiment design. Measurement Science
and Technology, 9(6):864–876, jun 1998. 4, 5, 8
[11] T. D. Fadale, A. V. Nenarokomov, and A. F. Emery. Two Approaches to Optimal Sensor
Locations. Journal of Heat Transfer, 117(2):373, 1995. 4, 5, 8
[12] S. Finsterle. Practical notes on local data-worth analysis. Water Resources Research,
51(12):9904–9924, dec 2015. 8
[13] C. James, C. J. Simonson, P. Talukdar, and S. Roels. Numerical and experimental data set
for benchmarking hygroscopic buffering models. Int. J. Heat Mass Transfer, 53(19-20):3638–
3654, sep 2010. 4
[14] T. Kalamees and J. Vinha. Hygrothermal calculations and laboratory tests on timber-framed
wall structures. Building and Environment, 38(5):689–697, may 2003. 4
[15] M. Karalashvili, W. Marquardt, and A. Mhamdi. Optimal experimental design for identification of transport coefficient models in convection-diffusion equations. Computers & Chemical
Engineering, 80:101–113, sep 2015. 4, 5, 8
[16] L. Kocis and W. J. Whiten. Computational investigations of low-discrepancy sequences.
ACM Transactions on Mathematical Software, 23(2):266–294, jun 1997. 11
[17] M. Labat, M. Woloszyn, G. Garnier, and J. J. Roux. Dynamic coupling between vapour
and heat transfer in wall assemblies: Analysis of measurements achieved under real climate.
Building and Environment, 87:129–141, may 2015. 4
[18] D. Lelievre, T. Colinart, and P. Glouannec. Hygrothermal behavior of bio-based building
materials including hysteresis effects: Experimental and numerical analyses. Energy and
Buildings, 84:617–627, dec 2014. 4
[19] A. Nassiopoulos and F. Bourquin. On-Site Building Walls Characterization. Numerical Heat
Transfer, Part A: Applications, 63(3):179–200, jan 2013. 4
[20] A. V. Nenarokomov and D. V. Titov. Optimal experiment design to estimate the radiative
properties of materials. Journal of Quantitative Spectroscopy and Radiative Transfer, 93(13):313–323, jun 2005. 5, 8
[21] M. N. Ozisik and H. R. B. Orlande. Inverse Heat Transfer: Fundamentals and Applications.
CRC Press, New York, 2000. 7, 8, 11
[22] H. Rafidiarison, R. Rémond, and E. Mougel. Dataset for validating 1-D heat and mass
transfer models within building walls with hygroscopic materials. Building and Environment,
89:356–368, jul 2015. 4
[23] S. Rouchier, M. Woloszyn, Y. Kedowide, and T. Béjat. Identification of the hygrothermal
properties of a building envelope material by the covariance matrix adaptation evolution
strategy. Journal of Building Performance Simulation, 9(1):101–114, jan 2016. 4, 17, 20
[24] H.-J. Steeman, M. Van Belleghem, A. Janssens, and M. De Paepe. Coupled simulation
of heat and moisture transport in air and porous materials for the assessment of moisture
related damage. Building and Environment, 44(10):2176–2184, oct 2009. 17
[25] E. Stéphan, R. Cantin, A. Caucheteux, S. Tasca-Guernouti, and P. Michel. Experimental
assessment of thermal inertia in insulated and non-insulated old limestone buildings. Building
and Environment, 80:241–248, oct 2014. 4
[26] N.-Z. Sun. Structure reduction and robust experimental design for distributed parameter
identification. Inverse Problems, 21(2):739–758, apr 2005. 4, 5, 8
[27] P. Talukdar, S. O. Olutmayin, O. F. Osanyintola, and C. J. Simonson. An experimental
data set for benchmarking 1-D, transient heat and moisture transfer models of hygroscopic
building materials. Part I: Experimental facility and material property data. Int. J. Heat
Mass Transfer, 50(23-24):4527–4539, nov 2007. 4
[28] F. Tariku, K. Kumaran, and P. Fazio. Transient model for coupled heat, air and moisture
transfer through multilayered porous media. Int. J. Heat Mass Transfer, 53(15-16):3035–
3044, jul 2010. 17
[29] S. Tasca-Guernouti, B. Flament, L. Bourru, J. Burgholzer, A. Kindinis, R. Cantin, B. Moujalled, G. Guarracino, and T. Marchal. Experimental Method to determine Thermal Conductivity, and Capacity Values in Traditional Buildings. In VI Mediterranean Congress of
Climatization, Madrid, Spain, 2011. 4
[30] G. Terejanu, R. R. Upadhyay, and K. Miki. Bayesian experimental design for the active
nitridation of graphite by atomic nitrogen. Experimental Thermal and Fluid Science, 36:178–
193, jan 2012. 5
[31] D. Ucinski. Optimal Measurement Methods for Distributed Parameter System Identification.
2004. 4, 5, 8
[32] A. Vande Wouwer, N. Point, S. Porteman, and M. Remy. An approach to the selection
of optimal sensor locations in distributed parameter systems. Journal of Process Control,
10(4):291–300, aug 2000. 4, 5, 8
[33] C.-Y. Yang. Estimation of the temperature-dependent thermal conductivity in inverse heat
conduction problems. Appl. Math. Model., 23(6):469–478, jun 1999. 11
[34] A. Zaknoune, P. Glouannec, and P. Salagnac. Estimation of moisture transport coefficients in
porous materials using experimental drying kinetics. Heat and Mass Transfer, 48(2):205–215,
feb 2012. 4
Thermal Systems Laboratory, Mechanical Engineering Graduate Program, Pontifical
Catholic University of Paraná, Rua Imaculada Conceição, 1155, CEP: 80215-901, Curitiba
– Paraná, Brazil
E-mail address: [email protected]
URL: https://www.researchgate.net/profile/Julien_Berger3/
LAMA, UMR 5127 CNRS, Université Savoie Mont Blanc, Campus Scientifique, 73376 Le
Bourget-du-Lac Cedex, France
E-mail address: [email protected]
URL: http://www.denys-dutykh.com/
Thermal Systems Laboratory, Mechanical Engineering Graduate Program, Pontifical
Catholic University of Paraná, Rua Imaculada Conceição, 1155, CEP: 80215-901, Curitiba
– Paraná, Brazil
E-mail address: [email protected]
URL: https://www.researchgate.net/profile/Nathan_Mendes/
PSO-Optimized Hopfield Neural Network-Based Multipath Routing for Mobile Ad-hoc
Networks
Mansour Sheikhan
EE Department, Islamic Azad University, South Tehran Branch, Mahallati Highway, Dah-Haghi Blvd., Tehran, Iran
E-mail:[email protected]
Ehsan Hemmati
EE Department, Islamic Azad University, South Tehran Branch, Mahallati Highway, Dah-Haghi Blvd., Tehran, Iran
E-mail:[email protected]
Abstract
Mobile ad-hoc network (MANET) is a dynamic collection of mobile computers without the need for any existing infrastructure.
Nodes in a MANET act as hosts and routers. Designing of robust routing algorithms for MANETs is a challenging task. Disjoint
multipath routing protocols address this problem and increase the reliability, security and lifetime of network. However, selecting an
optimal multipath is an NP-complete problem. In this paper, Hopfield neural network (HNN) which its parameters are optimized by
particle swarm optimization (PSO) algorithm is proposed as multipath routing algorithm. Link expiration time (LET) between each
two nodes is used as the link reliability estimation metric. This approach can find either node-disjoint or link-disjoint paths in single
phase route discovery. Simulation results confirm that PSO-HNN routing algorithm has better performance as compared to backup
path set selection algorithm (BPSA) in terms of the path set reliability and number of paths in the set.
Keywords: Mobile ad-hoc networks; Reliability; Multipath routing; Neural networks; Particle swarm optimization (PSO)
1. Introduction
Mobile ad-hoc networks (MANETs) are defined as the
category of wireless networks that utilize multi-hop
radio relaying and are capable of operating without the
support of any fixed infrastructure. MANETs are useful
when no wired link is available such as in disaster
recovery or more generally when a fast deployment is
necessary. Also, the expensive investments required for base
stations favour the deployment of wireless networks in ad-hoc mode.1 The tasks such as relaying packets,
discovering routes, monitoring the network and securing
communication are performed by mobile nodes in the
network. Nodes typically communicate in multihopping fashion and intermediate nodes act as routers
by forwarding data.2 Unlike the wired networks, route
failure is a normal behavior in MANETs. Route failure
occurs frequently due to mobility and limited battery
power of nodes as well as characteristics of the wireless
communication medium. Route recovery process should
be done when the route failure occurs in the network.
This requires sending extra control packets which
consumes network resources like bandwidth and battery
power. It also leads to excessive delay that affects the
quality of service (QoS) for delay sensitive
applications.3 Routing protocols should adapt to these
topology changes and continue to maintain connection
between the source and destination nodes in the
presence of path breaks caused by link and/or node
failures.
In order to increase the routing resistance against link
or/and node failures, one solution is to use not just a
single path, but a set of redundant paths.4-9 In this way,
there is a fundamental and quite difficult question:
which of the potential exponentially many paths within
the network should the routing layer use to achieve the
highest reliability?
The path with low probability of failure is the reliable
one. The correlation of failures between the paths in the
set should be as low as possible. Common links and
nodes between paths are common failure points in the
set. In order to provide high reliable path set, we focus
on finding disjoint paths; i.e., the paths with no link- or
node- overlap. The problem of finding disjoint paths is
non-trivial. Two general principles for selecting the
reliable paths can be stated. First, a long path is less
reliable than a short one. Second, a larger number of
disjoint paths increases the overall reliability. Thus, one
should be looking for a large set of short and disjoint
paths.
Most of the past works on multipath routing protocols
have been based on the single version of an existing
routing protocol. They have been mostly focused on
load-balancing, delay, energy efficiency and quick
failure recovery, but have not considered how to effectively
select the multiple paths or the quality of the selected
paths, such as their disjointedness. The number
of paths found by some algorithms has been
restricted to a specific value, so they cannot select
the appropriate number of paths. Also, the proposed
algorithms have been limited to finding either link-disjoint or
node-disjoint path sets and they are not capable of finding
both link- and node-disjoint path sets.
Split multipath routing (SMR) algorithm, proposed by
Lee and Gerla8, selects maximally disjoint paths. In
SMR, the multipath routes are discovered by a modified
route request procedure. In this scheme, the
intermediate nodes are allowed to rebroadcast duplicate
route request messages if they receive them from a link
with better QoS. However in this protocol, the reliability
of links has not been used and the paths are not entirely
disjoint. It is also limited to route replies provided by
the routing protocol. Pearlman et al.9 have proposed a
method which selects the two routes with the least
number of hops. This protocol does not provide a metric
or model to justify a particular route selection scheme.
Selecting paths based on a small number of hops does
not imply that paths will undergo less frequent
breakages, while the appropriate number of paths may
be far from two. Dana et al.6 have proposed a backup
and disjoint path set selection algorithm for MANETs.
This algorithm produces a set of backup paths with high
reliability. In order to acquire the link reliability
estimates, link expiration time (LET) between each two
nodes has been used.
The problem of finding the most reliable multipath has
already been shown to be computationally hard.10 It is
noted that the motivation for using soft-computing
methods is the need to cope with the complexity of
existing computational models of real-world systems.11-15
The recent resurgence of interest in neural networks
has its roots in the recognition that human brain
performs the computations in a different manner as
compared to conventional digital computers. A neural
network has a parallel and distributed information
processing structure which consists of many processing
elements interconnected via weighted connections. One
of the important applications of neural network is to
solve optimization problems. In these cases, we want to
find the best way to do something, subject to certain
constraints. The best solution is generally defined by a
specific criterion. Hopfield neural network (HNN) is a
model that is commonly used to solve optimization and
NP-complete problems.16, 17 One of the most important
features of this model is that Hopfield network can be
easily implemented in hardware, therefore neural
computations are performed in parallel and the solution
is found more quickly. The use of neural networks to
find the shortest path between a given sourcedestination pair was first introduced by Rauch and
Winarske.18 An adaptive framework to solve the optimal
routing problem based on Hopfield neural network has
been introduced by Ali and Kamoun.19
The computation of the neural network is heavily
dependent on the parameters. The parameters should be
chosen in such a way that the neural network
approaches towards a valid solution.20 Consequently,
tuning the HNN parameters should be done in order to
achieve the best solution over the minimum iterations.
The lack of clear guidelines in selecting appropriate
values of the parameters of energy function is an
important issue in the efficiency of HNNs in solving
combinatorial optimization problems. It is obvious that
a trial and error approach does not ensure the
convergence to optimal solutions.19
In the recent years, several intelligent optimization
algorithms have been used in different applications such
as: (a) genetic algorithm (GA) in scheduling problem21,
total cost and allocation problem22, obtaining the
optimal rule set and the membership function for fuzzy-based systems23, and facility location problem24, (b) ant
colony optimization (ACO) in chaotic synchronization25
and grouping machines and parts into cells26, (c)
artificial immune method in several nonlinear systems27,
(d) particle swarm optimization (PSO) in single-objective and multi-objective problems28, 29, bandwidth
prediction30, parameter identification of chaotic
systems31, QoS-aware web service selection in service
oriented communication problem32, and solving
multimodal problems33, (e) harmony search (HS)
algorithm for synchronization of discrete-time chaotic
systems34.
Among the mentioned approaches, PSO which has been
proposed by Kennedy and Eberhart35 is inherently
continuous and simulates the social behavior of a flock
of birds. In PSO, the solution of a specific problem is
being represented by multi-dimensional position of a
particle and a swarm of particles is working together to
search the best position which corresponds to the best
problem solution. In each PSO iteration, every particle
moves from its original position to a new position based
on its velocity. Particle's velocity is influenced by the
cognitive and social information of the particles. The
cognitive information of a particle is the best position
that has been visited by the particle. Based on the
traditional speed-displacement search model, Gao et
al.36 have analyzed the PSO mechanism and proposed a
generalized PSO model, so that the PSO algorithm can
be applied to the fields of discrete and combinatorial
optimization.
Bastos-Filho et al.37 have proposed a swarm intelligence
and HNN-based routing algorithm for communication
networks. They have used the PSO technique to
optimize HNN parameters and the energy function
coefficients. The results have shown that the proposed
approach achieves better results than existing algorithms
that employ the HNN for routing. Hemmati and
Sheikhan38 have proposed a reliable path set selection
algorithm based on Hopfield neural network. The
performance of their proposed algorithm has been
improved by using noisy HNN which introduces more
complexity to the HNN implementation39. They have
shown that the reliability of the multiple disjoint paths
found by the proposed algorithm is higher than those
found by traditional multipath routing algorithms. But
they did not mention how the HNN parameters can be
tuned.
In this paper, we introduce a Hopfield neural model to
find the most reliable disjoint multipath in a MANET.
The proposed scheme contains several parameters and
there is no rule to define them exactly. PSO is used to
find the best set of parameters used in HNN for
multipath calculation. Each node in the network can be
equipped with a neural network, and all the network
nodes can train and use the neural networks to obtain
the optimal or sub-optimal multipath.
In the next section, HNN and PSO are overviewed. The
operation and termination conditions of proposed
multipath routing algorithm are described in Section 3.
The implementation details and the use of PSO to tune
the proposed HNN model parameters are presented in
Section 4. Section 5 demonstrates the efficiency of the
proposed technique through a simulation study. Then,
Section 6 conducts a performance evaluation of the
proposed algorithm and then the computational
complexity of proposed algorithm is described in
Section 7. Finally, Section 8 presents the conclusions of
the study.
2. Background
2.1. Hopfield neural network
The use of neural networks to solve constrained
optimization problems was initiated by Hopfield and
Tank17, 40. The general structure of HNN is shown in
Fig. 1. Assume that the network consists of n neurons.
The neurons are modeled as amplifiers in conjunction
with resistors and capacitors which compromise
feedback circuits. A sigmoid monotonic increasing
function relates the output Vi of the ith neuron to its
input Ui:
Vi = g(Ui) = 1 / (1 + e^(−λUi)) ,    (1)
where λ is a constant called the gain factor. Each
amplifier i has an input resistor ri and an input capacitor
Ci which partially define the time constant τi of ith
neuron. To describe synaptic connections, we can use
the matrix T = [Tij], also known as the connection
matrix, of the network. A resistor of value Rij connects
one of the outputs of the amplifier j to the input of
amplifier i. In this model, each neuron receives an
external current (known also as a bias) Ii.
Fig. 1. Hopfield neural network model (amplifiers with input biases Ii, input capacitors Ci and resistors ri, interconnected through the resistors Rij that define the connection matrix T).
The dynamics of the ith neuron can be described as
follows16:

dUi/dt = Σ_{j=1}^{n} Tij Vj − Ui/τi + Ii ;    (2)

Tij = 1/(Rij Ci) ,  1/Ri = 1/ri + Σ_{j=1}^{n} 1/Rij ,  τi = Ri Ci .
For a symmetric connection matrix and a sufficiently
high gain of the transfer function, the dynamics of the
neurons follow a gradient descent of the quadratic energy
function16:

E = −(1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} Tij Vi Vj − Σ_{i=1}^{n} Ii Vi .    (3)
Hopfield has also shown that, as long as the state of
the neural network evolves inside the N-dimensional
hypercube defined by Vi ∈ {0,1}, if λi → ∞ the
minimum of the energy function (3) will be attained at one of the 2^N
vertices of this hypercube.

2.2. Particle swarm optimization

PSO is a population-based stochastic optimization
technique which does not use the gradient of the
problem being optimized, so it does not require the
optimization problem to be differentiable, as is
necessary in classic optimization algorithms. Therefore
it can also be used in optimization problems that are
partially irregular, time variable, and noisy.
In the PSO algorithm, each bird, referred to as a "particle",
represents a possible solution of the problem. Each
particle moves through the D-dimensional problem
space by updating its velocity with the best solution
found by itself (cognitive behavior) and the best
solution found by any particle in its neighborhood
(social behavior). Particles move in a multidimensional
search space and each particle has a velocity and a
position as follows:

vi(k + 1) = vi(k) + γ1i (Pi − xi(k)) + γ2i (G − xi(k)) ,    (4)
xi(k + 1) = xi(k) + vi(k + 1) ,    (5)

where i is the particle index, k is the discrete time index,
vi is the velocity of the ith particle, xi is the position of the ith
particle, Pi is the best position found by the ith particle
(personal best), G is the best position found by the swarm
(global best) and γ1,2 are random numbers in the interval
[0,1] applied to the ith particle. In our simulations, the
following equation is used for the velocity41:

vi(k + 1) = φ(k) vi(k) + α1 [γ1i (Pi − xi(k))] + α2 [γ2i (G − xi(k))] ,    (6)

in which φ is the inertia function and α1,2 are the
acceleration constants. The flowchart of the standard
PSO algorithm is summarized in Fig. 2.

Fig. 2. Standard PSO flowchart (initialize position and velocity of particles; evaluate particles within the search space constrained by the limitations of the variables; determine Pi and G; while t < Tmax, update vi and xi).
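A minimal sketch of these update rules (Eqs. (4)-(6)) is given below, with a hypothetical sphere objective standing in for the actual fitness function and a linearly decreasing inertia as an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Hypothetical objective; in the router it would be the percentage error
    # of the HNN simulations for a candidate parameter set (see Section 4.4).
    return np.sum(x ** 2)

n_particles, n_dims, n_iter = 20, 5, 300
a1 = a2 = 2.0                                   # acceleration constants
x = rng.uniform(-1.0, 1.0, (n_particles, n_dims))
v = np.zeros_like(x)
p_best = x.copy()
p_best_val = np.array([fitness(p) for p in x])
g_best = p_best[np.argmin(p_best_val)].copy()

for k in range(n_iter):
    phi = 0.9 - (0.9 - 0.2) * k / (n_iter - 1)  # inertia function
    g1 = rng.random((n_particles, n_dims))
    g2 = rng.random((n_particles, n_dims))
    v = phi * v + a1 * g1 * (p_best - x) + a2 * g2 * (g_best - x)   # Eq. (6)
    x = x + v                                                       # Eq. (5)
    vals = np.array([fitness(p) for p in x])
    improved = vals < p_best_val
    p_best[improved], p_best_val[improved] = x[improved], vals[improved]
    g_best = p_best[np.argmin(p_best_val)].copy()

print("best position:", g_best, "fitness:", fitness(g_best))
```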
PSO and the genetic algorithm (GA) are both
population-based search algorithms and both of them
share information among their population members to
enhance their search processes. They also use a
combination of deterministic and probabilistic rules.
Different experiments have shown that although PSO
and GA result on average in the same solution quality,
PSO is more computationally efficient, which means
it performs a smaller number of function evaluations as
compared to GA. On the other hand, it has been shown
that the computational effort in PSO and GA is problem
dependent; but in solving unconstrained nonlinear
problems with continuous design variables, PSO
outperforms the GA42, 43.
As the Hopfield performance depends on its
parameter setting, and this is a continuous problem, PSO
is used in this study to determine the optimum values of
the Hopfield network parameters.
3. Proposed Approach

Disjoint multiple paths between source and destination
are classified into two types, namely node-disjoint and
link-disjoint multiple paths. Node-disjoint paths do not
have any nodes in common, except the source and
destination. Link-disjoint paths do not have common
links, but may share some nodes. Link-disjoint paths are
more available than node-disjoint paths. Movement of
nodes at the junctions causes the failure of all the paths
going through that node. The node-disjoint type has the
most disjointedness, as all the nodes/links of two routes
are different; i.e., the network resource is exclusive for
the respective routes.
Here we propose an algorithm which can compute both
node-disjoint and link-disjoint paths. This approach
consists of three steps. First, a method is introduced to
compute the multipath reliability. Then, a route
discovery mechanism is defined, and finally the
multipath calculation algorithm is proposed, in which the
most reliable multipath is found by a PSO-optimized
HNN.

3.1. Assumptions

A MANET is denoted by a probabilistic graph
G = (V, L), where V is a set of nodes in the network and
L is a set of links connecting nodes. Nodes are located on
a two-dimensional field and move in the field. Node
i (∈ V) has a distinct identifier IDi. Each node has the
wireless transmission range R > 0. Node j is called a
neighbor of node i if and only if j is within the
transmission range R of i, and the link (i,j) is included in
the link set L. A probability of proper operation is
also assigned to the links. A link (for example the hth link)
operates with probability p_h^link and fails with
probability q_h^link = 1 − p_h^link. In this protocol, each node
continuously monitors the reliability of its incident
links. For each source and destination pair,
Reliability_S→D (S ≠ D) denotes the probability that
there exists at least one path connecting source and
destination over the graph G.

3.2. Reliability computation method

Assume that pathi, between source and destination,
consists of m links. The probability that pathi is
operational, i.e. the path reliability, is obtained by:

PathRelii = ∏_{h=1}^{m} p_h^link .    (7)

A path fails with probability:

PathFaili = 1 − PathRelii .    (8)

Assume that P = {Path1, Path2, … , Pathn} denotes a set
of disjoint paths that includes n paths. The reliability of
the path set is calculated by:

PSreliability = 1 − ∏_{i=1}^{n} PathFaili = 1 − ∏_{i=1}^{n} (1 − PathRelii)
             ≈ 1 − (1 − PathReli1 − PathReli2 − .... − PathRelin) ≈ Σ_{i=1}^{n} PathRelii ,  when PathRelii ≪ 1 .    (9)
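Equations (7)-(9) translate directly into code; the following sketch is illustrative only, with a hypothetical path representation:

```python
from math import prod

def path_reliability(link_probs):
    """Equation (7): product of the link operation probabilities."""
    return prod(link_probs)

def path_set_reliability(paths_link_probs):
    """Equation (9): 1 minus the probability that all disjoint paths fail."""
    fail = 1.0
    for link_probs in paths_link_probs:
        fail *= 1.0 - path_reliability(link_probs)   # Equation (8)
    return 1.0 - fail

# Example: two disjoint paths with three and two links, respectively.
paths = [[0.9, 0.8, 0.95], [0.7, 0.85]]
print(path_set_reliability(paths))   # approx. 0.87
```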
3.3. Route discovery algorithm
In this algorithm, each node has a route cache, which
preserves the order of nodes and probabilities of all
paths, PathRelii, from each source. In order to find paths
between the source and destination, the source node
broadcasts the route request (RREQ) packet to the nodes
which are in its transmission range. This RREQ packet
has the following fields:
Record: a record of the sequence of hops taken by
this packet;
Prob: the reliability of the followed path;
TTL: the maximum number of hops that a packet can
traverse along the network before it is discarded.
When a node receives the RREQ packet, it decrements
TTL by 1 and performs the following steps:
1. If the node is the destination, it updates the
Prob field, and adds the Record and the updated Prob to its route
cache.
2. If TTL=0, the RREQ packet is discarded. Thus,
TTL limits the number of intermediate nodes in a path.
3. If the ID of this node is already listed in the
Record of route request, the RREQ packet is discarded
to avoid looping.
4. Otherwise, the node appends its node ID to the
Record in the RREQ packet, and also updates the Prob
field, and re-broadcasts the request to its neighbors.
When the destination node receives the first RREQ
packet from a specific source node, it waits for a while
to receive other RREQ packets from longer paths.
Now all the information needed to calculate
link-disjoint or node-disjoint paths is obtained by a single
route discovery, so there is no need to send extra
messages as overhead in the MANET when both link-disjoint and node-disjoint paths are needed.
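As a sketch of this forwarding rule, with hypothetical data structures (not the authors' implementation):

```python
def handle_rreq(node_id, rreq, my_link_prob, route_cache, broadcast, is_destination):
    """Process an incoming RREQ = {'record': [...], 'prob': float, 'ttl': int}."""
    rreq = dict(rreq, ttl=rreq['ttl'] - 1)
    prob = rreq['prob'] * my_link_prob          # update the followed-path reliability

    if is_destination:
        # Step 1: store the completed Record and updated Prob in the route cache.
        route_cache.append((rreq['record'] + [node_id], prob))
        return
    if rreq['ttl'] <= 0:
        return                                  # Step 2: discard, hop limit reached
    if node_id in rreq['record']:
        return                                  # Step 3: discard to avoid looping
    # Step 4: append own ID, update Prob and re-broadcast to the neighbors.
    broadcast(dict(record=rreq['record'] + [node_id], prob=prob, ttl=rreq['ttl']))
```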
For all of the paths in the destination route cache, we can
assume a disjointedness matrix, ρ = [ρjk], of size
(n×n), in which n is the total number of RREQ packets
received from a specific source. In order to find node-disjoint paths, we define NDρjk as:

NDρjk (j ≠ k) = 0 if the jth path and the kth path in the route cache are node-disjoint, and 1 otherwise.    (10)

We also define LDρjk to find link-disjoint paths as
follows:

LDρjk (j ≠ k) = 0 if the jth path and the kth path in the route cache are link-disjoint, and 1 otherwise.    (11)

All the diagonal elements in ρ are also set to zero.

3.4. Neural-based multipath calculation

To formulate the problem in terms of the Hopfield neural
model, a suitable representation scheme should be found
so that the most reliable multiple disjoint paths can be
decoded from the final stable state of the neural
network. Each neuron in this model represents a path among
the paths discovered in the route discovery phase and listed in
the route cache. Thus, the total number of neurons
required in the HNN is equal to the total number of paths
found in the route discovery phase. Based on the fact
that a path is selected to be in the set or not, the output
of a neuron at location i is defined as follows:

Vi = 1 if the ith path in the route cache is in the path set, and 0 otherwise.    (12)

The normalized reliability of the ith path is defined as:

Ci = PathRelii / PathRelimax ,    (13)

where PathRelimax is the highest path reliability among all
of the paths in the route cache.
We have to define an energy function whose
minimization process drives the neural network into its
lowest energy state. This stable state shall correspond to
the most reliable set of multiple paths. The energy
function must favor states that correspond to disjoint
paths and it must also favor the path set which has the
highest reliability. A suitable energy function that
satisfies such requirements is given by:

E = (µ1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} ρij Vi Vj − µ2 Σ_{i=1}^{n} Ci Vi ,    (14)
where µ1 and µ2 are positive constants and n is the
number of paths in the route cache. In (14), ρij is defined
as NDρij if we want to calculate node-disjoint multiple
paths, and as LDρij if we want to calculate
link-disjoint multiple paths. The minimum value of the
µ1 term is zero and it occurs when all the selected
paths are disjoint. The µ2 term corresponds to the
reliability of the selected multiple disjoint paths. A
larger number of highly reliable paths results in a lower
energy, but this term should not cause non-disjoint paths to participate in the set. By selecting each
path from the route cache, the energy decreases by
(µ2×Ci). If the selected path is not disjoint with the other
paths of the set, then the total energy is changed by (µ1−µ2×Ci). As we want to select disjoint paths, this
criterion should have a higher energy value and µ1−µ2×Ci
should be positive. As Ci ≤ 1, µ1 and µ2 should meet
the criterion µ1 ≥ µ2 > 0. In this way, assume that
P1 = {Path1, Path2, ..., Pathk} is a set of disjoint paths and
P2 = P1 ∪ {Pathl}, where Pathl ∉ P1 has a common node
(or link) with at least one path in P1. Also assume that
EP1 is the energy of P1 and EP2 is the energy of P2.
Based on these assumptions, we can write EP2 = EP1 + µ1 − µ2×Cl. As P2 is not disjoint, its energy should be
higher than the energy of P1. In other words, EP2 > EP1;
thus µ1 − µ2×Cl > 0. From (13) it is obvious that Cl ≤ 1,
so in order to keep EP2 > EP1 always true, µ1 and µ2
should meet the criterion µ1 ≥ µ2 > 0.
By comparing the corresponding coefficients in (14)
and (3), the connection strengths and the biases are
derived by:
Tij = −µ1 ρij ,  Ii = µ2 Ci .    (15)
As can be seen in (15), this model maps the reliability
information into the biases and the path disjointedness
information into the neural interconnections. So the
destination node can set the neural interconnections as it
receives each RREQ packet and then set the biases after
all the RREQ packets have been received. The
destination node uses this neural network to find the
most reliable path set, and then returns a copy of the neural
network solution in a route reply packet to the source
node.
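To make the mapping of Eq. (15) concrete, the sketch below builds the weights and biases from a route cache and integrates the Hopfield dynamics; the helper names, the example cache and the parameter values (µ1, µ2, λ, dt, Vth) are illustrative only, standing in for the setting that Section 4 tunes by PSO:

```python
import numpy as np

def disjointedness(paths, node_disjoint=True):
    """rho[j, k] = 0 if paths j and k are disjoint (Eqs. (10)-(11)), 1 otherwise."""
    n = len(paths)
    rho = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if j == k:
                continue
            if node_disjoint:
                shared = set(paths[j][1:-1]) & set(paths[k][1:-1])  # ignore source/destination
            else:
                shared = set(zip(paths[j], paths[j][1:])) & set(zip(paths[k], paths[k][1:]))
            rho[j, k] = 1.0 if shared else 0.0
    return rho

def select_paths(paths, reliabilities, mu1=32.0, mu2=27.0, lam=0.45,
                 dt=1e-3, v_th=0.1, node_disjoint=True, max_iter=20000):
    rho = disjointedness(paths, node_disjoint)
    C = np.asarray(reliabilities) / max(reliabilities)   # Eq. (13)
    T = -mu1 * rho                                       # Eq. (15)
    I = mu2 * C
    U = np.random.uniform(-5e-4, 5e-4, len(paths))       # small initial perturbation
    V = 1.0 / (1.0 + np.exp(-lam * U))                   # Eq. (1)
    for _ in range(max_iter):
        U = U + dt * (T @ V - U + I)                     # Eq. (2) with tau = 1
        V_new = 1.0 / (1.0 + np.exp(-lam * U))
        if np.max(np.abs(V_new - V)) < 1e-6:             # stable state reached
            V = V_new
            break
        V = V_new
    return [paths[i] for i in range(len(paths)) if V[i] >= v_th]

# Example route cache: node sequences and their path reliabilities.
cache = [["S", 1, 2, "D"], ["S", 3, "D"], ["S", 1, 4, "D"]]
rels = [0.60, 0.75, 0.55]
print(select_paths(cache, rels))
```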
The conceptual scheme of implementing the proposed
algorithm in a MANET node is depicted in Fig. 3.
According to the network properties, the neural network
parameters can be tuned by PSO once and then these
parameters can be used.
4. Implementation Details and Parameter Tuning
In order to implement this algorithm, there should be a
method to predict the link reliability and also a network
model that dictates how the nodes move throughout the
network and structure of the network itself. We also
need to define the Hopfield implementation method.
4.1. Link reliability prediction
We consider a free space propagation model44. In this
model, the wireless signal strength depends only on the
distance to the transmitter. Hence, the link duration of
Lij can be predicted from the motion information of the
two nodes. Assume that node i and node j are within the
same transmission range, r, of each other. Let (xi,yi) be
the coordinate of node i and (xj,yj) be that of node j.
Also let vi and vj be the speeds, and θi and θj
(0≤θi,θj<2π) be the moving directions of node i and j,
respectively. Then, the amount of time that the two
mobile hosts will stay connected, LET, is predicted as
follows45:
LET(i, j) = [ −(ab + cd) + √( (a² + c²) r² − (ad − bc)² ) ] / (a² + c²) ;    (16)

a = vi cos θi − vj cos θj ,  b = xi − xj ,  c = vi sin θi − vj sin θj ,  d = yi − yj .
The probability of proper operation of hth link (the link
between node i and node j) is calculated by:
p_h^link = LET_{i,j} / LET_max ,    (17)
where LETmax is the maximum link expiration time in
the network.
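Equations (16) and (17) can be transcribed directly; the following sketch is for illustration only:

```python
import math

def link_expiration_time(xi, yi, vi, thi, xj, yj, vj, thj, r):
    """Equation (16): time that nodes i and j remain within range r."""
    a = vi * math.cos(thi) - vj * math.cos(thj)
    b = xi - xj
    c = vi * math.sin(thi) - vj * math.sin(thj)
    d = yi - yj
    if a == 0.0 and c == 0.0:
        return math.inf                       # same velocity vector: link never expires
    disc = (a * a + c * c) * r * r - (a * d - b * c) ** 2
    if disc < 0.0:
        return 0.0                            # nodes are not (or will not stay) in range
    return (-(a * b + c * d) + math.sqrt(disc)) / (a * a + c * c)

def link_probability(let_ij, let_max):
    """Equation (17): link reliability as LET normalized by the network maximum."""
    return let_ij / let_max

# Example: two nodes 50 m apart moving towards each other, range r = 250 m.
let = link_expiration_time(0, 0, 5, 0.0, 50, 0, 10, math.pi, 250)
print(let, link_probability(let, let_max=100.0))
```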
4.2. Ad-hoc network model
All of the nodes start the experiment at a random
location within a rectangular working area of 1000×500
m2 and moved as defined by the random waypoint
model46. For this, each node selects a random
destination within the working area and moves linearly
to that location at a predefined speed. After reaching its
destination, it pauses for a specified time period (pause
time) and then the node selects a new random location
and continues the process again.
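A sketch of one node following this mobility model (illustrative only):

```python
import random

def random_waypoint(n_legs, width=1000.0, height=500.0,
                    v_min=0.0, v_max=20.0, pause=5.0, dt=1.0):
    """Yield successive (x, y) positions of one node under the random waypoint model."""
    x, y = random.uniform(0, width), random.uniform(0, height)
    for _ in range(n_legs):
        tx, ty = random.uniform(0, width), random.uniform(0, height)  # new destination
        speed = max(random.uniform(v_min, v_max), 0.1)                # guard against zero speed
        dist = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
        n_steps = max(int(dist / (speed * dt)), 1)
        for s in range(1, n_steps + 1):                               # move linearly
            frac = s / n_steps
            yield x + frac * (tx - x), y + frac * (ty - y)
        x, y = tx, ty
        for _ in range(int(pause / dt)):                              # pause at destination
            yield x, y

trace = list(random_waypoint(n_legs=3))
print(len(trace), trace[0])
```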
In the present study, each node pauses at the current
position for 5 s and speed of individual nodes ranges
from 0 to 20 m/s. We have run simulations for a
network with 30 mobile hosts, operating at transmission
ranges (R) varying from 100 to 300 m. The TTL is set to
3 as an initial value. The proposed model consists of n
neurons where n is the total number of paths in the
destination route cache found in the route discovery
phase. The average number of paths found in the route discovery
phase in different MANETs is calculated with TTL = 3.
The result is shown in Fig. 4.
Fig. 3. Conceptual scheme of the proposed algorithm: the route discovery algorithm fills the route cache (Path1, PathReli1; ...; Pathn, PathRelin); the reliability normalization and the disjointedness calculation provide Ci and ρij; the Hopfield neural network (Tij = −µ1 ρij, Ii = µ2 Ci), with parameters µ1, µ2, λ and Vth tuned by particle swarm optimization, outputs the selected path set (Selected_Path1, ..., Selected_Pathk).
Fig. 4. Number of paths averaged over different MANETs.
4.3. Hopfield network initialization
The evolution of the neural network state is simulated by solving a system of n differential equations whose variables are the neuron outputs (Vi). The solution consists of observing the outputs Vi for a specific duration dt. Without loss of generality, it is assumed that τ = 1. To avoid bias in favor of any particular path set, all the inputs Ui are assumed to be equal to 0. However, to help rapid convergence of the network, small perturbations are applied to its initial inputs. This initial random noise helps to break the symmetry that may be caused by paths with the same reliability, or by the existence of two or more highly reliable path sets, while preventing the network from adopting an undesirable state.20,47 At the start, and based on our simulation results, the Ui are chosen randomly such that -0.0005 < Ui < 0.0005. The calculations are stopped when the network reaches a stable state, e.g., when the difference between the outputs from one update to the next is less than 10^-6. When the network is in a stable state, the final values of Vi are rounded off. For this purpose, a threshold voltage Vth is defined and Vi is rounded to 0 if Vi < Vth, and to 1 otherwise.
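As an illustration only, the simulation and rounding procedure described above can be organized as a simple discrete-time loop. The Python sketch below is not the authors' MATLAB code; it assumes the usual Hopfield-Tank dynamics dUi/dt = -Ui/τ + Σj Tij Vj + Ii with sigmoid output Vi = (1 + tanh(λ Ui))/2, takes Tij = -µ1 ρij and Ii = µ2 Ci as in Fig. 3, and uses the PSO-tuned values of Table 4 as defaults:

import numpy as np

def run_hnn(rho, C, mu1=32.0, mu2=27.0, lam=0.45, dt=1e-3, v_th=0.23,
            tau=1.0, eps=1e-6, max_iter=200000):
    # Simulate the Hopfield network and return the rounded path-selection vector.
    n = len(C)
    T = -mu1 * np.asarray(rho, dtype=float)       # synaptic weights from disjointedness
    I = mu2 * np.asarray(C, dtype=float)          # biases from normalized reliability
    U = np.random.uniform(-0.0005, 0.0005, n)     # small random perturbation of the inputs
    V = 0.5 * (1.0 + np.tanh(lam * U))
    for _ in range(max_iter):
        U += (-U / tau + T @ V + I) * dt          # Euler step of the network dynamics
        V_new = 0.5 * (1.0 + np.tanh(lam * U))
        if np.max(np.abs(V_new - V)) < eps:       # stable state: outputs change less than eps
            V = V_new
            break
        V = V_new
    return (V >= v_th).astype(int)                # round off with the threshold voltage Vth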
4.4. Selecting network parameters by PSO
The PSO algorithm is used to find the values of µ1, µ2, λ, dt and Vth of the HNN. Each dimension of the PSO particle is used to represent a different HNN parameter, so each particle has five dimensions. To evaluate the fitness of each particle, we compute the percentage error obtained over 500 HNN simulations. An error is assumed to have occurred if the HNN method finds multiple paths which are not disjoint, or if the reliability of the set is less than the reliability of the set found with the parameter setting reported in Ref. 38 (Table 1). The initial values of the PSO parameters are shown in Table 2. Table 3 depicts the maximum value (Xmax) and the minimum value (Xmin) of the parameters. The result of applying the PSO algorithm to obtain the optimum values for the HNN parameters is shown in Table 4.
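The tuning loop itself can be sketched as a standard inertia-weight PSO (Refs. 35 and 41). In the sketch below, hnn_error_rate is a hypothetical helper that runs the 500 HNN simulations described above and returns the percentage of erroneous runs; the acceleration coefficients c1 and c2 are our own assumption, since Table 2 does not report them:

import numpy as np

def pso_tune(hnn_error_rate, x_min, x_max, n_particles=20, n_iter=300,
             v_max=4.0, w_start=0.9, w_end=0.2, c1=2.0, c2=2.0):
    # Minimize hnn_error_rate over the 5 HNN parameters (mu1, mu2, lam, dt, Vth).
    x_min, x_max = np.asarray(x_min, float), np.asarray(x_max, float)
    dim = len(x_min)
    x = np.random.uniform(x_min, x_max, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    p_best, p_val = x.copy(), np.array([hnn_error_rate(p) for p in x])
    g_best = p_best[np.argmin(p_val)].copy()
    for it in range(n_iter):
        w = w_start - (w_start - w_end) * it / max(1, n_iter - 1)   # decreasing inertia weight
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = np.clip(w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x), -v_max, v_max)
        x = np.clip(x + v, x_min, x_max)
        val = np.array([hnn_error_rate(p) for p in x])
        better = val < p_val
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[np.argmin(p_val)].copy()
    return g_best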
Table 1. Values of HNN parameters reported in Ref. 38.

Parameter    µ1    µ2    λ     dt       Vth
Value        1     1     50    10^-5    0.1
Table 2. Initial values of PSO parameters.

Parameter                      Value
Max. No. of iterations         300
Population size                20
Max. particle velocity         4
Initial inertia weight         0.9
Final inertia weight           0.2
Min. global error gradient     10^-5
Table 3. Maximum and minimum values of the parameters.

Parameter    µ1    µ2    λ      dt    Vth
Xmax         50    50    100    1     1
Xmin         0     0     0      0     0

Table 4. Optimum parameter values obtained by PSO.

µ1    µ2    λ       dt       Vth
32    27    0.45    10^-3    0.23
5. Simulation Results
For the purpose of evaluating the efficiency of the proposed routing method, it has been applied to different networks with various parameters. Then, the total path reliability, the number of paths in the path set, and the lifetime are calculated for each simulation. Several MANETs with different characteristics are considered in this study in order to tune the HNN parameters, and the simulation results show that the HNN with this parameter setting performs well when applied to a variety of MANETs. Therefore, the tuned parameters obtained in this study can also be used in other MANETs, or the parameters can be tuned once for a given MANET and then used by all of the nodes in that network.
Lifetime is considered as the time between the construction of the path set and the breakage of all paths in the path set. Based on the network parameters, two different scenarios have been considered. In each scenario, one aspect of the MANET characteristics is considered: the connectivity in the first scenario and the network density in the second scenario. Then the simulation results for these scenarios, using the proposed routing algorithm, are compared with previous works, namely the non-optimized HNN and noisy HNN path set selection algorithms reported in Refs. 38 and 39. All the simulation programs have been written and compiled in
MATLAB 7.10 and run on a PC with an Intel Pentium E5300 CPU and 2 GB of RAM.
In the first scenario, the number and speed of nodes are fixed and the transmission range is variable. Since the transmission range is variable in this scenario, the focus is on studying the effect of this parameter on the reliability and the number of paths. As depicted in Fig. 5, the reliability of the disjoint paths selected by the PSO-optimized HNN is higher than that of the two other algorithms. By comparing the reliability of link-disjoint paths (Fig. 5a) and node-disjoint paths (Fig. 5b), we find that link-disjoint paths are more reliable than node-disjoint paths at the same transmission range, because there are more choices when selecting link-disjoint paths than when selecting node-disjoint paths. If the transmission range is increased, the number of selected paths also increases for both the link-disjoint and the node-disjoint path sets (Fig. 6). When the radio transmission range of the nodes increases, there are more paths between the source and destination nodes for the routing algorithm to select among.
The time between the construction of the path set and the breakage of all paths in the path set is called lifetime or time to failure. As shown in Fig. 7, the lifetime increases as the transmission range increases.
Fig. 5. Reliability of different HNN-based path selection algorithms; a) Link-disjoint, b) Node-disjoint.
Fig. 6. Number of paths selected by different HNN-based algorithms; a) Link-disjoint, b) Node-disjoint.
Fig. 7. Lifetime of selected paths by different HNN-based algorithms; a) Link-disjoint, b) Node-disjoint.
Link-disjoint path sets have a longer lifetime than node-disjoint ones. In this case, the PSO-optimized HNN algorithm also performs better than the others, which means that the disjoint paths selected by this algorithm are more reliable and that the connection can be maintained longer by using them.
Fig. 8. Reliability for different number of nodes using HNN-based algorithms; a) Link-disjoint paths, b) Node-disjoint paths.
The second scenario considers a fixed transmission range while the number of nodes is variable. In this scenario, the transmission range of all nodes is set to 250 m. The reliability of the set for both the node-disjoint and the link-disjoint paths is shown in Fig. 8. As can be seen, in a high-density network there are more routes in the path set and the path set reliability is higher than in a low-density network.
Although the PSO-optimized HNN obtains similar or only slightly better results than the noisy HNN, it is noticeable that its implementation is simpler. The simulation results show that, by fine tuning the HNN, better results can be achieved with a simpler implementation and with no need for additional hardware.
6. Performance Evaluation
A performance comparison between the proposed algorithm and the shortest path (SP) algorithm is carried out in this study and the results are listed in Table 5. As can be seen, the path set reliability of the proposed algorithm is higher than that of the SP algorithm. The best improvement in reliability is achieved when the transmission range is 250 m: at this transmission range, the link-disjoint path set reliability is 4.5 times higher than the corresponding shortest path reliability. The best improvement in lifetime is also achieved when the transmission range is 250 m, where the link-disjoint path set lifetime is 3.2 times higher than the corresponding shortest path lifetime.
Hemmati and Sheikhan38 have proposed a method for path set selection using a Hopfield neural network. The Hopfield parameter settings in Ref. 38 are based on the values reported in Table 1. The average numbers of iterations for both the PSO-optimized and the non-optimized settings are reported in Table 6. The PSO-optimized HNN reported in this study takes fewer iterations, and thus less time, to reach the steady state and obtain the solution. The reliability of the path sets found by the PSO-optimized HNN is also higher than that of the path sets found by the non-optimized HNN.
Dana et al.6 have proposed the backup path set selection algorithm (BPSA), which is based on a heuristic and picks a set of highly reliable paths. As BPSA can only find link-disjoint path sets, we compare the link-disjoint path set selected by the PSO-optimized HNN algorithm with the one selected by BPSA. Table 7 shows the comparison between the proposed algorithm and BPSA. The values of path set reliability and number of paths are averaged over several simulations with different MANETs. The proposed algorithm performs better in terms of both the reliability and the number of paths: it shows up to 58.3% improvement in the path set reliability and up to 22.4% improvement in the number of paths in the set.
7. Computational Complexity of Algorithm
To determine the computational complexity, three components should be considered: the route discovery calculations, the calculation of the elements of the ρ matrix, and the normalized reliability calculations.
To determine this complexity, assume that M = TTL. The maximum number of nodes in each path of the proposed algorithm is M+1, in which the source and destination are included. So, there are M−1 intermediate nodes, which the algorithm finds in the route discovery phase. In the worst case, where all of the MANET nodes are within transmission range of each other, there are O(|V|^(M−1)) such routes.
Table 5. Reliability and lifetime comparison between proposed algorithm and SP algorithm.

Transmission    Reliability                               Lifetime (s)
range (m)       Node-disjoint  Link-disjoint  SP          Node-disjoint  Link-disjoint  SP
150             0.466          0.475          0.107       101.3          111.5          48.3
200             0.754          0.762          0.171       149.2          155.9          62.4
250             0.839          0.865          0.191       221.7          228.8          71.2
300             0.949          0.951          0.352       264.9          267.3          87.5
Table 6. Performance comparison between PSO-optimized and non-optimized HNN algorithms.

                Number of iterations                                   Reliability
Transmission    Non-        PSO-        Iterations of optimized       Non-        PSO-        Increment of optimized
range (m)       optimized   optimized   as % of non-optimized         optimized   optimized   over non-optimized (%)
150             70476       8587        12.2                          0.415       0.470       13.3
200             87944       7896        9.0                           0.723       0.758       4.8
250             83969       9601        11.4                          0.806       0.837       3.8
300             95040       11312       11.9                          0.855       0.950       11.1
Table 7. Performance comparison between proposed algorithm and BPSA algorithm.

                Reliability                              Number of paths
Transmission    Proposed     BPSA                        Proposed     BPSA
range (m)       algorithm    algorithm                   algorithm    algorithm
150             0.47         0.38                        4.9          4.8
200             0.76         0.48                        6.2          6.1
250             0.84         0.67                        8.6          7.4
300             0.95         0.71                        13.1         10.7
To determine the route discovery computational complexity, the complexity of the path reliability calculations should be considered. In a naive implementation, the path reliability is calculated independently for each route. The maximum length of each path between two nodes in the MANET is M, so the route discovery algorithm has to make O(M·|V|^(M−1)) calculations. To calculate the elements of the ρ matrix, (M−1)^2 comparisons for node-disjoint and M^2 comparisons for link-disjoint path sets should be performed between every two paths out of a total of O(|V|^(M−1)) paths. So, the ρ matrix calculation for the mentioned path sets needs O(|V|^(2(M−1))) operations.
To calculate the normalized path reliability (Ci), it is noted that, according to (13), the number of operations in this part is equal to the total number of paths. The computational complexity of this part is therefore O(|V|^(M−1)).
Since the computation time in neural networks is expected to be very short, the complexity of the proposed approach for the path set selection is best assessed in terms of the programming complexity, which is defined as the number of arithmetic operations required to determine again the proper synaptic connections and the biases each time new data is fed to the neural net. According to (15), we can conclude that by determining the elements of the ρ matrix, the synaptic connections of the neural network (Tij) are also
calculated. The biases are also specified when the normalized path reliabilities (Ci) are determined. In this study, we set M to 3, so the total computational complexity in the worst case is dominated by the ρ matrix calculation, O(|V|^(2(M−1))) = O(|V|^4).
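For illustration, the two per-path quantities used above can be computed with straightforward pairwise comparisons. This Python sketch is only one plausible reading (ours) of Eqs. (13) and (15), which are not reproduced in this section: it assumes that ρij counts the elements (intermediate nodes, or links) shared by paths i and j, and that Ci is the path reliability normalized by the maximum reliability in the route cache:

def disjointedness_and_bias(paths, path_reli, node_disjoint=True):
    # paths: list of node sequences from source to destination; path_reli: their reliabilities.
    n = len(paths)
    if node_disjoint:
        elems = [set(p[1:-1]) for p in paths]           # intermediate nodes only
    else:
        elems = [set(zip(p, p[1:])) for p in paths]     # links as (u, v) pairs
    rho = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                rho[i][j] = len(elems[i] & elems[j])    # shared elements between paths i and j
    c_max = max(path_reli)
    C = [r / c_max for r in path_reli]                  # normalized path reliabilities
    return rho, C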
8. Conclusion
In this paper, we have proposed a reliable multipath routing algorithm for MANETs using a Hopfield neural network optimized by PSO. A reliable path is constructed from links that keep the connection between two nodes alive for a long time. Each node predicts the disconnection of all its incident links. A flooding mechanism has been used for the route discovery. Finding multiple paths in a single route discovery reduces the routing overhead. The proposed algorithm is able to compute both link-disjoint and node-disjoint multiple paths. Multipath routing in a MANET consists of determining the most reliable disjoint multiple paths between each node pair within the network. The disjoint multiple path selection algorithm is implemented using a Hopfield neural network, and, in order to improve the network performance, the PSO algorithm is used to optimize the HNN parameters.
The simulation results show that PSO is a reliable approach to optimize the Hopfield network for multipath routing, since this method results in fast convergence and produces more accurate results as compared to the non-optimized HNN, the noisy HNN, the shortest path (SP) algorithm and recent research in this field. The simulation results have shown that, for different network conditions, the proposed model is efficient in selecting multiple disjoint paths. Simulations have also shown that the link-disjoint path set is more reliable than the node-disjoint one under different conditions. Simulation results show that the reliability and lifetime are increased up to 4.5 and 3.2 times, respectively, as compared to the shortest path routing algorithm. The PSO-optimized HNN routing algorithm also has better performance than the non-optimized HNN multipath routing: averaged over the different transmission ranges, the reliability of the multiple paths is increased by 8.3%, while the number of algorithm iterations is reduced to 11.1% of the non-optimized value. Also, the proposed algorithm has better performance in terms of reliability and number of paths when compared with the backup path set selection algorithm (BPSA), showing up to 58% improvement in the path set reliability and up to 22% improvement in the number of paths in the set.
References
1. T. L. Sheu and Y. J. Wu, Jamming-based medium access control with dynamic priority adjustment in wireless ad-hoc networks, IET Commun. 1(1) (2007) 34–40.
2. Y. M. Huang and T. C. Chiang, A partition network
model for ad hoc networks in overlay environments,
Wirel. Commun. Mobile Comput. 6(5) (2006) 711–725.
3. Y. Liu and J. Huang, A novel fast multi-objective
evolutionary algorithm for QoS multicast routing in
MANET, Int. J. Computational Intelligence Systems 2(3)
(2009) 288–297.
4. M. Saleema, I. Ullah, S. A. Khayamc and M. Farooq, On the reliability of ad hoc routing protocols for loss-and-delay sensitive applications, Ad Hoc Netw. 9(3) (2011) 285–299.
5. M. K. Anantapalli and W. Li, Multipath multihop routing
analysis in mobile ad hoc networks, Wirel. Netw. 16(1)
(2010) 79–94.
6. A. Dana, A. Khadem Zadeh and A. A. Sadat, Backup
path set selection in ad hoc wireless network using link
expiration time, Comput. Electr. Eng. 34(6) (2008) 503–
519.
7. S. K. Das, A. Mukherjee, S. Bandyopadhyay, D. Saha
and K. Paul, An adaptive framework for QoS routing
through multiple paths in ad-hoc wireless networks, J.
Parallel Distr. Comput. 63(2) (2003) 141–153.
8. S. J. Lee and M. Gerla, Split multi-path routing with
maximally disjoint paths in ad hoc networks, in Proc.
IEEE Int. Conf. Commun. (2001) Helsinki, Finland, pp.
3201–3205.
9. M. R. Pearlman, Z. J. Haas, P. Sholander and S. S.
Tabrizi, On the impact of alternate path routing for load
balancing in mobile ad hoc networks, in Proc. 1st
Workshop Mobile and Ad-hoc Networking and
Computing (2000) Boston, MA, pp. 3–10.
10. V. Raman, Finding the best edge-packing for two-terminal reliability is NP-hard, J. Combinatorial Math. Comb. Comput. 9 (1991) 91–96.
11. L. Magdalena, What is soft computing? Revisiting
possible answers, Int. J. Computational Intelligence
Systems 3(2) (2010) 148–159.
12. R. Seising, What is soft computing? Bridging gaps for
21st century science, Int. J. Computational Intelligence
Systems 3(2) (2010) 160–175.
13. E. H. Ruspini, Soft computing: coping with complexity,
Int. J. Computational Intelligence Systems 3(2) (2010)
190–196.
14. J. Kacprzyk, Computational intelligence and soft
computing: some thoughts on already explored and not
yet explored paths, Int. J. Computational Intelligence
Systems 3(2) (2010) 223–236.
15. P. P. Bonissone, Soft Computing: A Continuously
Evolving Concept, Int. J. Computational Intelligence
Systems 3(2) (2010) 237–248.
16. J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Nat. Acad. Sci. 81(10) (1984) 3088–3092.
17. J. Hopfield and D. Tank, Neural computation of decisions in optimization problems, Biol. Cybern. 52(3) (1985) 141–152.
18. H. E. Rauch and T. Winarske, Neural networks for
routing communication traffic, IEEE Cont. Syst. Mag.
8(2) (1988) 26–31.
19. M. K. M. Ali and F. Kamoun, Neural networks for
shortest path computation and routing in computer
networks, IEEE Trans. Neural Networks 4(6) (1993)
941–954.
20. P. Venkataram, S. Ghosal and B. P. Vijay Kumar, Neural
network based optimal routing algorithm for
communication networks, Neural Networks 15(10)
(2002) 1289–1298.
21. J. Gao and R. Chen, A hybrid genetic algorithm for the
distributed permutation flowshop scheduling problem,
Int. J. Computational Intelligence Systems 4(4) (2011)
497–508.
22. R. S. Kumar and N. Alagumurthi, Integrated total cost
and tolerance optimization with genetic algorithm, Int. J.
Computational Intelligence Systems 3(3) (2010) 325–
333.
23. P. G. Kumar, Fuzzy classifier design using modified
genetic algorithm, Int. J. Computational Intelligence
Systems 3(3) (2010) 334–342.
24. N. Mohammadi, M. R. Malek and A. A. Alesheikh, A
new GA based solution for capacitated multi source
Weber problem, Int. J. Computational Intelligence
Systems 3(5) (2010) 514–521.
25. L. S. Coelho and D. L. A. Bernert, A modified ant colony
optimization algorithm based on differential evolution for
chaotic synchronization, Expert System with Applications
37(6) (2010) 4198–4203.
26. M. Hosseinabadi Farahani and L. Hosseini, An ant
colony optimization approach for the machine-part cell
formation problem, Int. J. Computational Intelligence
Systems 4(4) (2011) 486–496.
27. X. Wang, X. Z. Gao and S. J. Ovaska, A hybrid artificial
immune optimization method, Int. J. Computational
Intelligence Systems 2(3) (2009) 248–255.
28. Y. S. Wang and C. Tap, Particle swarm optimization with
novel processing strategy and its application, Int. J.
Computational Intelligence Systems 4(1) (2011) 100–
111.
29. X. Zheng and H. Liu, A scalable coevolutionary multiobjective particle swarm optimizer, Int. J. Computational
Intelligence Systems 3(5) (2010) 590–600.
30. L. Hu, X. Che and X. Cheng, Bandwidth prediction based
on Nu-support vector regression and parallel hybrid
particle swarm optimization, Int. J. Computational
Intelligence Systems 3(1) (2010) 70–83.
31. H. Modares, A. Alfi and M. M. Fateh, Parameter
identification of chaotic dynamic systems through an
improved particle swarm optimization. Expert System
with Applications 37(5) (2010) 3714–3720.
32. W. Wang, Q. Sun, X. Zhao and F. Yang, An improved
particle swarm optimization algorithm for QoS-aware
web service selection in service oriented communication,
Int. J. Computational Intelligence Systems 3(Supplement
1) (2010) 18–30.
33. L. Gao and A. Hailu, Comprehensive learning particle
swarm optimizer for constrained mixed-variable
optimization problems, Int. J. Computational Intelligence
Systems 3(6) (2010) 832–842.
34. L. S. Coelho and D. L. A. Bernert, An improved harmony
search algorithm for synchronization of discrete-time
chaotic systems, Chaos, Solitons & Fractals 41(5) (2009)
2526–2532.
35. J. Kennedy and R. C. Eberhart, Particle swarm optimization, in Proc. IEEE Int. Conf. Neural Networks (1995) Perth, Australia, vol. 4, pp. 1942–1948.
36. H. B. Gao, C. Zhou and L. Gao, Generalized model of
particle swarm optimization, Chinese J. Comput. 28(12)
(2005) 1980–1987.
37. C. J. A. Bastos-Filho, W. H. Schuler, A. L. I. Oliveira
and L. N. Vitorino, Routing algorithm based on swarm
intelligence and Hopfield neural network applied to
communication networks, Electr. Lett. 44(16) (2008)
995–997.
38. E. Hemmati and M. Sheikhan, Hopfield neural network
for disjoint path set selection in mobile ad-hoc networks,
in Proc. 6th Int. Conf. Wireless Communication and
Sensor Networks (2010) Allahabad, India, pp. 140–144.
39. E. Hemmati and M. Sheikhan, Reliable disjoint path set selection in mobile ad-hoc networks using noisy Hopfield neural network, in Proc. 5th Int. Symp. Telecommunications (2010) Tehran, Iran, pp. 496–501.
40. D. W. Tank and J. J. Hopfield, Simple neural
optimization networks: an A/D converter, signal decision
circuit, and a linear programming circuit, IEEE Trans.
Circuits Syst. 33(5) (1986) 533–541.
41. Y. Shi and R. Eberhart, Parameter selection in particle
swarm optimization, in Proc. 7th Int. Conf. Evolutionary
Programming (1998) London, UK, pp. 591–601.
42. C. R. Mouser and S. A. Dunn, Comparing genetic
algorithms and particle swarm optimization for an inverse
problem exercise, Australian & New Zealand Industrial
and Applied Mathematics Journal 46(E) (2005) 89–101.
43. R. Hassan, B. Cohanim, O. De Weck and G. Venter, A comparison of particle swarm optimization and the genetic algorithm, in Proc. 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conf. (2005) Austin, Texas, pp. 1138–1150.
44. T. Rappaport, Wireless Communications: Principles and
Practice (Prentice Hall, PTR Upper Saddle River, NJ,
2001).
45. W. Su, S. Lee and M. Gerla, Mobility prediction in
wireless networks, in Proc. 21st Century Military
Communications Conf. (2000) Los Angeles, USA, vol. 1,
pp. 491–495.
46. T. Camp, J. Boleng and V. Davies, A survey of mobility
models for ad-hoc network research, Wirel. Commun.
Mobile Comput. 2(5) (2002) 483–502.
47. S. Pierre, H. Said and W. G. Probst, An artificial neural
network approach for routing in distributed computer
networks, Engineering Applications of Artificial
Intelligence 14(1) (2001) 51–60.
| 2 |
The Unfolding Semantics of Functional
Programs
arXiv:1708.08003v1 [] 26 Aug 2017
José M. Rey Poza and Julio Mariño
School of Computer Science,
Universidad Politécnica de Madrid
[email protected], [email protected]
Abstract. The idea of using unfolding as a way of computing a program
semantics has been applied successfully to logic programs and has shown
itself a powerful tool that provides concrete, implementable results, as its
outcome is actually source code. Thus, it can be used for characterizing
not-so-declarative constructs in mostly declarative languages, or for static
analysis. However, unfolding-based semantics has not yet been applied to
higher-order, lazy functional programs, perhaps because some functional
features absent in logic programs make the correspondence between execution and unfolding not as straightforward. This work presents an unfolding semantics for higher-order, lazy functional programs and proves
its adequacy with respect to a given operational semantics. Finally, we
introduce some applications of our semantics.
Keywords: Semantics, Unfolding, Functional Programming.
1
Introduction
The broad field of program semantics can be classified according to the different
meanings intended to be captured or the various techniques employed. Thus,
traditionally, the term denotational semantics is used when a high-level, implementation independent description of the behaviour of a program is pursued,
while operational semantics usually refers to descriptions intended to capture
more implementation-related properties of the execution of a program, which
can then be used to gather resource-aware information, or as a blueprint for
actual language implementations.
The inability of those denotational semantics to capture certain aspects of
logic programs (such as the computed answer semantics) and the introduction of
“impure” constructs in Prolog, led to a considerable amount of proposals for alternative semantics of logic programs during the 80’s and 90’s. One of the most
remarkable proposals is the so-called s-semantics approach [4] which explores
the possibility of using syntactic denotations for logic programs. In other words,
programs in a very restricted form are the building blocks of the denotation,
and program transformation (e.g. via unfolding) takes the role of interpretation transformers in traditional constructions. Being closer to the source code
facilitates the treatment of the less declarative aspects.
Prolog code / s-semantics unfolding:
add(zero,X,X).
add(suc(X),Y,suc(Z)) :- add(X,Y,Z).
S1 = {add(zero,X,X)}
S2 = {add(zero,X,X), add(suc(zero),X2,suc(X2))}

Functional code / Funct. unfolding:
add Zero x = x
add (Suc x) y = Suc (add x y)
I0 = ∅
I1 = {add Zero x = x}
I2 = {add Zero x = x, add (Suc Zero) y = (Suc y)}

Fig. 1. Logic and functional versions of a simple program, and their unfoldings.
However, in spite of the fact that unfolding is a technique equally applicable
to functional programs, little attention has been paid to its use as a semantics tool. Investigating how unfolding can be applied to find the semantics of
functional programs is the goal of this paper.
1.1
Unfolding Semantics
The process of unfolding is conceptually simple: replace any function or predicate
invocation by its definition. In logic programming this amounts to unifying some
literal in the body of a rule with the head of some piece of knowledge that has
already been calculated, and placing the corresponding body instance where the
literal was.
The previous paragraph mentions two important concepts: the first is that
of piece of knowledge generated by unfolding program rules according to all
current pieces of knowledge. Every piece of knowledge (called a fact) is valid
source code. The set of facts may increase with every iteration. A set of facts is
called an interpretation. In addition, the second concept hinted in the paragraph
above is that of initial interpretation.
Unfolding in Logic Programming. As an example, the left half of Fig. 1
shows a predicate called add that adds two Peano Naturals. This part shows
the source code (upper side) together with the corresponding unfolding results
(lower side).
The general unfolding procedure can be easily followed in the example, where
the first two clause sets are generated (S1 and S2 ).
Unfolding in Functional Programming. Unfolding in functional programming (FP) follows the very same idea of unfolding in logic programming: any
function invocation is replaced by the right side of any rule whose head matches
the invocation.
Consider the right half of Fig. 1 as the functional version of the previous
example, written in our model language. Some differences and analogies between
Functional code:
ite : Bool → a → a → a
ite True t e = t
ite False t e = e
filter : (a → Bool) → [a] → [a]
filter p [] = []
filter p (x:xs) = ite (p x) (x:(filter p xs)) (filter p xs)

Unfolding:
I0 = ∅
I1 = {ite(True,t,e) = t, ite(False,t,e) = e, filter(b,Nil) = Nil}
I2 = {ite(True,t,e) = t, ite(False,t,e) = e, filter(b,Nil) = Nil,
      filter(b,Cons(c,Nil)) | snd(match(True,b@[c])) = Cons(c,Nil),
      filter(b,Cons(c,Cons(d,e))) | snd(match(True,b@[c])) = Cons(c,Bot),
      filter(b,Cons(c,Nil)) | snd(match(False,b@[c])) = Nil,
      filter(b,Cons(c,Cons(d,e))) | snd(match(False,b@[c])) = Bot}

Fig. 2. Functional program requiring higher-order applications.
both paradigms can be spotted: In FP, unfolding generates rules (equations) as
pieces of knowledge, instead of clauses which appeared in logic programming.
The starting seed is also different: bodyless rules are used in logic programming
while the empty set is used in functional programming.
Finally, observe that both unfoldings (logic and functional) produce valid code and analogous results, with In equivalent to Sn. This fact points at two of the main reasons to define an unfolding semantics: first, they are implementable, as the procedure above shows; and second, they provide a convenient intermediate point between denotational and operational semantics when proving the equivalence between a semantics of each type.
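To make the parallel concrete, the following Python sketch (ours, not the authors' Prolog-based unfolder) reproduces the functional unfolding of the add program of Fig. 1, under strong simplifications: first-order terms only, no guards, no ⊥s facts and no cleaning step, and rule and fact variables assumed to be renamed apart:

# Terms are nested tuples such as ('Suc', ('Zero',)); variables are plain strings.

def is_var(t):
    return isinstance(t, str)

def subst(s, t):
    # apply substitution s (dict: variable -> term) to term t
    if is_var(t):
        return subst(s, s[t]) if t in s else t
    return (t[0],) + tuple(subst(s, a) for a in t[1:])

def unify(t1, t2, s):
    # syntactic unification (no occurs check); returns an extended dict or None
    t1, t2 = subst(s, t1), subst(s, t2)
    if is_var(t1):
        return s if t1 == t2 else dict(s, **{t1: t2})
    if is_var(t2):
        return dict(s, **{t2: t1})
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

# add Zero x = x ;  add (Suc m) y = Suc (add m y)
RULES = [(('add', ('Zero',), 'x'), 'x'),
         (('add', ('Suc', 'm'), 'y'), ('Suc', ('add', 'm', 'y')))]
DEFINED = {'add'}

def unfold_expr(e, facts, s):
    # yield (substitution, expression) pairs with every call to a defined function
    # rewritten by some fact; yield nothing if a call has no matching fact
    e = subst(s, e)
    if is_var(e):
        yield s, e
    elif e[0] in DEFINED:
        for head, body in facts:
            s1 = unify(head, e, s)
            if s1 is not None:
                yield from unfold_expr(body, facts, s1)
    else:                                   # constructor: unfold the arguments in turn
        def go(args, s):
            if not args:
                yield s, ()
            else:
                for s1, a in unfold_expr(args[0], facts, s):
                    for s2, rest in go(args[1:], s1):
                        yield s2, (a,) + rest
        for s2, args in go(e[1:], s):
            yield s2, (e[0],) + args

def U(facts):
    # one application of the unfolding operator
    new = set(facts)
    for head, body in RULES:
        for s, b in unfold_expr(body, facts, {}):
            new.add((subst(s, head), b))
    return new

I1 = U(set())   # {add(Zero, x) = x}
I2 = U(I1)      # adds add(Suc(Zero), y) = Suc(y), as listed for I2 in Fig. 1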
1.2
Extending Unfolding Semantics to Suit Functional Programs
Section 1.1 showed that the ideas of unfolding semantics in logic programming
can also be applied to FP. However, some features of FP (e.g. higher-order,
laziness) render the unmodified procedures invalid.
Consider the function filter in Fig. 2. It takes a list of values and returns
those values in the list that satisfy a predicate passed as its first argument.
Applying naïve unfolding to filter is impossible since ite (short for if-then-else) demands a boolean value but both p and x are unknown at unfold time (i.e. before execution).
In order to overcome this limitation, we have developed a technique capable
of generating facts in the presence of incomplete information. In this case we
generate conditional facts (the last four facts in I2 ). The function match checks
whether a given term matches an expression that cannot be evaluated at unfolding time (here, (p x)). Observe that match must be ready to deal with infinite
values in its second argument.
Note that, in automatically-generated code, such as the unfolded code shown
in Fig. 2 and the figures to come, variables are most often renamed and that our
unfolding implementation uses tuples to represent curried expressions.
1.3
Related Work
One of the earliest usages of unfolding in connection to semantics is due to Scott
[9], who used it to find the denotation of recursive functions, even though the
word unfolding was not used at the time.
Concerning logic programming, our main inspiration source is s-semantics
[4], which defines an unfolding semantics for pure logic programs that is defined
as the set of literals that can be successfully derived by using the program given.
In addition, fold and unfold have been used in connection to many other
problems in the field of logic programming. For example [7] describes a method
to check whether a given logic program verifies a logic formula. It does this by
applying program transformations that include fold and unfold.
Partial evaluation of logic programs has also been tackled by means of unfolding but it usually generates huge data structures even for simple programs.
As in logic programming, fold/unfold transformations have been used extensively to improve the efficiency of functional programs [5], but not as a way of
constructing a program’s semantics.
Unfolding has also been applied to functional-logic programming [1]. However, that paper is not oriented towards finding the meaning of a program but
to unfold it partially to achieve some degree of partial evaluation. Besides, it is
restricted to first order, eager languages.
Paper Organization Section 2 presents preliminary concepts. Section 3 describes
the unfolding semantics itself, the core of our proposal. Section 4 presents the
formal meaning we want to assign to the core language that we will be using.
Section 5 points out some applications of unfolding semantics. Section 6 concludes.
2
Preliminaries
Notation Substitutions will be denoted by σ, ρ, µ. σ(e) or just σe will denote the
application of substitution σ to e. The empty substitution will be denoted by ǫ.
e ≡ e′ will denote that the expressions e and e′ have the same syntax tree.
Given a term t, a position within t is denoted by a dot-separated list of integers. t|o denotes the content of position o within t. Replacement of the content
at position o within a term t by some term t′ is denoted by t[t′ ]|o . The set of
positions within an expression e will be denoted by Pos(e).
k, d will be used to denote constructors while c will denote guards.
The auxiliary functions fst : a×b → a and snd : a×b → b extract the first and
second element of a tuple, respectively. Boolean conjunction and disjunction are
denoted by ∧ and ∨. mgu(f t1 . . . tn , f e1 . . . en ) (where the tj are terms and ei do
not have user-defined functions) denotes its most general unifier. The conditional operator will be denoted by ◮, which has type (◮) : Bool → a → a and is defined as: (True ◮ a) = a, (False ◮ a) = ⊥s.
Regarding types, A →p B denotes a partial function from domain A to domain B. The type of π-interpretations (sets of facts) is noted by Set(F). F is intended to denote the domain from which facts are drawn. The projection of an interpretation I to predefined functions only is denoted as Ip. Lack of information is represented by ⊥s in unfolding interpretations and by the well-known symbol ⊥ when it is related to the minimum element of a Scott domain.
Lastly, HNF stands for head normal form. An expression is said to be in head
normal form if it is a variable or its root symbol is a constructor. Normal form
(NF) terms are those in HNF, and whose subterms are in NF.
2.1
Core Language. Abstract Program Syntax
The language1 that will be the base to develop this work is a functional language
with guarded rules. Guards (which are optional) are boolean expressions that
must be rewritten to True in order for the rule that contains them to be applicable. Note that the language we are using is a purely functional language (meaning that it uses pattern matching, higher-order features and referential transparency).
Let us consider a signature Σ = ⟨VΣ, DCΣ, FSΣ, PFΣ⟩ where V is the set
of variables, DC is the set of Data Constructors that holds at least ⊥s and a
tuple-building constructor, FS holds the user-defined functions and PF denotes
the set of predefined functions that holds at least a function match, a function
nunif and a function @ that applies an expression e to a list of expressions (that
is, e@[e1 , . . . , en ] represents (e e1 . . . en )). PF and FS are disjoint.
Some of the sets above depend on the program P under study, so they should
be denoted as, e.g., FS P but we will omit that subscript if it is clear from the
context. All these sets are indexed by arity. The domains for a program are:
(Variables)
V ::= x, y, z, w . . .
(Terms)
T ::= v | k t1 . . . tk
(Expressions)
E ::= t | f ′ e1 . . . ek
(Patterns)
Pat ::= f t1 . . . tk
(Rules)
Rule ::= l | g = r
(Programs)
P ::= Set (Rule)
v ∈ V, k ∈ DC , ti ∈ T
f ′ ∈ FS ∪ PF , t ∈ T, ei ∈ E
f ∈ FS , ti ∈ T
l ∈ Pat , g ∈ E, r ∈ E
Terms are built with variables and constructors only. Expressions comprise terms
and those constructs that include function symbols (predefined or not).
Note that the description corresponding to expressions (E) does not allow
for an expression to be applied to another expression but we still want our
language to be a higher order one. We manage higher order by means of partial
applications, written by using the predefined function @. Thus, an application like e1 e2 (where e1 and e2 are arbitrary expressions) is represented in our setting by e1@[e2] (or by @(e1,[e2]) in prefix form).
To ensure that programs defined this way constitute a confluent rewriting
system, these restrictions will be imposed on rules [6]: linear rule patterns, no
free variables in rules (a free variable is one that appears in the rule body but
not in the guard or the pattern) and, finally, no superposition among rules (i.e. given a function application, at most one rule must be applicable).
The core language does not include local declarations (i.e. let, where), but this does not remove any power from the language since local declarations can be translated into additional rules by means of lambda lifting.
(1) We assume the core language to be typed, although we do not develop its typing discipline here for lack of space.
3 Unfolding Semantics for the Core Language
3.1 Interpretations
Definition 1 (Fact and π-Interpretation). We will use the word fact to
denote any piece of proven knowledge that can be extracted from a program P
and which conforms to the following restrictions: (i) They have shape h | c = b,
(ii) b and c include no symbols belonging to FS , (iii) Predefined functions are
not allowed inside b or c unless the subexpression headed by a symbol in PF
cannot be evaluated further (e.g. x + 1 would be allowed in b or c but 1 + 1 would
not, 2 should be used instead) and (iv) The value of c can be made equal to True
(by giving appropriate values to its variables). The type of facts is denoted F .
Facts can be seen as rules with a restricted shape.
In addition, a π-interpretation is any set of valid facts that can be generated
by using the signature of a given program P . The concept of π-interpretation has
been adapted from the concept with the same name in s-semantics.
The reason for imposing these restrictions on facts is to have some kind of canonical form for interpretations. Even with these restrictions, a program does not have a unique interpretation, but we intend to be as close to a canonical form for interpretations as possible.
3.2
Defining the Unfolding Operator
The process we are about to describe is represented in pictorial form in the
Appendix, Sect. A in order to help understand the process as a whole.
The unfolding operator relies on a number of auxiliary functions that are described next, together with the operator itself. A full example aimed at clarifying
how these functions work can be found in the Appendix (Example 3).
Evaluation of Predefined Functions The function eval (Fig. 3) is in charge
of finding a value for those expressions which do not contain any full application
of user-defined functions. Since predefined functions do not have rules, their
appearances cannot be rewritten, just evaluated. Only predefined functions are
evaluated; all the other expressions are left untouched. Note that eval requires
the interpretation in order to know how to evaluate predefined functions.
eval : Set (F) × E → E
eval (I, x)
eval (I, (k e1 . . . en ))
eval (I, (k e1 , . . . , en ))
eval(I, (p e1 . . . en ))
eval (I, (c1 ∧. . .∧ci−1 ∧
snd (match(p, e))∧
ci+1 ∧ . . . ∧ cn ◮ e′ ))
eval (I, (x e1 . . . em ))
eval (I, (p e1 . . . em ))
eval (I, (f e1 . . . em ))
=x
= (k e′1 . . . e′n )
= (k e1 . . . en )
= (Ip e′1 . . . e′n )
x∈V
(k ∈ DC , n ≥ 0, arity (k) = n, eval (I, ei ) = e′i )
(k ∈ DC , n ≥ 0, n < arity (k))
if (p e′1 , . . . , e′n ) can be evaluated to NF
without error. It is left untouched otherwise.
(eval (I, ei ) = e′i , p ∈ PF − {match }).
=
σ(c1 ∧ . . . ∧ ci−1 ∧ b ∧ ci+1 ∧ . . . ∧ cn ◮ e′ )
= (x e1 . . . em )
= (p e1 . . . em )
= (f e1 . . . em )
if match(p, e) = (σ, b).
(x ∈ V )
p ∈ PF o , m < o.
f ∈ FS n , m ≤ n
Fig. 3. Evaluation of predefined functions
Housekeeping the Fact Set Every time a step of unfolding takes place, new
facts might be added to the interpretation. These new facts may overlap with
some existing facts (that is, be applicable to the same expressions as the existing
ones). Although overlapping facts do not alter the meaning of the program, they
are redundant and removing them makes the interpretation smaller and more
efficient. The function clean removes those redundancies. We believe this cleaning
step is a novel contribution in the field of unfolding semantics for functional
languages (see [2], where a semantics for logic functional programs is presented
but where interpretations are not treated to remove any possible overlapping).
Given an interpretation, the function clean removes the overlapping pairs in
order to erase redundant facts. Before defining clean, some definitions are needed.
Definition 2 (Overlapping Facts). A fact h | c = b overlaps with some other
fact h′ | c′ = b′ if the following two conditions are met:
– There exists a substitution µ such that: h ≡ µ(h′ ) and
– The condition c ∧ µ(c′ ) is satisfiable2 .
Intuitively, two facts overlap if there is some expression that can be rewritten
by using any of the facts.
What clean does is to remove any overlapping between facts from the interpretation it receives as argument. It does this by conserving the most specific fact
of every overlapping fact set untouched while restricting the other facts of the
set so that the facts do not overlap any more. This restriction is accomplished
by adding new conditions to the fact’s guard.
In order to be able to state that a fact is more specific than some other, we
need an ordering:
(2) Note that satisfiability is undecidable in general. This means that there might be cases where clean is unable to remove overlapping facts.
Definition 3 (Term and Fact Ordering). Let us define t ⊑ t′ (t, t′ ∈ E),
(t, t′ ) linear or µ(t) = σ(t′ ) for some substitutions µ, σ:
–
–
–
–
⊥s ⊑ t t ∈ T
t ⊑ x t ∈ T, x ∈ V
t1 . . . tn ⊑ t′1 . . . t′n if and only if ti ⊑ t′i ∀i : 1..n
(k t1 . . . tn ) ⊑ (k t′1 . . . t′n ) if and only if ti ⊑ t′i ∀i : 1..n, k ∈ DC ∪ PF
Now, this ordering can be used to compare facts.
Given two overlapping facts F ≡ f t1 . . . tn | c = b and F ′ ≡ f t′1 . . . t′n | c′ =
′
b , it is said that F ′ is more specific than F if and only if at least one of the
following criteria is met:
– t′1 . . . t′n ❁ t1 . . . tn or
– If t′1 . . . t′n and t1 . . . tn are a variant of each other (i.e., they are the same
term with variables renamed), the fact that is more specific than the other is
the one with the most restrictive guard (a guard c′ is more restrictive than
another guard c if and only if c′ entails c but not viceversa).
– If two facts are such that their patterns are a variant of each other and their
guards entail each other, the fact that is more specific than the other is the
one with the greatest body according to ⊑.
Remember that facts’ bodies do not contain full applications of user-defined
functions, so ⊑ will never be used to compare full expressions. However, ⊑ may
be used to compare expressions with partial applications or with predefined
functions. In these cases, function symbols (both from FS or from PF ) must be
treated as constructors. Note that, in a program without overlapping rules, the
bodies of two overlapping facts are forced to be comparable by means of ⊑.
Definition 4 (Function clean).
Given a fact F belonging to an interpretation I, let us define the set SIF =
{Fi ≡ f ti1 . . . tin | ci = bi ∈ I such that F overlaps with Fi and Fi
is more specific than F (i : 1..m)}.
Considering the set SIF for every fact F ∈ I, we can define clean (whose type
is Set (F ) → Set (F )) as:
clean(I) = I − I ⊥ − I O ∪
[
[f t1 . . . tn | c ∧
F ′ ≡(f t1 ...tn |c=b)∈I O
^
∀Fi ∈SIF
(nunif ((t1 , . . . , tn ), (ti1 , . . . , tin )) ∨ not (ci )) = b]
′
(Fi ≡ f ti1 . . . tin |ci = bi )
(1)
where − stands for set subtraction and:
– I ⊥ = {l = ⊥s such that (l = ⊥s ) ∈ I}. clean removes all the facts that are
identically ⊥.
nunif : T × T → Bool
nunif (x, t) = nunif (t, x)
=
nunif (k, d)
=
nunif (k, k)
=
nunif ((p1 , p2 ), (p3 , p4 ))
=
nunif (k(. . .), k′ (. . .))
=
nunif (k(p1 , . . . , pn ), k(p′1 , . . . , p′n ))=
False
True
False
nunif (p1 , p3 ) ∨ nunif (p2 , p4 )
True
nunif (p1 , p′1 ) ∨ . . . ∨ nunif (pn , p′n )
t ∈ T, x ∈ V
k, d ∈ DC , k 6= d
k ∈ DC
Tuples
k 6= k′
k ∈ DC
Fig. 4. Lack of unification between patterns: function nunif
′
– I O = {F ′ ∈ I such that SIF 6= ∅}. All the facts F ′ in I which are overlapped
by some more specific fact are removed from I and replaced by the amended
′
fact shown above which does not overlap with any fact in SIF .
The function nunif (Fig. 4) denotes lack of unification between its arguments.
Under some conditions clean will not add new facts to the given interpretation. This will happen if the guards for the facts under the big ∪ in Eq. 1 are
unsatisfiable. If the program under analysis meets certain properties, this is sure
to happen. Two definitions are needed to define those properties:
Definition 5 (Complete Function Definition). A function definition for
function f written in the core language is said to be complete if and only if for
any well typed, full application f t1 . . . tn of f , where the ti are terms there is a
rule f p1 . . . pn |g = r that can be used to unfold that application (that is, there
exists a substitution σ such that σ(t1 , . . . , tn ) = (p1 , . . . , pn ) and σ(g) satisfiable).
Definition 6 (Productive Rule). A program rule is said to be productive if
at least one fact which is not equal to the unguarded bottom (⊥s ) is generated by
unfolding that rule at some interpretation Im (m finite).
clean will not add new facts if all the function definitions in the program are
complete and all the rules in the program are productive. The following Lemma
states this. Note that the conditions mentioned are sufficient but not necessary.
Lemma 1 (When Can clean Drop Facts). Let P be a program without
overlapping rules, whose function definitions are all complete and whose rules
are all productive. Then:
For every fact H ≡ f t1 . . . tn | c = b ∈ In which is a result of unfolding the
rule Λ ≡ f s1 . . . sn | g = r, there exist in In+1 some facts which are also the
result of unfolding Λ which cover all the invocations of f covered by H.
The proof for this Lemma can be found in the Appendix.
We will be using the simplified version of clean whenever the program under
analysis meets the criteria that have been just mentioned.
To finish this section, let us state a result that justifies why it is legal to use
clean to remove overlapping facts.
match : T × E → (V →
p E) × E
match(x, e)
= ({x ← e}, True)
match(t, ⊥s )
= ({}, False)
match(t, (f e1 . . . en )) = (σh′ , c′h ∧ b′h )
match(k, k)
= ({}, True)
match((k . . .), (k′ . . .))= ({}, False)
match((k t1 . . . tn ),
(k e1 . . . en ))
= (σ1 ◦ · · · ◦ σn ,
b1 ∧ . . . ∧ bn )
match(t, (f e1 . . . en )) = ({}, False)
match(t, x)
= ({x ← t}, True)
x∈V
t ∈ T, f ∈ FS ∪ PF , hnf ((f e1 . . . en )) =
c′h ◮ e′h . match(t, e′h ) = (σh′ , b′h ).
(k ∈ DC )
(k, k′ ∈ DC , k 6= k′ )
(ti ∈ T, k ∈ DC , match(ti , ei ) = (σi , bi ))
(f ∈ FS m ∪ PF m , m > n)
(t ∈ T, x ∈ V )
Fig. 5. Function match.
umatch : T × E → (V →
p E) × E
umatch(t, e) = (σ, True)
if there exists some unifying σ such that σ(t) ≡ e
.
umatch(t, e) = (σ, snd(match(t|o , e|o )) ∧ c)
if e and t do not unify because there is at least a position o such that e|o is headed
by a symbol of PF (including @) and t|o is not a variable. umatch(t, e[(t|o )]|o ) = (σ, c).
if e and t do not unify but this is not due to a predefined
umatch(t, e) = (ǫ, False)
function symbol in e.
Fig. 6. umatch: Generation of matching conditions.
Lemma 2 (Programs without Overlappings). The fixpoint interpretation
(namely, Iω = UP∞ (I⊥ ) where U is the unfolding operator that will be presented
later) of any program P without overlapping rules cannot have overlappings. I⊥
is the empty interpretation.
The proof for this Lemma can be found in the Appendix.
Lazy Matching of Facts and Rules The unfolding process rewrites userdefined function applications but predefined functions (including partial application) will be left unaltered by the unfolding steps since there are no rules for
them. This means that when a match is sought to perform an unfolding step, the
arguments to the user-defined functions may include predefined functions that
must be evaluated before it is known whether they match some pattern. Such
applications may also generate infinite values. Thus, we need a function match3
that lazily matches a pattern to an expression.
Recall Fig. 2. The unfolding operator generates facts containing match whenever it finds a subexpression headed by a symbol in PF that needs to be matched
against some rule pattern. These special facts can be thought as imposing assumptions on what the pattern must be like before proceeding.
3
Note that match is similar to operator =:<= proposed in [3].
11
Those assumptions are included inside the fact’s guard. Two functions are
needed in connection to those assumptions: umatch (Fig. 6) 4 generates them as
a conjunction of calls to match (Fig. 5) which performs the matches at runtime.
umatch and match must be distinguished: umatch fits facts’ heads into expressions for unfolding while match is not an unfolding function; it is a function
used to check (at runtime) whether certain conditions are met in evaluated expressions. umatch does not call match: umatch generates code that uses match.
The function hnf, used in the definition for match, receives an expression and
returns that expression evaluated to Head Normal Form. hnf has type E → E.
In the result of umatch, σ is a list of assignments assigning values to variables
inside the arguments Vpassed to umatch and the right part of the result is a
condition of the form i snd(match(pi , ei )) where the pi are patterns and the ei
are expressions without symbols of FS (they have been removed by unfolding).
The function match returns whether that matching was possible and a list
of assignments from variables to expressions. The rules of match are tested from
the first to the last, applying the first suitable one only.
Both lists of assignments (the ones returned by umatch or match) are not
exactly substitutions because variables can be assigned to full expressions (not
just terms) but they behave as such.
Two remarks must be made about match: (i) The first element of the pair
returned by match is never used inside the definitions given in this paper because
it is only used in order to bind variables at runtime (not at unfolding time).
Those bindings will occur when a guard containing calls to match is evaluated.
(ii) Therefore, match is not purely functional (i.e., it is not side effect-free).
Example 1. (How umatch works.) Code that generates a situation like the one
described is the one in Fig. 7 left. Part of its unfolding appears in Fig. 7 right 5 .
When the rule for app first is unfolded, it is found that (f n) cannot be
unfolded any more but it still does not match (x:xs) (the pattern in first’s rule).
Therefore, the second rule for umatch imposes the assumption in the resulting
fact that (f n) must match (x:xs) if the rule for app first is to be applied. Note
that f@[n] (f applied to variable n) generates an infinite term in this case. This
is why match cannot be replaced by strict equality. Example 2 in the Appendix
(Sect. C) shows how unfolding behaves when infinite structures are generated.
Unfolding Operator Operator U (I) (short form for UP (I)) where I is a πinterpretation is defined as shown in Fig. 8.
Given a program P, its meaning is given by the least fixed point of UP or by
Iω (= UP∞ (I⊥ )) if the program has infinite semantics.
The auxiliary function unfold, that unfolds a rule using the facts in an interpretation, is defined in Fig. 9. The behaviour of unfold can be described as
(4) Observe that a function like umatch is not needed in pure Prolog since every atom is guaranteed to have a rule and lack of instantiation will cause a runtime error.
(5) The variables in the unfolder's output have been renamed to ease understanding.
a) Code that Needs Matching b) Unfolding of the Source Code
from n :: Int→[Int ]
∗ first(Cons (x,xs)) = x
from n n = n:( from n (n+1)) ∗ from n (n) =
Cons (n,Cons (n+1, Bot ))
first ::[ a]→a
∗ app first (f,n) |
first (x:xs) = x
snd ( match( Cons (x,xs),f@[n]))= x
app first :: (a→[b])→a→ b Note: Any code preceded by * in every line has
app first f n = first(f n) been generated by our Prolog-based unfolder. The
unfolder uses Prolog terms to represent functional
main :: Int→Int
applications. That is why the unfolder uses tuples
main n= app first from n n to represent curried applications.
Fig. 7. Lazy matching of terms and rules.
I0 = I0⊥ = ∅
⊤
Im+1 = U
S(Im ) = clean(Im+1 )
⊤
⊥
Im+1 = Λ∈Rules (unfold (Λ, Im ∪ Im
)) ∪ Im
⊥
s
s
⊤
Im+1 = {l = ⊥ such that (l = ⊥ ) ∈ Im+1
}
Fig. 8. Unfolding operator
follows: unfold receives a (partially) unfolded rule (a pseudofact) which is unfolded by means of recursive calls. When the input to unfold has no invocations
of user defined functions, it is just returned as it is (case 1). Otherwise, the
pseudofact is unfolded by considering all the facts and positions o which hold
an invocation of a user-defined function (Case 2a). Those positions occupied by
user-defined function calls which cannot be unfolded are replaced by ⊥s (case
2b). unfold returns all the possible facts obtained by executing this procedure.
When performing the unfolding of a program, unfold behaves much like the
rewriting process in a TRS (i.e., it tries all the possible pairs ⟨position o, fact⟩).
To summarize, ⊥s and match are the two enhancements required to write valid
code for unfolding functional programs. If eager evaluation is used, these enhancements would not be necessary, but naïve unfolding would still fail to work.
4
Operational Semantics
The operational semantics that describes how ground expressions written in the
kernel language are evaluated is shown in Fig. 10. The semantics defines a small
step relationship denoted by ❀. The notation e ❀ e′ means that the expression e
can be rewritten to e′ . The reduction relation (p e1 . . . en ) ❀p t (p ∈ PF ) states
that p e1 . . . en can be rewritten to t by using the definition of the predefined
function p.
The unfolding and operational semantics are equivalent in the following sense
for any ground expression goal : goal ❀∗ e′ ↔ e′ ∈ ueval (I∞ , goal ) where ❀∗ is
unfold : Rule × Set (F) → Set(F)
unfold (l | g = r, Im ) =
{l | eval (Im , g) = eval (Im , r)} if g and r have no total apps. of user funcs.
{(h′′ | c′′ = b′′ ) such that (h′′ | c′′ = b′′ ) ∈
unfold (σ(l) | eval (Im , σ(ghbj iko ∧ cj ∧ c′m )) = eval (Im , σ(rhbj iko )), Im )
∀o ∈ Pos(r) ∪ Pos(g), r|o (resp. g|o ) = f e1 . . . en , f ∈ FS n
∀(f t . . . t | c = b ) ∈ I such that
1
n
j
j
m
′
′
′
′
umatch((t
,
.
.
.
,
t
),
(e
,
1
n
1 . . . , en )) = (σ, cm ) and cm satisfiable
∪
unfold (l | eval (Im , gh⊥s iko ) = eval (Im , rh⊥s iko ), Im )
∀o ∈ Pos(r) ∪ Pos(g), r|o (resp. g|o ) = f e1 . . . en , f ∈ FS n
∄(f t1 . . . tn | cj = bj ) ∈ Im such that
umatch((t1 , . . . , tn ), (e′1 , . . . , e′n )) = (σ, c′m ) and c′m satisfiable}
A position is unfoldable:
Case 2a):
Some facts fit position o
Case 2b):
No facts fit position o
where:
– e′i = eval (Im , ei ) ∀i : 1..n
– ghtiko = g[t]|o if o ∈ Pos(g) and ghtiko = g otherwise.
– rhtiko = r[t]|o if o ∈ Pos(r ) and rhtiko = r otherwise.
Fig. 9. Unfolding of a program rule using a given interpretation
the transitive and reflexive closure of ❀ and e′ is in normal form according to
❀, ueval is a function that evaluates expressions by means of unfolding and I∞
is the limit of the interpretations found by repeatedly unfolding the program.
This equivalence is proved in the Appendix, Sect. B.3.
Note that this semantics is fully indeterministic; it is not meant to be used
in any kind of implementation and its only purpose is to serve as a pillar for
the demonstration of equivalence between the unfolding and an operational semantics. Therefore, the semantics is not lazy or greedy in itself. It is the choice
of reduction positions where the semantics’ rules are apllied what will make a
certain evaluation lazy or not.
5
Some Applications of the Unfolding Semantics
Declarative Debugging6 With declarative debugging, the debugger consults the internal structure of source code to find out what expressions depend on other expressions and turns this information into an Execution Dependence Tree (EDT).
6 The listings of unfolded code provided in this paper have been generated by our unfolder. Source at http://www.github.com/josem-rey/unfolder and test environment at https://babel.ls.fi.upm.es/~jmrey/online_unfolder/unfolding.html
(rule)
    e = f e1 . . . en , f ∈ FS n , (f t1 . . . tn |g = r) ∈ P ,
    σ = mgu((t1 , . . . , tn ), (e1 , . . . , en )), σ(g) ❀∗ True
    ─────────────────────────────────────────────
    e ❀ σ(r)

(rulebot)
    e = f e1 . . . en , f ∈ FS n ,
    ∄ Λ ∈ P such that Λ ≡ (f t1 . . . tn |g = r),
      σ = mgu((t1 , . . . , tn ), (e1 , . . . , en )), σ(g) ❀∗ True
    ─────────────────────────────────────────────
    e ❀ ⊥s

(predef)
    e = (p e1 . . . en ), e ❀p t, t ∈ T , p ∈ PF
    ─────────────────────────────────────────────
    e ❀ t

(andfalse)
    ei ❀∗ False for some i ∈ {1, 2}
    ─────────────────────────────────────────────
    e1 ∧ e2 ❀ False

(andtrue)      True ∧ True ❀ True
(ifthenfalse)  (False ◮ e) ❀ ⊥s
(ifthentrue)   (True ◮ e) ❀ e

Fig. 10. Operational Semantics
The debugger uses this information as well as answers from the user to blame
an error on some rule. We have experimentally extended the unfolder to collect
intermediate results as well as the sequence of rules that leads to every fact. This
additional information allows our unfolder to build the EDT for any program
run. Consider for example this buggy addition:
A1:  addb Zero n = n
A2:  addb Suc(Zero) n = Suc(n)
A3:  addb Suc(Suc(m)) n = Suc(addb m n)
M24: main24 = addb Suc(Suc(Suc(Zero))) Suc(Zero)
We can let the program unfold until main24 is fully evaluated. This happens in I3 ,
which contains the following fact for the main function (after much formatting):
root: main24 = Suc(Suc(Suc(Zero))) <M24>
n1:   addb (Suc(Suc(Suc(Zero))), Suc(Zero)) = Suc(Suc(Suc(Zero))) <A3>
n2:   addb (Suc(Zero), Suc(Zero)) = Suc(Suc(Zero)) <A2>
Now, following the method described in [8], we can think of the sequence above
as a 3-level EDT in which the root and node n1 contain wrong values while the
node n2 is correct, putting the blame on rule A3.
The main reason that supports the use of unfolding for performing declarative debugging is that it provides a platform-independent environment to test
complex programs. This platform independence can help check the limitations
of some implementations (such as unreturned answers due to endless loops).
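To make the blaming step more concrete, the following is a minimal Haskell sketch (not the paper's actual implementation; the names and the oracle are ours) of how an EDT built from the collected rule sequences can be used to blame a rule, following the standard declarative-debugging scheme: a node whose judgement is wrong but all of whose children are correct points at the rule applied there.

import Data.Maybe (listToMaybe, mapMaybe)

-- One EDT node: the rule applied there, the rendered judgement and its children.
data EDT = Node
  { ruleName  :: String   -- rule applied at this node, e.g. "A3"
  , judgement :: String   -- rendered fact, e.g. "addb(3,1) = 3"
  , children  :: [EDT]
  }

-- 'wrong' plays the role of the user's answers about each judgement.
blame :: (String -> Bool) -> EDT -> Maybe String
blame wrong (Node r j cs)
  | not (wrong j)                    = Nothing   -- this subtree computed a correct value
  | all (not . wrong . judgement) cs = Just r    -- wrong value, correct children: blame r
  | otherwise                        = listToMaybe (mapMaybe (blame wrong) cs)

On the addb example above, an oracle that marks the root and n1 as wrong and n2 as correct makes blame return Just "A3", as described.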
Test Coverage for a Program It is said that a test case for a program covers
those rules that are actually used to evaluate the test case. We would like to
reach full code coverage with the smallest test set possible. The unfolder can be
a valuable tool for finding such a test set if it is enhanced to record the list of
rules applied to reach every fact.
With the enhanced unfolder, one calculates interpretations until every rule appears at least once in the rule list associated with some fact that contains no ⊥s , and then applies a minimal set coverage algorithm to find the set of facts that will be used as the minimal test set. For example:
R1: rev [] = []                        // List inversion
R2: rev (x:xs) = append (rev xs) [x]
A1: append [] x = x
A2: append (x:xs) ys = x:(append xs ys)
The first interpretation contains:
∗ rev (Nil) = Nil <R1>
∗ append(Nil,b) = b <A1>
∗ append(Cons(b,c),d) = Cons(b,Bot) <A2>
So, appending the empty list to any other achieves 50% coverage of append.
Reversing the empty list uses 1 rule for rev: the coverage rate is 50% too. I3 has:
∗ append(Cons(b,Nil),c) = Cons(b,c) <A2,A1>
...
∗ rev (Cons(b,Cons(c,Nil))) = Cons(c,Cons(b,Nil)) <R2,R2,R1,A1,A2,A1>
This shows that the minimal test set to test append must consist of appending a one-element list to any other list. Meanwhile, reversing a list with two elements achieves 100% coverage of the code: all the rules are used.
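The "minimal set coverage" step mentioned above can be sketched as a greedy covering over the rule sets recorded in the traces. The following Haskell fragment is only a hedged illustration under our own naming (greedy set cover is an approximation; the paper only asks for some set coverage algorithm):

import Data.List (maximumBy)
import Data.Ord (comparing)
import qualified Data.Set as S

type RuleId = String
type FactId = String

-- Greedily pick the fact whose trace covers the most still-uncovered rules.
cover :: S.Set RuleId -> [(FactId, S.Set RuleId)] -> [FactId]
cover uncovered candidates
  | S.null uncovered || null candidates = []
  | S.null gain                         = []   -- remaining rules unreachable from the candidates
  | otherwise                           = f : cover (uncovered S.\\ rs) candidates
  where
    (f, rs) = maximumBy (comparing (S.size . S.intersection uncovered . snd)) candidates
    gain    = S.intersection uncovered rs

-- For the rev/append program above:
--   cover (S.fromList ["R1","R2","A1","A2"])
--         [ ("append(Nil,b)=b",              S.fromList ["A1"])
--         , ("rev(Nil)=Nil",                 S.fromList ["R1"])
--         , ("rev(Cons(b,Cons(c,Nil)))=...", S.fromList ["R2","R1","A1","A2"]) ]
--   ~> ["rev(Cons(b,Cons(c,Nil)))=..."]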
To close this section, we would like to mention that Abstract Interpretation
can be used along with unfolding to find properties of the programs under study
such as algebraic or demand properties. See examples 4, 5, 6 in the Appendix.
6 Conclusion and Future Work
We have shown that unfolding can be used as the basis for the definition of a
semantics for lazy, higher-order functional programs written in a kernel language
of conditional equations. This is done by adapting ideas from the s-semantics
approach for logic programs, but dealing with the aforementioned features was
not trivial, and required the introduction of two ad-hoc primitives to the kernel language: first, a syntactic representation of the undefined and second, a
matching operator that deals with partial information.
Effort has also been devoted to simplifying the code produced by the unfolder,
by erasing redundant facts and constraining the shape of acceptable facts. We
have provided a set of requirements for programs that ensure the safety of these
simplification procedures. We have also proven the equivalence of the proposed
unfolding semantics with an operational semantics for the kernel language.
We have implemented an unfolder for our kernel language. Experimenting
with it supports our initial claims about a more “implementable” semantics.
Regarding future work, we want to delve into the applications that have been
just hinted here, particularly declarative debugging and abstract interpretation.
Finally, we are working on a better characterization of the necessary conditions that functional programs must meet in order for different optimized versions of the clean method to work safely.
References
1. Alpuente, M., Falaschi, M., Vidal, G.: Narrowing-driven partial evaluation of functional logic programs. In: Proc. ESOP'96. LNCS, vol. 1058. Springer (1996)
2. Alpuente, M., Falaschi, M., Moreno, G., Vidal, G.: Safe folding/unfolding with conditional narrowing. In: Proc. ALP'97, pp. 1–15. Springer LNCS (1997)
3. Antoy, S., Hanus, M.: Declarative programming with function patterns. In: Proc. LOPSTR'05, pp. 6–22. Springer LNCS (2005)
4. Bossi, A., Gabbrielli, M., Levi, G., Martelli, M.: The s-semantics approach: Theory and applications. Journal of Logic Programming 19/20, 149–197 (1994)
5. Burstall, R.M., Darlington, J.: A transformation system for developing recursive programs. J. ACM 24(1), 44–67 (Jan 1977)
6. Hanus, M.: The integration of functions into logic programming: From theory to practice. Journal of Logic Programming, 583–628 (1994)
7. Pettorossi, A., Proietti, M.: Perfect model checking via unfold/fold transformations. In: Computational Logic, LNCS 1861, pp. 613–628. Springer (2000)
8. Pope, B., Naish, L.: Buddha – A declarative debugger for Haskell (1998)
9. Scott, D.: The lattice of flow diagrams (Nov 1970)
APPENDIX
This appendix is not part of the submission itself and is provided just as supplementary material for reviewers. It pursues the following goals:
1. To provide a pictorial representation of the functions involved in the unfolding process, which hopefully helps in grasping how the whole process works
(Sect. A).
2. To describe in what sense the unfolding and the operational semantics are
equivalent and to prove such equivalence (Sect. B).
3. To present a larger example that intends to clarify how the functions that
have been used actually work as well as additional examples (Sect. C).
4. To establish some results that support the validity of the code generated by
the unfolder (Sect. D).
A Pictorial Representation of the Unfolding Process
[Fig. 11. Relation among the functions taking part in unfolding — a diagram showing U invoking unfold and clean; unfold calling eval and umatch; and, below a dashed line, the runtime execution of the generated code (facts of the form hi | ci ∧ c′mi = bi ) using match.]
Throughout Sect. 3 a number of auxiliary functions were presented. These functions are depicted in Fig. 11. The figure can be explained as follows:
The starting point is U . U does nothing but call unfold and remove the
redundant facts by calling clean. It is then up to the user to call U again to
perform another step in the unfolding process.
The second level of the figure shows unfold, which takes a program rule and
unfolds it as much as possible. unfold calls itself with the output of its previous
execution until no more positions are left to unfold (arrow pointing downwards).
If unfold receives an input where at least one position is unfoldable, it calls eval
on the arguments of the unfoldable expression and then calls umatch to perform
the actual fitting between the unfoldable position and the head of some fact.
The last level of the figure (below the dashed line) represents the execution of the unfolded code. This part is not related to the definition of the unfolding operator, but to its runtime execution. The code is made of the
output of unfold whose guards are (possibly) extended with c′mi , the output from
umatch, which contains the invocations to match. Observe that the output from
umatch goes to the generated code only, not to the unfolding process.
To the best of our knowledge, this unfolding process is a first effort to formulate an unfolding operator beyond naïve unfolding.
B Equivalence between the Unfolding Semantics and the Operational Semantics
B.1 Unfolding of an Expression
Let us define a function ueval that finds the normal form of a given expression by means of unfolding. In short, what ueval does is to evaluate a given
(guarded) expression by unfolding it according to a given interpretation.
The function ueval has type ueval : Set (F ) × E → Set (E) and is defined as
shown in Fig. 12.
Note that any expression e is equivalent to (True ◮ e).
B.2 Trace of a Fact or an Expression
Given a fact F belonging to any interpretation I, its trace is the list of pairs (Λi , o), where Λi is a rule in rule(P ) ∪ {Λ⊥s ,f ≡ f x1 . . . xn = ⊥s | f ∈ FS n } and o is a position within the expression to which the next rule in the trace is to be applied. This position indicates which subexpression within the current expression is to be replaced by the body of the rule applied.
Let us define the function that returns all the traces associated to all the
facts derivable from a single rule (++ denotes list of lists concatenation that
returns the list of lists resulting from appending every list in the first argument
to every list in the second argument):
tr ′ : Set (F ) × R → [τ ]
where R is the type of program rules and τ is the type of traces.
– tr′ (I, f t1 . . . tn |c = b) = [ ] if f t1 . . . tn |c = b is a valid input for the case 1
of unfold .
– tr′ (I, f t1 . . . tn |c = b) = o.tr (I, F ) ++ tr ′ (I, F ′ ) if the case 2a) of unfold
can be applied to f t1 . . . tn |c = b using fact F ∈ I at position o. F ′ is the
result of unfolding f t1 . . . tn |c = b as it is done in the aforementioned case
of unfold .
– tr′ (I, f t1 . . . tn |c = b) = [[(Λ⊥s ,f ′ , o)]] ++ tr ′ (I, F ′ ) if the case 2b) of unfold
can be applied to f t1 . . . tn |c = b at position o. F ′ is the result of unfolding
f t1 . . . tn |c = b as it is done in case 2b) of unfold .
ueval : Set(F) × E → Set(E)
ueval (I, e) = {e}
    if no rule of uevalAux applies to any position of e
ueval (I, e) = ⋃o ueval (I, e[uevalAux (I, e|o )]|o )
    ∀o such that a rule of uevalAux is applicable to e|o

uevalAux : Set(F) × E → E
uevalAux (I, p e1 . . . en ) = t
    if p e1 . . . en ❀p t (p ∈ PF n )
uevalAux (I, (c1 ∧ . . . ∧ ci−1 ∧ snd (match(p, e)) ∧ ci+1 ∧ . . . ∧ cn ◮ e′ )) = σ(c1 ∧ . . . ∧ ci−1 ∧ b ∧ ci+1 ∧ . . . ∧ cn ◮ e′ )
    if match(p, e) = (σ, b)
uevalAux (I, True ∧ True) = True
uevalAux (I, b1 ∧ b2 ) = False
    if b1 , b2 ∈ Bool and b1 = False or b2 = False
uevalAux (I, True ◮ e) = e
uevalAux (I, False ◮ e) = ⊥s
uevalAux (I, f e1 . . . en ) = σ ′ (c ◮ b)
    if f ∈ FS n and ∃ f t1 . . . tn |c = b ∈ I such that σ ′ = mgu((t1 , . . . , tn ), (e1 , . . . , en )) with σ ′ (c) = True
uevalAux (I, f e1 . . . en ) = ⊥s
    if f ∈ FS n and ∄ f t1 . . . tn |c = b ∈ I such that σ ′ = mgu((t1 , . . . , tn ), (e1 , . . . , en )) with σ ′ (c) = True

Fig. 12. The ueval function: Evaluating expressions by means of unfolding
The composition of a position o and a trace (denoted by o.tr (. . .) above) is
defined (for every trace in a given list) as:
– o.[ ] = [ ]
– o.[(Λ, o′ )|xs] = [(Λ, o.o′ )|o.xs]
The list of traces for a fact F with respect to an interpretation I (tr (I, F )) relies
on tr ′ :
tr : Set (F ) × F → [τ ]
tr (I, F ) = [[(ΛF , {})]] ++ {τ such that τ ∈ tr ′ (I, ΛF )∧
unfold (ΛF , I) generates F according to the steps given by the trace τ }
where ΛF is the only program rule that can generate F .
Note that tr and tr ′ are mutually recursive.
The list of traces of an expression e according to interpretation I (denoted
Tr (I, e)) is defined as the tail of all the lists in tr(I, goal ′ = e) where goal ′ is a
new function name that does not appear in the program P and the tail of a list
is the same list after removing its first element.
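As a small sketch (with our own names; positions are rendered as lists of argument indices and {} is the empty path), the trace bookkeeping of this section can be written in Haskell as follows, with compose playing the role of o.tr (. . .):

type Pos     = [Int]
type RuleTag = String          -- rule identifier, e.g. "A3" or "Lambda Bot"
type Trace   = [(RuleTag, Pos)]

-- o.tr(...): prefix every position recorded in every trace with o.
compose :: Pos -> [Trace] -> [Trace]
compose o = map (map (\(lam, o') -> (lam, o ++ o')))

-- e.g.  compose [1] [[("Goal3", []), ("Lambda Bot", [1])]]
--       ~> [[("Goal3", [1]), ("Lambda Bot", [1,1])]]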
B.3 Equivalence between the Unfolding and Operational Semantics
This section will show that the unfolding semantics and the operational semantics are equivalent in the following sense for any ground expression goal :
goal ❀∗ e′ ↔ e′ ∈ ueval (I∞ , goal )
(2)
where ❀∗ is the transitive and reflexive closure of ❀ and e′ is in normal form
according to ❀.
Given a program P , I∞ is the limit of the following sequence:
– I0 = ∅
– In+1 = ⋃Λ∈rule(P ) unfold (Λ, In )   (n ≥ 0)
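The following Haskell fragment is a schematic, hedged rendering (types simplified, names ours, with the unfold operator of Fig. 9 taken as an opaque parameter) of how this chain of interpretations and its limit are approximated in practice:

import qualified Data.Set as S

-- The chain I0, I1, I2, ... given the unfold operator and rule(P).
iterations :: Ord fact => (rule -> S.Set fact -> S.Set fact) -> [rule] -> [S.Set fact]
iterations unfoldOp rules = iterate step S.empty
  where step i = S.unions [unfoldOp r i | r <- rules]

-- I∞ approximated as the first repeated element of the chain.  This only
-- terminates when the semantics of the program is finite (it would loop on a
-- program like the 'ones' example of Appendix C).
limit :: Ord fact => [S.Set fact] -> S.Set fact
limit (x:y:rest) | x == y    = x
                 | otherwise = limit (y:rest)
limit [x]        = x
limit []         = S.empty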
B.4 Proof of Equivalence
We are now proving that Eq. 2 holds.
→)
This part of the double implication will be proven by induction on the number
of ❀-steps that an expression requires to reach normal form.
Base case (n=0): If goal ❀0 e′ , then goal = e′ , which means that goal is
in normal form already. Therefore, goal has no full applications of symbols in
PF ∪ FS . In that case, ueval (I, goal ) = {goal } ∀I ∈ Set (F ).
Induction step:
Let us take as induction hypothesis that, for any expression goal such that goal ❀n e′ (where e′ is in normal form), e′ ∈ ueval (I∞ , goal ).
Let en+1 be an expression that requires n + 1 ❀-steps in order to reach
normal form. Then there must exist (at least) one expression en such that:
en+1 ❀ en ❀n e′
where e′ is in normal form. Now, if we prove that both en+1 and en unfold to
the same values (that is, ueval (I∞ , en+1 ) = ueval (I∞ , en )), then we can apply
the induction hypothesis to en to state that en ❀n e′ → e′ ∈ ueval (I∞ , en+1 ) =
ueval (I∞ , en ).
Let us check all the rules in the operational semantics for the single ❀ step
going from en+1 to en .
Rule rule
In this case, en+1 = f e1 . . . ea (f ∈ FS a ) and en = σ(r) (assuming that the rule for f within program P is Λ ≡ f t1 . . . ta |g = r and σ = mgu((t1 , . . . , ta ), (e1 , . . . , ea ))).
By reductio ad absurdum, let us assume now that ueval (I∞ , en+1 ) ≠ ueval (I∞ , en ). Then,
ueval (I∞ , f e1 . . . ea ) ≠ ueval (I∞ , σ(g ◮ r))
However, note that σ(Λ) is equal to the rule instance f e1 . . . ea |σ(g) = σ(r), which states exactly the opposite of the equation above. We have reached a contradiction, which means that our initial hypothesis (namely, ueval (I∞ , en+1 ) ≠ ueval (I∞ , en )) is false.
Rule rulebot
In this case, en+1 = f e1 . . . ea (f ∈ FS a ) and en = ⊥s . If there is no rule in
P whose pattern can unify with en+1 while at the same time having a satisfiable
guard, it is certain that no fact in any interpretation derived from P will be such
that its head unifies with en+1 while at the same time having a satisfiable guard
(which forces uevalAux to use its last case). That means that en+1 cannot be
reduced to anything different from ⊥s . The same happens with en (which is
already equal to ⊥s ). Therefore, ueval (I∞ , en+1 ) = ueval (I∞ , en ) as we wanted
to prove.
Rule predef
In this case, en+1 = p e1 . . . ea and en = t (t ∈ T ) where p ∈ PF and
p e1 . . . ea has value t according to the predefined functions known to the
environment being used.
Also in this case ueval (I, p e1 . . . ea ) = {t} and ueval (I, t) = {t} for any
interpretation I. This case simply evaluates predefined functions.
Rule andfalse
In this case, en+1 = e1 ∧ e2 and en = False when either e1 ❀∗ False or
e2 ❀∗ False. Let us assume without loss of generality that e1 ❀∗ False.
Since en+1 requires n + 1 ❀-steps to reach normal form, then e1 must take at
most n steps to reach its normal form. This means that the induction hypothesis
is applicable to e1 and therefore ueval (I∞ , e1 ) ⊃ {False}. This in turn means
that ueval (I∞ , e1 ∧ e2 ) ⊃ {False} as we wanted to prove (assuming that the
logical connector ∧ is defined as lazy conjunction in eval ).
The remaining rules (andtrue, ifthentrue, ifthenfalse) are proven in
a similar way.
Let us proceed now to the reverse implication.
←)
The proof will be driven by structural induction on the shape of the expression to be evaluated (goal ).
Let goal be an expression that has no full applications of any symbol of FS
or PF . Then, ueval (I∞ , goal ) = {goal } and ❀ cannot apply any rewriting, so
goal ❀∗ goal , as we wanted to prove.
Next, let goal = p e1 . . . en (p ∈ PF n ) and no ei has any full application of
any symbol in FS ∪ PF . Then, ueval (I∞ , goal ) = {t} if goal ❀p t. ❀ will apply
predef, ifthentrue, ifthenfalse, andtrue or andfalse to evaluate the
same predefined function and reach the same t.
Next, let goal = f e1 . . . en (f ∈ FS n ). If ueval (I∞ , goal ) includes e′ that is
because goal has a trace (since e′ is in normal form). That is, [(Λ1 , o1 ), . . . , (Λk , ok )] ∈
Tr (I∞ , goal ) for some k > 0. We are now going to prove that:
e′ ∈ ueval (I∞ , goal ) ∧ [(Λ1 , o1 ), . . . , (Λk , ok )] ∈ Tr (I∞ , goal )
→ goal ❀∗ e′ (using the exact sequence of rules given below)
Specifically, it will be proven that every trace element (Λ, o) is equivalent to the
following ❀-sequence at position o of the expression input for the trace element:
1. The rules dealing with predefined functions (namely, predef, ifthentrue,
ifthenfalse, andtrue, andfalse) will be applied to the expressions being
rewritten as many times as possible.
2. rule or rulebot: The rule rule will be applied if Λ 6= Λ⊥s ,f . rulebot
will be applied otherwise.
3. The rules dealing with predefined functions (namely, predef, ifthentrue,
ifthenfalse, andtrue, andfalse) will be applied to the expression returned by the previous step as many times as possible.
The proof will be driven by induction on the length of the trace for goal .
Base case: ([(Λ ≡ f p1 . . . pa |g = r, o = {})] ∈ Tr (I∞ , goal )).
If goal can be rewritten to normal form e′ by just using the rule Λ, that means that Λ ≠ Λ⊥s ,f (since e′ is assumed to be in normal form) and that I∞ must contain a fact f t1 . . . ta |c = b, according to the definition for tr ′ (second case), such that ∃σ ′ = mgu((t1 , . . . , ta ), (e+1 , . . . , e+a )), eval (I∞ , σ ′ (c)) = True and eval (I∞ , σ ′ (b)) = e′ , where e+i = eval (I∞ , ei ).
On the other hand, ❀ can apply rule to goal = f e1 . . . ea . This rule rewrites goal to σ(g ◮ r), where σ = mgu((p1 , . . . , pa ), (e+1 , . . . , e+a )). Note that the application of eval here is equivalent to the application of all the cases of uevalAux except the last two (let us call these first cases uevalAuxpredef ), which in turn have the same effect as the ❀-rules predef, ifthentrue, ifthenfalse, andtrue, andfalse applied as many times as necessary to evaluate any predefined function that appears in any of the arguments to f . Let us remark that match is unnecessary in ❀ since all the expressions handled by ❀ are ground and the substitution returned by match generates the same ground expressions as σ in rule rule. We also assume that match cannot be used in normal programs.
The same cases for predefined functions (uevalAuxpredef ) can be applied to the expression above to get σ(eval (I∞ , g) ◮ eval (I∞ , r)).
By construction, we know that σ ′ (t1 , . . . , ta ) = σ(p1 , . . . , pa ) = (e+1 , . . . , e+a ). Since any valid program in our setting can have at most one rule that matches the ground expression f e1 . . . ea , then:
– eval (I∞ , σ ′ (b)) = eval (I∞ , σ(r)) = e′
– eval (I∞ , σ ′ (c)) = eval (I∞ , σ(g)) = True
Therefore, both unfolding and ❀ have used a fact and a rule which were syntactically identical (once predefined functions have been evaluated) to find the same answer for goal . This proves the base case.
Induction step: Now the length of the trace for goal is equal to l +1 (l > 0).
Let us assume, as induction hypothesis that:
e′ ∈ ueval (I∞ , goal ) → goal ❀∗ e′
(provided that the trace for goal has exactly l elements, and using the sequence of ❀-rules given earlier for that trace)
Let us prove the equation for goals with trace of length l+1. In order to do that let
us consider that [(Λ1 , o1 ), . . . , (Λl+1 , ol+1 )] ∈ Tr (I∞ , goal ) and an intermediate
expression el whose trace is the same as before except for the first element.
Two cases have to be looked at here: one in which rulebot is applied as the first step (that is, Λ1 = Λ⊥s ,f ′ for some f ′ ) and another in which rule is applied as the first step (that is, Λ1 ≠ Λ⊥s ,f ′ ).
Let us begin with rulebot. If el = goal [⊥s ]|o1 , the following must be true:
– goal |o1 = f ′′ e′′1 . . . e′′f (f ′′ ∈ FS f )
– ∄ F ′′ ∈ I∞ such that F ′′ ≡ f ′′ t′′1 . . . t′′f |c′′ = b′′ , σ ′ = mgu((t′′1 , . . . , t′′f ), (e′′+1 , . . . , e′′+f )) where e′′+j = eval (I∞ , e′′j ), and True ∈ ueval (I∞ , σ ′ (c′′ )).
– If F ′′ does not exist, that means that no rule for f ′′ originates a fact like F ′′ , which in turn means that no rule unifies with (e′′+1 , . . . , e′′+f ) either. This forces ❀ to apply rulebot to el |o1 , and ueval does the same (its last case).
Note that the sequence of ❀-rules applied in this case was (predef, ifthentrue, ifthenfalse, andtrue, andfalse)∗ , rulebot.
Since the trace for goal [⊥s ]|o1 has length l, the induction hypothesis is suitable for it and then:
e′ ∈ ueval (I∞ , el ) → el ❀∗ e′
(by using the sequence of ❀-rules given earlier for el )
and since the normal form for goal is the same as the normal form for el by
definition of unfolding, we have that the trace for goal has the required shape.
Lastly, consider some goal whose trace is again [(Λ1 , o1 ), . . . , (Λl+1 , ol+1 )] but where Λ1 ≠ Λ⊥s ,f . Let us apply uevalAuxpredef (or, equivalently, predef, ifthentrue, ifthenfalse, andtrue, andfalse) as many times as possible to the arguments of goal |o1 to get f ′′ e′′+1 . . . e′′+f , and then rule (the only rule that ❀ can apply to f ′′ e′′+1 . . . e′′+f ) by using the rule Λ1 ≡ f ′′ p′′1 . . . p′′f |g ′′ = r′′ and the unifier σ = mgu((p′′1 , . . . , p′′f ), (e′′+1 , . . . , e′′+f )). Now, we can rewrite goal to:
goal [σ(g ′′ ◮ r′′ )]|o1
given that according to tr ′ ’s definition (second case), the rule Λ1 is applicable to
goal |o1 (since the trace of any fact begins by the rule which originates the fact and
unfold has been able to apply a fact derived from Λ1 to goal |o1 ). This means that
the expression above can be rewritten to goal [eval (I∞ , σ(g ′′ ◮ r′′ ))]|o1 which is
the new el . Note that the eval in the former expression is equivalent to (predef,
ifthentrue, ifthenfalse, andtrue, andfalse)∗ (which in turn is equivalent
to applying the cases of uevalAuxpredef as many times as necessary).
What we know now is:
– The normal form for goal is the same as the normal form for el since ❀
conserves the semantics by definition.
– The rewriting sequence from goal to its normal form is the same as the
rewriting sequence for el preceded by the rules applied above (i.e. (predef,
ifthentrue, ifthenfalse, andtrue, andfalse)∗ , rule, (predef, ifthentrue, ifthenfalse, andtrue, andfalse)∗ ).
– Since the length of the trace for el is equal to l, the induction hypothesis is
applicable to it and then:
e′ ∈ ueval (I∞ , el ) → el ❀∗ e′
(by using the sequence of ❀-rules given earlier for el )
Since we have proved that, given an unfolding sequence, we are able to find a precise sequence of ❀-rules that produces the very same effect on any given expression, we have also proved the ←) implication of the theorem.
B.5 Example
Let us present an example that may clarify some of the concepts involved in the
proof of equivalence. Consider this program:
f(x)=g(x+1)
g(x)=h(x+2)
h(x)=x+3
j(5)=6
f2(x,y)=x+y
goal=f(4)
goal2=f2(f(4),f(4))
goal3=K(j(5))
What we have here are a constructor (K), some test goals (goal, goal2 and
goal3) and some functions. A first function group (f,g,h) is such that f requires
the evaluation of g and h. We will see that the trace for f’s facts reflects this
issue. Next, we have two more unrelated functions. j and goal3 will be used to
demonstrate the usage of rulebot while f2 and goal2 will show why non-unique
traces exist.
The first interpretation generated by the unfolder is:
∗ h(b) = b+3 <H>
∗ j(5) = 6 <J>
∗ f2(b,c) = b+c <F2>
∗ goal3 = K(Bot) <Goal3>
The sequence enclosed between < and > is the trace of each fact. It can be seen, however, that the unfolder does not display the position of every step, and
it does not display the usage of Λ⊥s ,j rules either (see the last fact).
So, the whole trace for the last fact would be
[(Goal3,{}), (Lambda Bot,{1})].
All the other facts have a trace of length 1. It can be seen that they are
identical to their respective program rules. This property supports the proof for
the base case of the induction since applying the rule to perform rewriting is
exactly the same as applying a fact with a trace length of one. Realize that even
though these facts cannot be unfolded any further, they require eval to reach a
normal form (since the addition needs to be evaluated after its arguments have
been bound to ground values).
Further interpretations provide more interesting traces. This is I4 :
∗ h(b) = b+3 <H>
∗ j(5) = 6 <J>
∗ f2(b,c) = b+c <F2>
∗ g(b) = b+2+3 <G,H>
∗ goal3 = K(6) <Goal3,J>
∗ f(b) = b+1+2+3 <F,G,H>
∗ goal = 10 <Goal,F,G,H>
∗ goal2 = 20 <Goal2,F2,F,G,H,F,G,H>
Take a look at how the fact regarding goal reflects the dependency between f,
g and h. That fact has a trace of length 4 (it is very easy to follow how goal
is evaluated by looking at its trace). Removing the first element of its trace
(as needed in the induction step) yields the trace <F,G,H> for which there is a
fact (the fact for f). This means that in this case, the induction step says that
evaluating goal is the same as applying rule to find f(4) (eval is not needed in
this step) and then applying the induction hypothesis to f(4) whose trace is an
element shorter than that of goal.
Finally, consider how goal2 can be evaluated to normal form in multiple
orders. Since f2 demands both arguments, both of them must be taken to normal
form, but the order in which this is done is irrelevant. Since our unfolder does not show the positions of the reduction steps, all the traces for goal2 look the same,
but more than one trace would appear if positions were taken into account.
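For illustration only (the positions below are hypothetical and the rendering is ours), two of the traces goal2 would exhibit if positions were printed might look as follows; they apply the same rules and differ only in the order in which the two arguments of f2 are reduced:

traceLeftFirst, traceRightFirst :: [(String, [Int])]
traceLeftFirst  = [("Goal2",[]),("F2",[]),("F",[1]),("G",[1]),("H",[1]),("F",[2]),("G",[2]),("H",[2])]
traceRightFirst = [("Goal2",[]),("F2",[]),("F",[2]),("G",[2]),("H",[2]),("F",[1]),("G",[1]),("H",[1])]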
B.6 Lemma: Applicability of Every Step Inside a Fact's Trace
We have seen that the proof of equivalence between the operational semantics
and the unfolding semantics relies on the fact that every element inside a fact’s
trace is applicable to the expression resulting from applying all the trace steps
that preceded the steps under consideration. How can we be sure that a trace
step is always applicable to the expression on which that step must operate?
This lemma states that this will always happen. Intuitively, what the lemma
says is that the trace of a fact is nothing other than the sequence of rules that
unfold has applied to get from a program rule to a valid fact.
Lemma 3 (Applicability of Every Step Inside a Fact’s Trace). Let e be
an expression and let [(Λ1 , o1 ), . . . , (Λm , om )] ∈ Tr (I∞ , e). If e ❀∗ em′ (m′ ≥
0, m′ < m) and [(Λm′ +1 , om′ +1 ), . . . , (Λm , om )] ∈ Tr (I∞ , em′ ) then the following
assertions are true:
1. If Λm′ +1 = Λ⊥s ,f ′ then ∄ Λ ∈ rule(P ), Λ ≡ f ′ t′1 . . . t′n |g ′ = r′ , such that ∃σ = mgu(f ′ t′1 . . . t′n , ueval (I∞ , em′ |om′ +1 )) and True ∈ ueval (I∞ , σ(g ′ )).
2. If Λm′ +1 ≠ Λ⊥s ,f ′ then Λm′ +1 ≡ f t1 . . . tn |g = r and ∃σ = mgu(f t1 . . . tn , eval (I∞ , em′ |om′ +1 )) and True ∈ ueval (I∞ , σ(g)).
C Additional Examples
Example 2 (Lazy Evaluation).
Think of the following code and its first interpretations:
first : [a] → a
first (x:xs) = x
ones : [Int]
ones = 1:ones
main : Int
main = first ones

I0 = I0⊥ = ∅
I1 = U (I0 ) = {ones=1:⊥s , first(x:xs)=x}
I1⊥ = {main=⊥s }
I2 = U (I1 ) = {ones=1:(1:⊥s ), first(x:xs)=x, main=1}
I2⊥ = I1⊥
The semantics for this program is infinite: every step adds a 1 to the list generated
by ones.
Consider the step from I1 to I2 : when unfolding ones, a fact matching ones
is found in I1 (namely, ones=1:⊥s ) so this last value is replaced in the right side
of the rule. Since the new value for ones is greater than the existing fact and
both heads are a variant of each other, the function clean can remove the old
fact. The fact ones=1:⊥s can now be used to evaluate main. Since the new fact
main=1 is greater than the fact main=⊥s it replaces the existing one. The fact for
first remains unaltered.
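For comparison, the same program can be transliterated directly into Haskell (our rendering), where lazy evaluation lets the main value terminate even though the list is infinite — mirroring the fact main=1 reached at I2 above:

firstH :: [a] -> a
firstH (x:_) = x

onesH :: [Int]
onesH = 1 : onesH

mainH :: Int
mainH = firstH onesH   -- evaluates to 1 without forcing the tail of onesH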
Example 3 (Larger Example).
Let us present an example that intends to describe how all the functions and
concepts that we have seen throughout the paper work. Think of the following
program:
ite : Bool ∗ a ∗ a → a
ite (True,t,e) = t
ite (False,t,e) = e
gen : Int → [Int]
gen n = n:(gen (n+1))
senior : Int → Bool
senior age = ite (age>64, True, False)
map : (a → b) ∗ [a] → [b]
map (f,[]) = []
map (f,(x:xs)) = (f x):(map (f,xs))
main50 : [Bool]
main50 = map (senior, gen (64))
Let us see how this program is unfolded.
First, the initial interpretation (I0 ) is empty, by definition. At this point, the
function unfold is applied to every rule in turn, using I0 as the second argument.
This produces the following interpretation (I1 ):
∗ ite (True,b,c) = b
∗ ite (False,b,c) = c
∗ gen (b) = Cons (b,Bot)
∗ map (b,Nil) = Nil
∗ map (b,Cons (c,d)) = Cons (b@[c],Bot)
How did we get here? When a rule is passed to unfold, every full application
of a symbol in FS is replaced by the value assigned to the application in the
interpretation also applied to unfold. The actual matching between an expression
and some rule head is performed by umatch, which is called by unfold every time
an expression needs to be unfolded. In most cases, umatch behaves as a simple
unifier calculator, but higher order brings complexity into this function (in s-semantics, where higher order does not exist, simple unification is used in the
place of umatch). In this case, the interpretation applied was the empty one, so,
the following has happened to every rule:
– The two rules for ite have no applications of user-defined functions, so nothing has to be done to them in order to reach a fact in normal form. That is
why they appear in I1 right away.
– The rule for gen is a little bit different since this rule does have an application
to a user-defined function. However, since I0 contains nothing about those
functions, all that unfold can do is to replace that invocation by the special
symbol Bot (represented by ⊥s in formulas) to represent that nothing is
known about the value of gen (b+1).
– The function senior has no facts inside I1 since the function clean removes
any unguarded fact with a body equal to ⊥s . This is precisely what has
happened since I0 contains no information about ite, so the resulting new
fact for senior would be ∗ senior age=Bot
– The first rule for map is left untouched since it has no full applications of
user-defined functions (as it happened with ite).
– The second rule for map generates the fact
∗ map(b,Cons(c,d)) = Cons(b@[c],Bot) where the Bot denotes that the value
for map(f,xs) is not contained inside I0 .
– And, finally, there is not any fact for main50 since the whole application of
map that appears at the root of the body is unknown, so it gets replaced by
Bot, which is in turn eliminated by clean (and moved into I1⊥ ).
Since we saw that two facts were removed by clean because they did not have a
guard and their body was equal to Bot, I1⊥ has the following content:
∗ senior(age) = Bot
∗ main50 = Bot
These two facts will be reinjected into the factset when I2 is calculated but in
this case, they do not have a noticeable effect on the results, so we will not insist
on them any more.
One more iteration of the unfolding operator generates I2 :
∗ ite (True,b,c) = b
∗ ite (False,b,c) = c
∗ map (b,Nil) = Nil
∗ gen (b) = Cons (b, Cons (b+1, Bot))
∗ senior(b) | snd(match(True, b>64)) = True
∗ senior(b) | snd(match(False, b>64)) = False
∗ map (b,Cons (c, Nil)) = Cons (b@[c], Nil)
∗ map (b,Cons (c, Cons (d,e))) = Cons (b@[c], Cons (b@[d], Bot))
∗ main50 = Cons (Bot, Bot)
Remember that I2 has been calculated by taking I1 ∪ I1⊥ as the relevant interpretation. By definition of the unfolding operator, I2 includes all the facts that
were already present inside I1 (unless they are removed by clean).
Remember also that we are using the optimized version of clean (the one that
removes subsumed facts instead of enlarging the constraints of the subsuming
facts). Once these aspects have been settled, the calculations that lead to the
formation of I2 can be explained as follows:
– The two facts for ite are transferred directly from I1 into I2 . This is so since
they cannot be unfolded any further and besides, they are not overlapped
by any fact. The same happens with the first fact for map.
– The fact for gen is much more interesting: There are not two facts for
gen in I2 . There is only one. This is due to the application of clean in
unfold. What has happened here is that clean has compared the old fact
(∗ gen(b)=Cons(b,Bot)) to the new one
(∗ gen(b)=Cons(b,Cons(b+1,Bot))) and has removed the old one. The reason
for this is that both facts clearly overlap but the newest fact has a body
that is greater (according to ⊑) than that of the old fact. Given that the
optimized version of clean is being used (all the functions here are complete
and the rules are productive), the old fact is removed.
One more point of interest here: Note that the expression b+1 cannot be
further unfolded since the value for b is unknown at unfolding time. We will
see the opposite case later.
– The explanation for senior will be detailed later.
– The two facts for map have become three. This has happened as follows:
• The second rule for map, when unfolded using I1 ∪I1⊥ generates two facts:
∗ map (b,Cons (c,Nil )) = Cons (b@[c],Nil )
∗ map (b,Cons (c,Cons (d,e ))) = Cons (b@[c],Cons (b@[d],Bot ))
Those two facts overlap with the old fact
(∗ map(b,Cons(c,d)) = Cons(b@[c],Bot)), so this fact is removed by clean,
which brings us to the count of three facts for map.
– main50 has progressed slightly: The invocation of map within the body of
main50 has been replaced by the body of the second fact for map in I1 ∪ I1⊥
generating Cons(senior@[b],Bot). Since nothing is known about senior in
I1 ∪ I1⊥ , the final result is Cons(Bot,Bot).
The unfolding of senior requires special attention: In order to unfold the only
rule for this function, the call to ite is unfolded. However, the first argument for
ite must be fully known before proceeding. This is impossible at unfolding time
since age will receive its value later, at runtime. The only way to go in cases
like this is to assume certain hypotheses and to generate facts that record those
hypotheses. In this example, we are forced to assume that age>64 is True when
the first rule for ite is unfolded while age>64 is assumed to be False when the
second rule for ite is unfolded. These hypotheses are recorded in the guards for
the facts corresponding to senior.
The function responsible for generating these hypotheses is umatch (more
specifically, its second rule). This rule is used when an expression rooted by a
predefined function (here, >) has to be matched to some pattern term which
is not a variable (here, True and then False). In this case, umatch extends the
new fact’s guard by adding the new condition (here snd(match(True,b>64)))
(resp. False) and then proceeds as if the PF-rooted expression matched the
given pattern in order to continue generating hypotheses. In this case, umatch
would call itself with umatch(True,True), (resp. False) which is solved by using
umatch’s first rule which generates no more conditions or variable substitutions.
Unfolding once again yields I3 :
∗ ite (True,b,c) = b
∗ ite (False,b,c) = c
∗ map (b,Nil) = Nil
∗ senior(b) | snd(match(True, b>64)) = True
∗ senior(b) | snd(match(False, b>64)) = False
∗ map (b,Cons (c, Nil)) = Cons (b@[c], Nil)
∗ gen (b) = Cons (b, Cons (b+1, Cons (b+1+1, Bot)))
∗ map (b,Cons (c, Cons (d, Nil))) = Cons (b@[c], Cons (b@[d], Nil))
∗ map (b,Cons (c, Cons (d, Cons (e,f)))) = Cons (b@[c], Cons (b@[d], Cons (b@[e], Bot)))
∗ main50 | snd(match(True, 64>64)), snd(match(True, 65>64)) = Cons (True, Cons (True, Bot))
∗ main50 | snd(match(True, 64>64)), snd(match(False, 65>64)) = Cons (True, Cons (False, Bot))
∗ main50 | snd(match(True, 64>64)) = Cons (True, Cons (Bot, Bot))
∗ main50 | snd(match(False, 64>64)), snd(match(True, 65>64)) = Cons (False, Cons (True, Bot))
∗ main50 | snd(match(False, 64>64)), snd(match(False, 65>64)) = Cons (False, Cons (False, Bot))
∗ main50 | snd(match(False, 64>64)) = Cons (False, Cons (Bot, Bot))
∗ main50 | snd(match(True, 65>64)) = Cons (Bot, Cons (True, Bot))
∗ main50 | snd(match(False, 65>64)) = Cons (Bot, Cons (False, Bot))
∗ main50 = Cons (Bot, Cons (Bot, Bot))
∗ main50 | snd(match(True, 65>64)), snd(match(True, 64>64)) = Cons (True, Cons (True, Bot))
∗ main50 | snd(match(True, 65>64)), snd(match(False, 64>64)) = Cons (False, Cons (True, Bot))
∗ main50 | snd(match(False, 65>64)), snd(match(True, 64>64)) = Cons (True, Cons (False, Bot))
∗ main50 | snd(match(False, 65>64)), snd(match(False, 64>64)) = Cons (False, Cons (False, Bot))
We are not repeating all the details above. Instead, we just want to point out
some interesting aspects of this interpretation:
– The reader might have expected to find expressions like 64>64 fully reduced
(that is, replaced by False). That would be correct but boolean operators are
not evaluated due to a limitation in the implementation of our unfolder. In
this example, this limitation is a blessing in disguise since those expressions
are needed to understand the origin of some facts.
– An expression like b+1+1 has not been reduced to b+2 since it stands for
(b+1)+1. The function eval has returned the same expression that it is given
since it cannot be further evaluated.
– The combinatorial explosion of facts shows that the unfolder tries all possible
unfolding alternatives (in particular, those facts with less than two conditions
in the guard are the result of unfolding senior before ite, so the result for
senior cannot be other than an unguarded ⊥s ).
– Note that our Prolog implementation does not have an underlying constraint
solver, so the entailment condition of the guards that is used to sort overlapping facts is not checked. That is why the unfolder has generated facts that
should have been removed, such as main50 = Cons(Bot ,Cons(Bot ,Bot )).
– A value of 65 appears whenever the function eval has been applied to evaluate 64+1.
Example 4 (Unfolding and Abstract Interpretation). This example will show how
unfolding can be used to synthesize an abstract interpreter of a functional program. Think of the problem of the parity of addition. The sum of Peano naturals
can be defined as shown in Fig. 1 (right).
We also know that the successor of an even number is an odd number and
vice versa. The abstract domain (the domain of parities) can be written as:
data Nat# = Suc_c# Nat# | Even# | Odd#
Now, the user would define the abstract version for add together with the properties of Suc regarding parity:
add# : Nat# → Nat# → Nat#
add# Even# m = m
add# (Suc_c# n) m = Suc_f# (add# n m)

Suc_f# : Nat# → Nat#
Suc_f# Even# = Odd#
Suc_f# Odd# = Even#
In order to enforce the properties of the successor in the abstract domain, a catamorphism7 linking Suc_f# to Suc_c# will be used:
C_s : Nat# → Nat#
C_s Even# = Even#
C_s Odd# = Odd#
C_s (Suc_c# n) = Suc_f# (C_s n)
Then, the unfolding process that has been described must be slightly modified:
after every normal unfolding step, every abstract term in a pattern must be
replaced by the term returned by the catamorphism. By doing this, the unfolding
of the previous program reaches a fixed point at I2 8 :
∗ add#(Even#,m) = m
∗ Suc_f#(Even#) = Odd#
∗ Suc_f#(Odd#) = Even#
∗ add#(Odd#,Odd#) = Even#
∗ add#(Odd#,Even#) = Odd#
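The same parity abstraction can be rendered directly as a small runnable Haskell program (our names; this is a hedged transliteration, not the kernel-language code), where the catamorphism normalises constructor-built terms by replacing the constructor with the parity-flipping operator, exactly as C_s does above:

data NatA = EvenA | OddA | SucC NatA deriving Show

sucF :: NatA -> NatA                 -- abstract successor: flips parity
sucF EvenA    = OddA
sucF OddA     = EvenA
sucF (SucC n) = sucF (cataS n)       -- normalise the argument first

cataS :: NatA -> NatA                -- catamorphism linking SucC to sucF
cataS (SucC n) = sucF (cataS n)
cataS a        = a

addA :: NatA -> NatA -> NatA         -- abstract addition over parities
addA EvenA    m = m
addA (SucC n) m = sucF (addA n m)
addA OddA     m = sucF m             -- an odd number is one successor more than an even one

-- e.g.  cataS (addA (SucC (SucC OddA)) OddA)  ~>  EvenA,
-- matching the synthesized fact add#(Odd#,Odd#) = Even#, since Suc(Suc(Odd)) is odd.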
Example 5 (Addition of Parities Revisited).
As an interesting point of comparison, consider this alternative version for
add#:
addr# : Nat# → Nat# → Nat#
addr# Even# m = m
addr# (Suc_c# n) m = addr# n (Suc_c# m)
The fixed point for this new function is as follows (also in I2 ):
∗ addr#(Even#,b) = b
∗ addr#(Odd#,b) = Suc#(b)
∗ suc_f#(Even#) = Odd#
∗ suc_f#(Odd#) = Even#
7 A catamorphism takes a term and returns the term after replacing constructors by a corresponding operator.
8 The rules for the catamorphism do not take part in unfolding.
Example 6 (Demand Analysis).
The following example shows how abstraction can help to find program properties. This particular example investigates how to find demand properties for the functions in a program. By demand properties we mean the level of definition that a function requires in its arguments in order to return a result strictly more defined than ⊥s .
For the sake of simplicity, we are limiting our analysis to top-level positions
within the arguments although the method can be easily extended to cope with
deeper positions.
As before, we begin by defining the abstract domain. This example will run
on Peano Naturals, so the new domain reflects what elements are free variables
and what others are not:
data NatDemand# = Z# | S# NatDemand# | FreeNat#
As an example, we will use the well-known function leq: leq x y returns whether x is less than or equal to y. The standard (unabstracted) version of leq is as follows:
leq : Nat → Nat → Bool
leq Zero y = True
leq (Suc x) Zero = False
leq (Suc x) (Suc y) = leq x y
The abstracted version, which is useful for finding the demand properties for leq at the top-level positions of its arguments, is as follows:
data Bool# = True# | False# | DontCareBool#
leq# :: NatDemand# → NatDemand# → Bool#
leq# Z# FreeNat# = DontCareBool#
leq# (S# x) Z# = DontCareBool#
leq# (S# x) (S# y) = leq# x y
Observe that those rule bodies that do not influence the demand properties of
the function have been abstracted to DontCareBool# (and not to True# and False#
in order to get an abstract representation that is as simple as possible while not
losing any demand information). Note that FreeNat# represents that a certain
argument is not demanded. This abstraction transformation can be mechanised:
Any singleton variable in a rule is sure not to be demanded so it is abstracted
to FreeNat#. The rest of variables are left as they are.
What we need next is to define the functions that assert when a term is not
free (that is, demanded when it appears as a function argument). We need one
such function for every data constructor of type NatDemand#:
FreeNat_f# : NatDemand#
FreeNat_f# = FreeNat#

Z_f# : NatDemand#
Z_f# = Z#

S_f# : NatDemand# → NatDemand#
S_f# FreeNat# = S# FreeNat#
S_f# Z# = S# Z#
S_f# (S# _) = S# FreeNat#
We also need the catamorphisms that link the functions above to the constructors belonging to the type NatDemand#:
C_freeNat : NatDemand#
C_freeNat = FreeNat_f#
C_Z : NatDemand#
C_Z = Z_f#
C_S : NatDemand# → NatDemand#
C_S (S# x) = S_f# (C_S x)
As we did in the previous example, we now have to apply the following steps to
a program composed of the rules for leq#, FreeNat_f#, Z_f# and S_f#:
– Apply an unfolding iteration.
– Apply the catamorphisms to the heads of the resulting facts.
– Evaluate the resulting head expressions.
The fixed point is reached at the second iteration (I2 ). It contains the following:
∗ leq#(Z#, FreeNat#) = DontCareBool#
∗ leq#(S#(FreeNat#), Z#) = DontCareBool#
∗ leq#(S#(FreeNat#), S#(FreeNat#)) = DontCareBool#
∗ z_f# = Z#
∗ s_f#(FreeNat#) = S#(FreeNat#)
∗ s_f#(Z#) = S#(FreeNat#)
∗ s_f#(S#(b)) = S#(FreeNat#)
∗ freeNat_f# = FreeNat#
That means that leq# does not demand its second argument if the first one
is Z# (since FreeNat# represents no demand at all). However, leq# demands its
second argument if the first one is headed by S#. Note that we are considering
top-level positions for the arguments only, but deeper positions can easily be considered by just extending s_f#.
D Validity of the Unfolded Code
The lemmas whose proofs are given below support the validity of the code generated by the unfolding process:
D.1 Proof of Lemma 1
Let H be a fact generated by unfolding rule Λ and belonging to interpretation
In . Let {Si (i : 1..m)} be the set of facts that belong to In+1 , that have been
generated by unfolding Λ and which overlap with H.
By reductio ad absurdum, let us assume that, even in the conditions stated, the
Si do not cover all the cases that H covers. Then, it must be possible to build at
least one fact S ′ that overlaps with H but that does not overlap with any fact
Si .
In order to build a fact like S ′ , the following options can be taken:
1. Choose H such that its pattern and/or guard does not match with any of
the rules for f .
2. When unfolding H, use a fact that has not been used when calculating the
facts {Si (i : 1..m)}.
However, condition 1 is impossible since all the function definitions are assumed
to be complete (i.e. there is no fact for f which does not match a rule) and
to have only generating rules. In addition, condition 2 is also impossible since
unfold uses all the existing facts by definition.
Note that the condition which requires that the rules be generative cannot
be dropped since a complete function having one or more non-generative rules
would have some facts removed from In+1 by clean, which would render the
function definition incomplete in that interpretation.
Therefore, no fact like S ′ can exist. We have reached a contradiction and
thus we have proved that under the conditions stated for P , clean can always
get rid of the most general fact.
D.2 Proof of Lemma 2
If program P does not have overlapping rules then any pair of rules l | g = r
and l′ | g ′ = r′ must meet one of the following conditions:
1. There is no unifier between l and l′ .
2. If a substitution σ is such that σ = mgu(l, l′ ), then the constraint g ∧ σ(g ′ )
is unsatisfiable.
At every application, the unfold function takes a rule and applies a substitution
to its pattern as well as a (possible) conjunction to its guard. Now:
1. If the two rules given do not overlap because l and l′ cannot be unified,
applying any substitution to them makes them even less unifiable.
2. If the two rules given do not overlap because l and l′ can be unified but
the conjunction of their guards cannot be satisfied, adding a conjunction to
either guards makes their combined satisfiability even less likely.
Up to this point, we have shown that the unfoldings of any two non overlapping rules cannot give rise to overlapping facts but the facts generated by the
unfolding of a single rule may still contain overlapping pairs. In order to prove
that the unfoldings of a single rule from a program P can be written without
overlappings, we need to use the function ueval that was defined in Sect. B.1.
We now want to prove that, for any single rule R belonging to a program P
without overlapping rules, the unfoldings of R carry the same meaning with or
without the cleaning phase. That is, let us call PR = unfold (R, I∞ ):
ueval (PR , c ◮ e) = ueval (clean(PR ), c ◮ e) ∀c ◮ e ∈ E
(3)
We will prove that Equation 3 holds by induction on the number of full applications of symbols of FS held in c and e combined.
Base case: If neither c nor e have any full application of symbols of FS ,
then both c and e are expressions (terms which may include calls to predefined
functions) and therefore cannot be unfolded any more. Their value (as computed
by ueval ) does not depend on the interpretation used, so Equation 3 trivially
holds.
Induction step: Let us assume that Equation 3 holds if c and e have a
combined total of n full applications of symbols of FS and let us try to prove that
Equation 3 holds when c and e have a combined total of n + 1 full applications
of symbols of FS .
In order to do that, let us define an expression e′ which has exactly one more
application of symbols of FS than e (the reasoning over c would be analogous).
Let us define e′ = e[f t1 . . . tn ]|o where e|o ∈ E has no full invocations of
symbols of FS , f ∈ FS , ti ∈ T . This guarantees that e′ has one more full
application of symbols of FS than e. Since the induction hypothesis holds for
c ◮ e, all we have to prove is:
ueval (PR , True ◮ f t1 . . . tn ) = ueval (clean P (PR ), True ◮ f t1 . . . tn )
(4)
Now, if PR does not contain overlapping facts or does not contain facts about
f at all, the Equation above trivially holds since the interpretations PR and
clean P (PR ) are the same by definition of clean.
Let us now assume that PR contains (maybe among others), the following
facts:
– F ≡ f p1 . . . pn | c = b
– Fi ≡ σi (f p1 . . . pn | c ∧ ci = b) (i : 1..m)
That is, the facts Fi overlap F and Fi are more specific than F . Then, by
definition of clean, clean P (PR ) will hold the facts Fi together with a new fact:
F ′ ≡ f p1 . . . pn | c ∧ ⋀i (nunif ((p1 , . . . , pn ), σi (p1 , . . . , pn )) ∨ not (σi (ci ))) = b
Let χ = f t1 . . . tn . The following cases can occur:
– If χ is not unfoldable by F , then it is not unfoldable by any of the more
specific facts (the Fi and F ′ ), so Equation 4 holds.
– If χ is unfoldable by F but not by any of the Fi , then χ is unfoldable by F ′ ,
which returns the same result as F .
– Lastly, if χ is unfoldable by F and one of the Fi , then the left side of Equation 4 returns two values (let them be cF ◮ eF and cFi ◮ eFi ) which verify cF ◮ eF ⊑ cFi ◮ eFi . Since all the functions have to be well-defined, the value for χ must be the greater of the two. The right side of Equation 4 returns only the value cFi ◮ eFi , by definition of clean (which will have removed F from PR and replaced it by F ′ , which will not be usable to unfold χ).
arXiv:1710.09635v3 [cs.LO] 8 Nov 2017
Automated Lemma Synthesis in Symbolic-Heap Separation Logic
QUANG-TRUNG TA∗ , School of Computing, National University of Singapore, Singapore
TON CHANH LE, School of Computing, National University of Singapore, Singapore
SIAU-CHENG KHOO, School of Computing, National University of Singapore, Singapore
WEI-NGAN CHIN, School of Computing, National University of Singapore, Singapore
The symbolic-heap fragment of separation logic has been actively developed and advocated for verifying the
memory-safety property of computer programs. At present, one of its biggest challenges is to effectively prove
entailments containing inductive heap predicates. These entailments are usually proof obligations generated
when verifying programs that manipulate complex data structures like linked lists, trees, or graphs.
To assist in proving such entailments, this paper introduces a lemma synthesis framework, which automatically
discovers lemmas to serve as eureka steps in the proofs. Mathematical induction and template-based constraint
solving are two pillars of our framework. To derive the supporting lemmas for a given entailment, the
framework firstly identifies possible lemma templates from the entailment’s heap structure. It then sets
up unknown relations among each template’s variables and conducts structural induction proof to generate
constraints about these relations. Finally, it solves the constraints to find out actual definitions of the unknown
relations, thus discovers the lemmas. We have integrated this framework into a prototype prover and have
experimented it on various entailment benchmarks. The experimental results show that our lemma-synthesisassisted prover can prove many entailments that could not be handled by existing techniques. This new
proposal opens up more opportunities to automatically reason with complex inductive heap predicates.
CCS Concepts: • Theory of computation → Logic and verification; Proof theory; Automated reasoning; Separation logic; • Software and its engineering → Software verification;
Additional Key Words and Phrases: Separation logic, entailment proving, mathematical induction, structural
induction, lemma synthesis, proof theory, automated reasoning
ACM Reference Format:
Quang-Trung Ta, Ton Chanh Le, Siau-Cheng Khoo, and Wei-Ngan Chin. 2018. Automated Lemma Synthesis
in Symbolic-Heap Separation Logic. Proc. ACM Program. Lang. 2, POPL, Article 9 (January 2018), 42 pages.
https://doi.org/10.1145/3158097
∗ The first and the second author contributed equally.
Authors’ addresses: Quang-Trung Ta, Department of Computer Science, School of Computing, National University of
Singapore, Singapore, [email protected]; Ton Chanh Le, Department of Computer Science, School of Computing,
National University of Singapore, Singapore, [email protected]; Siau-Cheng Khoo, Department of Computer Science,
School of Computing, National University of Singapore, Singapore, [email protected]; Wei-Ngan Chin, Department
of Computer Science, School of Computing, National University of Singapore, Singapore, [email protected].
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses,
contact the owner/author(s).
© 2018 Copyright held by the owner/author(s).
2475-1421/2018/1-ART9
https://doi.org/10.1145/3158097
1 INTRODUCTION
Having been actively developed in the recent two decades, separation logic appears as one of the
most popular formalisms in verifying the memory-safety property of computer programs [O’Hearn
et al. 2001; Reynolds 2002]. It combines spatial operations, which describe the separation of the
memory, and inductive heap predicates to expressively model the shape of complex data structures,
such as variants of linked lists, trees, or graphs. This modeling has been successfully realized in
both academic research and industrial use. For example, it is implemented by the static analysis
tools SLAYer [Berdine et al. 2011] and Infer [Calcagno et al. 2015] to find memory bugs in system
code or mobile applications in large scale.
Both SLAYer and Infer employ a logic fragment called symbolic-heap separation logic, which differentiates spatial formulas describing the memory shape of a program’s states from pure formulas
representing Boolean constraints of the program’s variables. This fragment is also utilized by
other academic verification systems such as HIP [Chin et al. 2012], jStar [Distefano and Parkinson
2008], and Smallfoot [Berdine et al. 2005a]. Primarily, the verification of a program in such systems
involves three phases: (i) specifying desired properties of the program using separation logic, (ii)
analyzing the program’s behavior against its specification to obtain a set of verification conditions,
mostly in the form of separation logic entailments, and (iii) proving the collected entailments to
determine whether the program behaves accordingly to its specification.
There have been several efforts to prove separation logic entailments, and one of the biggest
challenges is to effectively handle their inductive heap predicates. A commonly practised approach
is to restrict the predicates to only certain classes, such as: predicates whose syntax and semantics
are defined beforehand [Berdine et al. 2004, 2005b; Bozga et al. 2010; Pérez and Rybalchenko 2011,
2013; Piskac et al. 2013, 2014], predicates describing only variants of linked lists [Enea et al. 2014], or
predicates satisfying the particular bounded tree width property [Iosif et al. 2013, 2014]. On the one
hand, such predicate restrictions lead to the invention of effective entailment proving techniques;
on the other hand, they prevent the predicates from modeling complex constraints, which involve
not only the shape, but also the size or the content of data structures.
A more general approach to handle inductive heap predicates is to use proof techniques which are
similar to or based on mathematical induction. These techniques include cyclic proof [Brotherston
et al. 2011], structural induction proof [Chu et al. 2015], and mutual induction proof [Ta et al.
2016]. In general, induction-based techniques are capable of reasoning about structures which
are recursively defined [Bundy 2001]. Therefore, they can be used to deal with inductive heap
predicates in separation logic entailments. However, it is well-known that the induction-based proof
techniques are incomplete, due to the failure of cut-elimination in inductive theories. Consequently,
their successes often depend on an eureka step: discovering supporting lemmas [Bundy 2001].
Not only are the supporting lemmas directly important to induction-based proof systems, but they
are also highly needed by verification systems that use non-induction-based back-end provers, as in
the works of Nguyen and Chin [2008] and Qiu et al. [2013]. These systems cannot automatically
discover the desired lemmas but require the users to manually provide them. Consequently, the
aforementioned verification systems are not fully automated.
Although any valid entailment can become a lemma, the lemmas that are really important are the
ones that can convert between inductive heap predicates, or can combine many inductive heap
predicates into another one, or can split an inductive heap predicate into a combination of many
others. These lemmas are helpful for rearranging inductive heap predicates in a goal entailment
so that the entailment proof can be quickly discovered. They are also called inductive lemmas, as
induction proofs are often needed to prove them, i.e., there are derivation steps that record and
apply induction hypotheses in the lemmas’ proofs.
In this work, we propose a novel framework to automatically synthesize inductive lemmas to assist in
proving entailments in the fragment of symbolic-heap separation logic. Our framework is developed
based on structural induction proof and template-based constraint solving. Given a goal entailment,
the framework first analyzes the entailment’s heap structure to identify essential lemma templates. It
then sets up unknown relations among each template’s variables and conducts structural induction
proof to generate constraints about the relations. Finally, it solves the constraints to find out actual
definitions of the unknown relations, thus discovers the desired inductive lemmas.
In summary, our work makes the following contributions:
– We propose a novel framework to automatically synthesize lemmas to assist in proving entailments in the fragment of symbolic-heap separation logic with inductive heap predicates.
– We integrate the lemma synthesis framework into an entailment proof system, which allows
the lemmas to be flexibly discovered on-the-fly to support the entailment proofs.
– We implement a prototype prover and evaluate it on numerous entailments from various
benchmarks. The experimental result is promising since our tool can prove most of the valid
entailments in these benchmarks and outperforms all existing separation logic provers.
2 MOTIVATING EXAMPLE
In this section, we present a running example to illustrate our work and result. Here, we consider an
inductive heap predicate dll and its variant dllrev, both modeling a doubly linked list data structure
with a length property.
dll(hd, pr, tl, nt, len)    ≜  hd7→pr, nt ∧ hd=tl ∧ len=1
                               ∨ ∃u.(hd7→pr, u ∗ dll(u, hd, tl, nt, len−1))
dllrev(hd, pr, tl, nt, len) ≜  hd7→pr, nt ∧ hd=tl ∧ len=1
                               ∨ ∃u.(dllrev(hd, pr, u, tl, len−1) ∗ tl7→u, nt)
Each element in the linked list is modeled by a singleton heap predicate x7→pr, nt, where x is
its memory address, pr and nt respectively point to the previous and the next element in the list.
Moreover, both dll(hd, pr, tl, nt, len) and dllrev(hd, pr, tl, nt, len) denote a non-empty doubly linked
list from the first element pointed-to by hd (head) to the last element pointed-to by tl (tail). The
previous and the next element of the entire list are respectively pointed-to by pr and nt, and len
denotes the linked list's length. The only difference between the two predicates is that dll is recursively
defined from the linked list’s head to the tail, whereas dllrev is defined in the reversed direction.
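To make the recursive reading of these definitions concrete, the following small Python sketch (ours, purely for illustration; the dictionary-based heap encoding and the name sat_dll are not taken from any existing tool) checks whether a finite heap, given as a map from addresses to (prev, next) pairs, satisfies dll(hd, pr, tl, nt, len):

    def sat_dll(heap, hd, pr, tl, nt, length):
        # heap: dict mapping an address to its (prev, next) fields
        cell = heap.get(hd)
        if cell is None:
            return False
        prev, nxt = cell
        if length == 1:
            # base case: hd |-> pr, nt  /\  hd = tl  /\  len = 1, over a one-cell heap
            return prev == pr and nxt == nt and hd == tl and len(heap) == 1
        # inductive case: hd |-> pr, u  *  dll(u, hd, tl, nt, len - 1) over the remaining cells
        rest = {a: c for a, c in heap.items() if a != hd}
        return prev == pr and sat_dll(rest, nxt, hd, tl, nt, length - 1)

    # the two-cell heap  1 |-> (0, 2) * 2 |-> (1, 3)  satisfies dll(1, 0, 2, 3, 2)
    assert sat_dll({1: (0, 2), 2: (1, 3)}, 1, 0, 2, 3, 2)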
Suppose that when verifying a program manipulating the doubly linked list data structure, a verifier
needs to prove an entailment relating to an extraction of the last element from a concatenation of
two linked lists with certain constraints on their lengths, such as the following entailment E 1 .
E 1 ≜ dllrev(x, y, u, v, n) ∗ dll(v, u, z, t, 200) ∧ n≥100 |− ∃r .(dll(x, y, r , z, n+199) ∗ z7→r , t)
Unfortunately, the existing entailment proving techniques could not prove E 1 . Specifically, the
predicate-restriction approaches either consider only singly linked list [Berdine et al. 2005b; Bozga
et al. 2010; Cook et al. 2011; Pérez and Rybalchenko 2011, 2013], or do not handle linear arithmetic
constraints [Enea et al. 2014; Iosif et al. 2013, 2014], or require a pre-defined translation of heap
predicates into other theories [Piskac et al. 2013, 2014]. Moreover, the non-induction-based approaches require users to provide supporting lemmas [Chin et al. 2012]. Finally, the induction-based
techniques [Brotherston et al. 2011; Chu et al. 2015] fail to prove E 1 because this entailment is not
general enough to be an effective induction hypothesis, due to its specific constant values.
We briefly explain the failure of induction proof on E 1 as follows. Typically, induction is performed on
an inductive heap predicate in E 1 ’s antecedent. Suppose the chosen predicate is dllrev(x, y, u, v, n);
E 1 is then recorded as an induction hypothesis H , whose variables are renamed to avoid confusion:
H ≜ dllrev(a, b, p, q, m) ∗ dll(q, p, c, d, 200) ∧ m≥100 |− ∃k.(dll(a, b, k, c, m+199) ∗ c7→k, d)
Subsequently, the predicate dllrev(x, y, u, v, n) of E 1 is unfolded by its recursive definition to derive
new sub-goal entailments. We consider an interesting case when the below entailment E 1′ is obtained
from an inductive case unfolding of dllrev(x, y, u, v, n):
E 1′ ≜ dllrev(x, y, w, u, n−1) ∗ u7→w, v ∗ dll(v, u, z, t, 200) ∧ n≥100 |− ∃r .(dll(x, y, r, z, n+199) ∗ z7→r, t)
When applying the induction hypothesis H to prove E 1′ , the predicates of the same symbols (dllrev or
dll) in the antecedents of H and E 1′ need to be unified by a substitution. However, such a substitution
does not exist since q is mapped to u when unifying dllrev(a, b, p, q, m) vs. dllrev(x, y, w, u, n−1),
whereas q is mapped to the different variable v when unifying dll(q, p, c, d, 200) vs. dll(v, u, z, t, 200).
Alternatively, we can weaken the spatial formula u7→w, v ∗ dll(v, u, z, t, 200) of E 1′ ’s antecedent into
dll(u, w, z, t, 201), w.r.t. the definition of dll, to derive a new entailment E 1′′:
E 1′′ ≜ dllrev(x, y, w, u, n−1) ∗ dll(u, w, z, t, 201) ∧ n≥100 |− ∃r .(dll(x, y, r , z, n+199) ∗ z7→r , t)
Again, no substitution can unify the antecedents of H and E 1′′. For example, the substitution θ 1 ≜
[x/a, y/b, w/p, u/q, n−1/m] might be able to unify dllrev(a, b, p, q, m) with dllrev(x, y, w, u, n−1),
that is, dllrev(a, b, p, q, m)θ 1 ≡ dllrev(x, y, w, u, n−1). However, the constraint n≥100 in the antecedent of E 1′′ cannot prove that n−1≥100, which is obtained from applying θ 1 on the constraint
m≥100 of H . In addition, the substitution θ 2 ≜ [u/q, w/p, z/c, t/d] could not unify dll(q, p, c, d, 200)
with dll(u, w, z, t, 201) since the two constants 201 and 200 are non-unifiable. In short, this
inability to unify the heap predicates makes the induction proof fail on E 1 .
Nevertheless, the entailment E 1 is provable if necessary lemmas can be discovered. For instance,
the following lemmas L 1 , L 2 , and L 3 can be introduced to assist in proving E 1 . More specifically,
L 1 converts a linked list modeled by the predicate symbol dllrev into a linked list modeled by the
variant predicate dll; L 2 combines two linked lists modeled by dll into a new one (similar in spirit to
the “composition lemma” introduced by Enea et al. [2015]); lastly, L 3 splits a linked list modeled by
dll into two parts including a new linked list and a singleton heap:
L1 ≜ dllrev(a, b, c, d, m) |− dll(a, b, c, d, m)
L2 ≜ dll(a, b, p, q, m) ∗ dll(q, p, c, d, l) |− dll(a, b, c, d, m + l)
L3 ≜ dll(a, b, c, d, m) ∧ m≥2 |− ∃w.(dll(a, b, w, c, m−1) ∗ c7→w, d)
The three lemmas L 1 , L 2 , and L 3 can be used to prove E 1 as shown in Figure 1. They are successively
utilized by the lemma application rules LML , LMR to finally derive a new entailment E 4 , which can
be easily proved by standard inference rules in separation logic. We will explain the details of these
lemma application rules LML , LMR and other inference rules in Section 4.1.
E1 ≜ dllrev(x, y, u, v, n) ∗ dll(v, u, z, t, 200) ∧ n≥100 |− ∃r.(dll(x, y, r, z, n+199) ∗ z7→r, t)
  (LML with L1 and θ1 reduces E1 to E2)
E2 ≜ dll(x, y, u, v, n) ∗ dll(v, u, z, t, 200) ∧ n≥100 |− ∃r.(dll(x, y, r, z, n+199) ∗ z7→r, t)
  (LML with L2 and θ2 reduces E2 to E3)
E3 ≜ dll(x, y, z, t, n+200) ∧ n≥100 |− ∃r.(dll(x, y, r, z, n+199) ∗ z7→r, t)
  (LMR with L3 and θ3 reduces E3 to E4)
E4 ≜ dll(x, y, z, t, n+200) ∧ n≥100 |− dll(x, y, z, t, n+200)
  (∗P and |−Π discharge E4 via the pure goal  n≥100 |− (x=x ∧ y=y ∧ z=z ∧ t=t ∧ n+200=n+200))

Fig. 1. Proof tree of E1 using the lemmas L1, L2, and L3 (shown here as a linear derivation from the goal E1 upward), with the substitutions θ1 ≡ [x/a, y/b, u/c, v/d, n/m], θ2 ≡ [x/a, y/b, u/p, v/q, n/m, z/c, t/d, 200/l], and θ3 ≡ [x/a, y/b, z/c, t/d, r/w, n+200/m]
In the proof tree in Figure 1, the entailment E 1 can only be concluded valid if all the lemmas L 1 ,
L 2 , and L 3 are also valid. Furthermore, this proof tree is not constructed by any induction proofs.
Instead, induction is performed in the proof of each lemma. We illustrate a partial induction proof
tree of L 1 in Figure 2, where an induction hypothesis is recorded by the induction rule ID and is
later utilized by the induction hypothesis application rule IH. As discussed earlier in Section 1, these
three lemmas are called inductive lemmas. In general, the inductive lemmas help to modularize
the proof of a complex entailment like E 1 : induction is not directly performed to prove E 1 ; it is
alternatively utilized to prove simpler supporting lemmas such as L 1 , L 2 , and L 3 . This modularity,
therefore, increases the success chance of proving complex entailments.
L1 ≜ dllrev(a, b, c, d, m) |− dll(a, b, c, d, m)
  (ID(1) unfolds dllrev(a, b, c, d, m) and records the induction hypothesis H1; the base case
   a7→b, d ∧ a=c ∧ m=1 |− dll(a, b, c, d, m) is closed by PR and pure reasoning, and the inductive case is:)
dllrev(a, b, u, c, m−1) ∗ c7→u, d |− dll(a, b, c, d, m)
  (IH(1) applies H1 with θ1, giving:)
dll(a, b, u, c, m−1) ∗ c7→u, d |− dll(a, b, c, d, m)
  (ID(2) unfolds dll(a, b, u, c, m−1) and records H2; its inductive case is:)
a7→b, v ∗ dll(v, a, u, c, m−2) ∗ c7→u, d |− dll(a, b, c, d, m)
  (PR unfolds the consequent dll(a, b, c, d, m), giving:)
a7→b, v ∗ dll(v, a, u, c, m−2) ∗ c7→u, d |− ∃t.(a7→b, t ∗ dll(t, a, c, d, m−1))
  (∗7→ matches the two singleton heaps, giving:)
dll(v, a, u, c, m−2) ∗ c7→u, d |− ∃t.(dll(t, a, c, d, m−1) ∧ v=t)
  (IH(2) applies H2 with θ2, giving:)
dll(v, a, c, d, m−1) |− ∃t.(dll(t, a, c, d, m−1) ∧ v=t)
  (∗P matches the two dll predicates and |−Π closes the remaining pure goal
   true |− ∃t.(v=t ∧ a=a ∧ c=c ∧ d=d ∧ m−1=m−1))

Fig. 2. Partial induction proof tree of the inductive lemma L1, where ID(1), ID(2) are performed on dllrev(a, b, c, d, m) and dll(a, b, u, c, m−1) to obtain the induction hypotheses H1, H2; IH(1) applies H1 ≜ dllrev(a′, b′, c′, d′, m′) |− dll(a′, b′, c′, d′, m′) with θ1 ≡ [a/a′, b/b′, u/c′, c/d′, m−1/m′]; and IH(2) applies H2 ≜ dll(a′, b′, u′, c′, m′−1) ∗ c′7→u′, d′ |− dll(a′, b′, c′, d′, m′) with θ2 ≡ [v/a′, a/b′, u/u′, c/c′, d/d′, m−1/m′]
On the other hand, we are not interested in trivial lemmas, which can be proved directly without using
any induction hypotheses: the rule IH is not used in their proofs. For example, L 3′ ≜ dll(a, b, c, d, m)∧
m≥2 |− ∃w.(a7→b, w ∗ dll(w, a, c, d, m−1)) is a trivial lemma, whose direct proof tree is presented
in Figure 3. Obviously, if a trivial lemma can be applied to prove an entailment, then the same
sequence of inference rules utilized in the lemma’s proof can also be directly applied to the goal
entailment. For this reason, the trivial lemmas do not help to modularize the induction proof.
L3′ ≜ dll(a, b, c, d, m) ∧ m≥2 |− ∃w.(a7→b, w ∗ dll(w, a, c, d, m−1))
  (ID unfolds dll(a, b, c, d, m); the base case a7→b, d ∧ a=c ∧ m=1 ∧ m≥2 |− ∃w.(a7→b, w ∗ dll(w, a, c, d, m−1))
   is closed by ⊥2L, and the inductive case is:)
a7→b, u ∗ dll(u, a, c, d, m−1) ∧ m≥2 |− ∃w.(a7→b, w ∗ dll(w, a, c, d, m−1))
  (∗7→ matches the two singleton heaps, giving:)
dll(u, a, c, d, m−1) ∧ m≥2 |− ∃w.(dll(w, a, c, d, m−1) ∧ u=w)
  (∗P matches the two dll predicates and |−Π closes the remaining pure goal
   m≥2 |− ∃w.(u=w ∧ a=a ∧ c=c ∧ d=d ∧ m−1=m−1))

Fig. 3. Direct proof tree of the trivial lemma L3′
In this work, we propose a novel framework to synthesize inductive lemmas, such as L 1 , L 2 , and L 3 ,
to assist in proving separation logic entailments. For a given goal entailment E, we first identify
all possible lemma templates, which essentially are entailments constructed from heap predicate
symbols appearing in E. The templates will be refined with more Boolean constraints on their
variables until valid inductive lemmas are discovered. We will explain the details in Section 5.
3 THEORETICAL BACKGROUND
Our lemma synthesis framework is developed to assist in proving entailments in the fragment of
symbolic-heap separation logic with inductive heap predicates and linear arithmetic. This fragment
is similar to those introduced in [Albarghouthi et al. 2015; Brotherston et al. 2011, 2016; Ta et al.
2016]. It is also extended with unknown relations to represent Boolean constraints of the desired
lemmas. We will present related background of the entailment proof in the following subsections.
3.1 Symbolic-heap separation logic with unknown relations
Syntax. We denote our symbolic-heap separation logic fragment with inductive heap predicates and
unknown relations as SL^U_ID and present its syntax in Figure 4. In particular, x is a variable; c and e are
respectively an integer constant and an integer expression1 ; nil is a constant denoting a dangling
memory address (null) and a is a spatial expression modeling a memory address. Moreover, σ denotes
a spatial atom, which is either (i) a predicate emp modeling an empty memory, (ii) a singleton heap
predicate x 7→^ι x 1 , . . . , x n describing an n-field data structure of sort ι in the memory, pointed-to by
x, and having x 1 , . . . , x n as values of its fields2 , or (iii) an inductive heap predicate P(x 1 , . . . , x n )
modeling a recursively defined data structure (Definition 3.1). These spatial atoms constitute a
spatial formula Σ via the separating conjunction operator ∗. On the other hand, π denotes a pure
atom comprising equality constraints among spatial expressions and linear arithmetic constraints
among integer expressions. These pure atoms compose a pure formula Π using standard first-order
logic connectives and quantifiers. Moreover, Π may contain an unknown relation U (Definition 3.4).
We will utilize the unknown relation in various phases of the lemma synthesis.
e  ::=  c | x | −e | e1 + e2 | e1 − e2 | c e
a  ::=  nil | x
π  ::=  true | false | a1 = a2 | a1 ≠ a2 | e1 = e2 | e1 ≠ e2 | e1 > e2 | e1 ≥ e2 | e1 < e2 | e1 ≤ e2
σ  ::=  emp | x 7→^ι x1, . . . , xn | P(x1, . . . , xn)
Π  ::=  π | U(x1, . . . , xn) | ¬Π | Π1 ∧ Π2 | Π1 ∨ Π2 | Π1 → Π2 | ∀x.Π | ∃x.Π
Σ  ::=  σ | Σ1 ∗ Σ2
F  ::=  Σ | Π | Σ ∧ Π | ∃x.F

Fig. 4. Syntax of formulas in SL^U_ID
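As a rough indication of how such a grammar might be represented inside a prover, the following Python sketch (an illustration of ours, not the paper's code base) encodes the spatial layer of SL^U_ID as plain data classes:

    from dataclasses import dataclass
    from typing import Tuple, Union

    @dataclass(frozen=True)
    class Emp:                      # the empty-heap predicate emp
        pass

    @dataclass(frozen=True)
    class PointsTo:                 # singleton heap  x |->^iota x1, ..., xn
        source: str
        sort: str
        fields: Tuple[str, ...]

    @dataclass(frozen=True)
    class Pred:                     # inductive heap predicate  P(x1, ..., xn)
        name: str
        args: Tuple[str, ...]

    SpatialAtom = Union[Emp, PointsTo, Pred]

    @dataclass(frozen=True)
    class SepConj:                  # spatial formula  sigma1 * ... * sigmak
        atoms: Tuple[SpatialAtom, ...]

    # the spatial part of the antecedent of E1 from Section 2
    e1_antecedent = SepConj((Pred("dllrev", ("x", "y", "u", "v", "n")),
                             Pred("dll", ("v", "u", "z", "t", "200"))))
    print(e1_antecedent)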
Definition 3.1 (Inductive heap predicate). [Ta et al. 2016] A system of k inductive heap predicates Pi
of arity ni , with i = 1, . . . , k, is defined as follows:
Pi(x^i_1, . . . , x^i_ni)  ≜  F^i_1(x^i_1, . . . , x^i_ni) ∨ . . . ∨ F^i_mi(x^i_1, . . . , x^i_ni),    for i = 1, . . . , k
where F^i_j is a definition case of Pi. This fact is also denoted by F^i_j(x^i_1, . . . , x^i_ni) ⇒def Pi(x^i_1, . . . , x^i_ni),
with 1 ≤ j ≤ mi. Moreover, F^i_j is a base case if it does not contain any predicates recursively defined
with Pi; otherwise, it is an inductive case.
Example 3.2. The two predicates dll(hd, pr, tl, nt, len) and dllrev(hd, pr, tl, nt, len) in Section 2 are
two examples of inductive heap predicates. Moreover, both of them are self-recursive: each is defined in terms of itself.
Example 3.3. The two predicates ListE and ListO [Brotherston et al. 2011] are mutually recursively
defined to model linked list segments that contain an even and an odd number of elements, respectively.
(1) ListO(x, y) ≜ x7→y ∨ ∃u.(x7→u ∗ ListE(u, y))
(2) ListE(x, y) ≜ ∃u.(x7→u ∗ ListO(u, y))
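For intuition, one-step unfolding of such a system of definitions can be sketched in Python as follows (our illustrative encoding; definition bodies are kept as strings only for readability):

    # each predicate symbol maps to its definition cases; a case is a pair
    # (textual body, list of predicate symbols occurring in that body)
    DEFS = {
        "ListO": [("x |-> y", []),
                  ("exists u. x |-> u * ListE(u, y)", ["ListE"])],
        "ListE": [("exists u. x |-> u * ListO(u, y)", ["ListO"])],
    }

    def reachable(pred, depth):
        # predicate symbols reachable from `pred` within `depth` unfolding steps
        if depth == 0:
            return {pred}
        out = {pred}
        for _body, callees in DEFS[pred]:
            for q in callees:
                out |= reachable(q, depth - 1)
        return out

    print(reachable("ListO", 2))    # {'ListO', 'ListE'}: the two predicates are mutually recursive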
Definition 3.4 (Unknown relation). An unknown relation U(u1, . . . , un) is an n-ary pure predicate
in first-order logic whose definition is left undefined.
Semantics. Figure 5 exhibits the semantics of formulas in SL^U_ID. Given a set Var of variables, Sort
of sorts, Val of values, Loc of memory addresses (Loc ⊂ Val), a model of a formula consists of: a
stack model s, which is a function s: Var → Val, and a heap model h, which is a partial function
h: (Loc × Sort) ⇀ Val+ . We write ⟦Π⟧s to denote the valuation of a pure formula Π under the stack
model s. Moreover, dom(h) denotes the domain of h; h # h ′ indicates that h and h ′ have disjoint
1 We write c e to denote the multiplication by a constant in linear arithmetic.
2 Each sort ι represents a unique data type. For brevity, we omit using it when presenting the motivating example.
domains, i.e., dom(h) ∩ dom(h ′) = ∅; and h ◦ h ′ is the union of two disjoint heap models h and h ′.
Besides, [f | x:y] is a function like f except that it returns y for the input x. To define the semantics
of the inductive heap predicates, we follow the standard least fixed point semantics by interpreting
an inductive predicate symbol P as the least prefixed point ⟦P⟧ of a monotone operator constructed
from its inductive definition. The construction is standard and can be found in many places, such
as [Brotherston and Simpson 2011].
s, h |= Π                      iff   ⟦Π⟧s = true and dom(h) = ∅
s, h |= emp                    iff   dom(h) = ∅
s, h |= x 7→^ι x1, ..., xn     iff   dom(h) = {s(x)} and h(s(x), ι) = (s(x1), ..., s(xn))
s, h |= P(x1, ..., xn)         iff   (h, ⟦x1⟧s, ..., ⟦xn⟧s) ∈ ⟦P⟧
s, h |= Σ1 ∗ Σ2                iff   ∃h1, h2 : h1 # h2 and h1 ◦ h2 = h and s, h1 |= Σ1 and s, h2 |= Σ2
s, h |= Σ ∧ Π                  iff   ⟦Π⟧s = true and s, h |= Σ
s, h |= ∃x.F                   iff   ∃v ∈ Val : [s | x:v], h |= F

Fig. 5. Semantics of formulas in SL^U_ID
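The separating conjunction case of this semantics, which quantifies over all splittings of the heap into disjoint parts, can be made concrete by the following small Python sketch (ours; it covers only the cases emp, x7→x1,...,xn and Σ1 ∗ Σ2, and uses tuples as a throwaway formula representation):

    from itertools import combinations

    def splits(heap):
        # all ways of splitting `heap` into two disjoint sub-heaps h1 o h2 = heap
        keys = list(heap)
        for r in range(len(keys) + 1):
            for left in combinations(keys, r):
                h1 = {k: heap[k] for k in left}
                h2 = {k: heap[k] for k in keys if k not in left}
                yield h1, h2

    def sat(stack, heap, sigma):
        kind = sigma[0]
        if kind == "emp":                      # ("emp",)
            return heap == {}
        if kind == "pto":                      # ("pto", x, sort, [x1, ..., xn])
            _, x, sort, fields = sigma
            return heap == {(stack[x], sort): tuple(stack[f] for f in fields)}
        if kind == "sep":                      # ("sep", sigma1, sigma2)
            _, s1, s2 = sigma
            return any(sat(stack, h1, s1) and sat(stack, h2, s2) for h1, h2 in splits(heap))
        raise ValueError(kind)

    stack = {"x": 1, "y": 2, "a": 7, "b": 8}
    heap = {(1, "node"): (7,), (2, "node"): (8,)}
    assert sat(stack, heap, ("sep", ("pto", "x", "node", ["a"]), ("pto", "y", "node", ["b"])))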
Entailments. Given the syntax and semantics of formulas, we can define entailments as follows.
This definition is similar to those in the separation logic literature, such as [Berdine et al. 2004].
Definition 3.5 (Entailment). An entailment between two formulas F 1 and F 2 , denoted as F 1 |− F 2 , is
said to be valid, iff s, h |= F 1 implies that s, h |= F 2 , for all models s, h. Formally,
F 1 |− F 2 is valid, iff ∀s, h. (s, h |= F 1 → s, h |= F 2 )
Here, F 1 and F 2 are respectively called the antecedent and the consequent of the entailment. In
general, separation logic entailments satisfy a transitivity property, as stated in Theorem 3.6.
Theorem 3.6 (Entailment transitivity). Given two entailments F1 |− ∃x®.F2 and F2 |− F3, where x®
is a list of variables. If both of them are valid, then the entailment F1 |− ∃x®.F3 is also valid.
Proof. Consider an arbitrary model s, h such that s, h |= F1. Since F1 |− ∃x®.F2 is valid, it follows that
s, h |= ∃x®.F2. By the semantics of SL^U_ID's formulas, s can be extended with values v® of x® to obtain
s′ = [s | x®:v®] such that s′, h |= F2. Since F2 |− F3 is valid, it follows that s′, h |= F3. Since s′ = [s | x®:v®],
it is implied by the semantics of SL^U_ID's formulas again that s, h |= ∃x®.F3. We have shown that for
an arbitrary model s, h, if s, h |= F1, then s, h |= ∃x®.F3. Therefore, F1 |− ∃x®.F3 is valid.   □
Substitution. We write [e 1 /v 1 , . . . , en /vn ], or θ for short, to denote a simultaneous substitution;
and F [e 1 /v 1 , . . . , en /vn ] denotes a formula that is obtained from F by simultaneously replacing all
occurrences of the variables v 1 , . . . , vn by the expressions e 1 , . . . , en . The simultaneous substitution,
or substitution for short, has the following properties, given that FV(F ) and FV(e) are lists of all
free variables respectively occurring in the formula F and the expression e.
Proposition 3.7 (Substitution law for formulas [Reynolds 2008]). Given a separation logic
formula F and a substitution θ ≡ [e 1 /v 1 , . . . , en /vn ]. Let s, h be a separation logic model, where dom(s)
contains (FV(F ) \ {v 1 , . . . , vn }) ∪ FV(e 1 ) ∪ . . . ∪ FV(en ), and let ŝ = [s | v 1 : ⟦e 1 ⟧s | . . . | vn : ⟦en ⟧s ].
Then s, h |= Fθ if and only if ŝ, h |= F .
Theorem 3.8 (Substitution law for entailments). Given a separation logic entailment F 1 |− F 2
and a substitution θ . If F 1 |− F 2 is valid, then F 1θ |− F 2θ is also valid.
Proof. Suppose that θ ≡ [e 1 /v 1 , . . . , en /vn ]. Consider an arbitrary model s, h such that s, h |= F 1θ .
Let ŝ = [s | v 1 : ⟦e 1 ⟧s | . . . | vn : ⟦en ⟧s ]. By Proposition 3.7, s, h |= F 1θ implies that ŝ, h |= F 1 . Since
F 1 |− F 2 is valid, it follows that ŝ, h |= F 2 . By Proposition 3.7 again, s, h |= F 2θ . We have shown that
for an arbitrary model s, h, if s, h |= F 1θ , then s, h |= F 2θ . Therefore, F 1θ |− F 2θ is valid.
□
Unknown entailments and unknown assumptions. Recall that we propose to synthesize lemmas by firstly discovering essential lemma templates and then refining them. The template refinement is conducted in 3 steps: (i) creating unknown entailments, which contain unknown relations
representing the lemmas’ desired pure constraints, (ii) proving the entailments by induction and
collecting unknown assumptions about the relations, and (iii) solving the assumptions to discover
the lemmas. We formally define the unknown entailments and the unknown assumptions as follows.
Definition 3.9 (Unknown entailment). An entailment F 1 |− F 2 is called an unknown entailment if the
antecedent F1 or the consequent F2 contains an unknown relation U(x®).
Definition 3.10 (Unknown assumption). A pure implication Π1 → Π2 is called an unknown assumption
of the unknown relation U(x®) if U(x®) occurs in at least one of the two pure formulas Π1 and Π2.
Syntactic equivalence. An entailment induction proof often contains a step that finds a substitution to unify the antecedents of an induction hypothesis and of the goal entailment. In this work,
we syntactically check the unification between two spatial formulas by using a syntactic equivalence
relation defined earlier in [Ta et al. 2016]. We formally restate this relation below.
Definition 3.11 (Syntactic equivalence [Ta et al. 2016]). The syntactic equivalence relation of two
spatial formulas Σ1 and Σ2, denoted as Σ1 ≅ Σ2, is inductively defined as follows:
(1) emp ≅ emp
(2) u 7→^ι v1, . . . , vn ≅ u 7→^ι v1, . . . , vn
(3) P(u1, . . . , un) ≅ P(u1, . . . , un)
(4) (Σ1 ≅ Σ1′) ∧ (Σ2 ≅ Σ2′) → (Σ1 ∗ Σ2 ≅ Σ1′ ∗ Σ2′) ∧ (Σ1 ∗ Σ2 ≅ Σ2′ ∗ Σ1′)
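If spatial formulas are kept flattened into lists of atoms, as a prover would normalize them, clauses (1)–(4) reduce to comparing the multisets of atoms on the two sides. A minimal Python sketch of that check (ours, under this flattening assumption):

    from collections import Counter

    def syntactically_equivalent(atoms1, atoms2):
        # two flattened spatial formulas are related iff they carry the same multiset of atoms
        return Counter(atoms1) == Counter(atoms2)

    a = [("pred", "dll", ("v", "u", "z", "t", "200")), ("pto", "u", ("w", "v"))]
    b = [("pto", "u", ("w", "v")), ("pred", "dll", ("v", "u", "z", "t", "200"))]
    assert syntactically_equivalent(a, b)          # clause (4) makes * commutative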
3.2 Structural induction for entailment proofs and lemma syntheses
We develop the lemma synthesis framework based on a standard structural induction proof. This
proof technique is an instance of Noetherian induction, a.k.a. well-founded induction [Bundy 2001].
We will briefly explain both Noetherian induction and structural induction here. Nonetheless, our
lemma synthesis idea can also be integrated into other induction-based proof techniques.
Noetherian induction. Let P be a conjecture on structures of type τ, and let ≺τ be a well-founded
relation among these structures, i.e., there is no infinite chain like . . . ≺τ α n ≺τ . . . ≺τ α 1 . Then
the Noetherian induction principle states that: P is said to hold for all these structures, if for any
structure α, the fact that P(β) holds for all structures β ≺τ α implies that P(α) also holds. Formally:
    ∀α : τ . (∀β : τ ≺τ α . P(β)) → P(α)
    ─────────────────────────────────────
    ∀α : τ . P(α)
Substructural relation. We prove entailments by applying Noetherian induction on the structure
of inductive heap predicates. The substructural relation is formally defined in Definition 3.12.
Definition 3.12 (Substructural relation). A heap predicate P1(v®) is said to be a substructure of P2(u®),
denoted by P1(v®) ≺ P2(u®), if P1(v®) occurs in a formula obtained from directly unfolding P2(u®) or
from unfolding any substructure of P2(u®). These conditions are formally stated as follows:
(1) ∃w®, Σ, Π, F(u®). (F(u®) ≅ ∃w®.(P1(v®) ∗ Σ ∧ Π) ∧ F(u®) ⇒def P2(u®))
(2) ∃w®, Σ, Π, F(t®), P′(t®). (F(t®) ≅ ∃w®.(P1(v®) ∗ Σ ∧ Π) ∧ F(t®) ⇒def P′(t®) ∧ P′(t®) ≺ P2(u®))
In the above definition, P1 and P2 can be the same or different predicate symbols. The latter happens
when they are mutually recursively defined, such as ListE and ListO in Example 3.3.
Theorem 3.13 (Well-foundedness). Given an inductive heap predicate P1(u®1). If it is satisfiable,
i.e., P1(u®1) ≢ false, then under the least fixed point semantics of inductive heap predicates, there is no
infinite chain like . . . ≺ Pn(u®n) ≺ . . . ≺ P1(u®1).
Proof. Suppose that there exists an infinite chain . . . ≺ Pn(u®n) ≺ . . . ≺ P1(u®1). For all i ≥ 1, we can
always insert all predicates derived when unfolding Pi(u®i) to obtain Pi+1(u®i+1) into the current
chain. Therefore, w.l.o.g., we can assume that Pi+1(u®i+1) is obtained from directly unfolding Pi(u®i),
for all i ≥ 1. Hence, . . . ≺ Pn(u®n) ≺ . . . ≺ P1(u®1) is also an infinite unfolding chain of P1(u®1). In the
least fixed point semantics, if a predicate is unfolded infinitely, it can be evaluated to false. Therefore,
P1(u®1) ≡ false. This contradicts the theorem's hypothesis that P1(u®1) is satisfiable.   □
Structural induction. We apply the substructural relation ≺ to propose a structural induction
principle for the entailment proof. Consider an entailment E whose antecedent contains an inductive
predicate P(u®), that is, E ≜ F1 ∗ P(u®) |− F2, for some formulas F1, F2. We write E(P(u®)) to parameterize
E by P(u®); and E(P(v®)) is an entailment obtained from E(P(u®)) by replacing P(u®) by P(v®) and
respectively replacing variables in u® by variables in v®. Moreover, we write |= E(P(u®)) to denote that
E holds for P(u®). The structural induction principle is formally stated as follows:
Theorem 3.14 (Structural induction). The entailment E(P(u®)) ≜ F1 ∗ P(u®) |− F2 is valid, if for every
predicate P(u®), the fact that E holds for all substructure predicates P(v®) of P(u®) implies that E also
holds for P(u®). Formally:

    ∀P(u®). (∀P(v®) ≺ P(u®). |= E(P(v®))) → |= E(P(u®))
    ───────────────────────────────────────────────────
    ∀P(u®). |= E(P(u®))
Proof. We consider two scenarios w.r.t. P(u®) in the entailment E(P(u®)) ≜ F1 ∗ P(u®) |− F2 as follows.
(1) If P(u®) ≡ false, then F1 ∗ P(u®) ≡ false, and therefore E(P(u®)) is valid. (2) If P(u®) ≢ false, then by
Theorem 3.13, the substructural relation ≺, which applies to P(u®), is well-founded. In this scenario,
the above induction principle is an instance of Noetherian induction [Bundy 2001] where the
substructural relation ≺ is chosen as the well-founded relation. Therefore, the correctness of this
theorem is automatically implied by the soundness of the Noetherian induction principle.   □
4 THE STRUCTURAL INDUCTION PROOF SYSTEM
In this section, we present a structural induction proof system for separation logic entailments.
Initially, this system aims to prove normal entailments using a set of inference rules (Section 4.1).
These rules include the two lemma application rules which apply synthesized lemmas to assist in
proving entailments (Figure 8). We use the same proof system to reason about unknown entailments,
introduced in the lemma synthesis, using a set of synthesis rules (Section 4.2). The proof system
also includes a proof search procedure which selectively applies the aforementioned rules to prove
a goal entailment (Section 4.3). We will explain the details in the following sections.
4.1 Inference rules for standard entailments
Each inference rule of our proof system contains zero or more premises, a conclusion, and possibly
a side condition. The premises and the conclusion are in the form of H, L, F 1 |− F 2 , where H and
L are respectively sets of induction hypotheses and valid lemmas, and F 1 |− F 2 is the (sub-)goal
entailment. An inference rule can be interpreted as follows: if all entailments in its premises are
valid, and its side condition (if present) is satisfied, then its goal entailment is also valid.
For brevity, we write F to denote a symbolic-heap formula ∃x®.(Σ ∧ Π), where x® is a list of quantified
variables (possibly empty). Furthermore, we define F ∗ Σ′ ≜ ∃x®.(Σ ∗ Σ′ ∧ Π) and F ∧ Π′ ≜
∃x®.(Σ ∧ Π ∧ Π′), given that FV(Σ′) ∩ x® = ∅ and FV(Π′) ∩ x® = ∅. We also write u® = v® to denote
(u1 = v1) ∧ . . . ∧ (un = vn), given that u® ≜ u1, . . . , un and v® ≜ v1, . . . , vn are two variable lists of the
same size. Finally, u® # v® indicates that the two lists u® and v® are disjoint, i.e., ∄w.(w ∈ u® ∧ w ∈ v®).
®
The set of inference rules comprises logical rules (Figure 6) reasoning about the logical structure of
the entailments, induction rules (Figure 7) proving the entailments by structural induction, and
lemma application rules (Figure 8) applying supporting lemmas to assist in proving the entailments.
|−Π :  premise: Π1 → Π2;
       conclusion: H, L, Π1 |− Π2
⊥1L :  no premise;
       conclusion: H, L, F1 ∗ u 7→^ι1 v® ∗ u 7→^ι2 t® |− F2
⊥2L :  premise: Π1 → false;
       conclusion: H, L, F1 ∧ Π1 |− F2
=L :   premise: H, L, F1[u/v] |− F2[u/v];
       conclusion: H, L, F1 ∧ u=v |− F2
∃L :   premise: H, L, F1[u/x] |− F2;
       conclusion: H, L, ∃x.F1 |− F2;   side condition: u ∉ FV(F2)
∃R :   premise: H, L, F1 |− ∃x®.F2[u/v];
       conclusion: H, L, F1 |− ∃x®, v.(F2 ∧ u=v)
EL :   premise: H, L, F1 |− F2;
       conclusion: H, L, F1 ∗ emp |− F2
ER :   premise: H, L, F1 |− ∃x®.F2;
       conclusion: H, L, F1 |− ∃x®.(F2 ∗ emp)
CA :   premises: H, L, F1 ∧ Π |− F2   and   H, L, F1 ∧ ¬Π |− F2;
       conclusion: H, L, F1 |− F2
PR :   premise: H, L, F1 |− ∃x®.(F2 ∗ F_i^P(u®));
       conclusion: H, L, F1 |− ∃x®.(F2 ∗ P(u®));   side condition: F_i^P(u®) ⇒def P(u®)
∗7→ :  premise: H, L, F1 |− ∃x®.(F2 ∧ u=t ∧ v®=w®);
       conclusion: H, L, F1 ∗ u 7→^ι v® |− ∃x®.(F2 ∗ t 7→^ι w®);   side condition: u ∉ x®, v® # x®
∗P :   premise: H, L, F1 |− ∃x®.(F2 ∧ u®=v®);
       conclusion: H, L, F1 ∗ P(u®) |− ∃x®.(F2 ∗ P(v®));   side condition: u® # x®

Fig. 6. Logical rules (each rule is listed with its premises, its conclusion, and its side condition, if any)
Logical rules. The set of logical rules in Figure 6 consists of:
• Axiom rules |−Π , ⊥1L , ⊥2L . These rules prove pure entailments by invoking an off-the-shelf prover
such as Z3 [Moura and Bjørner 2008] (as in the rule |−Π ), or prove entailments whose antecedents
are inconsistent, i.e., they contain overlaid singleton heaps (u 7→^ι1 v® and u 7→^ι2 t® in the rule ⊥1L ) or
contradictions (Π1 → false in the rule ⊥2L ).
• Normalization rules ∃L , ∃R , =L , EL , ER . These rules simplify the goal entailment by either eliminating existentially quantified variables (∃L , ∃R ), or removing equalities (=L ) or empty heap
predicates (EL , ER ) from the entailment.
• Case analysis rule CA. This rule performs a case analysis on a pure condition Π by deriving
two sub-goal entailments whose antecedents respectively contain Π and ¬Π. The underlying
principle of this rule is known as the law of excluded middle [Whitehead and Russell 1912].
• Unfolding rule PR . This rule derives a new entailment by unfolding a heap predicate in the goal
entailment’s consequent by its inductive definition.
• Matching rules ∗7→, ∗P. These rules remove identical singleton heap predicates (∗7→) or inductive
heap predicates (∗P) from two sides of the goal entailments. Here, we ensure that these predicates
are identical by adding equality constraints about their parameters into the derived entailments’
consequents.
ID :  premises: H′, L, Σ1 ∗ F_1^P(u®) ∧ Π1 |− F2    . . .    H′, L, Σ1 ∗ F_m^P(u®) ∧ Π1 |− F2
      conclusion: H, L, Σ1 ∗ P(u®) ∧ Π1 |− F2
      side condition: P(u®) ≜ F_1^P(u®) ∨ . . . ∨ F_m^P(u®);   †
      † : H′ ≜ H ∪ {H}, where H is obtained by freshly renaming all variables of Σ1 ∗ P(u®) ∧ Π1 |− F2

IH :  premise: H ∪ {Σ3 ∗ P(v®) ∧ Π3 |− F4}, L, F4θ ∗ Σ ∧ Π1 |− F2
      conclusion: H ∪ {Σ3 ∗ P(v®) ∧ Π3 |− F4}, L, Σ1 ∗ P(u®) ∧ Π1 |− F2
      side condition: P(u®) ≺ P(v®);   ∃θ, Σ. (Σ1 ∗ P(u®) ≅ Σ3θ ∗ P(v®)θ ∗ Σ) ∧ (Π1 → Π3θ)

Fig. 7. Induction rules
Induction rules. The structural induction principle is integrated into our proof system via the
two induction rules ID and IH (Figure 7). These rules are explained in detail as follows:
• Rule ID. This rule performs the structural induction proof on a chosen heap predicate P(u®) in
the antecedent of the goal entailment Σ1 ∗ P(u®) ∧ Π1 |− F2. In particular, the predicate P(u®) is
unfolded by its inductive definition P(u®) ≜ F_1^P(u®) ∨ . . . ∨ F_m^P(u®) to derive new sub-goal entailments
as shown in this rule's premises. Moreover, the goal entailment is also recorded as an induction
hypothesis in the set H′ so that it can be utilized later to prove the sub-goal entailments.
• Rule IH. This rule applies an appropriate recorded induction hypothesis H ≜ Σ3 ∗ P(v®) ∧ Π3 |− F4
to prove the goal entailment Σ1 ∗ P(u®) ∧ Π1 |− F2. It first checks the well-founded condition P(u®) ≺
P(v®), i.e., P(u®) is a substructure of P(v®), which is required by the structural induction principle. In
practice, this condition can be easily examined by labeling each inductive heap predicate Q(x®)
in a proof tree with a set of its ancestor predicates, which are consecutively unfolded to derive
Q(x®). By that means, P(u®) ≺ P(v®) iff P(v®) appears in the label of P(u®) (a small sketch of this check
is given right after this list). Afterwards, the induction hypothesis application is performed in two steps:
  – Unification step: unify the antecedents of both the goal entailment and the induction hypothesis
    by syntactically finding a substitution θ and a spatial formula Σ such that Σ1 ∗ P(u®) ≅
    Σ3θ ∗ P(v®)θ ∗ Σ and Π1 → Π3θ. If these conditions hold, then it is certain that the entailment
    E ≜ Σ1 ∗ P(u®) ∧ Π1 |− Σ3θ ∗ P(v®)θ ∗ Σ ∧ Π3θ ∧ Π1 is valid.
  – Proving step: if such θ and Σ exist, then derive a new sub-goal entailment F4θ ∗ Σ ∧ Π1 |− F2. We
    will explain why this sub-goal entailment is derived. The induction hypothesis Σ3 ∗ P(v®) ∧ Π3 |− F4
    implies that Hθ ≜ Σ3θ ∗ P(v®)θ ∧ Π3θ |− F4θ is also valid, by Theorem 3.8. From E and Hθ, we
    then have a derivation chain: Σ1 ∗ P(u®) ∧ Π1 |− Σ3θ ∗ P(v®)θ ∗ Σ ∧ Π3θ ∧ Π1 |− F4θ ∗ Σ ∧ Π1.
    Therefore, if the sub-goal entailment F4θ ∗ Σ ∧ Π1 |− F2 can be proved, then the goal entailment
    holds. Here, we propagate the pure condition Π1 through the chain as we want the antecedent
    of the sub-goal entailment to be the strongest in order to prove F2.
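The labeling check mentioned in the description of rule IH can be sketched as follows (our illustration; occurrences are identified by Python object identity, which stands in for the labels attached to predicate occurrences in an actual implementation):

    class PredOcc:
        # an occurrence of an inductive predicate in a proof tree, labeled with the
        # identities of the ancestor occurrences it was unfolded from
        def __init__(self, name, args, ancestors=frozenset()):
            self.name = name
            self.args = tuple(args)
            self.ancestors = frozenset(ancestors)

        def unfold_child(self, name, args):
            # an occurrence produced by unfolding `self`; it inherits `self` as an ancestor
            return PredOcc(name, args, self.ancestors | {id(self)})

    def is_substructure(p, q):
        # P(u) is a substructure of P(v) iff the occurrence q is among p's recorded ancestors
        return id(q) in p.ancestors

    root = PredOcc("dllrev", ["a", "b", "c", "d", "m"])
    child = root.unfold_child("dllrev", ["a", "b", "u", "c", "m-1"])
    assert is_substructure(child, root) and not is_substructure(root, child)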
LML : premise: H, L ∪ {Σ3 ∧ Π3 |− F4}, F4θ ∗ Σ ∧ Π1 |− F2
      conclusion: H, L ∪ {Σ3 ∧ Π3 |− F4}, Σ1 ∧ Π1 |− F2
      side condition: ∃θ, Σ. (Σ1 ≅ Σ3θ ∗ Σ) ∧ (Π1 → Π3θ)

LMR : premise: H, L ∪ {F3 |− ∃w®.(Σ4 ∧ Π4)}, F1 |− ∃x®.(F3θ ∗ Σ ∧ Π2)
      conclusion: H, L ∪ {F3 |− ∃w®.(Σ4 ∧ Π4)}, F1 |− ∃x®.(Σ2 ∧ Π2)
      side condition: ∃θ, Σ. (Σ4θ ∗ Σ ≅ Σ2) ∧ (w®θ ⊆ x®)

Fig. 8. Lemma application rules
Lemma application rules. The two rules LML , LMR in Figure 8 derive a new sub-goal entailment
by applying a lemma on the goal entailment’s antecedent or consequent. In particular:
• The rule LML applies a lemma on the goal entailment’s antecedent. It is similar to the induction
application rule IH, except that we do not need to check the well-founded condition of the
structural induction proof, since the applied lemma is already proved valid.
• The rule LMR applies a lemma on the goal entailment's consequent. It also needs to perform
a unification step: finding a substitution θ and a formula Σ so that the heap parts of the goal
entailment's and the lemma's consequents are unified, i.e., Σ4θ ∗ Σ ≅ Σ2. If this step succeeds, then
E1 ≜ Σ4θ ∗ Σ ∧ Π2 |− Σ2 ∧ Π2 is valid. The lemma F3 |− ∃w®.(Σ4 ∧ Π4) implies that F3 |− ∃w®.Σ4 is valid.
Hence, F3θ |− ∃w®θ.Σ4θ is valid by Theorem 3.8, following that F3θ ∗ Σ ∧ Π2 |− ∃w®θ.(Σ4θ ∗ Σ ∧ Π2)
is also valid. By the rule's side condition w®θ ⊆ x®, we have another valid entailment E2 ≜
F3θ ∗ Σ ∧ Π2 |− ∃x®.(Σ4θ ∗ Σ ∧ Π2). Therefore, by proving the entailment F1 |− ∃x®.(F3θ ∗ Σ ∧ Π2)
in this rule's premise and applying Theorem 3.6 twice sequentially on E2 and then E1, we can
conclude that the goal entailment F1 |− ∃x®.(Σ2 ∧ Π2) is also valid.
4.2 Synthesis rules for unknown entailments
Figure 9 presents the synthesis rules, which deal with unknown entailments introduced in the lemma
synthesis. These rules share a similar structure with the inference rules, except that each of them
also contains a special premise indicating an assumption set A of the unknown relations. We
describe the synthesis rules in detail as follows:
• Axiom synthesis rules U1Π , U2Π , U1Σ , U2Σ . These rules conclude the validity of their unknown goal
entailments under the assumption sets A in the rules' premises. The rules U1Π and U2Π handle
pure entailments with unknown relations either in the antecedents or the consequents. The
rules U1Σ and U2Σ deal with unknown entailments whose antecedents contain non-empty spatial
formulas while the consequents are pure formulas or vice versa. In both cases, the antecedents
must be inconsistent to make the goal entailments valid. Here, we only create assumptions on
the pure parts, i.e., Π1 ∧ U(x®) → false, since inconsistency in their heap parts, if any, can be
detected earlier by the rule ⊥1L . Also, we do not consider the case that the unknown relations
only appear in the consequents since these relations cannot make the antecedents inconsistent.
• Induction hypothesis synthesis rule UIH . This rule applies an induction hypothesis to prove a
derived unknown entailment. The induction hypothesis also contains unknown relations since
it is recorded earlier from an unknown goal entailment. The rule UIH is similar to the normal
induction hypothesis application rule IH, except that it does not contain a side condition like
Π1 ∧ U(x®) → (Π3 ∧ U(y®))θ, due to the appearance of the unknown relation U. Instead, this
condition will be registered in the unknown assumption set A in the premises of UIH .
U1Π : premise: A ≜ {Π1 ∧ U(x®) → Π2};   conclusion: H, L, Π1 ∧ U(x®) |− Π2
U2Π : premise: A ≜ {Π1 → ∃w®.(Π2 ∧ U(x®))};   conclusion: H, L, Π1 |− ∃w®.(Π2 ∧ U(x®))
U1Σ : premise: A ≜ {Π1 ∧ U(x®) → false};   conclusion: H, L, Σ1 ∧ Π1 ∧ U(x®) |− Π2;   side condition: Σ1 ≠ emp
U2Σ : premise: A ≜ {Π1 ∧ U(x®) → false};   conclusion: H, L, Π1 ∧ U(x®) |− ∃w®.(Σ2 ∧ Π2);   side condition: Σ2 ≠ emp
UIH : premises: H ∪ {Σ3 ∗ P(v®) ∧ Π3 ∧ U(y®) |− F4}, L, F4θ ∗ Σ ∧ Π1 |− F2   and   A ≜ {Π1 ∧ U(x®) → (Π3 ∧ U(y®))θ}
      conclusion: H ∪ {Σ3 ∗ P(v®) ∧ Π3 ∧ U(y®) |− F4}, L, Σ1 ∗ P(u®) ∧ Π1 ∧ U(x®) |− F2
      side condition (†): P(u®) ≺ P(v®);   ∃θ, Σ. (Σ1 ∗ P(u®) ≅ Σ3θ ∗ P(v®)θ ∗ Σ)

Fig. 9. Synthesis rules
4.3 The entailment proving procedure
Figure 10 presents the core procedure Prove of our proof system. In particular, the input of Prove
includes an induction hypothesis set H , a valid lemma set L, and a goal entailment F 1 |− F 2 .
These three inputs correlate to an inference step of the proof system. We also use an additional
parameter mode to control when new lemmas can be synthesized (if mode = SynLM) or strictly
not (if mode = NoSyn). The procedure Prove returns the validity of the goal entailment, a set of
new lemmas synthesized during the proof, and a set of assumptions to make the entailment valid.
There are two main contexts where this procedure is initially invoked:
– In the entailment proving phase, Prove is invoked to prove a normal entailment F 1 |− F 2 , which
does not contain any unknown relation, with the initial setting H = ∅, L = ∅, mode = SynLM.
– In the lemma synthesis phase, Prove is invoked either to prove an unknown entailment F 1 |− F 2
related to a lemma template, or to verify whether a discovered lemma is an inductive lemma. In
the first case, Prove will establish sufficient assumptions about unknown relations appearing
in the entailment so that the entailment can become valid. Moreover, Prove is invoked with
mode = NoSyn in both cases to avoid entering nested lemma synthesis phases.
In Figure 10, we also present formal specifications of Prove in pairs of pre- and post-conditions
(Requires/Ensures). These specifications relate to three cases when Prove is invoked to prove an
unknown entailment with the lemma synthesis always being disabled (mode = NoSyn), or to
prove a normal entailment with the lemma synthesis being disabled (mode = NoSyn) or enabled
(mode = SynLM). We write res to represent the returned result of Prove. In addition, hasUnk(E)
indicates that the entailment E has an unknown relation, valid(L) and valid(A) mean that all
lemmas in L and all assumptions in A are semantically valid. Moreover, valid_ID(H, L, E) specifies
that the entailment E is semantically valid under the induction hypothesis set H and the lemma
set L. We will refer to these specifications when proving the proof system’s soundness. The formal
verification of Prove w.r.t. these specifications is illustrated in Appendix B.
Procedure Prove(H, L, F1 |− F2, mode)
Input: F1 |− F2 is the goal entailment, H is a set of induction hypotheses, L is a set of valid lemmas, and
       mode controls whether new lemmas will be synthesized (SynLM) or strictly not (NoSyn)
Output: The goal entailment's validity (Valid⟨ξ⟩ or Unkn, where ξ is the witness valid proof tree),
        a set of new synthesized lemmas, and a set of unknown assumptions
Requires: (mode = NoSyn) ∧ hasUnk(F1 |− F2) ∧ valid(L)
Ensures: (res = (Unkn, ∅, ∅)) ∨ ∃ξ, A.((res = (Valid⟨ξ⟩, ∅, A)) ∧ (valid(A) → valid_ID(H, L, F1 |− F2)))
Requires: (mode = NoSyn) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L)
Ensures: (res = (Unkn, ∅, ∅)) ∨ ∃ξ.((res = (Valid⟨ξ⟩, ∅, ∅)) ∧ valid_ID(H, L, F1 |− F2))
Requires: (mode = SynLM) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L)
Ensures: (res = (Unkn, ∅, ∅)) ∨ ∃ξ, Lsyn.((res = (Valid⟨ξ⟩, Lsyn, ∅)) ∧ valid(Lsyn) ∧ valid_ID(H, L, F1 |− F2))
 1: R ← {Unify(R, (H, L, F1 |− F2)) | R ∈ R}                       // Find applicable inference and synthesis rules from R
 2: Lsyn ← ∅                                                        // Initialize the new synthesized lemma set
 3: if (mode = SynLM) and NeedLemmas(F1 |− F2, R) then              // Synthesize new lemmas
 4:     Lsyn ← SynthesizeLemma(L, F1 |− F2)
 5:     R ← R ∪ {Unify(R, (H, Lsyn, F1 |− F2)) | R ∈ {LML, LMR}}    // Update lemma application rules
 6: for each R in R do
 7:     A ← ∅
 8:     if IsSynthesisRule(R) then A ← Assumptions(R)               // Initialize the assumption set; R ∈ {U1Π, U2Π, U1Σ, U2Σ, UIH}
 9:     if IsAxiomRule(R) then                                      // R ∈ {|−Π, ⊥1L, ⊥2L, U1Π, U2Π, U1Σ, U2Σ}
10:         ξ ← CreateWitnessProofTree(F1 |− F2, R, ∅)
11:         return (Valid⟨ξ⟩, Lsyn, A)
12:     (Hi, Li, F1i |− F2i) i=1...n ← Premises(R)
13:     (ri, Lsyn^i, Ai) i=1...n ← Prove(Hi, L ∪ Lsyn ∪ Li, F1i |− F2i, mode) i=1...n    // Prove all sub-goals
14:     if ri = Valid⟨ξi⟩ for all i = 1 . . . n then
15:         ξ ← CreateWitnessProofTree(F1 |− F2, R, {ξ1, . . . , ξn})
16:         return (Valid⟨ξ⟩, Lsyn ∪ Lsyn^1 ∪ . . . ∪ Lsyn^n, A ∪ A1 ∪ . . . ∪ An)
17: return (Unkn, ∅, ∅)                                             // All rules fail to prove F1 |− F2

Fig. 10. The main proof search procedure with description (Input/Output) and specification (Requires/Ensures)
Given the goal entailment F 1 |− F 2 , the procedure Prove first finds from all rules R (Figures 6, 7,
8, and 9) a set of potential inference and synthesis rules R, which can be unified with F 1 |− F 2
(line 1). When the lemma synthesis mode is enabled (mode = SynLM), it invokes a subroutine
NeedLemmas to examine the selected rules R and the goal entailment F 1 |− F 2 to decide whether
it really needs to synthesize new lemmas (line 3). Note that the input valid lemma set L is also
exploited to make the lemma synthesis more effective (line 4). The new synthesized lemmas, if any,
will be utilized to discover new lemma application rules (line 5). Thereafter, Prove successively
applies each rule R ∈ R to derive new sub-goal entailments, as in the premises of R, and recursively
searches for their proofs (lines 6–16). It returns the valid result (Valid⟨ξ⟩, where ξ is the witness
proof tree) if the selected rule R does not introduce any new sub-goals (lines 9–11), or all derived
sub-goals are successfully proved (lines 12–16). In essence, the proof tree ξ is composed of a root
(the goal entailment F 1 |− F 2 ), a label (the applied rule R), and sub-trees, if any, corresponding to
the sub-goal entailments’ proofs (lines 10 and 15). Its form is intuitively similar to the proof trees
depicted in Figures 1, 2, and 3. On the other hand, Prove announces the unknown result (Unkn)
when all selected rules in R fail to prove the goal entailment (line 17). Besides, Prove also returns a
set of new synthesized lemmas and a set of unknown assumptions. These lemmas are discovered
when Prove is invoked in the entailment proving phase. The unknown assumptions are collected
by the synthesis rules (line 8), when Prove is executed in the lemma synthesis phase.
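The witness proof tree ξ returned by Prove can be pictured as a small recursive data structure; the following Python sketch (ours) also shows the check "IH ∈ ξ", which is used later to distinguish inductive lemmas from trivial ones:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProofTree:
        goal: str                        # the (sub-)goal entailment, kept as text here
        rule: str                        # the name of the applied inference or synthesis rule
        subtrees: List["ProofTree"] = field(default_factory=list)

        def uses_rule(self, name):
            # True if `name` labels this node or any descendant
            return self.rule == name or any(t.uses_rule(name) for t in self.subtrees)

    leaf = ProofTree("true |- ...", "|-Pi")
    xi = ProofTree("dllrev(a,b,c,d,m) |- dll(a,b,c,d,m)", "ID",
                   [ProofTree("dllrev(a,b,u,c,m-1) * c |-> u,d |- dll(a,b,c,d,m)", "IH", [leaf])])
    assert xi.uses_rule("IH")    # an induction hypothesis was applied somewhere in the proof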
Details of the lemma synthesis will be presented in Section 5. In the following, we will explain
when Prove decides to synthesize new lemmas (line 3). The procedure NeedLemmas returns true
when all the following conditions are satisfied:
– F 1 |− F 2 is not an unknown entailment, which implies that lemmas are possibly needed.
– R does not contain any axiom or normalization rules ( |−Π , ⊥1L , ⊥2L , ∃L , ∃R , =L , EL , ER ), matching rules
of identical heap predicates (∗7→, ∗P), or unfolding rules that introduce identical predicates
(ID, PR ). This condition indicates that all rules in R cannot make any immediate proof progress.
– R does not have any induction hypothesis or lemma application rules (IH, LML , LMR ), or any
case analysis rule (CA) that potentially leads to the application of an induction hypothesis or a
lemma. This condition conveys that existing induction hypotheses and lemmas are inapplicable.
– F 1 |− F 2 is not a good induction hypothesis candidate, which indicates that the induction hypothesis recorded from F 1 |− F 2 cannot be used to prove other derived entailments.
While the first three conditions can be checked syntactically on F 1 |− F 2 and R, the last condition
can be tested by a trial and error method. Specifically, F 1 |− F 2 will firstly be recorded as a temporary
induction hypothesis candidate, and then each inductive heap predicate in F 1 will be consecutively
unfolded to search for an application of the recorded induction hypothesis via the rule IH. If no
induction hypothesis application can be found, then F 1 |− F 2 is evidently not a good candidate.
4.4 Soundness of the proof system
Recall that our proof search procedure Prove is implemented in a recursive manner. When it is
invoked to prove a normal goal entailment E, which does not contain any unknown relation,
the initial setting H = ∅, L = ∅, mode = SynLM indicates that no induction hypothesis or lemma
is provided beforehand, and the proof system can synthesize new lemmas to assist in proving
E. When synthesizing the new supporting lemmas, the proof system can be utilized to prove an
unknown entailment related to a lemma template or to verify a discovered lemma, which is a normal
entailment not containing any unknown relation. In addition, the proof system is also invoked to
prove sub-goal entailments, which are normal entailments derived from E. All of these scenarios
are summarized by the three specifications in Figure 10.
In the following, we present Propositions 4.1, 4.2, and 4.3, which respectively specify the soundness
of our proof system in the three typical scenarios: (i) proving an unknown entailment with the
lemma-synthesis-disabled mode (the first pre/post specification of Prove), (ii) verifying a discovered
lemma, i.e., proving a normal entailment with the lemma-synthesis-disabled mode (the second
pre/post specification), or (iii) proving a normal entailment with the lemma-synthesis-enabled
mode (the third pre/post specification). Finally, we describe the overall soundness of the proof
system in Theorem 4.4 when Prove is invoked with the initial setting H = ∅, L = ∅, mode = SynLM.
Note that Propositions 4.1 and 4.2 are relevant to the lemma synthesis in Section 5. Proposition 4.3
directly relates to the overall soundness in Theorem 4.4.
Proposition 4.1 (Proof of an unknown entailment). Given an unknown entailment E. If the
procedure Prove returns Valid⟨_⟩ and generates an assumption set A when proving E in the lemma-synthesis-disabled mode (NoSyn), using an empty induction hypothesis set (H = ∅) and a valid lemma
set L as its inputs, then E is semantically valid, given that all assumptions in A are valid.
Proposition 4.2 (Proof of a normal entailment when the lemma synthesis is disabled).
Given a normal entailment E which does not contain any unknown relation. If the procedure Prove
returns Valid⟨_⟩ when proving E in the lemma-synthesis-disabled mode (NoSyn), using an empty
induction hypothesis set (H =∅) and a valid lemma set L as its inputs, then E is semantically valid.
Proposition 4.3 (Proof of a normal entailment when the lemma synthesis is enabled).
Given a normal entailment E which does not contain any unknown relation. If the procedure Prove returns Valid⟨_⟩ and synthesizes a set of lemmas Lsyn when proving E in the lemma-synthesis-enabled
mode (SynLM), using an empty induction hypothesis set (H =∅) and a valid lemma set L as its inputs,
then the entailment E and all lemmas in Lsyn are semantically valid.
Proofs of Propositions 4.1, 4.2, 4.3. We first show that all inference and synthesis rules are sound,
and the proof system in the NoSyn mode is sound. Based on that, we can prove Propositions 4.1
and 4.2. The proof of Proposition 4.3 will be argued based on the lemma synthesis’s soundness in
Section 5. Details of all these proofs are presented in Appendix A.
□
Theorem 4.4 (The overall soundness of the proof system). Given a normal entailment E which
does not contain any unknown relation, if the procedure Prove returns Valid⟨_⟩ when proving E in
the initial context that there is no induction hypothesis or lemma provided beforehand and the lemma
synthesis is enabled (H = ∅, L = ∅, mode = SynLM), then E is semantically valid.
Proof. Since E does not contain any unknown relation, both the input induction hypothesis set
and lemma set are empty (H =∅, L=∅), and Prove is invoked in the SynLM mode, it follows from
Proposition 4.3 that if Prove returns Valid⟨_⟩ when proving E, then E is semantically valid.
□
5 THE LEMMA SYNTHESIS FRAMEWORK
We are now ready to describe our lemma synthesis framework. It consists of a main procedure,
presented in Subsection 5.1, and auxiliary subroutines, described in Subsections 5.2, 5.3, and 5.4.
Similar to the proof system in Section 4, we also provide the input/output description and the formal
specification for each of these synthesis procedures. We use the same keyword res to represent the
returned result and valid(L) indicates that all lemmas in L are semantically valid.
5.1 The main synthesis procedure
Figure 11 presents the main lemma synthesis procedure (SynthesizeLemma). Its inputs include a goal
entailment F1 |− F2 which needs to be proved by new lemmas, and a set of previously synthesized
lemmas L which will be exploited to make the current synthesis more effective.

Procedure SynthesizeLemma(L, F1 |− F2)
Input: F1 |− F2 is the goal entailment, L is a valid lemma set
Output: A set of new synthesized lemmas
Requires: valid(L)
Ensures: valid(res)
18: for each Σ1 |− ∃v®.Σ2 in CreateTemplate(L, F1 |− F2) do
19:     L1 ← RefineAnte(L, Σ1 ∧ true |− Σ2)
20:     L2 ← RefineConseq(L, Σ1 |− ∃v®.Σ2)
21:     if (L1 ∪ L2) ≠ ∅ then return (L1 ∪ L2)
22: return ∅

Fig. 11. The main lemma synthesis procedure

The procedure SynthesizeLemma first identifies a set of desired lemma templates based on the
entailment's heap structure, via the invocation of the procedure CreateTemplate (line 18). These lemma
templates are of the form Σ1 |− ∃v®.Σ2, in which Σ1 and Σ2 are spatial formulas constructed from
heap predicate symbols appearing in F 1 and F 2 , respectively, and v® are all free variables in Σ2 . By this
construction, each synthesized lemma will share a similar heap structure with the goal entailment,
hence they can be unified by the lemma application rules LML and LMR . We will formally define
the lemma templates in Subsection 5.2. In our implementation, CreateTemplate returns a list of
possible lemma templates, which are sorted in ascending order of complexity, i.e., templates
containing fewer spatial atoms appear at the top of the list. Moreover, any template sharing the same
heap structure with a previously synthesized lemma in L will not be considered.
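A rough Python sketch (ours) of this enumeration, covering only inductive-predicate atoms and using string templates for brevity; the actual CreateTemplate also produces templates with singleton-heap atoms (such as T3 below) and filters against previously synthesized lemmas:

    from itertools import combinations

    ARITY = {"dll": 5, "dllrev": 5}          # arities of the predicate symbols in the goal

    def fresh_atoms(symbols, counter):
        atoms = []
        for sym in symbols:
            args = ", ".join(f"x{next(counter)}" for _ in range(ARITY[sym]))
            atoms.append(f"{sym}({args})")
        return atoms

    def templates(lhs_symbols, rhs_symbols):
        # every non-empty sub-multiset of symbols on each side, instantiated with
        # separately named fresh variables (conditions (2) and (3) of Definition 5.1)
        result = []
        for n in range(1, len(lhs_symbols) + 1):
            for lhs in combinations(lhs_symbols, n):
                for m in range(1, len(rhs_symbols) + 1):
                    for rhs in combinations(rhs_symbols, m):
                        counter = iter(range(1, 1000))
                        left = " * ".join(fresh_atoms(lhs, counter))
                        right = " * ".join(fresh_atoms(rhs, counter))
                        result.append(f"{left} |- EX ... . {right}")
        return sorted(result, key=len)       # simpler templates (fewer atoms) first

    for t in templates(["dllrev", "dll"], ["dll"]):
        print(t)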
The framework then successively refines each potential lemma template by continuously discovering
and adding in pure constraints of its variables until valid inductive lemmas are found. In essence,
for a given lemma template, the refinement is performed by a 3-step recipe:
(1) Establishing an unknown relation representing a desired constraint inside the template and
creating an unknown entailment.
(2) Proving the unknown entailment by structural induction and collecting assumptions about
the unknown relation.
(3) Solving the assumptions to find out the actual definition of the unknown relation, thus
discovering the desired inductive lemma.
There are two possible places to refine a lemma template: on its antecedent (line 19) and its
consequent (line 20). We aim to synthesize a lemma whose antecedent is as weak as possible
or whose consequent is as strong as possible. We will elaborate the details in Subsections 5.3 and 5.4.
Our framework immediately returns a non-empty set of synthesized lemmas (line 21) once it
successfully refines a template. Otherwise, it returns an empty set (∅) indicating that no lemma
can be synthesized (line 22).
5.2 Discovering lemma templates
The lemma templates for a given goal entailment can be discovered from the entailment’s heap
structure. In the following, we present the formal definition of the lemma templates and also
illustrate by example how to create them.
Definition 5.1 (Lemma template). A lemma template for a goal entailment F 1 |− F 2 is an entailment
of the form Σ1 |− ∃v®.Σ2, where:
(1) Σ1 and Σ2 are spatial formulas containing at least one inductive heap predicate.
(2) Heap predicate symbols in Σ1 and Σ2 are sub-multisets of those in F 1 and F 2 .
(3) Variables in Σ1 and Σ2 are separately named (no variables appear twice) and v® ≡ FV(Σ2 ).
Condition (1) restricts the templates to inductive lemmas only. Condition (2) guarantees that
the synthesized lemmas are unifiable with the goal entailment via applications of the rules LML
and LMR. Moreover, condition (3) ensures that each template is as general as possible so that
desirable pure constraints can be subsequently discovered in the next phases.
For instance, the following lemma templates can be created given the motivating entailment in Section 2, E1 ≜ dllrev(x, y, u, v, n) ∗ dll(v, u, z, t, 200) ∧ n≥100 |− ∃r.(dll(x, y, r, z, n+199) ∗ z7→r, t):

T1 ≜ dllrev(x1, x2, x3, x4, n1) |− ∃x5, x6, x7, x8, n2 . dll(x5, x6, x7, x8, n2)
     dll(x1, x2, x3, x4, n1) |− ∃x5, x6, x7, x8, x9, x10, x11, n2 . (dll(x5, x6, x7, x8, n2) ∗ x9 7→x10, x11)
     dllrev(x1, x2, x3, x4, n1) |− ∃x5, x6, x7, x8, x9, x10, x11, n2 . (dll(x5, x6, x7, x8, n2) ∗ x9 7→x10, x11)
     dllrev(x1, x2, x3, x4, n1) ∗ dll(x5, x6, x7, x8, n2) |− ∃x9, x10, x11, x12, n3 . dll(x9, x10, x11, x12, n3)

Note that the template T1 is used to synthesize the lemma L1 ≜ dllrev(a, b, c, d, m) |− dll(a, b, c, d, m).
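The enumeration behind CreateTemplate can be pictured with the following minimal Python sketch. It is only an illustration (not the authors' code): it covers only inductive predicate atoms (points-to atoms such as the x9 7→x10, x11 above are omitted), and the helper names fresh_vars, sub_multisets, and make_templates are ours.

    from itertools import combinations

    def fresh_vars(arity, counter):
        # produce `arity` separately named variables, continuing the running counter
        names = ["x%d" % (counter + i + 1) for i in range(arity)]
        return names, counter + arity

    def sub_multisets(atoms):
        # all non-empty sub-multisets of a list of predicate occurrences
        for size in range(1, len(atoms) + 1):
            for idxs in combinations(range(len(atoms)), size):
                yield [atoms[i] for i in idxs]

    def make_templates(lhs_preds, rhs_preds):
        # lhs_preds / rhs_preds: inductive predicate occurrences of F1 / F2, as (name, arity) pairs
        templates = []
        for left in sub_multisets(lhs_preds):
            for right in sub_multisets(rhs_preds):
                counter, ante, conseq, rhs_vars = 0, [], [], []
                for name, arity in left:
                    vs, counter = fresh_vars(arity, counter)
                    ante.append("%s(%s)" % (name, ", ".join(vs)))
                for name, arity in right:
                    vs, counter = fresh_vars(arity, counter)
                    rhs_vars.extend(vs)
                    conseq.append("%s(%s)" % (name, ", ".join(vs)))
                templates.append("%s |- EX %s. %s"
                                 % (" * ".join(ante), ", ".join(rhs_vars), " * ".join(conseq)))
        templates.sort(key=len)   # simplest (fewest atoms, hence shortest) templates first
        return templates

    # For E1, the antecedent mentions dllrev/5 and dll/5 and the consequent mentions dll/5:
    for t in make_templates([("dllrev", 5), ("dll", 5)], [("dll", 5)]):
        print(t)

The printed templates correspond, up to variable naming, to the single-predicate and two-predicate templates listed above.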
Similarly, there can be several possible lemma templates relating to the entailments E 2 and E 3 in
Figure 1. Among them, the templates T2 and T3 below are used to synthesize the lemmas L 2 and L 3 :
T2 ≜ dll(x 1 , x 2 , x 3 , x 4 , n 1 ) ∗ dll(x 5 , x 6 , x 7 , x 8 , n 2 ) |− ∃x 9 , x 10 , x 11 , x 12 , n 3 .dll(x 9 , x 10 , x 11 , x 12 , n 3 )
T3 ≜ dll(x 1 , x 2 , x 3 , x 4 , n 1 ) |− ∃x 5 , x 6 , x 7 , x 8 , x 9 , x 10 , x 11 , n 2 .(dll(x 5 , x 6 , x 7 , x 8 , n 2 ) ∗ x 9 7→x 10 , x 11 )
5.3 Refining the lemma template's antecedent
Figure 12 presents the antecedent refinement procedure (RefineAnte), which aims to strengthen the antecedent of a lemma template Σ1 ∧ Π1 |− Σ2 with pure constraints over all its variables. Note that when RefineAnte is invoked for the first time by SynthesizeLemma (Figure 11, line 19), Π1 is set to true and the existential quantification ∃v® in the original template Σ1 |− ∃v®.Σ2 is removed, since the antecedent will be strengthened with constraints over all variables in the template.
Initially, RefineAnte creates an unknown entailment Σ1 ∧ Π1 ∧ U(u®) |− Σ2, where U(u®) is an unknown relation over all variables u® in Σ1, Σ2, Π1 (lines 23, 24). Then, it proves this entailment by induction and collects a set A of unknown assumptions, if any, about U(u®) (line 25). In this proof derivation (and in the other invocations of Prove during lemma synthesis), we prevent the proof system from synthesizing new lemmas (by passing NoSyn) to avoid initiating a new lemma synthesis phase inside the current one, i.e., to prohibit nested lemma synthesis. In addition, the assumption set A will be examined, via the procedure Solve, to discover a solution of U(u®) (lines 27, 28). We implement in Solve a constraint solving technique using Farkas' lemma [Colón et al. 2003; Schrijver 1986]. This technique assigns to U(u®) a predefined linear arithmetic formula with unknown coefficients, and applies Farkas' lemma to transform A into a set of constraints involving only the unknown coefficients. Then, it utilizes an off-the-shelf prover such as Z3 [Moura and Bjørner 2008] to find a concrete model of the coefficients, thus obtaining the actual definition U(u®) of U(u®). We will describe this technique in Subsection 5.6.

Procedure RefineAnte(L, Σ1 ∧ Π1 |− Σ2)
  Input: Σ1 ∧ Π1 |− Σ2 is a lemma template, L is a valid lemma set
  Output: a set of at most one synthesized lemma
  Requires: valid(L)
  Ensures: (res = ∅) ∨ ∃Π1′.(res = {Σ1 ∧ Π1 ∧ Π1′ |− Σ2} ∧ valid(res))
  23: u® ← FreeVars(Σ1, Σ2, Π1)
  24: U(u®) ← CreateUnknownRelation(u®)
  25: r, _, A ← Prove(∅, L, Σ1 ∧ Π1 ∧ U(u®) |− Σ2, NoSyn)
  26: if r = Valid⟨_⟩ then
  27:   for each A′ in SuperSet(A) do
  28:     U(u®) ← Solve(A′, U(u®))
  29:     if Σ1 ∧ Π1 ∧ U(u®) ≢ false then
  30:       L ← (Σ1 ∧ Π1 ∧ U(u®) |− Σ2)
  31:       r′, _, _ ← Prove(∅, L, L, NoSyn)      // Verify ...
  32:       if r′ = Valid⟨ξ⟩ and IH ∈ ξ then       // ... and return ...
  33:         return {L}                            // ... the first inductive lemma,
  34:       else return RefineAnte(L, L)            // or continue refining.
  35: return ∅                                      // No lemmas are found.
Fig. 12. Refining a lemma template's antecedent
Furthermore, we aim to find a non-spurious solution U(u®), one which does not refute the antecedent, i.e., Σ1 ∧ Π1 ∧ U(u®) ≢ false, to avoid creating a useless lemma false |− Σ2 (line 29). To examine this refutation, we follow the literature [Brotherston et al. 2014; Le et al. 2016] and implement an unsatisfiability checking algorithm, which over-approximates a symbolic-heap formula by a pure formula and invokes the off-the-shelf prover (such as Z3) to check the pure formula's unsatisfiability, thus concluding the unsatisfiability of the original formula.
In general, discovering a non-spurious solution U(u®) is challenging, because:
– The assumption set A can be complicated, since the parameters u® of the unknown relation U(u®) are all variables in the lemma template. This complexity can easily overwhelm the underlying prover when finding the model of the corresponding unknown coefficients.
– The discovered proof tree of the unknown entailment might not be similar to the actual proof tree of the desired lemma, due to the occurrence of the unknown relation. Therefore, the set A might contain noise assumptions about U(u®), which result in a spurious solution. Nonetheless, a part of the expected solution can still be discovered from a subset of A, which corresponds to the common part of the discovered and the desired proof trees.
The above challenges inspire us to design an exhaustive approach to solving A (line 27). In particular, RefineAnte first solves the entire set A (the first element in SuperSet(A)) to find a complete solution, which satisfies all assumptions in A. If such a solution is not available, RefineAnte iteratively examines each subset of A (among the remaining elements in SuperSet(A)) to discover a partial solution, which satisfies some of the assumptions in A.
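The exhaustive order in which RefineAnte examines A can be pictured with a small Python sketch of a SuperSet-style enumeration; this is only an illustration, not the authors' code, and the name superset_order is ours. The full set is produced first, followed by its non-empty proper subsets in decreasing size, matching the search for a complete solution before partial ones.

    from itertools import combinations

    def superset_order(assumptions):
        # yield the full assumption set first, then its non-empty proper subsets by decreasing size
        n = len(assumptions)
        for size in range(n, 0, -1):
            for subset in combinations(assumptions, size):
                yield list(subset)

    # For three assumptions, the order is {1,2,3}, {1,2}, {1,3}, {2,3}, {1}, {2}, {3}
    for candidate in superset_order(["asm1", "asm2", "asm3"]):
        print(candidate)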
Each discovered solution (complete or partial) is then verified to check whether it can form a valid inductive lemma. In particular, the proof system is invoked to prove the candidate lemma L (line 31). The verification is successful when L is proved valid and its witness proof tree ξ contains an induction hypothesis application (labeled by IH) (line 32). The latter condition ensures that the returned lemma is actually an inductive lemma.
We also follow an incremental approach to refine the lemma template's antecedent. That is, the antecedent is strengthened with the discovered solution U(u®) to derive a new template, and the new template is refined again until the first valid lemma is discovered (lines 32 – 34). Note that this refinement stops at the first solution to ensure that the discovered antecedent is as weak as possible. Finally, RefineAnte returns ∅ if no valid inductive lemma can be discovered (line 35).
For example, given the template T1 ≜ dllrev(x 1 , x 2 , x 3 , x 4 , n 1 ) |− ∃x 5 , x 6 , x 7 , x 8 , n 2 .dll(x 5 , x 6 , x 7 , x 8 , n 2 ),
RefineAnte creates an unknown relation U (x 1 , ..., x 8 , n 1 , n 2 ) and introduces the entailment Eu1 :
Eu1 ≜ dllrev(x 1 , x 2 , x 3 , x 4 , n 1 ) ∧ U (x 1 , ..., x 8 , n 1 , n 2 ) |− dll(x 5 , x 6 , x 7 , x 8 , n 2 )
Fig. 13. A possible proof tree of the unknown entailment Eu1, deriving the assumption sets A1, A2, and A3 about U (proof tree not reproduced). In the proof, U(x®, n®) ≡ U(x1, x2, x3, x4, x5, x6, x7, x8, n1, n2) and U(a®, b®) ≡ U(a1, a2, a3, a4, a5, a6, a7, a8, b1, b2); the rule ID is performed to record the induction hypothesis H ≜ dllrev(a1, a2, a3, a4, b1) ∧ U(a®, b®) |− dll(a5, a6, a7, a8, b2); the rule UIH applies H with θ = [x1/a1, x2/a2, u/a3, x3/a4, n1−1/b1].
Figure 13 presents a possible proof tree of Eu1 . From this proof, we obtain a set A = A1 ∪ A2 ∪ A3
of three unknown assumptions about the relation U (x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 , x 8 , n 1 , n 2 ):
(1) x 1 =x 3 ∧ n 1 =1 ∧ U (x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 , x 8 , n 1 , n 2 ) → x 1 =x 5 ∧ x 2 =x 6 ∧ x 4 =x 8 ∧ x 5 =x 7 ∧ n 2 =1
(2) U (x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 , x 8 , n 1 , n 2 ) → false
(3) U (x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 , x 8 , n 1 , n 2 ) → U (x 1 , x 2 , u, x 3 , a 5 , a 6 , a 7 , a 8 , n 1 −1, b2 )
Our framework first attempts to solve the full assumption set A. Unfortunately, there is only a
spurious solution U (x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 , x 8 , n 1 , n 2 ) ≡ false, since the assumption (2) is too strong.
It then tries to find another solution by partially solving the set A. In this case, it can discover the
following partial solution U when solving a subset containing only the assumption (1):
U (x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 , x 8 , n 1 , n 2 ) ≡ x 1 =x 5 ∧ x 2 =x 6 ∧ x 3 =x 7 ∧ x 4 =x 8 ∧ n 1 =n 2
Since the above is only a partial solution, the framework needs to verify whether it can construct
a valid inductive lemma. Indeed, U can be used to derive the following entailment Eu1 , which can
be simplified to become dllrev(x 1 , x 2 , x 3 , x 4 , n 1 ) |− dll(x 1 , x 2 , x 3 , x 4 , n 1 ). The latter entailment can be
equivalently transformed into the motivating lemma L1 by renaming its variables.
Eu1 ≜ dllrev(x 1 , x 2 , x 3 , x 4 , n 1 ) ∧ x 1 =x 5 ∧ x 2 =x 6 ∧ x 3 =x 7 ∧ x 4 =x 8 ∧ n 1 =n 2 |− dll(x 5 , x 6 , x 7 , x 8 , n 2 )
5.4 Refining the lemma template's consequent
The consequent refinement is presented in the procedure RefineConseq (Figure 14). Unlike the antecedent refinement, this refinement is not as straightforward as simply adding pure constraints to the template's consequent, since the existing template's antecedent might not be strong enough to prove any formula derived from the consequent. For example, all entailments derived by adding pure constraints to only the consequent of the lemma template T3 are invalid, because when n1=1 the list dll(x1, x2, x3, x4, n1) has length 1 and thus, by the definition of dll, cannot be split into a singleton heap x9 7→x10, x11 and a list dll(x5, x6, x7, x8, n2), whose length is at least 1.
T3 ≜ dll(x 1 , x 2 , x 3 , x 4 , n 1 ) |− ∃x 5 , x 6 , x 7 , x 8 , x 9 , x 10 , x 11 , n 2 .(dll(x 5 , x 6 , x 7 , x 8 , n 2 ) ∗ x 9 7→x 10 , x 11 )
To overcome this problem, we decompose the consequent refinement into two phases, preprocessing and fine-tuning. For an input lemma template Σ1 |− ∃v®.Σ2, in the first phase the framework infers a pure constraint Π1 so that the entailment Σ1 ∧ Π1 |− ∃v®.Σ2 is valid (lines 36, 37). In the second phase, it incrementally strengthens the consequent of the derived entailment until it discovers a valid inductive lemma (line 38). Note that we retain the existential quantification in this consequent, and any constraints added into the consequent will be bound by this quantification. These two phases are elaborated in detail as follows.

Procedure RefineConseq(L, Σ1 |− ∃v®.Σ2)
  Input: Σ1 |− ∃v®.Σ2 is a lemma template, L is a set of valid lemmas
  Output: a set of at most one synthesized lemma
  Requires: valid(L)
  Ensures: (res = ∅) ∨ ∃Π1, Π2.(res = {Σ1 ∧ Π1 |− ∃v®.(Σ2 ∧ Π2)} ∧ valid(res))
  36: T ← Preprocess(L, Σ1 |− ∃v®.Σ2)
  37: if T = {Σ1 ∧ Π1 |− ∃v®.Σ2} then
  38:   return FineTuneConseq(L, Σ1 ∧ Π1 |− ∃v®.(Σ2 ∧ true))
  39: return ∅
Fig. 14. Refining a lemma template's consequent
Preprocessing a lemma template.
Figure 15 presents our antecedent preprocessing procedure (Preprocess). Similarly to the antecedent refinement in Section 5.3, this procedure strengthens the antecedent of the template Σ1 |− ∃v®.Σ2 with a non-spurious condition Π1 to make the lemma template valid. We also prevent the framework from entering nested lemma synthesis phases by invoking Prove in the NoSyn mode (line 43). However, this preprocessing step differs from the antecedent refinement (Section 5.3) in that it creates an unknown entailment by introducing the unknown relation U(u®) over only the antecedent's free variables u® (lines 40, 41). We also equip the consequent ∃v®.Σ2 of the unknown entailment with a conjunction Πinv of the pure invariants of its inductive heap predicates (lines 42, 43). These invariants are pure formulas representing Boolean constraints over the predicates' variables. We will briefly describe the invariant construction in Section 5.5; this construction is also well studied in the separation logic literature [Brotherston et al. 2014; Chin et al. 2012; Le et al. 2016].
In theory, equipping additional pure invariants of inductive heap predicates neither weakens nor strengthens the lemma template's consequent, i.e., Σ2 ≡ Σ2 ∧ Πinv. In our approach, Preprocess solves the entire assumption constraint set A at once (line 45), and not incrementally as in the antecedent refinement (Section 5.3). Therefore, the additional pure invariant Πinv is useful for Preprocess to solve the entire set A more precisely and effectively.

Procedure Preprocess(L, Σ1 |− ∃v®.Σ2)
  Input: Σ1 |− ∃v®.Σ2 is a lemma template, L is a valid lemma set
  Output: a set of at most one refined template
  Requires: valid(L)
  Ensures: (res = ∅) ∨ ∃Π1.(res = {Σ1 ∧ Π1 |− ∃v®.Σ2} ∧ valid(res))
  40: u® ← FreeVars(Σ1)
  41: U(u®) ← CreateUnknownRelation(u®)
  42: Πinv ← ⋀ P(x®)∈Σ2 Invariant(P(x®))
  43: r, _, A ← Prove(∅, L, Σ1 ∧ U(u®) |− ∃v®.(Σ2 ∧ Πinv), NoSyn)
  44: if r = Valid⟨_⟩ then
  45:   U(u®) ← Solve(A, U(u®))
  46:   if Σ1 ∧ U(u®) ≢ false then return {Σ1 ∧ U(u®) |− ∃v®.Σ2}
  47: return ∅                       // No refined templates are found.
Fig. 15. Preprocessing a lemma template's antecedent

For example, given the template T3, Preprocess sets up an unknown relation U(x1, x2, x3, x4, n1) in the template's antecedent, and introduces the invariant n2 ≥ 1 of dll(x5, x6, x7, x8, n2) in the template's consequent to create the following unknown entailment Eu2.
Eu2 ≜ dll(x 1 , x 2 , x 3 , x 4 , n 1 ) ∧ U (x 1 , x 2 , x 3 , x 4 , n 1 ) |− ∃x 5, ...,11 , n 2 .(dll(x 5 , x 6 , x 7 , x 8 , n 2 ) ∗ x 9 7→x 10 , x 11 ∧ n 2 ≥1)
Fig. 16. A possible proof tree of the unknown entailment Eu2, deriving the assumption sets A1 and A2 about U (proof tree not reproduced), where A1 ≜ {x1=x3 ∧ n1=1 ∧ U(x1, x2, x3, x4, n1) → false} and A2 ≜ {U(x1, x2, x3, x4, n1) → ∃x5, . . ., x11, n2.(x1=x9 ∧ x2=x10 ∧ u=x11 ∧ u=x5 ∧ x1=x6 ∧ x3=x7 ∧ x4=x8 ∧ n1−1=n2 ∧ n2≥1)}.
This entailment will be proved by induction to collect constraints about the unknown relation U . We
present its detailed proof in Figure 16. Observe that the assumption constraint U (x 1 , x 2 , x 3 , x 4 , n 1 ) →
∃x 5 , . . . , x 11 , n 2 .(x 1 =x 9 ∧x 2 =x 10 ∧u=x 11 ∧u=x 5 ∧x 1 =x 6 ∧x 3 =x 7 ∧x 4 =x 8 ∧n 1 −1=n 2 ∧n 2 ≥1) in A2 can
be simplified by eliminating existentially quantified variables to become U (x 1 , x 2 , x 3 , x 4 , n 1 ) → n 1 ≥2.
Therefore, we can obtain a set A = A1 ∪ A2 containing the following unknown assumptions:
(1) x 1 =x 3 ∧ n 1 =1 ∧ U (x 1 , x 2 , x 3 , x 4 , n 1 ) → false
(2) U (x 1 , x 2 , x 3 , x 4 , n 1 ) → n 1 ≥2
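The quantifier elimination that simplifies A2 into the form of assumption (2) above can be reproduced with an off-the-shelf tactic. Below is a small, self-contained Python sketch using Z3's qe tactic; it only illustrates this simplification step and is not part of the authors' tool.

    from z3 import Ints, Exists, And, Tactic

    # free variables of the assumption's right-hand side (u comes from unfolding dll)
    x1, x2, x3, x4, u, n1 = Ints('x1 x2 x3 x4 u n1')
    # variables that are existentially quantified in the assumption
    x5, x6, x7, x8, x9, x10, x11, n2 = Ints('x5 x6 x7 x8 x9 x10 x11 n2')

    rhs = Exists([x5, x6, x7, x8, x9, x10, x11, n2],
                 And(x1 == x9, x2 == x10, u == x11, u == x5, x1 == x6,
                     x3 == x7, x4 == x8, n1 - 1 == n2, n2 >= 1))

    # eliminating the bound variables leaves a constraint equivalent to n1 >= 2
    print(Tactic('qe')(rhs))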
The procedure Preprocess can solve the full assumption constraint set A to discover a potential
solution U (x 1 , x 2 , x 3 , x 4 , n 1 ) ≡ n 1 ≥2, which allows T3 to be refined to a new lemma template T3′:
T3′ ≜ dll(x 1 , x 2 , x 3 , x 4 , n 1 ) ∧ n 1 ≥2 |− ∃x 5 , . . . , x 11 , n 2 .(dll(x 5 , x 6 , x 7 , x 8 , n 2 ) ∗ x 9 7→x 10 , x 11 )
Fine-tuning a lemma template’s consequent.
This step aims to further refine the consequent of the lemma template discovered in the preprocessing phase. The refinement is performed by the recursive procedure FineTuneConseq (Figure 17). Its input is a (refined) lemma template Σ1 ∧ Π1 |− ∃v®.(Σ2 ∧ Π2). When FineTuneConseq is invoked for the first time by RefineConseq, Π2 is set to true (Figure 14, line 38).
Initially, FineTuneConseq establishes an unknown relation U(u®) over all variables in the template and creates an unknown entailment Σ1 ∧ Π1 |− ∃v®.(Σ2 ∧ Π2 ∧ U(u®)) (lines 48, 49). Then, it proves the
unknown entailment by induction to collect a set A of unknown assumptions (line 50). Similarly to
the antecedent refinement (Section 5.3), A will be exhaustively solved: the entire set and its subsets
will be examined until a feasible solution is found to obtain a valid inductive lemma (lines 52 – 59).
We also prevent the proof system from entering nested lemma synthesis phases, controlled by the argument NoSyn, when Prove is invoked to collect the unknown assumption set A (line 50) or to verify the inductive lemma (line 55). The witness proof tree of a candidate lemma, if any, is examined for the occurrence of the induction hypothesis application rule (IH) to determine whether this candidate is a valid inductive lemma (line 56).
However, unlike the antecedent refinement, FineTuneConseq keeps refining the template until its consequent cannot be refined further (lines 56, 59). This repetition ensures that the returned lemma's consequent, if discovered, is as strong as possible.

Procedure FineTuneConseq(L, Σ1 ∧ Π1 |− ∃v®.(Σ2 ∧ Π2))
  Input: Σ1 ∧ Π1 |− ∃v®.(Σ2 ∧ Π2) is a lemma template, L is a valid lemma set
  Output: a set of at most one synthesized lemma
  Requires: valid(L)
  Ensures: (res = ∅) ∨ ∃Π2′.(res = {Σ1 ∧ Π1 |− ∃v®.(Σ2 ∧ Π2 ∧ Π2′)} ∧ valid(res))
  48: u® ← FreeVars(Σ1, Σ2, Π1, Π2)
  49: U(u®) ← CreateUnknownRelation(u®)
  50: r, _, A ← Prove(∅, L, Σ1 ∧ Π1 |− ∃v®.(Σ2 ∧ Π2 ∧ U(u®)), NoSyn)
  51: if r = Valid⟨_⟩ then
  52:   for each A′ in SuperSet(A) do
  53:     U(u®) ← Solve(A′, U(u®))
  54:     L ← (Σ1 ∧ Π1 |− ∃v®.(Σ2 ∧ Π2 ∧ U(u®)))
  55:     r′, _, _ ← Prove(∅, L, L, NoSyn)           // Verify ...
  56:     if r′ = Valid⟨ξ⟩ and IH ∈ ξ then            // ... the inductive lemma, and ...
  57:       Lsyn ← FineTuneConseq(L, L)               // ... strengthen its consequent.
  58:       if Lsyn = ∅ then return {L}
  59:       else return Lsyn
  60: return ∅                                        // No lemmas are found.
Fig. 17. Fine-tuning the consequent of a lemma template
For example, given the preprocessed template T3′, FineTuneConseq creates an unknown relation U(x1, . . . , x11, n1, n2), or U(x®, n®) for short, involving all variables in T3′, and constructs the unknown entailment Eu3. This entailment will be proved again to collect constraints about U(x®, n®).
Eu3 ≜ dll(x1, x2, x3, x4, n1) ∧ n1≥2 |− ∃x5, . . . , x11, n2.(dll(x5, x6, x7, x8, n2) ∗ x9 7→x10, x11 ∧ U(x®, n®))
Fig. 18. A partial proof tree of the unknown entailment Eu3, deriving the assumption sets A1, A2, A3, A4 about U(x®, n®) (proof tree not reproduced). In particular, A3 ≜ {n1≥2 ∧ r=x3 ∧ n1−2=1 → ∃x5, . . ., x11, n2, v.(U(x®, n®) ∧ x1=x5 ∧ x2=x6 ∧ u=v ∧ v=x7 ∧ r=x8 ∧ r=x9 ∧ u=x10 ∧ x4=x11 ∧ n2−1=1)}.
We present a partial proof tree of Eu3 in Figure 18, from which FineTuneConseq is able to collect a set of unknown assumptions A = A1 ∪ A2 ∪ A3 ∪ A4. More specifically, A3 ≜ {n1≥2 ∧ r=x3 ∧ n1−2=1 → ∃x5, . . ., x11, n2, v.(U(x®, n®) ∧ x1=x5 ∧ x2=x6 ∧ u=v ∧ v=x7 ∧ r=x8 ∧ r=x9 ∧ u=x10 ∧ x4=x11 ∧ n2−1=1)} is an assumption subset derived from a potential proof path of an inductive lemma, whereas A1, A2, A4 relate to other proof paths. The set A can be partially solved by considering only A3. Here, a possible solution of A3 is Up(x1, ..., x11, n1, n2) ≡ x1=x5 ∧ x2=x6 ∧ x3=x8 ∧ x4=x11 ∧ x7=x10 ∧ x8=x9. This solution can be substituted back into Eu3 to derive the refined lemma template T3′′:
T3′′ ≜ dll(x1, x2, x3, x4, n1) ∧ n1≥2 |− ∃x7, n2.(dll(x1, x2, x7, x3, n2) ∗ x3 7→x7, x4)
Then, FineTuneConseq establishes another unknown relation U′(x1, x2, x3, x4, x7, n1, n2) over all variables of T3′′ and creates a new unknown entailment, which is proved again to find a solution U′p(x1, x2, x3, x4, x7, n1, n2) ≡ n1=n2+1. This solution helps to refine the template T3′′ and obtain an inductive lemma T3 ≜ dll(x1, x2, x3, x4, n1) ∧ n1≥2 |− ∃x7.(dll(x1, x2, x7, x3, n1−1) ∗ x3 7→x7, x4), which can be equivalently transformed to the motivating lemma L3 in Section 2.
5.5 Inferring pure invariants of inductive heap predicates
We present in this subsection the construction of inductive heap predicates' invariants, which are mainly utilized to preprocess the lemma templates' antecedents (Section 5.4). Our construction is inspired by the separation logic literature [Brotherston et al. 2014; Chin et al. 2012; Le et al. 2016].
Definition 5.2 (Invariant of inductive heap predicates). Given an inductive heap predicate P(x®), a pure formula Invariant(P(x®)) is called an invariant of P(x®) iff s, h |= P(x®) implies that s |= Invariant(P(x®)), for all models s, h. Formally, ∀s, h. (s, h |= P(x®) → s |= Invariant(P(x®))).
Constructing pure invariants. We also exploit a template-based approach to discover the pure invariants. For a system of k (mutually) inductive heap predicates P1(x®1), . . . , Pk(x®k), we first create k unknown relations UP1(x®1), . . . , UPk(x®k) to respectively represent their desired invariants. Then, we follow the predicates' definitions to establish constraints about the unknown relations.
In particular, consider each definition case Fij(x®i) of a predicate Pi(x®i), which is of the form Fij(x®i) ≜ ∃u®.(Σij ∧ Πij). Suppose that Q1(u®1), . . ., Qn(u®n) are all the inductive heap predicates in Σij, where Q1, . . ., Qn ∈ {P1, . . ., Pk}; we create the following unknown assumption:
UQ1(u®1) ∧ . . . ∧ UQn(u®n) ∧ Πij → UPi(x®i)
Thereafter, the system of all unknown assumptions can be solved by the constraint solving technique based on Farkas' lemma (Section 5.6) to discover the actual invariants of P1(x®1), . . . , Pk(x®k).
For example, given the predicate dll(hd, pr, tl, nt, len) in Section 2, whose definition is
dll(hd, pr, tl, nt, len) ≜ (hd 7→pr, nt ∧ hd=tl ∧ len=1) ∨ ∃u.(hd 7→pr, u ∗ dll(u, hd, tl, nt, len−1))
we create an unknown relation Udll(hd, pr, tl, nt, len) representing its invariant and follow the definition of dll(hd, pr, tl, nt, len) to establish the following constraints:
(1) hd=tl ∧ len=1 → Udll(hd, pr, tl, nt, len)
(2) Udll(u, hd, tl, nt, len−1) → Udll(hd, pr, tl, nt, len)
We then solve these constraints to obtain the solution Udll(hd, pr, tl, nt, len) ≡ len ≥ 1, which is also the invariant of dll(hd, pr, tl, nt, len).
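The invariant above can also be found with an off-the-shelf solver. The following small Python sketch with Z3 is only an illustration of the idea, not the authors' implementation: instead of the Farkas-based elimination used by the framework, it fixes a one-conjunct template len + d0 ≥ 0 (the coefficient of len is fixed to 1 for simplicity) and asks Z3 directly for a value of the unknown coefficient d0 satisfying the two constraints above as quantified implications. Any model with d0 ≥ −1 yields a valid invariant; d0 = −1 corresponds to the tightest one, len ≥ 1.

    from z3 import Int, Solver, ForAll, Implies, sat

    d0 = Int('d0')              # unknown coefficient of the invariant template: len + d0 >= 0
    ln = Int('len')

    def U(e):
        return e + d0 >= 0      # the invariant template instantiated at e

    s = Solver()
    # constraint (1): the base case of dll implies the invariant
    s.add(ForAll([ln], Implies(ln == 1, U(ln))))
    # constraint (2): the invariant of the recursive occurrence implies the invariant
    s.add(ForAll([ln], Implies(U(ln - 1), U(ln))))

    if s.check() == sat:
        print(s.model())        # a model such as d0 = -1, i.e., the invariant len >= 1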
5.6 Solving assumption constraints with Farkas' lemma
In this subsection, we describe the underlying constraint solving technique based on Farkas’ lemma.
This technique is implemented in the procedure Solve, which is frequently invoked in the lemma
synthesis to solve an unknown assumption set (Sections 5.3, 5.4, and 5.5). We will formally restate
Farkas’ lemma and explain how it is applied in constraint solving.
Farkas' lemma [Schrijver 1986]. Given a conjunction of linear constraints ⋀j=1..m (∑i=1..n aij·xi + bj ≥ 0), which is satisfiable, and a linear constraint ∑i=1..n ci·xi + γ ≥ 0, we have:
∀x1 . . . xn. ( ⋀j=1..m (∑i=1..n aij·xi + bj ≥ 0) → ∑i=1..n ci·xi + γ ≥ 0 )
iff
∃λ1 . . . λm ≥ 0. ( ⋀i=1..n (ci = ∑j=1..m λj·aij) ∧ ∑j=1..m λj·bj ≤ γ )
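As a small concrete illustration of how the lemma eliminates the universal quantifier, consider the valid implication (x ≥ 0) ∧ (y − x ≥ 0) → y ≥ 0. The Python sketch below (using Z3; purely illustrative and not part of the authors' tool) searches for the non-negative multipliers λ1, λ2 promised by Farkas' lemma; the model λ1 = λ2 = 1 certifies the implication without reasoning about x and y at all.

    from z3 import Reals, Solver, sat

    # premises: 1*x + 0*y + 0 >= 0  and  (-1)*x + 1*y + 0 >= 0;  conclusion: 0*x + 1*y + 0 >= 0
    l1, l2 = Reals('l1 l2')
    s = Solver()
    s.add(l1 >= 0, l2 >= 0)
    s.add(0 == l1 * 1 + l2 * (-1))    # coefficient of x in the conclusion
    s.add(1 == l1 * 0 + l2 * 1)       # coefficient of y in the conclusion
    s.add(l1 * 0 + l2 * 0 <= 0)       # sum of lambda_j * b_j must not exceed gamma
    if s.check() == sat:
        print(s.model())              # e.g., l1 = 1, l2 = 1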
Solving constraints. Given an assumption set A of an unknown relation U(x1, . . . , xm, u1, . . . , un), where x1, . . . , xm are spatial variables and u1, . . . , un are integer variables, the procedure Solve aims to find an actual definition U of U in the following template form, where cij, dij are unknown coefficients and M, N are pre-specified numbers of conjuncts:
U(x1, . . . , xm, u1, . . . , un) ≜ ⋀j=1..M (∑i=1..m cij·xi + c0j ≥ 0) ∧ ⋀j=1..N (∑i=1..n dij·ui + d0j ≥ 0)
Recall that our lemma synthesis framework can incrementally discover the final solution U in several passes. Hence, we can set M, N small, e.g., 1≤M,N≤3, or M=0, N≤6 (no spatial constraints), or M≤6, N=0 (no arithmetic constraints), to make the constraint solving more effective. We restrict the coefficients cij so that they represent equality or disequality constraints of spatial variables, e.g., xk=xl, xk≠xl, xk=0, and xk≠0, where 0 denotes nil. The constraint xk=xl can be encoded as xk−xl≥0 ∧ −xk+xl≥0, and the encoding of xk≠xl is xk−xl−1≥0. Therefore, it is required that −1≤cij≤1 for i>0, and c0j=0 (for equalities) or c0j=−1 (for disequalities). We also add the restrictions −1 ≤ ∑i=1..m cij ≤ 1 and 1 ≤ ∑i=1..m |cij| ≤ 2 to ensure that the spatial constraints involve at most two variables.
In summary, the assumption set A of the unknown relation U can be solved in three steps (a small illustrative sketch follows the list):
– Normalize assumptions in the set A into Horn-clause form, and substitute all occurrences
of U in the normalized assumptions by its template to obtain a set of normalized constraints.
– Apply Farkas’ lemma to eliminate universal quantification to obtain new constraints with only
existential quantification over the unknown coefficients c i j , di j and the factors λ j .
– Use an off-the-shelf prover, such as Z3, to find a model of the unknown coefficients from their
constraints, and replace them back in the template to discover the actual definition of U .
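To make the three steps concrete, the sketch below (Python with Z3; purely illustrative and not the authors' implementation) runs this recipe on a single toy assumption U(n1) → n1 ≥ 2 with a one-conjunct arithmetic template d1·n1 + d0 ≥ 0. After Farkas' lemma is applied, the remaining constraints mention only the multiplier λ and the unknown coefficients d1, d0; they contain products of unknowns, which is why an SMT solver is used to find a model. A possible model is λ = 1, d1 = 1, d0 = −2, i.e., the definition n1 − 2 ≥ 0.

    from z3 import Reals, Solver, sat

    # Step 1: the normalized Horn assumption with the template substituted for U is
    #           d1*n1 + d0 >= 0  ->  n1 - 2 >= 0       (universally quantified over n1)
    # Step 2: Farkas' lemma turns it into constraints over lam, d1, d0 only:
    lam, d1, d0 = Reals('lam d1 d0')
    s = Solver()
    s.add(lam >= 0)
    s.add(1 == lam * d1)        # coefficient of n1 in the conclusion
    s.add(lam * d0 <= -2)       # constant terms
    # Step 3: an off-the-shelf prover finds a model of the coefficients
    if s.check() == sat:
        print(s.model())        # e.g., lam = 1, d1 = 1, d0 = -2, i.e., U(n1) is n1 - 2 >= 0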
5.7 Soundness of the lemma synthesis framework
We claim that our lemma synthesis framework is sound, that is, all lemmas synthesized by the
framework are semantically valid. We formally state the soundness in the following Theorem 5.3.
Theorem 5.3 (Soundness of the lemma synthesis). Given a normal goal entailment E which does not contain any unknown relation and a set of valid input lemmas L, if the lemma synthesis procedure SynthesizeLemma returns a set of lemmas Lsyn, then all lemmas in Lsyn are semantically valid.
Proof. Figure 11 shows that all lemmas returned by SynthesizeLemma are directly discovered
by RefineAnte and RefineConseq. In addition, all lemmas returned by RefineConseq are realized by FineTuneConseq (Figure 14, line 38). Moreover, all lemmas discovered by RefineAnte and
FineTuneConseq are verified by Prove in a setting that disables the lemma synthesis (NoSyn) and
utilizes the valid lemma set L (Figure 12, line 31 and Figure 17, line 55). It follows from Proposition
4.2 that if Prove returns Valid⟨_⟩ when verifying a lemma, then the lemma is semantically valid.
Consequently, all lemmas returned by SynthesizeLemma are semantically valid.
□
6 EXPERIMENTS
We have implemented the lemma synthesis framework into a prototype prover named SLS (built on top of the existing prover Songbird [Ta et al. 2016]; the name stands for "Songbird + Lemma Synthesis"), and have conducted two experiments to evaluate its ability to prove entailments. Both the prover and the experiment details are available online at https://songbird-prover.github.io/lemma-synthesis.
Table 1. Evaluation on the existing entailment benchmarks, where participants are Slide (Sld), Sleek (Slk),
Spen (Spn), Cyclist (Ccl), Songbird (Sbd) and our prototype prover SLS
slrd_indt
slrd_entl
sll_entl
Benchmark
Proved Entailments
Total Proving Time (s)
Average Time (s)
Lemma Syn
Lemma App
Category
#En Sld Slk Spn Ccl Sbd Sls Sld Slk Spn Ccl Sbd Sls Sld Slk Spn Ccl Sbd Sls #Lm T (s) A (s) O (%) #Cv #Sp #Cb
bolognesa 57 0 0 57 0 57 57 0.0 0.0 23.6 0.0 140.3 18.5 – – 0.41 – 2.46 0.32 0 0.0 – 0.0
0
0 0
clones
60 0 60 60 60 60 60 0.0 3.7 3.5 0.5 3.9 0.7 – 0.06 0.06 0.01 0.07 0.01 0 0.0 – 0.0
0
0 0
smallfoot 54 0 54 54 54 54 54 0.0 2.7 2.5 11.8 3.5 4.2 – 0.05 0.05 0.22 0.06 0.08 0 0.0 – 0.0
0
0 0
singly-ll
64 12 48 3 63 64 64 1.0 6.3 0.1 2.1 8.3 1.7 0.08 0.13 0.04 0.03 0.13 0.03 0 0.0 – 0.0
0
0 0
doubly-ll 37 14 17 9 29 25 35 38.3 3.1 0.4 112.5 11.3 91.9 2.74 0.18 0.04 3.88 0.45 2.63 18[10] 51.2 2.8 55.7 8 (68) 0 2 (2)
nested-ll 11 0 5 11 7 11 11 0.0 2.2 0.5 16.7 2.3 0.4 – 0.44 0.04 2.38 0.21 0.04 0 0.0 – 0.0
0
0 0
skip-list
13 0 4 13 5 13 13 0.0 1.1 1.1 0.6 8.1 1.3 – 0.27 0.08 0.11 0.63 0.10 0 0.0 – 0.0
0
0 0
tree
26 12 14 0 23 23 24 110.0 2.9 0.0 58.8 11.5 2.2 9.16 0.21 – 2.55 0.50 0.09 0 0.0 – 0.0
0
0 0
ll/ll2
24 0 0 0 24 24 24 0.0 0.0 0.0 60.2 11.9 78.5 – – – 2.51 0.50 3.27 14[10] 12.6 0.9 16.1 4 (4) 0 6 (11)
ll-even/odd 20 0 0 0 20 20 20 0.0 0.0 0.0 34.6 50.3 1.3 – – – 1.73 2.52 0.07 0 0.0 – 0.0
0
0 0
ll-left/right 20 0 0 0 20 20 20 0.0 0.0 0.0 19.5 10.6 1.3 – – – 0.97 0.53 0.06 0 0.0 – 0.0
0
0 0
misc.
32 0 0 0 31 32 32 0.0 0.0 0.0 254.2 55.0 19.5 – – – 8.20 1.72 0.61 2[2] 1.3 0.7 6.7 2 (3) 0 0
Total
418 38 202 207 336 403 414 149.2 21.9 31.6 571.5 317.2 221.6 3.93 0.11 0.15 1.70 0.79 0.54 34[22] 65.2 1.9 29.4 14 (75) 0 8 (13)
The first experiment was done on literature benchmarks, which include sll_entl, slrd_entl from
the separation logic competition SL-COMP 2014 [Sighireanu and Cok 2016], and slrd_indt by Ta
et al. [2016]. These benchmarks focus on representing the data structures' shapes; hence, their pure formulas solely contain equality constraints among spatial variables. Therefore, we decided to compose a richer entailment benchmark slrd_lm, which also captures arithmetic constraints on the data structures' size and content, and then performed the second experiment. Moreover, we only
considered entailments that are marked as valid. Our experiments were conducted on an Ubuntu 14.04 LTS machine with an Intel® Core™ i7-6700 CPU (3.4GHz) and 16GB of RAM. We compared SLS
against state-of-the-art separation logic provers, including Slide [Iosif et al. 2013], Sleek [Chin
et al. 2012], Spen [Enea et al. 2014], Cyclist [Brotherston et al. 2011] and Songbird [Ta et al. 2016].
These provers were configured to run with a timeout of 180 seconds for each entailment.
We present the first experiment in Table 1. The benchmark sll_entl relates only to singly linked lists and is categorized by its original sub-benchmarks. The two benchmarks slrd_entl and slrd_indt
are classified based on the related data structures, which are variants of linked lists and trees. We
report in each category the number of entailments successfully proved by each prover, where the
best results are in bold (the total number of entailments is shown in the column #En). We also
report the total and the average time in seconds of each prover for all proved entailments. To be fair, time spent on unproved entailments is not considered, since a prover might spend up to the timeout of 180 seconds for each such entailment. We also provide statistics of how lemmas are synthesized and applied by SLS. The numbers of lemmas synthesized and used are presented in the column #Lm, where x[y] means that in total x lemmas are synthesized, and only y of them are successfully
used. The total and the average time spent on synthesizing all lemmas are displayed in the columns
T and A. The synthesis overhead is shown in the column O, which is the percentage (%) of the total
synthesizing time over the total proving time. We also classify the synthesized lemmas into three
groups of conversion, split, and combination lemmas (#Cv, #Sp, and #Cb), in a spirit similar to the
motivating lemmas L 1 , L 2 and L 3 (Section 2). The number x(y) in each group indicates that there
are x lemmas applied, and they are repeatedly applied y times.
Table 1 shows that SLS can prove most of the entailments in existing benchmarks (414/418 ≈ 99%)
in an average of 0.54 seconds per entailment, which is 1.46 times faster than the second best prover Songbird. Moreover, SLS outperforms the third best prover Cyclist in both the number of proved entailments (78 more entailments) and the average proving time (3.15 times faster).
Other provers like Sleek, Spen and Slide can prove no more than 50% of the entailments that
SLS can prove. In addition, SLS achieves the best results in all categories, which demonstrates
Table 2. Evaluation on the benchmark slrd_lm, where • marks categories with arithmetic constraints
Benchmark slrd_lm
Category
#En
ll/rev
20
ll-even/odd
17
ll/rev+arith•
72
ll-sorted•
25
dll/rev/null
22
dll/rev/null+arith• 94
ll/dll-mixed
5
ll/dll-mixed+arith• 15
tree/tseg
18
tree/tseg+arith•
12
Total
300
Proved Entails
Ccl Sbd Sls
17 18
20
17 17 17
0 10
72
0 3
19
22 22 22
0 17
94
5 5
5
0 5
15
7 7
18
0 3
12
68 107 294
Proving Time (s)
Ccl Sbd Sls
446.5 87.1 205.0
3.5 0.6
0.4
0.0 0.4 203.2
0.0 0.1
8.7
8.4 17.6 149.8
0.0 27.6 2480.6
0.4 0.1
0.2
0.0 0.4 328.3
1.3 0.3 135.0
0.0 3.5 513.7
460.0 137.7 4024.8
Average Time (s)
Ccl Sbd Sls
26.27 4.84 10.25
0.21 0.03 0.02
– 0.04 2.82
– 0.03 0.46
0.38 0.80 6.81
– 1.63 26.39
0.07 0.03 0.03
– 0.08 21.89
0.18 0.04 7.50
– 1.15 42.81
6.76 1.29 13.69
Lemma Synthesis
#Lm
T (s) A (s) O (%)
28 [20] 8.1 0.3 3.9
0
0.0
– 0.0
81 [81] 50.9 0.6 25.1
14 [14] 4.9 0.3 55.8
12 [7] 59.1 4.9 39.5
236 [158] 1902.7 8.1 76.7
0
0.0
– 0.0
12 [10] 227.2 18.9 69.2
18 [12] 3.8 0.2 2.8
20 [16] 211.2 10.6 41.1
421 [318] 2467.7 5.9 61.3
Lemma Application
#Cv
#Sp
#Cb
18 (25)
0
2 (2)
0
0
0
0
81 (143)
0
0
0
14 (30)
7 (8)
0
0
74 (74) 69 (103) 15 (15)
0
0
0
8 (8)
2 (2)
0
8 (11)
0
4 (5)
8 (8)
4 (5)
4 (4)
123 (134) 156 (253) 39 (56)
the effectiveness of our lemma synthesis technique. Regarding the 4 entailments that SLS cannot
prove, they are related to the doubly linked list or the tree data structures (doubly-ll/tree), where
the needed lemmas are too complicated to be synthesized within a timeout of 180 seconds.
In the first experiment, the lemma synthesis is only triggered in some categories with an overhead
of 29.4%. Although this overhead is considerable, it does not overwhelm the overall proving process
since much of the proving time is saved via the lemma application. The lemma synthesis’s efficacy,
determined by the ratio of the number of lemmas applied to the number of lemmas synthesized, is
about 65% (22 lemmas used out of the 34 lemmas synthesized). More interestingly, these 22 lemmas were applied a total of 88 times. This fact implies that a lemma can be (re)used multiple times. In this experiment, the conversion lemmas are applied more often than the other kinds of lemmas (75/88 applications).
In the second experiment, we apply the three best provers from the first experiment, i.e., Cyclist, Songbird, and SLS, to the more challenging benchmark slrd_lm. This benchmark was constructed
by enhancing the existing inductive heap predicates with richer numeric properties of the data
structures’ size and content. These new predicates enable more challenging entailments to be
designed, such as the motivating entailment E 1 from Section 2.
E 1 ≜ dllrev(x, y, u, v, n) ∗ dll(v, u, z, t, 200) ∧ n≥100 |− ∃r .(dll(x, y, r , z, n+199) ∗ z7→r , t)
Details of the second experiment are presented in Table 2. Entailments in the benchmark slrd_lm are
categorized based on the participating heap predicates and their numeric properties. To minimize
the table’s size, we group various heap predicates related to the same data structure in one category.
For example, the category dll/rev/null contains entailments about doubly linked lists, including
normal lists (dll), reversed lists (dllrev), or null-terminated lists (dllnull). Entailments in the category
ll/dll-mixed involve both the singly and the doubly linked lists.
Table 2 shows that SLS outperforms the other provers in all categories. In total, SLS can prove 98% of
the benchmark problems (294/300 entailments), which is 2.7 times better than the second best prover
Songbird (107/300 entailments). More impressively, in the 5 categories with arithmetic constraints,
marked by •, SLS can prove more than 5.5 times the number of entailments that Songbird can
(212 vs. 38 entailments). On the other hand, Cyclist performs poorly in this experiment because it
does not support numeric properties yet. However, among 82 non-arithmetic entailments, Cyclist
can prove 68 entailments (about 80%), whereas SLS can prove all of them.
In the second experiment, the lemma synthesis overhead is higher (61.3%) since there are many
lemmas synthesized (421). However, the overall efficacy is also improved (75.5%), as 318 synthesized lemmas are actually used to prove goal entailments. It is worth noticing that 100% of the lemmas synthesized in the two arithmetic categories ll/rev+arith and ll-sorted are utilized; this shows the
usefulness of our proposed framework in proving sophisticated entailments. In this experiment, the
split lemmas are synthesized (49.1% of the total synthesized lemmas) and prominently used (57% of the total number of lemma applications). This interesting fact shows that the slrd_lm benchmark,
though handcrafted, was well designed to complement the existing benchmarks.
7 RELATED WORK
There have been various approaches proposed to prove separation logic entailments. A popular
direction is to restrict the inductive heap predicates to certain classes such as: predicates whose
syntax and semantics are defined beforehand [Berdine et al. 2004, 2005b; Bozga et al. 2010; Pérez and
Rybalchenko 2011, 2013; Piskac et al. 2013, 2014], predicates describing variants of linked lists [Enea
et al. 2014], or predicates satisfying a particular bounded tree width property [Iosif et al. 2013, 2014].
These restrictions enable the invention of practical and effective entailment proving techniques.
However, these predicate classes cannot model sophisticated constraints of data structures, which
involve not only the shape but also the size or the content, like the predicates dll and dllrev in
Section 2. In addition, the existing techniques are tied to fixed sets of inductive heap predicates and cannot be automatically extended to handle new predicates; such an extension requires extra effort.
Another research direction is to focus on a broader class of user-defined inductive heap predicates.
In particular, Chin et al. [2012] proposed a proof system based on the unfold-and-match technique:
heap predicates in a goal entailment can be unfolded by their definitions to produce possibly
identical predicates in the antecedent and consequent, which can be matched and removed to derive
simpler sub-goal entailments. However, an inductive heap predicate can be unfolded infinitely often, which leads to an infinite derivation of an entailment proof. To deal with such a situation,
Brotherston et al. [2011] and Chu et al. [2015] proposed inspiring techniques respectively based
on cyclic and induction proofs where the infinite unfolding sequences can be avoided by induction
hypothesis applications. In their works, the induction hypotheses are discovered directly from the
candidate entailments; they might not be sufficiently general to prove sophisticated entailments.
Therefore, these entailments’ proofs often require the supporting lemmas, such as L 1 , L 2 , L 3 (Section
2). These lemmas are also needed by other non-induction based verification systems, such as [Chin
et al. 2012; Qiu et al. 2013]. At present, these systems require users to manually provide the lemmas.
To the best of our knowledge, there have been two approaches aiming to automatically discover
the supporting lemmas. The first approach is the mutual induction proof presented in [Ta et al.
2016]. This work speculates lemmas from all entailments which are already derived in an ongoing induction proof to assist in proving future entailments introduced within the same proof.
This speculation provides more lemma/induction hypothesis candidates than the cyclic-based
[Brotherston et al. 2011] and induction-based [Chu et al. 2015] techniques. Consequently, it can
increase the chance of successfully proving the entailments. However, the mutual induction proof
cannot handle sophisticated entailments, such as E 1 in Section 2. All entailments derived from E 1
may contain specific constraints and cannot be applied to prove other derived entailments.
The second approach is the lemma generation presented in [Enea et al. 2015]. This work considers
an interesting class of inductive heap predicates satisfying the notions of compositionality and completion. Under these properties, a class of lemmas can be enumerated beforehand, either to convert
inductive predicates of the same arity, or to combine two inductive predicates. However, this technique cannot generate lemmas which convert predicates of different arities, or combine a singleton
heap predicate with an inductive heap predicate, or split an inductive heap predicate into other predicates. In addition, an inductive heap predicate satisfying the compositionality property has exactly
one base-case definition, whose heap part is also empty. Moreover, each inductive-case definition
must contain a singleton heap predicate whose root address is one of the inductive heap predicate’s
arguments, like the compositional predicate ls(x, y) ≜ (x=y) ∨ ∃u.(x 7→u ∗ ls(u, y)). This technique, therefore, cannot generate lemmas for predicates with non-empty base cases, e.g., dll and dllrev in Section 2, or lemmas for predicates defined in a reverse fashion, like the non-compositional predicate lsrev(x, y) ≜ (x=y) ∨ ∃u.(lsrev(x, u) ∗ u 7→y). These reverse-fashion predicates are prevalent in SL-COMP 2014's benchmarks, such as RList, ListO, DLL_plus, DLL_plus_rev, DLL_plus_mid.
In the synthesis context, there is an appealing approach called SyGuS, i.e., Syntax-Guided Synthesis [Alur et al. 2015]. This approach aims to infer computer programs satisfying certain restrictions on their syntax and semantics (constrained, respectively, by a context-free grammar and an SMT formula). Techniques following SyGuS often operate with a learning phase, which proposes a candidate program, and a verification phase, which checks the proposal against the semantic restriction. To some extent, our lemma synthesis approach is similar to SyGuS since we also discover potential lemmas and verify their validity. However, we focus on synthesizing separation logic lemmas, not computer programs. Our syntactic restriction is more goal-directed since the lemmas are controlled by specific templates, unlike the context-free-grammar restriction of SyGuS. Moreover, our semantic restriction cannot be represented by an SMT formula since we require that if a lemma can be proved valid, its proof must contain an induction hypothesis application. Therefore, we
believe that the induction proof is necessary in both the lemma discovery and verification phases.
This proof technique is currently not supported by any SyGuS-based approaches.
8 CONCLUSION
We have proposed a novel framework for synthesizing lemmas to assist in proving separation logic
entailments in the fragment of symbolic-heap separation logic with inductive heap predicates and
linear arithmetic. Our framework is able to synthesize various kinds of inductive lemmas, which
help to modularize the proofs of sophisticated entailments. The synthesis of inductive lemmas is
non-trivial since induction proof is required by both the lemma discovery and validation phases. In
exchange, these lemmas can significantly improve the completeness of induction proof in separation
logic. We have shown by experiment that our lemma-synthesis-assisted prover SLS is able to prove
many entailments that could not be proved by the state-of-the-art separation logic provers.
We shall now discuss two limitations of our approach. Firstly, our current implementation cannot
simultaneously derive new constraints from both the antecedent and the consequent of a lemma
template. Theoretically, our framework can handle a lemma template with different unknown relations on both of these sides. However, the set of unknown assumptions introduced for these relations is far too complicated to be discharged by the current underlying prover. Secondly, we only support inferring linear arithmetic constraints with Farkas' lemma. In the future, we would like to extend the lemma synthesis framework with suitable constraint solving
techniques to support more kinds of pure constraints, such as sets or multisets of values.
ACKNOWLEDGMENTS
We would like to thank the reviewers of POPL’18 PC and AEC for the constructive comments on the
paper and the artifact. We wish to thank Dr. Aleksandar Nanevski for his valuable suggestions on
preparing the final version of this paper, and Dr. Andrew C. Myers for his dedication as the program
chair of POPL’18. We are grateful for the encouraging feedback from the reviewers of OOPSLA’17 on
our previous submission. The first author wishes to thank Ms. Mirela Andreea Costea and Dr. Makoto
Tatsuta for the inspiring discussions about the entailment proof. This research is partially supported
by an NUS research grant R-252-000-553-112 and an MoE Tier-2 grant MOE2013-T2-2-146.
REFERENCES
Aws Albarghouthi, Josh Berdine, Byron Cook, and Zachary Kincaid. 2015. Spatial Interpolants. In European Symposium on
Programming (ESOP). 634–660.
Rajeev Alur, Rastislav Bodík, Eric Dallal, Dana Fisman, Pranav Garg, Garvit Juniwal, Hadas Kress-Gazit, P. Madhusudan,
Milo M. K. Martin, Mukund Raghothaman, Shambwaditya Saha, Sanjit A. Seshia, Rishabh Singh, Armando Solar-Lezama,
Emina Torlak, and Abhishek Udupa. 2015. Syntax-Guided Synthesis. In Dependable Software Systems Engineering. 1–25.
Josh Berdine, Cristiano Calcagno, and Peter W. O’Hearn. 2004. A Decidable Fragment of Separation Logic. In International
Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS). 97–109.
Josh Berdine, Cristiano Calcagno, and Peter W. O’Hearn. 2005a. Smallfoot: Modular Automatic Assertion Checking with
Separation Logic. In International Symposium on Formal Methods for Components and Objects. 115–137.
Josh Berdine, Cristiano Calcagno, and Peter W. O’Hearn. 2005b. Symbolic Execution with Separation Logic. In Asian
Symposium on Programming Languages and Systems (APLAS). 52–68.
Josh Berdine, Byron Cook, and Samin Ishtiaq. 2011. SLAyer: Memory Safety for Systems-Level Code. In International
Conference on Computer Aided Verification (CAV). 178–183.
Marius Bozga, Radu Iosif, and Swann Perarnau. 2010. Quantitative Separation Logic and Programs with Lists. J. Autom.
Reasoning 45, 2 (2010), 131–156.
James Brotherston, Dino Distefano, and Rasmus Lerchedahl Petersen. 2011. Automated Cyclic Entailment Proofs in
Separation Logic. In International Conference on Automated Deduction (CADE). 131–146.
James Brotherston, Carsten Fuhs, Juan A. Navarro Pérez, and Nikos Gorogiannis. 2014. A decision procedure for satisfiability
in Separation Logic with inductive predicates. In Joint Meeting of International Conference on Computer Science Logic and
Symposium on Logic in Computer Science, CSL-LICS. 25:1–25:10.
James Brotherston, Nikos Gorogiannis, Max I. Kanovich, and Reuben Rowe. 2016. Model checking for Symbolic-Heap
Separation Logic with inductive predicates. In Symposium on Principles of Programming Languages (POPL). 84–96.
James Brotherston and Alex Simpson. 2011. Sequent calculi for induction and infinite descent. J. Log. Comput. 21, 6 (2011),
1177–1216.
Alan Bundy. 2001. The Automation of Proof by Mathematical Induction. In Handbook of Automated Reasoning (in 2 volumes).
845–911.
Cristiano Calcagno, Dino Distefano, Jérémy Dubreil, Dominik Gabi, Pieter Hooimeijer, Martino Luca, Peter W. O’Hearn,
Irene Papakonstantinou, Jim Purbrick, and Dulma Rodriguez. 2015. Moving Fast with Software Verification. In NASA
International Symposium on Formal Methods (NFM). 3–11.
Wei-Ngan Chin, Cristina David, Huu Hai Nguyen, and Shengchao Qin. 2012. Automated verification of shape, size and
bag properties via user-defined predicates in Separation Logic. Science of Computer Programming (SCP) 77, 9 (2012),
1006–1036.
Duc-Hiep Chu, Joxan Jaffar, and Minh-Thai Trinh. 2015. Automatic induction proofs of data-structures in imperative
programs. In Conference on Programming Language Design and Implementation (PLDI). 457–466.
Michael Colón, Sriram Sankaranarayanan, and Henny Sipma. 2003. Linear Invariant Generation Using Non-linear Constraint
Solving. In International Conference on Computer Aided Verification (CAV). 420–432.
Byron Cook, Christoph Haase, Joël Ouaknine, Matthew J. Parkinson, and James Worrell. 2011. Tractable Reasoning in a
Fragment of Separation Logic. In International Conference on Concurrency Theory (CONCUR). 235–249.
Dino Distefano and Matthew J. Parkinson. 2008. jStar: towards practical verification for Java. In Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA). 213–226.
Constantin Enea, Ondrej Lengál, Mihaela Sighireanu, and Tomás Vojnar. 2014. Compositional Entailment Checking for a
Fragment of Separation Logic. In Asian Symposium on Programming Languages and Systems (APLAS). 314–333.
Constantin Enea, Mihaela Sighireanu, and Zhilin Wu. 2015. On Automated Lemma Generation for Separation Logic with
Inductive Definitions. In International Symposium on Automated Technology for Verification and Analysis (ATVA). 80–96.
Radu Iosif, Adam Rogalewicz, and Jiri Simácek. 2013. The Tree Width of Separation Logic with Recursive Definitions. In
International Conference on Automated Deduction (CADE). 21–38.
Radu Iosif, Adam Rogalewicz, and Tomás Vojnar. 2014. Deciding Entailments in Inductive Separation Logic with Tree
Automata. In International Symposium on Automated Technology for Verification and Analysis (ATVA). 201–218.
Quang Loc Le, Jun Sun, and Wei-Ngan Chin. 2016. Satisfiability Modulo Heap-Based Programs. In International Conference
on Computer Aided Verification (CAV). 382–404.
Leonardo Mendonça De Moura and Nikolaj Bjørner. 2008. Z3: An Efficient SMT Solver. In International Conference on Tools
and Algorithms for Construction and Analysis of Systems (TACAS). 337–340.
Huu Hai Nguyen and Wei-Ngan Chin. 2008. Enhancing Program Verification with Lemmas. In International Conference on
Computer Aided Verification (CAV). 355–369.
Peter W. O’Hearn, John C. Reynolds, and Hongseok Yang. 2001. Local Reasoning about Programs that Alter Data Structures.
In International Conference on Computer Science Logic (CSL). 1–19.
Juan Antonio Navarro Pérez and Andrey Rybalchenko. 2011. Separation Logic + Superposition Calculus = Heap Theorem
Prover. In Conference on Programming Language Design and Implementation (PLDI). 556–566.
Juan Antonio Navarro Pérez and Andrey Rybalchenko. 2013. Separation Logic Modulo Theories. In Asian Symposium on
Programming Languages and Systems (APLAS). 90–106.
Ruzica Piskac, Thomas Wies, and Damien Zufferey. 2013. Automating Separation Logic Using SMT. In International
Conference on Computer Aided Verification (CAV). 773–789.
Ruzica Piskac, Thomas Wies, and Damien Zufferey. 2014. Automating Separation Logic with Trees and Data. In International
Conference on Computer Aided Verification (CAV). 711–728.
William Pugh. 1991. The Omega Test: a fast and practical integer programming algorithm for dependence analysis. In
Proceedings Supercomputing ’91, Albuquerque, NM, USA, November 18-22, 1991. 4–13.
Xiaokang Qiu, Pranav Garg, Andrei Stefanescu, and Parthasarathy Madhusudan. 2013. Natural proofs for structure, data,
and separation. In Conference on Programming Language Design and Implementation (PLDI). 231–242.
John C. Reynolds. 2002. Separation Logic: A Logic for Shared Mutable Data Structures. In Symposium on Logic in Computer
Science (LICS). 55–74.
John C. Reynolds. 2008. An Introduction to Separation Logic. Lecture Notes for the PhD Fall School on Logics and Semantics
of State, Copenhagen 2008. Retrieved on 2017, March 16th. http://www.cs.cmu.edu/~jcr/copenhagen08.pdf
Alexander Schrijver. 1986. Theory of Linear and Integer Programming. John Wiley & Sons, Inc., New York, NY, USA.
Mihaela Sighireanu and David R. Cok. 2016. Report on SL-COMP 2014. Journal on Satisfiability, Boolean Modeling and
Computation 9 (2016), 173–186.
Quang-Trung Ta, Ton Chanh Le, Siau-Cheng Khoo, and Wei-Ngan Chin. 2016. Automated Mutual Explicit Induction Proof
in Separation Logic. In International Symposium on Formal Methods (FM). 659–676.
Alfred North Whitehead and Bertrand Russell. 1912. Principia Mathematica. University Press.
A SOUNDNESS PROOF
A.1 Soundness of inference rules
We prove soundness of inference rules in Section 4 by showing that if entailments in their premises
are valid, and their side conditions, if any, are satisfied, then goal entailments in their conclusions
are also valid.
1. Axiom rules |−Π, ⊥1L, ⊥2L

(|−Π)   conclusion: H, L, Π1 |− Π2                                 side condition: Π1 → Π2
(⊥1L)   conclusion: H, L, F1 ∗ u 7→ι1 v® ∗ u 7→ι2 t® |− F2
(⊥2L)   conclusion: H, L, F1 ∧ Π1 |− F2                            side condition: Π1 → false
– When the rule |−Π is applied, SMT solvers or theorem provers, such as Z3 [Moura and Bjørner
2008] or Omega Calculator [Pugh 1991], can be invoked to check the side condition: Π 1 → Π 2 .
If this side condition holds, then clearly the entailment Π 1 |− Π 2 is valid.
– For the two rules ⊥1L , ⊥2L , it is easy to verify their goal entailments’ antecedents are unsatisfiable
ι1
ι2
since they either contain an overlapped singleton heap (u 7→v® ∗ u 7→t® in the rule ⊥1L ), or a
contradiction (Π 1 → false in the rule ⊥1L ). Therefore, these entailments are evidently valid.
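To make the side-condition check of |−Π concrete, the following is a minimal sketch, not the tool's actual code, of asking Z3 whether a pure implication Π1 → Π2 holds; the particular formulas are invented for illustration.

# Sketch: discharging the side condition Pi1 -> Pi2 of the rule |-Pi with Z3.
# The implication is valid iff its negation is unsatisfiable.
from z3 import Ints, And, Implies, Not, Solver, unsat

x, y = Ints('x y')
pi1 = And(x == y + 1, y >= 0)   # hypothetical antecedent Pi1
pi2 = x >= 1                    # hypothetical consequent Pi2

solver = Solver()
solver.add(Not(Implies(pi1, pi2)))
if solver.check() == unsat:
    print("side condition holds: Pi1 -> Pi2 is valid")
else:
    print("side condition fails, counterexample:", solver.model())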
2. Rules =L, ∃L, ∃R

\[
=_L\ \frac{H, L, F_1[u/v] \vdash F_2[u/v]}{H, L, F_1 \wedge u{=}v \vdash F_2}
\qquad
\exists_L\ \frac{H, L, F_1[u/x] \vdash F_2}{H, L, \exists x.\, F_1 \vdash F_2}\ [u \notin FV(F_2)]
\qquad
\exists_R\ \frac{H, L, F_1 \vdash \exists \vec{x}.\, F_2[u/v]}{H, L, F_1 \vdash \exists \vec{x}, v.\,(F_2 \wedge u{=}v)}
\]
– For the rule =L, consider an arbitrary model s, h of the goal entailment's antecedent. Then s, h |= F1 ∧ u=v. It follows that s(u) = s(v), and therefore s, h |= F1[u/v]. Since the entailment F1[u/v] |− F2[u/v] in the rule's premise is valid, it implies s, h |= F2[u/v]. This means s, h |= F2, due to s(u) = s(v). Therefore, F1 ∧ u=v |− F2 is valid.
– To prove the correctness of the rule ∃L, we consider an arbitrary model s, h such that s, h |= ∃x.F1. By the semantics of the ∃ quantification, there is an integer value v ∈ Int such that s′, h |= F1, with s′ = [s|x:v]. Then s′′, h |= F1[u/x], where s′′ is extended from s′ such that s′′(u) = s′(x). Since s′ = [s|x:v], it follows that s′′ = [s|u:v]. On the other hand, given that the entailment in the rule's premise is valid, s′′, h |= F2. It implies that [s|u:v] |= F2. In addition, u ∉ FV(F2). Therefore s, h |= F2. Since s, h is chosen arbitrarily, it follows that the rule ∃L is correct.
– The correctness of ∃R is straightforward. Suppose that s, h is an arbitrary model such that s, h |= F1. Since the entailment in the rule's premise is valid, s, h |= ∃x⃗.F2[u/v]. It follows that s, h also satisfies ∃x⃗, v.(F2 ∧ u=v), by simply choosing the value of v to be s(u). Since s, h is chosen arbitrarily, it follows that the entailment in the rule's conclusion is valid. Therefore ∃R is sound.
3. Rules EL and ER

\[
E_L\ \frac{H, L, F_1 \vdash F_2}{H, L, F_1 * \mathsf{emp} \vdash F_2}
\qquad
E_R\ \frac{H, L, F_1 \vdash \exists \vec{x}.\, F_2}{H, L, F_1 \vdash \exists \vec{x}.\,(F_2 * \mathsf{emp})}
\]

– It is evident that the two formulas F1 ∗ emp and F1 in the rule EL are semantically equivalent. Likewise, F2 ∗ emp and F2 in the rule ER are semantically equivalent. It follows that both rules EL and ER are correct.
4. Rule CA

\[
CA\ \frac{H, L, F_1 \wedge \Pi \vdash F_2 \qquad H, L, F_1 \wedge \neg\Pi \vdash F_2}{H, L, F_1 \vdash F_2}
\]

– The rule's premises provide that both entailments F1 ∧ Π |− F2 and F1 ∧ ¬Π |− F2 are valid. For an arbitrary model s, h such that s, h |= F1, we consider two cases as follows.
(1) s, h |= F1 ∧ Π. Since F1 ∧ Π |− F2 is valid, it follows that s, h |= F2.
(2) s, h ̸|= F1 ∧ Π. Since s, h |= F1, it follows that s, h ̸|= Π. Consequently, s, h |= ¬Π, by the semantics of separation logic formulas. Since s, h |= F1 and s, h |= ¬Π, it follows that s, h |= F1 ∧ ¬Π. Since F1 ∧ ¬Π |− F2 is valid, it follows that s, h |= F2.
In both cases, we have shown that for an arbitrary model s, h, if s, h |= F1, then s, h |= F2. Therefore, the entailment F1 |− F2 in the conclusion of the rule CA is valid.
5. Rules ∗↦ and ∗P

\[
*{\mapsto}\ \frac{H, L, F_1 \vdash \exists \vec{x}.\,(F_2 \wedge u{=}t \wedge \vec{v}{=}\vec{w})}{H, L, F_1 * u \mapsto^{\iota} \vec{v} \vdash \exists \vec{x}.\,(F_2 * t \mapsto^{\iota} \vec{w})}\ [u \notin \vec{x},\ \vec{v}\ \#\ \vec{x}]
\qquad
*P\ \frac{H, L, F_1 \vdash \exists \vec{x}.\,(F_2 \wedge \vec{u}{=}\vec{v})}{H, L, F_1 * P(\vec{u}) \vdash \exists \vec{x}.\,(F_2 * P(\vec{v}))}\ [\vec{u}\ \#\ \vec{x}]
\]

– To prove the rule ∗↦, we consider an arbitrary model s, h such that s, h |= F1 ∗ u ↦ι v⃗. Then there exist h1 # h2 such that h = h1 ◦ h2, s, h1 |= F1, and s, h2 |= u ↦ι v⃗. On the one hand, since the entailment in the rule's premise is valid, it follows that s, h1 |= ∃x⃗.(F2 ∧ u=t ∧ v⃗=w⃗). By the semantics of the ∃ quantification, s′, h1 |= F2 ∧ u=t ∧ v⃗=w⃗, where s′ is a model extended from s with integer values for x⃗. On the other hand, the rule's side condition gives u ∉ x⃗ and v⃗ # x⃗; since s′ is extended from s only with values for x⃗ and s, h2 |= u ↦ι v⃗, it follows that s′, h2 |= u ↦ι v⃗. Combining these two facts with h1 # h2 and h = h1 ◦ h2, the following holds: s′, h |= F2 ∗ u ↦ι v⃗ ∧ u=t ∧ v⃗=w⃗. By the semantics of equality (=), the following also holds: s′, h |= F2 ∗ t ↦ι w⃗ ∧ u=t ∧ v⃗=w⃗. By weakening this formula via dropping the condition u=t ∧ v⃗=w⃗, it is evident that s′, h |= F2 ∗ t ↦ι w⃗. Since s′ is extended from s with values for x⃗, it is evident that s, h |= ∃x⃗.(F2 ∗ t ↦ι w⃗). Recall that s, h is chosen arbitrarily; this implies that the rule ∗↦ is sound.
– The soundness of ∗P can be proved in a similar way to the rule ∗↦ above.
6. Rule PR

\[
PR\ \frac{H, L, F_1 \vdash \exists \vec{x}.\,(F_2 * F^P_i(\vec{u}))}{H, L, F_1 \vdash \exists \vec{x}.\,(F_2 * P(\vec{u}))}\ [F^P_i(\vec{u}) \stackrel{\mathit{def}}{\Rightarrow} P(\vec{u})]
\]

– Consider an arbitrary model s, h such that s, h |= F1. Since the entailment in the rule's premise is valid, it follows that s, h |= ∃x⃗.(F2 ∗ F^P_i(u⃗)). In addition, the rule's side condition, stating that F^P_i(u⃗) is one of the definition cases of P(u⃗), clearly implies that s, h |= ∃x⃗.(F2 ∗ P(u⃗)). Since s, h is chosen arbitrarily, it follows that the entailment in the rule's conclusion is valid.
7. Rule ID

\[
ID\ \frac{H', L, \Sigma_1 * F^P_1(\vec{u}) \wedge \Pi_1 \vdash F_2 \qquad \ldots \qquad H', L, \Sigma_1 * F^P_m(\vec{u}) \wedge \Pi_1 \vdash F_2}{H, L, \Sigma_1 * P(\vec{u}) \wedge \Pi_1 \vdash F_2}\ [P(\vec{u}) \triangleq F^P_1(\vec{u}) \vee \ldots \vee F^P_m(\vec{u})]\ \dagger
\]

† : H′ ≜ H ∪ {H}, where H is obtained by freshly renaming all variables of Σ1 ∗ P(u⃗) ∧ Π1 |− F2.

– We show that if all of the entailments F1 ∗ F^P_1(u⃗) |− F2, . . . , F1 ∗ F^P_m(u⃗) |− F2 in the rule's premises are valid, then so is the entailment F1 ∗ P(u⃗) |− F2 in the conclusion. Indeed, consider an arbitrary model s, h such that s, h |= F1 ∗ P(u⃗). The side condition of the rule gives that P(u⃗) ≜ F^P_1(u⃗) ∨ . . . ∨ F^P_m(u⃗), i.e., F^P_1(u⃗), . . . , F^P_m(u⃗) are all the definition cases of P(u⃗). Since s, h |= F1 ∗ P(u⃗), it follows that s, h |= F1 ∗ F^P_i(u⃗) for some i ∈ {1, . . . , m}. On the other hand, F1 ∗ F^P_1(u⃗), . . . , F1 ∗ F^P_m(u⃗) are the antecedents of the entailments in this rule's premises, and these entailments all have the same consequent F2. Therefore, s, h |= F2. Since s, h is chosen arbitrarily, it follows that the entailment in the rule's conclusion is valid. This confirms the soundness of the rule ID.
8. Rule IH

\[
IH\ \frac{H \cup \{\Sigma_3 * P(\vec{v}) \wedge \Pi_3 \vdash F_4\}, L, F_4\theta * \Sigma \wedge \Pi_1 \vdash F_2}{H \cup \{\Sigma_3 * P(\vec{v}) \wedge \Pi_3 \vdash F_4\}, L, \Sigma_1 * P(\vec{u}) \wedge \Pi_1 \vdash F_2}\ [P(\vec{u}) \prec P(\vec{v});\ \exists \theta, \Sigma.\,(\Sigma_1 * P(\vec{u}) \equiv \Sigma_3\theta * P(\vec{v})\theta * \Sigma) \wedge (\Pi_1 \rightarrow \Pi_3\theta)]
\]

– The rule's side conditions Σ1 ∗ P(u⃗) ≡ Σ3θ ∗ P(v⃗)θ ∗ Σ and Π1 → Π3θ imply that the entailment Σ1 ∗ P(u⃗) ∧ Π1 |− Σ3θ ∗ P(v⃗)θ ∗ Σ ∧ Π3θ ∧ Π1 is valid. By applying Theorem 3.8, the induction hypothesis Σ3 ∗ P(v⃗) ∧ Π3 |− F4 implies that Σ3θ ∗ P(v⃗)θ ∧ Π3θ |− F4θ is valid. It follows that the following entailment is also valid: Σ3θ ∗ P(v⃗)θ ∗ Σ ∧ Π3θ ∧ Π1 |− F4θ ∗ Σ ∧ Π1. We have shown that the two entailments Σ1 ∗ P(u⃗) ∧ Π1 |− Σ3θ ∗ P(v⃗)θ ∗ Σ ∧ Π3θ ∧ Π1 and Σ3θ ∗ P(v⃗)θ ∗ Σ ∧ Π3θ ∧ Π1 |− F4θ ∗ Σ ∧ Π1 are valid. In addition, the rule's premise gives that F4θ ∗ Σ ∧ Π1 |− F2 is valid. It follows that the entailment Σ1 ∗ P(u⃗) ∧ Π1 |− F2 in the rule's conclusion is valid as well. Therefore, the rule IH is correct.
9. Rules LML, LMR

\[
LM_L\ \frac{H, L \cup \{\Sigma_3 \wedge \Pi_3 \vdash F_4\}, F_4\theta * \Sigma \wedge \Pi_1 \vdash F_2}{H, L \cup \{\Sigma_3 \wedge \Pi_3 \vdash F_4\}, \Sigma_1 \wedge \Pi_1 \vdash F_2}\ [\exists \theta, \Sigma.\,(\Sigma_1 \equiv \Sigma_3\theta * \Sigma) \wedge (\Pi_1 \rightarrow \Pi_3\theta)]
\]
\[
LM_R\ \frac{H, L \cup \{F_3 \vdash \exists \vec{w}.\,(\Sigma_4 \wedge \Pi_4)\}, F_1 \vdash \exists \vec{x}.\,(F_3\theta * \Sigma \wedge \Pi_2)}{H, L \cup \{F_3 \vdash \exists \vec{w}.\,(\Sigma_4 \wedge \Pi_4)\}, F_1 \vdash \exists \vec{x}.\,(\Sigma_2 \wedge \Pi_2)}\ [\exists \theta, \Sigma.\,(\Sigma_4\theta * \Sigma \equiv \Sigma_2) \wedge (\vec{w}\theta \subseteq \vec{x})]
\]

– The soundness of the lemma application rule LML can be proved similarly to IH. Specifically, this rule's side conditions Σ1 ≡ Σ3θ ∗ Σ and Π1 → Π3θ imply that the entailment Σ1 ∧ Π1 |− Σ3θ ∗ Σ ∧ Π3θ ∧ Π1 is valid. On the other hand, by applying Theorem 3.8, the lemma Σ3 ∧ Π3 |− F4 implies that the entailment Σ3θ ∧ Π3θ |− F4θ is valid. It follows that the following entailment is also valid: Σ3θ ∗ Σ ∧ Π3θ ∧ Π1 |− F4θ ∗ Σ ∧ Π1. We have shown that both entailments Σ1 ∧ Π1 |− Σ3θ ∗ Σ ∧ Π3θ ∧ Π1 and Σ3θ ∗ Σ ∧ Π3θ ∧ Π1 |− F4θ ∗ Σ ∧ Π1 are valid. In addition, this rule's premise provides that the entailment F4θ ∗ Σ ∧ Π1 |− F2 is also valid. Consequently, the goal entailment Σ1 ∧ Π1 |− F2 is valid as well. Therefore, the soundness of the rule LML is established.
– For the rule LMR, the lemma F3 |− ∃w⃗.(Σ4 ∧ Π4) implies that F3 |− ∃w⃗.Σ4 is valid, and therefore F3θ |− ∃w⃗θ.Σ4θ is also valid by Theorem 3.8. It follows that F3θ ∗ Σ ∧ Π2 |− ∃w⃗θ.(Σ4θ ∗ Σ ∧ Π2) is also valid. In addition, this rule's side condition provides that w⃗θ ⊆ x⃗; therefore F3θ ∗ Σ ∧ Π2 |− ∃x⃗.(Σ4θ ∗ Σ ∧ Π2) is valid. Furthermore, the rule's premise provides that the entailment F1 |− ∃x⃗.(F3θ ∗ Σ ∧ Π2) is valid. Hence, by Theorem 3.6, the entailment F1 |− ∃x⃗.(Σ4θ ∗ Σ ∧ Π2) is valid. On the other hand, the rule's side condition Σ4θ ∗ Σ ≡ Σ2 implies that the entailment Σ4θ ∗ Σ ∧ Π2 |− Σ2 ∧ Π2 is valid. We have shown that both entailments F1 |− ∃x⃗.(Σ4θ ∗ Σ ∧ Π2) and Σ4θ ∗ Σ ∧ Π2 |− Σ2 ∧ Π2 are valid; it follows, by Theorem 3.6 again, that the goal entailment F1 |− ∃x⃗.(Σ2 ∧ Π2) is also valid. The correctness of the rule LMR is therefore established.
A.2 Soundness of the synthesis rules

We now present the soundness proofs of the synthesis rules in Figure 9 by showing that if all assumptions and entailments in their premises are valid, and their side conditions, if any, are satisfied, then the goal entailments in their conclusions are also valid.
1. Rules U1Π and U2Π

\[
U^1_\Pi\ \frac{A \triangleq \{\Pi_1 \wedge U(\vec{x}) \rightarrow \Pi_2\}}{H, L, \Pi_1 \wedge U(\vec{x}) \vdash \Pi_2}
\qquad
U^2_\Pi\ \frac{A \triangleq \{\Pi_1 \rightarrow \exists \vec{w}.\,(\Pi_2 \wedge U(\vec{x}))\}}{H, L, \Pi_1 \vdash \exists \vec{w}.\,(\Pi_2 \wedge U(\vec{x}))}
\]

– Consider an arbitrary model s, h such that s, h |= Π1 ∧ U(x⃗). Since the unknown assumption Π1 ∧ U(x⃗) → Π2 is assumed valid, it follows from the semantics of → that s, h |= Π2. Since s, h is chosen arbitrarily, it follows that the goal entailment Π1 ∧ U(x⃗) |− Π2 is also valid.
– The proof of the rule U2Π is similar to that of U1Π.
2. Rules U1Σ and U2Σ

\[
U^1_\Sigma\ \frac{A \triangleq \{\Pi_1 \wedge U(\vec{x}) \rightarrow \mathit{false}\}}{H, L, \Sigma_1 \wedge \Pi_1 \wedge U(\vec{x}) \vdash \Pi_2}\ [\Sigma_1 \neq \mathsf{emp}]
\qquad
U^2_\Sigma\ \frac{A \triangleq \{\Pi_1 \wedge U(\vec{x}) \rightarrow \mathit{false}\}}{H, L, \Pi_1 \wedge U(\vec{x}) \vdash \exists \vec{w}.\,(\Sigma_2 \wedge \Pi_2)}\ [\Sigma_2 \neq \mathsf{emp}]
\]

– If the unknown assumption Π1 ∧ U(x⃗) → false in the premise of the rule U1Σ is valid, then it is evident that Π1 ∧ U(x⃗) ≡ false, since only false entails false. It follows that Σ1 ∧ Π1 ∧ U(x⃗) ≡ false. Consequently, the goal entailment Σ1 ∧ Π1 ∧ U(x⃗) |− Π2 is valid, since its antecedent is a contradiction.
– The soundness proof of the rule U2Σ is similar: the assumption Π1 ∧ U(x⃗) → false in the rule's premise also implies that the goal entailment's antecedent is a contradiction (Π1 ∧ U(x⃗) ≡ false).
3. Rule UIH

\[
U_{IH}\ \frac{H \cup \{\Sigma_3 * P(\vec{v}) \wedge \Pi_3 \wedge U(\vec{y}) \vdash F_4\}, L, F_4\theta * \Sigma \wedge \Pi_1 \vdash F_2 \qquad A \triangleq \{\Pi_1 \wedge U(\vec{x}) \rightarrow (\Pi_3 \wedge U(\vec{y}))\theta\}}{H \cup \{\Sigma_3 * P(\vec{v}) \wedge \Pi_3 \wedge U(\vec{y}) \vdash F_4\}, L, \Sigma_1 * P(\vec{u}) \wedge \Pi_1 \wedge U(\vec{x}) \vdash F_2}\ \dagger
\]

with † : P(u⃗) ≺ P(v⃗); ∃θ, Σ. (Σ1 ∗ P(u⃗) ≡ Σ3θ ∗ P(v⃗)θ ∗ Σ).

– Since the rule UIH applies an induction hypothesis, which contains an unknown relation, to prove another unknown entailment, the soundness of this rule can be proved in a similar manner to the normal induction hypothesis application rule IH. Note that in the proof of UIH, although the side condition Π1 ∧ U(x⃗) → (Π3 ∧ U(y⃗))θ contains the unknown relations U(x⃗) and U(y⃗)θ, the validity of this side condition is clearly implied by the assumption A ≜ {Π1 ∧ U(x⃗) → (Π3 ∧ U(y⃗))θ} in the rule's premises.
A.3 Soundness proofs of Propositions 4.1, 4.2, 4.3
Proposition 4.1 (Proof of an unknown entailment). Given an unknown entailment E. If the
procedure Prove returns Valid⟨_⟩ and generates an assumption set A when proving E in the lemma-synthesis-disabled mode (NoSyn), using an empty induction hypothesis set (H = ∅) and a valid lemma
set L as its inputs, then E is semantically valid, given that all assumptions in A are valid.
Proposition 4.2 (Proof of a normal entailment when the lemma synthesis is disabled).
Given a normal entailment E which does not contain any unknown relation. If the procedure Prove
returns Valid⟨_⟩ when proving E in the lemma-synthesis-disabled mode (NoSyn), using an empty
induction hypothesis set (H =∅) and a valid lemma set L as its inputs, then E is semantically valid.
Proposition 4.3 (Proof of a normal entailment when the lemma synthesis is enabled).
Given a normal entailment E which does not contain any unknown relation. If the procedure Prove returns Valid⟨_⟩ and synthesizes a set of lemmas Lsyn when proving E in the lemma-synthesis-enabled
mode (SynLM), using an empty induction hypothesis set (H =∅) and a valid lemma set L as its inputs,
then the entailment E and all lemmas in Lsyn are semantically valid.
Proof of Propositions 4.1, 4.2, 4.3. We have argued in Sections A.1 and A.2 that all of our inference
rules and synthesis rules are sound. In addition, our proof system, without the lemma synthesis
component, is clearly an instance of Noetherian induction where the sub-structural relation is
proved to be well-founded (Theorem 3.13) and is utilized by the induction hypothesis application
rule IH. Therefore, soundness of the proof system, when the lemma synthesis is disabled (Prove
is invoked with mode = NoSyn), is guaranteed by the Noetherian induction principle and the
soundness of all inference and synthesis rules. Now, we consider each proposition as follows.
– In Proposition 4.1, the proof system is invoked in the lemma-synthesis-disabled mode (NoSyn).
Based on our argument above about the proof system’s soundness in this mode, it is clear that
if Prove returns Valid⟨_⟩ and generates a set of assumptions A when proving the unknown
entailment E, then E is semantically valid given that all unknown assumptions in A are valid.
– In Proposition 4.2, the proof system is also invoked in the lemma-synthesis-disabled mode
(NoSyn). Since the goal entailment does not contain any unknown relation, there is no unknown
assumption introduced while proving E. Hence, if Prove returns Valid⟨_⟩, then E is semantically
valid.
– In Proposition 4.3, the proof system can synthesize a set of new lemmas Lsyn when proving E.
Hence, we rely on the soundness of the lemma synthesis framework (Section 5) to argue about
the soundness of the proof system. Details about the lemma synthesis’s soundness are presented
in Theorem 5.3. Now, suppose that the lemma synthesis is already proved sound, then all lemmas
synthesized in Lsyn are semantically valid. Following that, we can separate the proof derivation
of E into multiple phases, in which the auxiliary phases are used to synthesize all lemmas in
Lsyn, and the main phase is to prove E, using both the input lemmas in L and the newly synthesized
lemmas in Lsyn . Certainly, this main phase reflects the structural induction aspect of our proof
system, without the lemma synthesis component. In addition, all lemmas given in L and Lsyn
are valid. Therefore, if Prove returns Valid⟨_⟩ when proving E, then E is semantically valid. □
B FORMAL VERIFICATION OF THE PROOF SYSTEM
\[
R\ \frac{H_1, L_1, F_{11} \vdash F_{21} \qquad \ldots \qquad H_n, L_n, F_{1n} \vdash F_{2n} \qquad A \triangleq \{\Pi_1 \rightarrow \Pi_2\}}{H, L, F_1 \vdash F_2}\ \dagger
\]

Fig. 19. General form of inference or synthesis rules.
In Figure 19, we present the general form of an inference or synthesis rule R, where H, L, F1 |− F2 is the rule's conclusion; (H1, L1, F11 |− F21), . . . , (Hn, Ln, F1n |− F2n) and A are premises corresponding to the derived sub-goal entailments and the unknown assumption set; and † is the rule's side condition. We define the retrieval functions Asms(R), Prems(R), and Concl(R) to extract these components of the rule R, as follows:

Asms(R) ≜ {Π1 → Π2}    Prems(R) ≜ {Hi, Li, F1i |− F2i}i=1..n    Concl(R) ≜ (H, L, F1 |− F2)

Note that the first two functions Asms(R) and Prems(R) may return an empty set. Moreover, with respect to the set R of all inference and synthesis rules given in Figures 6, 7, 8, and 9, we always have L = Li.
Due to the soundness of these rules, we have
\[
\mathit{valid}(A) \wedge \bigwedge_{i=1}^{n} \mathit{valid}_{ID}(H_i, L_i, F_{1i} \vdash F_{2i}) \rightarrow \mathit{valid}_{ID}(H, L, F_1 \vdash F_2).
\]
From this, we define the predicate sound(R) to denote the soundness of a rule R ∈ R as follows:
\[
\mathit{sound}(R) \triangleq \mathit{valid}(\mathit{Asms}(R)) \wedge \mathit{valid}_{ID}(\mathit{Prems}(R)) \rightarrow \mathit{valid}_{ID}(\mathit{Concl}(R)).
\]
The soundness of each inference or synthesis rule R, which is proved in Appendix A.1 or A.2, is
always assured at the beginning of the formal verification for each procedure in our framework.
Therefore, this assurance clearly connects the soundness of our procedures to the soundness of
these rules. Such a guarantee can be considered as an invariant of the set of all rules R exploited in
these procedures. We denote the invariant as sound(R) ≜ ∀R ∈ R. sound(R), which specifies that
every rule R ∈ R is sound.
Below are the specifications of two retrieval procedures Assumptions(R) and Premises(R) to get
the set of assumptions and the set of premises from R.
Procedure Assumptions(R) // Retrieve assumptions on unknown relations in R
Requires: true
Ensures: res = Asms(R)
Procedure Premises(R) // Retrieve premises of R
Requires: true
Ensures: res = Prems(R)
In addition, we also give the specifications of two auxiliary procedures IsAxiomRule(R) and IsSynthesisRule(R), which return true if the rule R is an axiom rule or a synthesis rule, respectively.

Procedure IsAxiomRule(R) // Check if R is an axiom rule
Requires: R ∈ {|−Π, ⊥1L, ⊥2L, U1Π, U2Π, U1Σ, U2Σ}
Ensures: (res = true) ∧ (Prems(R) = ∅)
Requires: R ∉ {|−Π, ⊥1L, ⊥2L, U1Π, U2Π, U1Σ, U2Σ}
Ensures: res = false

The above specifications of IsAxiomRule(R) cover the two possible cases of the input R, namely when R is an axiom rule (R ∈ {|−Π, ⊥1L, ⊥2L, U1Π, U2Π, U1Σ, U2Σ}), and otherwise. In the former case, we can ensure that there are no premises in R (i.e., Prems(R) = ∅).
Procedure IsSynthesisRule(R) // Check if R is a synthesis rule
Requires: R ∈ {U1Π, U2Π, U1Σ, U2Σ, UIH}
Ensures: res = true
Requires: R ∉ {U1Π, U2Π, U1Σ, U2Σ, UIH}
Ensures: (res = false) ∧ (Asms(R) = ∅)

The specifications of the procedure IsSynthesisRule cover the two cases in which R is a synthesis rule or not. In the latter case, it is obvious that the assumption set of R is empty (i.e., Asms(R) = ∅).
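As a small illustration of the interface these specifications assume, the following sketch (ours, not the authors' implementation; names are only mnemonic) models a rule R as a record carrying its assumptions, premises, and conclusion, and realizes the four auxiliary procedures against it.

# Sketch of the rule interface assumed by the formal verification.
from dataclasses import dataclass, field
from typing import List, Set

AXIOM_RULES = {"|-Pi", "Bot1L", "Bot2L", "U1Pi", "U2Pi", "U1Sigma", "U2Sigma"}
SYNTHESIS_RULES = {"U1Pi", "U2Pi", "U1Sigma", "U2Sigma", "UIH"}

@dataclass
class Rule:
    name: str
    asms: Set[str] = field(default_factory=set)     # Asms(R): unknown assumptions
    prems: List[str] = field(default_factory=list)  # Prems(R): derived sub-goal entailments
    concl: str = ""                                  # Concl(R): goal entailment

def assumptions(rule: Rule) -> Set[str]:
    # Ensures: res = Asms(R)
    return rule.asms

def premises(rule: Rule) -> List[str]:
    # Ensures: res = Prems(R)
    return rule.prems

def is_axiom_rule(rule: Rule) -> bool:
    # Axiom rules have no premises (Prems(R) = empty set)
    return rule.name in AXIOM_RULES

def is_synthesis_rule(rule: Rule) -> bool:
    # Non-synthesis rules have no unknown assumptions (Asms(R) = empty set)
    return rule.name in SYNTHESIS_RULES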
In the following subsections, we illustrate the formal verification of the main proof search procedure
Prove in Figure 10 on its three specifications, depicted by pairs of Requires/Ensures. We perform a
Hoare-style forward verification, in which −→ is a weakening on program states.
The formal verification of the other lemma synthesis procedures in Section 5, i.e., SynthesizeLemma, RefineAnte, RefineConseq, Preprocess, and FineTuneConseq in Figures 11, 12, 14, 15, and 17, is straightforward.
B.1 Verification of the first specification of the proof search procedure in Figure 10
Procedure Prove(H, L, F1 |− F2, mode)
Requires: (mode = NoSyn) ∧ hasUnk(F1 |− F2) ∧ valid(L)
Ensures: (res = (Unkn, ∅, ∅)) ∨ ∃ξ, A. (res = (Valid⟨ξ⟩, ∅, A) ∧ (valid(A) → validID(H, L, F1 |− F2)))
Invariant: sound(R)

// { (mode = NoSyn) ∧ hasUnk(F1 |− F2) ∧ valid(L) ∧ sound(R) }
1: R ← {Unify(R, (H, L, F1 |− F2)) | R ∈ R}
// { (mode = NoSyn) ∧ hasUnk(F1 |− F2) ∧ valid(L) ∧ sound(R) }                 // sound(R) ∧ R ⊆ R → sound(R)
2: Lsyn ← ∅
// { (mode = NoSyn) ∧ hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅ ∧ sound(R) }
3: if mode = SynLM and NeedLemmas(F1 |− F2, R) then
4:     Lsyn ← SynthesizeLemma(L, F1 |− F2)                                      // unreachable code
5:     R ← R ∪ {Unify(R, (H, Lsyn, F1 |− F2)) | R ∈ {LML, LMR}}
// { (mode = NoSyn) ∧ hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅ ∧ sound(R) }
6: for each R in R do
// { (mode = NoSyn) ∧ hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅ ∧ sound(R) }       // sound(R) ∧ R ∈ R → sound(R)
7:     A ← ∅
8:     if IsSynthesisRule(R) then A ← Assumptions(R)
// { (mode = NoSyn) ∧ hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅ ∧ sound(R) ∧ A = Asms(R) }
9:     if IsAxiomRule(R) then
// { . . . ∧ Lsyn = ∅ ∧ sound(R) ∧ Asms(R) = A ∧ Prems(R) = ∅ }
// −→ { Lsyn = ∅ ∧ (valid(A) → validID(H, L, F1 |− F2)) }                        // validID(∅) = true
10:        ξ ← CreateWitnessProofTree(F1 |− F2, R, ∅)
11:        return (Valid⟨ξ⟩, Lsyn, A)
// { Lsyn = ∅ ∧ (valid(A) → validID(H, L, F1 |− F2)) ∧ res = (Valid⟨ξ⟩, Lsyn, A) }
// −→ { ∃ξ, A. (res = (Valid⟨ξ⟩, ∅, A) ∧ (valid(A) → validID(H, L, F1 |− F2))) }
12:    (Hi, Li, F1i |− F2i)i=1...n ← Premises(R)
// { (mode = NoSyn) ∧ hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅ ∧ sound(R)
//   ∧ Asms(R) = A ∧ Prems(R) = {Hi, Li, F1i |− F2i}i=1..n ∧ ⋀i=1..n (Li = L) }
13:    (ri, Lisyn, Ai)i=1...n ← Prove(Hi, L ∪ Lsyn ∪ Li, F1i |− F2i, mode)i=1...n
// { . . . ∧ Lsyn = ∅ ∧ (valid(A) ∧ ⋀i=1..n validID(Hi, Li, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n (Lisyn = ∅ ∧ ((ri = Unkn ∧ Ai = ∅)
//      ∨ (hasUnk(F1i |− F2i) ∧ ∃ξi. (ri = Valid⟨ξi⟩ ∧ (valid(Ai) → validID(Hi, Li, F1i |− F2i))))
//      ∨ (¬hasUnk(F1i |− F2i) ∧ ∃ξi. (ri = Valid⟨ξi⟩ ∧ Ai = ∅ ∧ validID(Hi, Li, F1i |− F2i))))) }
// −→ { Lsyn = ∅ ∧ (valid(A) ∧ ⋀i=1..n validID(Hi, Li, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧       // valid(∅) = true
//   ⋀i=1..n (Lisyn = ∅ ∧ ((ri = Unkn ∧ Ai = ∅) ∨ ∃ξi. (ri = Valid⟨ξi⟩ ∧ (valid(Ai) → validID(Hi, Li, F1i |− F2i))))) }

Fig. 20. Verification of the proof search procedure Prove on the first specification (Part 1).
// { Lsyn = ∅ ∧ (valid(A) ∧ ⋀i=1..n validID(Hi, Li, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n (Lisyn = ∅ ∧ ((ri = Unkn ∧ Ai = ∅) ∨ ∃ξi. (ri = Valid⟨ξi⟩ ∧ (valid(Ai) → validID(Hi, Li, F1i |− F2i))))) }
14: if ri = Valid⟨ξi⟩ for all i = 1 . . . n then
// { Lsyn = ∅ ∧ (valid(A) ∧ ⋀i=1..n validID(Hi, Li, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n (ri = Valid⟨ξi⟩ ∧ Lisyn = ∅ ∧ (valid(Ai) → validID(Hi, Li, F1i |− F2i))) }
15:     ξ ← CreateWitnessProofTree(F1 |− F2, R, {ξ1, . . . , ξn})
16:     return (Valid⟨ξ⟩, Lsyn ∪ L1syn ∪ . . . ∪ Lnsyn, A ∪ A1 ∪ . . . ∪ An)
// { Lsyn = ∅ ∧ (valid(A) ∧ ⋀i=1..n validID(Hi, Li, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n (Lisyn = ∅ ∧ (valid(Ai) → validID(Hi, Li, F1i |− F2i)))
//   ∧ res = (Valid⟨ξ⟩, ∅, A ∪ A1 ∪ . . . ∪ An) }
17: return (Unkn, ∅, ∅)
// { res = (Unkn, ∅, ∅) }

Fig. 21. Verification of the proof search procedure Prove on the first specification (Part 2).
Given the post-state after line 16 in Figure 21, we prove the postcondition of Prove as follows, where −→ is a weakening on program states.

// the program state at line 16
{ Lsyn = ∅ ∧ (valid(A) ∧ ⋀i=1..n validID(Hi, Li, F1i |− F2i) → validID(H, L, F1 |− F2))
  ∧ ⋀i=1..n (Lisyn = ∅ ∧ (valid(Ai) → validID(Hi, Li, F1i |− F2i)))
  ∧ res = (Valid⟨ξ⟩, ∅, A ∪ A1 ∪ . . . ∪ An) }
−→ (valid(A) ∧ ⋀i=1..n validID(Hi, Li, F1i |− F2i) → validID(H, L, F1 |− F2))
    ∧ ⋀i=1..n (valid(Ai) → validID(Hi, Li, F1i |− F2i)) ∧ res = (Valid⟨ξ⟩, ∅, A ∪ A1 ∪ . . . ∪ An)
−→ (valid(A) ∧ ⋀i=1..n valid(Ai) → validID(H, L, F1 |− F2)) ∧ res = (Valid⟨ξ⟩, ∅, A ∪ A1 ∪ . . . ∪ An)
−→ (valid(A ∪ A1 ∪ . . . ∪ An) → validID(H, L, F1 |− F2)) ∧ res = (Valid⟨ξ⟩, ∅, A ∪ A1 ∪ . . . ∪ An)
−→ ∃ξ, A. (res = (Valid⟨ξ⟩, ∅, A) ∧ (valid(A) → validID(H, L, F1 |− F2)))
−→ (res = (Unkn, ∅, ∅)) ∨ ∃ξ, A. (res = (Valid⟨ξ⟩, ∅, A) ∧ (valid(A) → validID(H, L, F1 |− F2)))
// the postcondition in the first specification of Prove
B.2 Verification of the second specification of the proof search procedure in Figure 10
Procedure Prove(H, L, F1 |− F2, mode)
Requires: (mode = NoSyn) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L)
Ensures: (res = (Unkn, ∅, ∅)) ∨ ∃ξ. (res = (Valid⟨ξ⟩, ∅, ∅) ∧ validID(H, L, F1 |− F2))
Invariant: sound(R)

// { (mode = NoSyn) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ sound(R) }
1: R ← {Unify(R, (H, L, F1 |− F2)) | R ∈ R}
// { (mode = NoSyn) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ sound(R)                   // sound(R) ∧ R ⊆ R → sound(R)
//   ∧ R ∩ {U1Π, U2Π, U1Σ, U2Σ, UIH} = ∅ }                                       // since ¬hasUnk(F1 |− F2)
2: Lsyn ← ∅
// { (mode = NoSyn) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅
//   ∧ sound(R) ∧ R ∩ {U1Π, U2Π, U1Σ, U2Σ, UIH} = ∅ }
3: if mode = SynLM and NeedLemmas(F1 |− F2, R) then                              // unreachable code
4:     Lsyn ← SynthesizeLemma(L, F1 |− F2)
5:     R ← R ∪ {Unify(R, (H, Lsyn, F1 |− F2)) | R ∈ {LML, LMR}}
// { (mode = NoSyn) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅
//   ∧ sound(R) ∧ R ∩ {U1Π, U2Π, U1Σ, U2Σ, UIH} = ∅ }
6: for each R in R do
// { (mode = NoSyn) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅                    // since R ∈ R
//   ∧ sound(R) ∧ R ∉ {U1Π, U2Π, U1Σ, U2Σ, UIH} }
7:     A ← ∅
// { (mode = NoSyn) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅ ∧ A = ∅
//   ∧ sound(R) ∧ R ∉ {U1Π, U2Π, U1Σ, U2Σ, UIH} }
8:     if IsSynthesisRule(R) then A ← Assumptions(R)                             // unreachable since R is not a synthesis rule
// { . . . ∧ (mode = NoSyn) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅ ∧ A = ∅ ∧ Asms(R) = ∅ ∧ sound(R) }
9:     if IsAxiomRule(R) then                                                    // valid(∅) = true and validID(∅) = true
// { . . . ∧ Lsyn = ∅ ∧ A = ∅ ∧ Asms(R) = ∅ ∧ Prems(R) = ∅ ∧ validID(H, L, F1 |− F2) }
10:        ξ ← CreateWitnessProofTree(F1 |− F2, R, ∅)
11:        return (Valid⟨ξ⟩, Lsyn, A)
// { . . . ∧ Lsyn = ∅ ∧ A = ∅ ∧ validID(H, L, F1 |− F2) ∧ res = (Valid⟨ξ⟩, Lsyn, A) }
// −→ { ∃ξ. (res = (Valid⟨ξ⟩, ∅, ∅) ∧ validID(H, L, F1 |− F2)) }
12:    (Hi, Li, F1i |− F2i)i=1...n ← Premises(R)
// { . . . ∧ (mode = NoSyn) ∧ valid(L) ∧ Lsyn = ∅ ∧ A = ∅ ∧ ¬hasUnk(F1 |− F2) ∧ ⋀i=1..n (Li = L)
//   ∧ Asms(R) = ∅ ∧ Prems(R) = {Hi, Li, F1i |− F2i}i=1..n ∧ sound(R) }
// −→                                                                            // since ¬hasUnk(F1 |− F2) → ⋀i=1..n ¬hasUnk(F1i |− F2i)
// { (mode = NoSyn) ∧ valid(L) ∧ Lsyn = ∅ ∧ A = ∅ ∧ ⋀i=1..n (¬hasUnk(F1i |− F2i) ∧ Li = L)
//   ∧ Asms(R) = ∅ ∧ Prems(R) = {Hi, Li, F1i |− F2i}i=1..n ∧ sound(R) }
13:    (ri, Lisyn, Ai)i=1...n ← Prove(Hi, L ∪ Lsyn ∪ Li, F1i |− F2i, mode)i=1...n
// { . . . ∧ Lsyn = ∅ ∧ A = ∅ ∧ (⋀i=1..n validID(Hi, L, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n (((ri = Unkn) ∨ ∃ξi. (ri = Valid⟨ξi⟩ ∧ validID(Hi, L, F1i |− F2i))) ∧ Lisyn = ∅ ∧ Ai = ∅) }
// −→
// { Lsyn = ∅ ∧ A = ∅ ∧ (⋀i=1..n validID(Hi, L, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n (((ri = Unkn) ∨ ∃ξi. (ri = Valid⟨ξi⟩ ∧ validID(Hi, L, F1i |− F2i))) ∧ Lisyn = ∅ ∧ Ai = ∅) }

Fig. 22. Verification of the proof search procedure Prove on the second specification (Part 1).
// { Lsyn = ∅ ∧ A = ∅ ∧ (⋀i=1..n validID(Hi, L, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n (((ri = Unkn) ∨ ∃ξi. (ri = Valid⟨ξi⟩ ∧ validID(Hi, L, F1i |− F2i))) ∧ Lisyn = ∅ ∧ Ai = ∅) }
14: if ri = Valid⟨ξi⟩ for all i = 1 . . . n then
// { Lsyn = ∅ ∧ A = ∅ ∧ (⋀i=1..n validID(Hi, L, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n (ri = Valid⟨ξi⟩ ∧ validID(Hi, L, F1i |− F2i) ∧ Lisyn = ∅ ∧ Ai = ∅) }
15:     ξ ← CreateWitnessProofTree(F1 |− F2, R, {ξ1, . . . , ξn})
16:     return (Valid⟨ξ⟩, Lsyn ∪ L1syn ∪ . . . ∪ Lnsyn, A ∪ A1 ∪ . . . ∪ An)
// { . . . ∧ res = (Valid⟨ξ⟩, ∅, ∅) ∧ ⋀i=1..n validID(Hi, L, F1i |− F2i) ∧
//   (⋀i=1..n validID(Hi, L, F1i |− F2i) → validID(H, L, F1 |− F2)) }
// −→ { res = (Valid⟨ξ⟩, ∅, ∅) ∧ validID(H, L, F1 |− F2) }
17: return (Unkn, ∅, ∅)
// { res = (Unkn, ∅, ∅) }

Fig. 23. Verification of the proof search procedure Prove on the second specification (Part 2).
B.3 Verification of the third specification of the proof search procedure in Figure 10
Procedure Prove(H, L, F1 |− F2, mode)
Requires: (mode = SynLM) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L)
Ensures: (res = (Unkn, ∅, ∅)) ∨ ∃ξ, Lsyn. (res = (Valid⟨ξ⟩, Lsyn, ∅) ∧ valid(Lsyn) ∧ validID(H, L, F1 |− F2))
Invariant: sound(R)

// { (mode = SynLM) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ sound(R) }
1: R ← {Unify(R, (H, L, F1 |− F2)) | R ∈ R}
// { (mode = SynLM) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ sound(R)                   // sound(R) ∧ R ⊆ R → sound(R)
//   ∧ R ∩ {U1Π, U2Π, U1Σ, U2Σ, UIH} = ∅ }                                       // since ¬hasUnk(F1 |− F2)
2: Lsyn ← ∅
// { (mode = SynLM) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ Lsyn = ∅
//   ∧ sound(R) ∧ R ∩ {U1Π, U2Π, U1Σ, U2Σ, UIH} = ∅ }
3: if mode = SynLM and NeedLemmas(F1 |− F2, R) then
4:     Lsyn ← SynthesizeLemma(L, F1 |− F2)
5:     R ← R ∪ {Unify(R, (H, Lsyn, F1 |− F2)) | R ∈ {LML, LMR}}
// { (mode = SynLM) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ valid(Lsyn)                 // w.r.t. the spec of SynthesizeLemma
//   ∧ sound(R) ∧ R ∩ {U1Π, U2Π, U1Σ, U2Σ, UIH} = ∅ }
6: for each R in R do
// { (mode = SynLM) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ valid(Lsyn)                 // since R ∈ R
//   ∧ sound(R) ∧ R ∉ {U1Π, U2Π, U1Σ, U2Σ, UIH} }
7:     A ← ∅
// { (mode = SynLM) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ valid(Lsyn) ∧ A = ∅
//   ∧ sound(R) ∧ R ∉ {U1Π, U2Π, U1Σ, U2Σ, UIH} }
8:     if IsSynthesisRule(R) then A ← Assumptions(R)                             // unreachable since R is not a synthesis rule
// { . . . ∧ (mode = SynLM) ∧ ¬hasUnk(F1 |− F2) ∧ valid(L) ∧ valid(Lsyn) ∧ A = ∅ ∧ Asms(R) = ∅ ∧ sound(R) }
9:     if IsAxiomRule(R) then                                                    // valid(∅) = true and validID(∅) = true
// { . . . ∧ valid(Lsyn) ∧ A = ∅ ∧ Asms(R) = ∅ ∧ Prems(R) = ∅ ∧ validID(H, L, F1 |− F2) }
10:        ξ ← CreateWitnessProofTree(F1 |− F2, R, ∅)
11:        return (Valid⟨ξ⟩, Lsyn, A)
// { . . . ∧ valid(Lsyn) ∧ A = ∅ ∧ validID(H, L, F1 |− F2) ∧ res = (Valid⟨ξ⟩, Lsyn, A) }
// −→ { ∃ξ. (res = (Valid⟨ξ⟩, Lsyn, ∅) ∧ valid(Lsyn) ∧ validID(H, L, F1 |− F2)) }
12:    (Hi, Li, F1i |− F2i)i=1...n ← Premises(R)
// { . . . ∧ (mode = SynLM) ∧ valid(L) ∧ valid(Lsyn) ∧ A = ∅ ∧ ¬hasUnk(F1 |− F2) ∧ ⋀i=1..n (Li = L)
//   ∧ Asms(R) = ∅ ∧ Prems(R) = {Hi, Li, F1i |− F2i}i=1..n ∧ sound(R) }
// −→                                                                            // since ¬hasUnk(F1 |− F2) → ⋀i=1..n ¬hasUnk(F1i |− F2i)
// { (mode = SynLM) ∧ valid(L) ∧ valid(Lsyn) ∧ A = ∅ ∧ ⋀i=1..n (¬hasUnk(F1i |− F2i) ∧ Li = L)
//   ∧ Asms(R) = ∅ ∧ Prems(R) = {Hi, Li, F1i |− F2i}i=1..n ∧ sound(R) }

Fig. 24. Verification of the proof search procedure Prove on the third specification (Part 1).
// { (mode = SynLM) ∧ valid(L) ∧ valid(Lsyn) ∧ A = ∅ ∧ ⋀i=1..n (¬hasUnk(F1i |− F2i) ∧ Li = L)
//   ∧ Asms(R) = ∅ ∧ Prems(R) = {Hi, Li, F1i |− F2i}i=1..n ∧ sound(R) }
13:    (ri, Lisyn, Ai)i=1...n ← Prove(Hi, L ∪ Lsyn ∪ Li, F1i |− F2i, mode)i=1...n
// { . . . ∧ valid(Lsyn) ∧ A = ∅ ∧ (⋀i=1..n validID(Hi, L, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n ((ri = Unkn ∧ Lisyn = ∅ ∧ Ai = ∅)
//      ∨ ∃ξi. (ri = Valid⟨ξi⟩ ∧ validID(Hi, L, F1i |− F2i) ∧ valid(Lisyn) ∧ Ai = ∅)) }
// −→
// { valid(Lsyn) ∧ A = ∅ ∧ (⋀i=1..n validID(Hi, L, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n ((ri = Unkn ∧ Lisyn = ∅ ∧ Ai = ∅)
//      ∨ ∃ξi. (ri = Valid⟨ξi⟩ ∧ validID(Hi, L, F1i |− F2i) ∧ valid(Lisyn) ∧ Ai = ∅)) }
14: if ri = Valid⟨ξi⟩ for all i = 1 . . . n then
// { valid(Lsyn) ∧ A = ∅ ∧ (⋀i=1..n validID(Hi, L, F1i |− F2i) → validID(H, L, F1 |− F2)) ∧
//   ⋀i=1..n (ri = Valid⟨ξi⟩ ∧ valid(Lisyn) ∧ Ai = ∅ ∧ validID(Hi, L, F1i |− F2i)) }
15:     ξ ← CreateWitnessProofTree(F1 |− F2, R, {ξ1, . . . , ξn})
16:     return (Valid⟨ξ⟩, Lsyn ∪ L1syn ∪ . . . ∪ Lnsyn, A ∪ A1 ∪ . . . ∪ An)
// { . . . ∧ res = (Valid⟨ξ⟩, Lsyn ∪ L1syn ∪ . . . ∪ Lnsyn, ∅) ∧ valid(Lsyn) ∧ ⋀i=1..n valid(Lisyn) ∧
//   ⋀i=1..n validID(Hi, L, F1i |− F2i) ∧ (⋀i=1..n validID(Hi, L, F1i |− F2i) → validID(H, L, F1 |− F2)) }
// −→ { res = (Valid⟨ξ⟩, Lsyn ∪ L1syn ∪ . . . ∪ Lnsyn, ∅)
//      ∧ valid(Lsyn ∪ L1syn ∪ . . . ∪ Lnsyn) ∧ validID(H, L, F1 |− F2) }
17: return (Unkn, ∅, ∅)
// { res = (Unkn, ∅, ∅) }

Fig. 25. Verification of the proof search procedure Prove on the third specification (Part 2).
| 6 |
KB LRN: End-to-End Learning of Knowledge Base Representations with Latent, Relational, and Numerical Features

Alberto García-Durán, NEC Labs Europe, Heidelberg, Germany
Mathias Niepert, NEC Labs Europe, Heidelberg, Germany

arXiv:1709.04676v2 [] 12 Mar 2018

Abstract

We present KB LRN, a framework for end-to-end learning of knowledge base representations from latent, relational, and numerical features. KB LRN integrates feature types with a novel combination of neural representation learning and probabilistic product of experts models. To the best of our knowledge, KB LRN is the first approach that learns representations of knowledge bases by integrating latent, relational, and numerical features. We show that instances of KB LRN outperform existing methods on a range of knowledge base completion tasks. We contribute novel data sets enriching commonly used knowledge base completion benchmarks with numerical features. We have made the data sets available for further research¹. We also investigate the impact numerical features have on the KB completion performance of KB LRN.
1 Introduction
The importance of knowledge bases (KBs) for AI systems has been demonstrated numerous times. KBs provide ways to organize, manage, and retrieve structured data and allow AI systems to perform reasoning. In recent years, KBs have been playing an increasingly crucial role in several application domains. Purely logical representations of knowledge bases have a long history in AI [24]. However, they suffer from being inefficient and brittle. Inefficient because the computational complexity of reasoning is exponential in the worst case and, therefore, the time required by a reasoner is highly unpredictable. Brittle because a purely logical KB requires a large set of logical rules that are handcrafted and/or

¹ https://github.com/nle-ml/mmkb
Figure 1: A small part of a knowledge base. [Figure: a graph with entities Murasaki Shikibu, Gotoh Museum, Sensō-ji, Tokyo, and Japan connected by the relations bornIn, hasArtAbout, locatedIn, and capitalOf; Tokyo and Japan carry numerical features such as latitude, area, and average salary.]
mined. These problems are even more pressing in applications whose environments are changing over time. Motivated by these shortcomings, there has been a flurry of work on combining logical and statistical approaches to build systems capable of reasoning over and learning from incomplete structured data. Most notably, the statistical relational learning community has proposed numerous formalisms that combine logic and probability [21]. These formalisms are able to address the learning problem and make the resulting AI systems more robust to missing data and missing rules. Intuitively, logical formulas act as relational features and the probability of a possible world is determined by a sufficient statistic for the values of these features. These approaches, however, are in most cases even less efficient because logical inference is substituted with probabilistic inference.
More recently, the research community has focused on
efficient machine learning models that perform well on
restricted tasks such as link prediction in KBs. Examples are knowledge base factorization and embedding
approaches [2, 19, 10, 18] and random-walk based ML
models [14, 6]. The former learn latent features for the
entities and relations in the knowledge base and use those
to perform link prediction. The latter explore specific relational features such as path types between two entities
and train a machine learning model for link prediction.
With this work, we propose KB LRN, a novel approach to
combining relational, latent (learned), and numerical features, that is, features that can take on a large or infinite
number of real values. The combination of the various feature types is achieved by integrating embedding-based
learning with probabilistic models in two ways. First,
we show that modeling numerical features with radial
basis functions is beneficial and can be integrated in an
end-to-end differentiable learning system. Second, we
propose a probabilistic product of experts (PoE) [12] approach to combine the feature types. Instead of training
the PoE with contrastive divergence, we approximate the
partition function with a negative sampling strategy. The
PoE approach has the advantage of being able to train the
model jointly and end-to-end.
The paper is organized as follows. First, we discuss relational, latent, and numerical features. Second, we describe KB LRN. Third, we present empirical evidence
that instances of KB LRN outperform state of the art
methods for KB completion. We also investigate in detail
under what conditions numerical features are beneficial.
2 Relational, Latent, and Numerical Features
We assume that the facts of a knowledge base (KB) are
given as a set of triples of the form (h, r, t) where h and
t are the head and tail entities and r is a relation type.
Figure 1 depicts a small fragment of a KB with relations
and numerical features. KB completion is the problem of
answering queries of the form (?, r, t) or (h, r, ?). While
the proposed approach can be generalized to more complex queries, we focus on completion queries for the sake
of simplicity. We now discuss the three features types
used in KB LRN and motivate their utility for knowledge
base completion. How exactly we extract features from
a given KB is described in the experimental section.
2.1 Relational Features
Each relational feature is given as a logical formula
which is evaluated in the KB to determine the feature’s
value. For instance, the formula ∃x (A, bornIn, x) ∧
(x, capitalOf, B) corresponds to a binary feature which
is 1 if there exists a path of that type from entity A to entity B, and 0 otherwise. These features are often used in
relational models [27, 20] and random-walk based models such as PRA and SFE [14, 6]. In this work, we use
relational paths of length one and two and use the rule
mining approach AMIE+ [4]. We detail the generation of
the relational features in the experimental section. For a
pair of entities (h, t), we denote the feature vector computed based on a set of relational features by r(h,t) .
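As a concrete illustration (our sketch, not the authors' feature extractor; the triples are invented), a 2-hop relational feature such as ∃x (A, bornIn, x) ∧ (x, capitalOf, B) can be evaluated against a triple store like this:

# Sketch of evaluating binary relational path features for an entity pair (h, t).
from collections import defaultdict

triples = {("MurasakiShikibu", "bornIn", "Tokyo"), ("Tokyo", "capitalOf", "Japan")}

# index triples by (head, relation) for fast one-hop lookups
succ = defaultdict(set)
for h, r, t in triples:
    succ[(h, r)].add(t)

def one_hop(h, r, t):
    return 1.0 if t in succ[(h, r)] else 0.0

def two_hop(h, r1, r2, t):
    # 1 iff there exists x with (h, r1, x) and (x, r2, t)
    return 1.0 if any(t in succ[(x, r2)] for x in succ[(h, r1)]) else 0.0

# feature vector r_(h,t) for one pair, given a list of path features
paths = [("bornIn",), ("bornIn", "capitalOf")]
h, t = "MurasakiShikibu", "Japan"
r_ht = [one_hop(h, p[0], t) if len(p) == 1 else two_hop(h, p[0], p[1], t) for p in paths]
print(r_ht)   # [0.0, 1.0]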
2.2 Latent Features
Numerous embedding methods for KBs have been proposed in recent years [19, 2, 10, 18]. Embedding methods provide fixed-size vector representations (embeddings) for all entities in the KB. In the simplest of cases,
relations are modeled as translations in the entity embedding space [2]. We incorporate typical embedding learning objectives into KB LRN and write eh and et to refer
to an embedding of a head entity and a tail entity, respectively. The advantages of latent feature models are their
computational efficiency and their ability to learn latent
entity types suitable for downstream ML tasks without
hand-crafted or mined logical rules.
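For illustration only (our sketch, not part of the paper), the translation-based scoring of [2] and the DistMult-style scoring referred to later can be written as:

# Two common latent-feature scoring functions for a triple (h, r, t).
import numpy as np

def transe_score(e_h, w_r, e_t):
    # TransE: relations as translations; higher (less negative) is better
    return -np.linalg.norm(e_h + w_r - e_t)

def distmult_score(e_h, w_r, e_t):
    # DistMult: element-wise product of head, relation, and tail vectors
    return np.sum(e_h * w_r * e_t)

e_h, w_r, e_t = np.ones(4), np.zeros(4), np.ones(4)
print(transe_score(e_h, w_r, e_t), distmult_score(e_h, w_r, e_t))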
2.3 Numerical Features
Numerical features are entity features whose values can take on a very large or infinite number of real values. To the best of our knowledge, there does not exist a principled approach that integrates numerical features into a relational ML model for KB completion. This is surprising, considering that numerical data is available in almost all existing large-scale KBs. The assumption that numerical data is helpful for KB completion tasks is reasonable. For several relation types, the differences between the head and tail values are characteristic of the relation itself. For example, while the mean difference of birth years is 0.4 for the Freebase relation /people/marriage/spouse, it is 32.4 for the relation /person/children. These observations motivate specifically the use of differences of numerical feature values. Taking the difference has the advantage that even if a numerical feature is not distributed according to a normal distribution (e.g., birth years in a KB), the difference is often normally distributed. This is important as we need to fit simple parametric distributions to the sparse numerical data. We detail the fully automated extraction and generation of the numerical features in the experimental section.
3 KB LRN: Learning End-To-End Joint Representations for Knowledge Bases
With KB LRN we aim to provide a framework for endto-end learning of KB representations. Since we want
to combine different feature types (relational, latent
or learned, and numerical) we need to find a suitable
method for integrating the respective submodels, one per
feature type. We propose a product of experts (PoE)
approach where one expert is trained for each (relation
Figure 2: An illustration of an instance of KB LRN. For every relation, there is a separate expert for each of the different feature types. The entities t′ and t′′ are two of N randomly sampled entities. The scores of the various submodels are added and normalized with a softmax function. A categorical cross-entropy loss is applied to the normalized scores.
type, feature type) pair. We extend the product of experts
approach in two novel ways. First, we create dependencies between the experts by sharing the parameters of the
entity embedding model across relation types. By doing
this, we combine a probabilistic model with a model that
learns vector representations from discrete and numerical data. Second, while product of experts are commonly
trained with contrastive divergence [12], we train it with
negative sampling and a cross-entropy loss.
In general, a PoE's probability distribution is
\[
p(d \mid \theta_1, ..., \theta_n) = \frac{\prod_m f_m(d \mid \theta_m)}{\sum_c \prod_m f_m(c \mid \theta_m)},
\]
where d is a data vector in a discrete space, θm are the parameters of individual model m, fm(d | θm) is the value of d under model m, and the c's index all possible vectors in the data space. The PoE model is now trained to assign high probability to observed data vectors.

In the KB context, the data vector d is always a triple d = (h, r, t) and the objective is to learn a PoE that assigns high probability to true triples and low probabilities to triples assumed to be false. If (h, r, t) holds in the KB, the pair's vector representations are used as positive training examples. Let d = (h, r, t). We can now define one individual expert f(r,F)(d | θ(r,F)) for each (relation type r, feature type F) pair as follows:
\[
f_{(r,\mathrm{L})}(d \mid \theta_{(r,\mathrm{L})}) = \exp\big((\mathbf{e}_h * \mathbf{e}_t) \cdot \mathbf{w}^r\big), \quad
f_{(r,\mathrm{R})}(d \mid \theta_{(r,\mathrm{R})}) = \exp\big(\mathbf{r}_{(h,t)} \cdot \mathbf{w}^r_{\mathrm{rel}}\big), \quad
f_{(r,\mathrm{N})}(d \mid \theta_{(r,\mathrm{N})}) = \exp\big(\phi(\mathbf{n}_{(h,t)}) \cdot \mathbf{w}^r_{\mathrm{num}}\big), \tag{1}
\]
and f(r′,F)(d | θ(r′,F)) = 1 for all r′ ≠ r and F ∈ {L, R, N}, where ∗ is the element-wise product, · is the dot product, w^r, w^r_rel, w^r_num are the parameter vectors for the latent, relational, and numerical features corresponding to the relation r, and φ is the radial basis function (RBF) applied element-wise to n(h,t). Note that f(r,L)(d | θ(r,L)) is equivalent to the exponential of the DISTMULT [31] scoring function. With KB LRN we can use any of the existing KB embedding scoring functions.

The probability for triple d = (h, r, t) of the PoE model is now
\[
p(d \mid \theta_1, ..., \theta_n) = \frac{\prod_{F \in \{\mathrm{R},\mathrm{L},\mathrm{N}\}} f_{(r,F)}(d \mid \theta_{(r,F)})}{\sum_c \prod_{F \in \{\mathrm{R},\mathrm{L},\mathrm{N}\}} f_{(r,F)}(c \mid \theta_{(r,F)})},
\]
where c indexes all possible triples.

For numerical features, an activation function should fire when the difference of values is in a specific range. For example, we want the activation to be high when the difference of the birth years between a parent and its child is close to 32.4 years. Commonly used activation functions such as sigmoid or tanh are not suitable here, since they saturate whenever they exceed a certain threshold. For each relation r and the dn corresponding relevant numerical features, we therefore apply a radial basis function over the differences of values, φ(n(h,t)) = [φ(n(h,t)^(1)), . . . , φ(n(h,t)^(dn))], where
\[
\phi\big(n^{(i)}_{(h,t)}\big) = \exp\!\left(\frac{-\lVert n^{(i)}_{(h,t)} - c_i \rVert_2^2}{\sigma_i^2}\right).
\]
This results in the RBF kernel being activated whenever the difference of values is close to the expected value ci. We discuss and evaluate several alternative strategies for incorporating numerical features in the experimental section.
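To make the scoring concrete, the following is a minimal sketch written by us with NumPy (the paper does not prescribe an implementation); embeddings, feature vectors, and weights are random stand-ins.

# Unnormalized PoE score of a triple (h, r, t) from the three experts.
import numpy as np

rng = np.random.default_rng(0)
d, n_rel_feats, n_num_feats = 4, 3, 2

e_h, e_t = rng.normal(size=d), rng.normal(size=d)           # latent entity embeddings
w_r = rng.normal(size=d)                                     # DistMult-style relation vector
r_ht = rng.integers(0, 2, size=n_rel_feats).astype(float)    # binary relational path features
w_rel = rng.normal(size=n_rel_feats)
n_ht = rng.normal(size=n_num_feats)                          # differences of numerical features
c, sigma = np.zeros(n_num_feats), np.ones(n_num_feats)       # RBF centers and widths
w_num = rng.normal(size=n_num_feats)

def rbf(n_diff):
    # phi(n_(h,t)): element-wise radial basis activation
    return np.exp(-((n_diff - c) ** 2) / sigma ** 2)

def log_poe_score():
    # log of the product of the latent, relational, and numerical experts
    f_latent = (e_h * e_t) @ w_r          # DistMult scoring function
    f_relational = r_ht @ w_rel
    f_numerical = rbf(n_ht) @ w_num
    return f_latent + f_relational + f_numerical

print(log_poe_score())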
3.1 Learning
Product of experts are usually trained with contrastive divergence (CD) [12]. Contrastive divergence relies on an
approximation of the gradient of the log-likelihood using
a short Markov chain started at the current seen example.
The advantage of CD is that the partition function, that is,
the denominator of the probability p, which is intractable
to compute, does not need to be approximated. Due to
the parameterization of the PoE we have defined here,
however, it is not possible to perform CD since there is
no way to sample a hidden state given a triple d. Hence,
instead of using CD, we approximate the partition function by performing negative sampling.
The logarithmic loss for the given training triples T is defined as
\[
\mathcal{L} = - \sum_{t \in T} \log p(t \mid \theta_1, ..., \theta_n).
\]
To fit the PoE to the training triples, we follow the derivative of the log-likelihood of each observed triple d ∈ T under the PoE:
\[
\frac{\partial \log p(d \mid \theta_1, ..., \theta_n)}{\partial \theta_m} = \frac{\partial \log f_m(d \mid \theta_m)}{\partial \theta_m} - \frac{\partial \log \sum_c \prod_m f_m(c \mid \theta_m)}{\partial \theta_m}.
\]
Now, to approximate the intractable second term of the right-hand side of the above equation, we generate for each triple d = (h, r, t) a set E consisting of N triples (h, r, t′) by sampling exactly N entities t′ uniformly at random from the set of all entities. The term
\[
\frac{\partial \log \sum_c \prod_m f_m(c \mid \theta_m)}{\partial \theta_m}
\]
is then approximated by the term
\[
\frac{\partial \log \sum_{c \in E} \prod_m f_m(c \mid \theta_m)}{\partial \theta_m}.
\]
Analogously for the head of the triple. This is often referred to as negative sampling. Figure 2 illustrates the
KB LRN framework.
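A minimal sketch (ours, not the reference implementation) of this negative-sampling approximation: the softmax over the true tail and N sampled corruptions yields the categorical cross-entropy that is minimized.

# Negative-sampling loss for one training triple (h, r, t): the partition
# function is approximated by N randomly sampled tail corruptions.
import numpy as np

rng = np.random.default_rng(1)

def nll_with_negative_sampling(log_score, h, r, t, all_entities, n_neg=500):
    # log_score(h, r, t) is the log of the unnormalized PoE score of a triple
    negatives = rng.choice(all_entities, size=n_neg, replace=True)
    scores = np.array([log_score(h, r, t)] + [log_score(h, r, t_neg) for t_neg in negatives])
    log_partition = np.logaddexp.reduce(scores)   # log-sum-exp over {t} and the sampled set E
    return -(scores[0] - log_partition)           # categorical cross-entropy for the true tail

# toy usage with a made-up scoring function
entities = np.arange(100)
toy_score = lambda h, r, t: -abs(int(t) - int(h)) / 10.0
print(nll_with_negative_sampling(toy_score, 3, 0, 5, entities, n_neg=20))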
4 Related Work
A combination of latent and relational features has been
explored by Toutanova et al. [27, 28]. There, a weighted
combination of two independently learned models, a
latent feature model [31] and a model fed with a binary vector reflecting the presence of paths of length
one between the head and tail, is proposed. These
simple relational features aim at capturing association
strengths between pairs of relationships (e.g. contains
and contained by). [22] proposed a method that learns
implicit associations between pairs of relations in addition to a latent feature model in the context of relation extraction. [7] modifies the path ranking algorithm (PRA) [14] to incorporate latent representations
into models based on random walks in KBs. [6] extracted relational features other than paths to better capture entity type information. There are a number of recent approaches that combine relational and latent representations by incorporating known logical rules into the
embedding learning formulation [23, 9, 16]. Despite its
simplicity, KB LRN’s combination of relational and latent representations significantly outperforms all these
approaches.
There exists additional work on combining various types
of KB features. [17] proposed a modification of the
well-known tensor factorization method RESCAL [19],
called ARE, which adds a learnable matrix that weighs
a set of metrics (e.g. Common Neighbors) between pairs
of entities; [5] proposed a combination of latent features,
aiming to take advantage of the different interaction patterns between the elements of a triple.
KB LRN differs from these approaches in that (i) we incorporate numerical features for KB completion, and (ii) we
propose a unifying end-to-end learning framework that
integrates arbitrary relational, latent, and numerical features.
5 Experiments
We conducted experiments on six different knowledge
base completion data sets. Primarily, we wanted to understand for what type of relations numerical features are
helpful and what input representation of numerical features achieves the best results. An additional objective
was the comparison to state of the art methods.
5.1 Datasets
We conducted experiments on six different data sets:
FB15k, FB15k-237, FB15k-num, FB15k-237-num,
WN18, and FB122. FB15k [2] and Wordnet (WN) [1]
are knowledge base completion data sets commonly used
in the literature. The FB15k data set is a representative
subset of the Freebase knowledge base. WN18 represents lexical relations between word senses. The two
data sets are being increasingly criticized for the frequent
occurrence of reverse relations causing simple relational
baselines to outperform most embedding-based methods [27]. For these reasons, we also conducted experiments with FB15k-237, a variant of FB15k where reverse
relations have been removed [27]. FB122 is a subset of
FB15k focusing on relations pertaining to the topics of
“people”, “location”, and “sports.” In previous work, a
set of 47 logical rules was created for FB122 and subsequently used in experiments for methods that take logical
rules into account [9, 16].
Data set             | FB15k   | FB15k-num | FB15k-237 | FB15k-237-num | WN18    | FB122
Entities             | 14,951  | 14,951    | 14,541    | 14,541        | 40,943  | 9,738
Relation types       | 1,345   | 1,345     | 237       | 237           | 18      | 122
Training triples     | 483,142 | 483,142   | 272,115   | 272,115       | 141,442 | 91,638
Validation triples   | 50,000  | 5,156     | 17,535    | 1,058         | 5,000   | 9,595
Test triples         | 59,071  | 6,012     | 20,466    | 1,215         | 5,000   | 11,243
Relational features  | 90,318  | 90,318    | 7,834     | 7,834         | 14      | 47

Table 1: Statistics of the data sets.
n(h,t) | MR  | MRR  | Hits@1 | Hits@10
sign   | 231 | 29.7 | 20.1   | 50.1
RBF    | 121 | 31.4 | 21.2   | 52.3

Table 2: KB LRN for two possible input representations of numerical features for FB15k-237-num.
The main objective of this paper is to investigate the impact of incorporating numerical features. Hence, we created two additional data sets by removing those triples from FB15k's and FB15k-237's validation and test sets where numerical features are never used for the triples' relation type. The remaining test and validation triples therefore lead to completion queries where the numerical features under consideration are potentially used. We refer to these new data sets as FB15k-num and FB15k-237-num. A similar methodology can be followed to evaluate the performance on a different set of numerical features.
We extracted numerical data from the 1.9 billion triple
Freebase RDF dump by mining triples that associate entities to literals of some numerical type. For example,
the relation /location/geocode/latitude maps
entities to their latitude. We performed the extraction for
all entities in FB15k but only kept a numerical feature
if at least 5 entities had values for it. This resulted in
116 different numerical features and 12,826 entities for
which at least one of the numerical features had a value.
On average each entity had 2.3 numerical features with
a value. The numerical features will be published. Since
numerical data is not available for Wordnet, we do not
perform experiments with numerical features for variants
of this KB.
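The filtering step described above can be sketched as follows (our illustration; the entity and feature names are made up).

# Keep only numerical features that have values for at least 5 entities.
from collections import defaultdict

# numeric_triples: (entity, numerical_relation, value), e.g. mined from the RDF dump
numeric_triples = [("Japan", "/location/geocode/latitude", 36.21),
                   ("Tokyo", "/location/geocode/latitude", 35.65)]

values_per_feature = defaultdict(dict)
for entity, feature, value in numeric_triples:
    values_per_feature[feature][entity] = value

MIN_ENTITIES = 5
kept = {f: vals for f, vals in values_per_feature.items() if len(vals) >= MIN_ENTITIES}
print(len(kept), "numerical features kept")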
Each data set contains a set of triples that are known to
be true (usually referred to as positive triples). Statistics of the data sets are provided in Table 1. Since the
identifiers for entities and relations have been changed in
FB13 [26], we could not extract numerical features for
the data set and excluded it from the experiments.
5.2 General Set-up
We evaluated the different methods by their ability to
answer completion queries of the form (h, r, ?) and
(?, r, t). For queries of the form (h, r, ?), we replaced
the tail by each of the KB’s entities in turn, sorted the
triples based on the scores or probabilities returned by
the different methods, and computed the rank of the correct entity. We repeated the same process for the queries
of type (?, r, t). We follow the filtered setting described in [2], which removes correct triples that are different from the target triple from the ranked list. The mean of all computed ranks is the Mean Rank (lower is better) and the fraction of correct entities ranked in the top n is called hits@n (higher is better). We also compute the Mean Reciprocal Rank (higher is better), which is an evaluation measure for rankings that is less susceptible to outliers.
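The protocol can be sketched as follows (our illustration, not the evaluation code used in the paper): for a query (h, r, ?), all entities are scored, known true tails other than the target are filtered out, and the rank of the correct tail yields MR, MRR, and hits@n.

# Filtered ranking for one completion query (h, r, ?).
def filtered_rank(score, h, r, t_true, entities, known_triples):
    scores = {t: score(h, r, t) for t in entities
              # filtered setting: drop other correct tails, keep the target
              if t == t_true or (h, r, t) not in known_triples}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked.index(t_true) + 1

# toy usage
entities = list(range(5))
known = {(0, "r", 1), (0, "r", 2)}
toy_score = lambda h, r, t: -t          # prefers small entity ids
rank = filtered_rank(toy_score, 0, "r", 2, entities, known)
print("rank:", rank, "reciprocal rank:", 1.0 / rank, "hit@1:", rank <= 1)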
We conduct experiments with the scoring function of DISTMULT [31], which is an application of parallel factor analysis to multi-relational graphs. For a review on
parallel factor analysis we refer the reader to [11]. We
validated the embedding size of KB LRN from the values
{100, 200} for all experiments. These values are used in
most of the literature on KB embedding methods. For all
other embedding methods, we report the original results
from the literature or run the authors’ original implementation. For FB15k and FB15k-237, the results for DistMult, Complex, and R-GCN+ are taken from [25]; results for the relational baseline Node+LinkFeat are taken
from [27]; results for ConvE are taken from [3] and results for TransE were obtained by running the authors’
implementation. For WN18-rules and FB122-all, the results for TransE, TransH, TransR, and KALE are taken
from [9], and results for ComplEx and ASR-ComplEx
are taken from [16]. All methods were tuned for each of
the respective data sets.
For KB LRN we used ADAM [13] for parameter learning
in a mini-batch setting with a learning rate of 0.001, the
categorical cross-entropy as loss function and the number
of epochs was set to 100. We validated every 5 epochs
and stopped learning whenever the MRR (Mean Reciprocal Rank) values on the validation set decreased. The
batch size was set to 512 and the number N of negative samples to 500 for all experiments. We use the abbreviations KBsuffix to refer to the different instances of
KB LRN. suffix is a combination of the letters L (Latent),
R (Relational), and N (Numerical) to indicate the inclusion
of each of the three feature types.
                   FB15k                                 FB15k-237
Method             MR    MRR   Hits@1  Hits@10     |     MR    MRR   Hits@1  Hits@10
TransE             51    44.3  25.1    76.3        |     214   25.1  14.5    46.3
DistMult           -     65.4  54.6    82.4        |     -     19.1  10.6    37.6
ComplEx            -     69.2  59.9    84.0        |     -     20.1  11.2    38.8
Node+LinkFeat      -     82.2  -       87.0        |     -     23.7  -       36.0
R-GCN+             -     69.6  60.1    84.2        |     -     24.8  15.3    41.7
ConvE              64    74.5  67.0    87.3        |     330   30.1  22.0    45.8
without numerical features
KB L               69    77.4  71.2    87.6        |     231   30.1  21.4    47.5
KB R               628   78.7  75.6    84.3        |     2518  18.0  12.8    28.5
KB LR              45    79.0  74.2    87.3        |     231   30.6  22.0    48.2
with numerical features
KB LN              66    78.3  72.6    87.8        |     229   30.4  22.0    47.0
KB RN              598   78.7  75.6    84.2        |     3303  18.2  13.0    28.7
KB LRN             44    79.4  74.8    87.5        |     209   30.9  21.9    49.3

Table 3: Results (filtered setting) for KB LRN and state of the art approaches.
                   FB15k-num                             FB15k-237-num
Method             MR    MRR   Hits@1  Hits@10     |     MR    MRR   Hits@1  Hits@10
TransE             25    34.7  5.5     79.9        |     158   21.8  10.41   45.6
DistMult           39    72.6  62.1    89.7        |     195   26.4  16.4    47.3
without numerical features
KB L               39    72.6  62.1    89.7        |     195   26.4  16.4    47.3
KB R               399   84.7  81.6    90.1        |     3595  23.6  17.8    36.1
KB LR              28    85.3  80.3    92.4        |     232   29.3  19.7    49.2
with numerical features
KB LN              32    73.6  63.0    90.7        |     122   28.6  17.9    51.6
KB RN              68    84.0  80.6    90.0        |     600   26.1  19.3    39.7
KB LRN             25    85.9  81.0    92.9        |     121   31.4  21.2    52.3

Table 4: Results (filtered) on the data sets where the test and validation sets are comprised of those triples whose type could potentially benefit from numerical features.
                           WN18+rules [9]                              FB122-all [9]
                 MR    MRR   Hits@3  Hits@5  Hits@10       MR    MRR   Hits@3  Hits@5  Hits@10
TransE            –    45.3   79.1    89.1    93.6          –    48.0   58.9    64.2    70.2
TransH            –    56.0   80.0    86.1    90.0          –    46.0   53.7    59.1    66.0
TransR            –    51.4   69.7    77.5    84.3          –    40.1   46.4    52.4    59.3
KALE-Pre          –    53.2   86.4    91.9    94.4          –    52.3   61.7    66.2    71.8
KALE-Joint        –    66.2   85.5    90.1    93.0          –    52.3   61.2    66.4    72.8
ComplEx           –    94.2   94.7    95.0    95.1          –    64.1   67.3    69.5    71.9
ASR-ComplEx       –    94.2   94.7    95.0    95.1          –    69.8   71.7    73.6    75.7
KBL              537   80.8   92.5    93.7    94.7         117   69.5   74.6    77.2    80.0
KBR             7113   72.0   72.1    72.1    72.1        2018   54.7   54.7    54.7    54.7
KBLR             588   93.6   94.5    94.8    95.1         113   70.2   74.0    77.0    79.7

Table 5: Results (filtered setting) for KB completion benchmarks where logical rules are provided.
5.3
Automated Generation of Relational and Numerical Features
For the data sets FB15k, FB15k-237, and their numerical versions, we used all relational paths of length one
and two found in the training data as relational features.
These correspond to the formula types (h, r, t) (1-hop)
and ∃x (h, r1, x) ∧ (x, r2, t) (2-hops). We computed these relational paths with AMIE+ [4], a highly efficient system for mining logical rules from knowledge bases. We used the standard settings of AMIE+ with the exception that the minimal head support was set to 1. With these settings, AMIE+ returns Horn rules of the form body ⇒ (x, r, y) that are present for at least 1% of the triples of the form (x, r, y). For each relation r, we used the body of those rules where r occurs in the head as r's relational path features. For instance, given a rule
such as (x, r1, z), (z, r2, y) ⇒ (x, r, y), we introduce the relational feature ∃x (h, r1, x) ∧ (x, r2, t) for the relation r. For the data sets WN18 and FB122, we used the set of logical formulas previously used in the literature [9]. Using the same set of relational features allows us to compare KBLRN with existing approaches that incorporate logical rules into the embedding learning objective [9, 16].

                    KBLR              KBLRN
Relation        MRR    H@10       MRR    H@10
capital of       5.7    13.6       14.6   18.2
spouse of        4.4     0.0        7.9    0.0
influenced by    7.3    20.9        9.9   26.8

Table 6: MRR and hits@10 results (filtered) for KBLRN with and without numerical features in FB15k-237. Results improve for relations where the difference of the relevant features is approximately normal (see Figure 3).
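To make the relational path features above concrete, the following sketch shows how the binary activation of a 1-hop or 2-hop feature could be evaluated against the training triples; the indexing scheme and function names are illustrative assumptions, not the authors' implementation (AMIE+ is only used to mine the rule bodies).

    from collections import defaultdict

    def build_index(triples):
        # triples: iterable of (h, r, t) identifiers from the training graph
        by_head_rel = defaultdict(set)
        for h, r, t in triples:
            by_head_rel[(h, r)].add(t)
        return by_head_rel

    def one_hop_active(index, h, r1, t):
        # feature (h, r1, t)
        return t in index.get((h, r1), ())

    def two_hop_active(index, h, r1, r2, t):
        # feature  exists x : (h, r1, x) and (x, r2, t)
        return any(t in index.get((x, r2), ()) for x in index.get((h, r1), ()))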
For each relation r we only included a numerical feature if, in at least τ = 90% of training triples, both the
head and the tail had a value for it. This increases the
likelihood that the feature is usable during test time. For
τ = 90% there were 105 relations in FB15k for which at
least one numerical feature was included during learning,
and 33 relations in FB15k-237. With the exception of the
RBF parameters, all network weights are initialized following [8]. The parameters of KBLRN's RBF kernels are initialized and fixed to

    c_i = (1/|T|) Σ_{(h,r,t)∈T} n^{(i)}_{(h,t)}   and   σ_i = sqrt( (1/|T|) Σ_{(h,r,t)∈T} ( n^{(i)}_{(h,t)} − c_i )^2 ),

where T is the set of training triples (h, r, t) for the relation r for which both n^{(i)}_h and n^{(i)}_t have a value.
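A minimal sketch of this initialization, together with one possible RBF activation, is given below; the exact functional form of the kernel (here a Gaussian with variance σ_i^2) is an assumption, only the moment-based choice of c_i and σ_i follows directly from the text.

    import numpy as np

    def rbf_parameters(diffs):
        # diffs: array of n_(h,t)^(i) = n_h^(i) - n_t^(i) over the training triples
        #        of relation r for which both head and tail have feature i.
        c = diffs.mean()
        sigma = np.sqrt(((diffs - c) ** 2).mean())
        return c, sigma

    def rbf_activation(x, c, sigma, eps=1e-12):
        # Assumed Gaussian kernel; highest activation in a range around c.
        return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2 + eps))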
5.4
Representations of Numerical Features
We experimented with different strategies for incorporating raw numerical features. For the difference of feature values, the simplest method is the application of the sign function: for a numerical attribute i, the activation is either 1 or −1 depending on whether the difference n^{(i)}_h − n^{(i)}_t is positive or negative. For a more nuanced representation of differences of numerical features, a layer of RBF kernels is a suitable choice, since the activation is here highest in a particular range of input values. The RBF kernel might not be appropriate, however, in cases where the underlying distribution is not normal.
To evaluate different input representations, we conducted
experiments with KB LRN on the FB15k-237-num data
set. Table 2 depicts the KB completion performance of
two representation strategies for the difference of head
and tail values. Each row corresponds to one evaluated
strategy. “sign” stands for applying the sign function to
the difference of numerical feature values. RBF stands for using an RBF kernel layer for the differences of numerical feature values. All results are for the FB15k-237-num test triples.

[Figure 3: Histograms and fitted RBFs for three representative relations and numerical features: capital_of (difference of latitude), spouse (difference of birth year), and influenced_by (difference of birth year).]
The RBF kernels outperform the sign functions significantly. This indicates that the difference of feature values is often distributed normally and that having a region
of activation is beneficial. Given these results, we use
the RBF input layer for n(h,t) for the remainder of the
experiments.
5.5
Comparison to State of the Art
For the standard benchmark data sets FB15k and FB15k-237, we compare KBLRN with TransE, DistMult, ComplEx [29], R-GCN+ [25], and ConvE [3]. Table 3 lists the KB completion results. KBLRN is competitive with state-of-the-art knowledge base completion methods on FB15k and significantly outperforms all other methods on the more challenging FB15k-237 data set. Since the fraction of triples that can potentially benefit from numerical features is very small for FB15k, the inclusion of numerical features is only slightly beneficial. For FB15k-237, however, the numerical features significantly improve the results.
For the numerical versions of FB15k and FB15k-237, we compared KBLRN to TransE and DistMult. Table 4 lists the results for the KB completion task on these data sets. KBLRN significantly outperforms the KB embedding approaches. The positive impact of including the numerical features is significant.
For the data sets WN18-rules and FB122-all we compared KBLRN to the KB embedding methods TransE, TransR [15], TransH [30], and ComplEx [29], as well as to state-of-the-art approaches for incorporating logical rules into the learning process. The experimental set-up is consistent with that of previous work. Table 5 lists the results for the KB completion task on these data sets. KBLRN
combining relational and latent representations significantly outperforms existing approaches on FB122 with exactly the same set of rules. This provides evidence that KBLRN's strategy to combine latent and relational features is effective despite its simplicity relative to existing approaches. For WN18+rules, KBLRN is competitive with ComplEx, the best performing method on this data set. In addition, KBLRN's performance improves significantly when relational and latent representations are combined. In contrast, ASR-ComplEx is not able to improve the results of ComplEx, its underlying latent representation.

                 MR                MRR
           ONE      MANY      ONE      MANY
KBLR        42       201      74.9      21.0
KBLRN       60       135      74.2      22.8

Table 7: MR and MRR results (filtered) on FB15k-237-num based on the cardinality of the test queries.
5.6
The Impact of Numerical Features
The integration of numerical features improves KBLRN's performance significantly. We performed several additional experiments so as to gain a deeper understanding of the impact numerical features have. Table 6 compares KBLRN's performance with and without integrating numerical features on three relations. The performance of the model with numerical features is clearly superior for all three relations (capital of, spouse and influenced by). Figure 3 shows the normalized histograms of the values n(h,t) for these relations. We observe that the differences of feature values are approximately normal.
Following previous work [2], we have classified each test query of FB15k-237-num as either ONE or MANY, depending on whether one or many entities can complete that query. For the queries labeled ONE, the model with numerical features performs slightly worse than the model without them (see Table 7), whereas for the queries labeled MANY, KBLRN significantly outperforms KBLR in both MR and MRR.
A well-known finding is the lack of completeness of
FB15k and FB15k-237. This results in numerous cases
where the correct entity for a completion query is not
contained in the ground truth (neither in the training, nor
test, nor validation data set). This is especially problematic for queries where a large number of entities are correct completions. To investigate the actual benefits of
the numerical features we carried out the following experiment: We manually determined all correct completions for the query (USA, /location/contains, ?).
We ended up with 1619 entities that correctly complete the query. FB15k-237 contains only 954 of these correct completions. With a complete ground truth, we can now use the precision-recall area under the curve (PR-AUC) metric to evaluate KB completion methods [19, 17, 5]. A high PR-AUC represents both high recall and high precision. Table 8 lists the results for the different methods. KBLRN with numerical features consistently and significantly outperformed all other approaches.

           AUC-PR     MR     MRR
TransE      0.837     231    26.5
KBLR        0.913      94    66.8
KBLRN       0.958      43    70.8

Table 8: AUC-PR, MR, and MRR results for the completion query (USA, /location/contains, ?).
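Given a manually completed ground truth, the PR-AUC can be computed as sketched below; the helper name is an assumption, and scikit-learn is used here only as one convenient choice.

    import numpy as np
    from sklearn.metrics import precision_recall_curve, auc

    def pr_auc(scores, labels):
        # scores: model scores for every candidate completion of the query
        # labels: 1 if the candidate is a correct completion (curated list), else 0
        precision, recall, _ = precision_recall_curve(labels, scores)
        return auc(recall, precision)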
6
Conclusion
We introduced KBLRN, a class of machine learning models that, to the best of our knowledge, is the first proposal aiming at integrating relational, latent, and continuous features of KBs into a single framework. KBLRN outperforms state-of-the-art KB completion methods on a range of data sets. We show that the inclusion of numerical features is beneficial for KB completion tasks.
References
[1] A. Bordes, X. Glorot, J. Weston, and Y. Bengio.
A semantic matching energy function for learning with multi-relational data. Machine Learning,
94(2):233–259, 2014.
[2] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko. Translating embeddings
for modeling multi-relational data. In Advances
in Neural Information Processing Systems, pages
2787–2795, 2013.
[3] T. Dettmers, P. Minervini, P. Stenetorp, and
S. Riedel. Convolutional 2d knowledge graph embeddings. arXiv preprint arXiv:1707.01476, 2017.
[4] L. Galárraga, C. Teflioudi, K. Hose, and F. M.
Suchanek. Fast rule mining in ontological knowledge bases with amie+. The VLDB Journal,
24(6):707–730, 2015.
[5] A. Garcia-Duran, A. Bordes, N. Usunier, and
Y. Grandvalet. Combining two and three-way embeddings models for link prediction in knowledge
bases. arXiv preprint arXiv:1506.00999, 2015.
[6] M. Gardner and T. M. Mitchell. Efficient and
expressive knowledge base completion using subgraph feature extraction. In EMNLP, pages 1488–
1498, 2015.
[7] M. Gardner, P. P. Talukdar, J. Krishnamurthy, and
T. Mitchell. Incorporating vector space similarity
in random walk inference over knowledge bases. In
EMNLP, 2014.
[8] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks.
In AISTATS, volume 9, pages 249–256, 2010.
[9] S. Guo, Q. Wang, L. Wang, B. Wang, and L. Guo.
Jointly embedding knowledge graphs and logical
rules. In EMNLP, pages 192–202, 2016.
[10] K. Guu, J. Miller, and P. Liang. Traversing knowledge graphs in vector space. In EMNLP, 2015.
[11] R. A. Harshman and M. E. Lundy. Parafac: Parallel
factor analysis. Computational Statistics & Data
Analysis, 18(1):39–72, 1994.
[12] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800, 2002.
[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980,
2014.
[14] N. Lao, T. Mitchell, and W. W. Cohen. Random
walk inference and learning in a large scale knowledge base. In EMNLP, pages 529–539, 2011.
[15] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu. Learning entity and relation embeddings for knowledge
graph completion. In AAAI, pages 2181–2187,
2015.
[16] P. Minervini, T. Demeester, T. Rocktäschel, and
S. Riedel. Adversarial sets for regularised neural
link predictors. In UAI, 2017.
[17] M. Nickel, X. Jiang, and V. Tresp. Reducing the
rank in relational factorization models by including observable patterns. In NIPS, pages 1179–1187,
2014.
[18] M. Nickel, K. Murphy, V. Tresp, and
E. Gabrilovich.
A review of relational machine learning for knowledge graphs. Proceedings
of the IEEE, 104(1):11–33, 2016.
[19] M. Nickel, V. Tresp, and H.-P. Kriegel. A three-way
model for collective learning on multi-relational
data. In ICML, pages 809–816, 2011.
[20] M. Niepert. Discriminative gaifman models. In
NIPS, pages 3405–3413, 2016.
[21] L. D. Raedt, K. Kersting, S. Natarajan, and
D. Poole. Statistical relational artificial intelligence: Logic, probability, and computation. Synthesis Lectures on Artificial Intelligence and Machine Learning, 10(2):1–189, 2016.
[22] S. Riedel, L. Yao, A. McCallum, and B. M. Marlin. Relation extraction with matrix factorization
and universal schemas. 2013.
[23] T. Rocktäschel, S. Singh, and S. Riedel. Injecting
logical background knowledge into embeddings for
relation extraction. In HLT-NAACL, pages 1119–
1129, 2015.
[24] S. Russell and P. Norvig. Artificial Intelligence: A
Modern Approach. Prentice Hall Press, 3rd edition,
2009.
[25] M. Schlichtkrull, T. Kipf, P. Bloem, R. v. d. Berg,
I. Titov, and M. Welling. Modeling relational data
with graph convolutional networks. arXiv preprint
arXiv:1703.06103, 2017.
[26] R. Socher, D. Chen, C. D. Manning, and A. Y. Ng.
Reasoning with neural tensor networks for knowledge base completion. In NIPS, pages 926–934,
2013.
[27] K. Toutanova and D. Chen. Observed versus latent features for knowledge base and text inference.
In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, 2015.
[28] K. Toutanova, D. Chen, P. Pantel, H. Poon,
P. Choudhury, and M. Gamon. Representing text
for joint embedding of text and knowledge bases.
In EMNLP, pages 1499–1509, 2015.
[29] T. Trouillon, J. Welbl, S. Riedel, E. Gaussier, and
G. Bouchard. Complex embeddings for simple link
prediction. In ICML, pages 2071–2080, 2016.
[30] Z. Wang, J. Zhang, J. Feng, and Z. Chen. Knowledge graph embedding by translating on hyperplanes. In AAAI, pages 1112–1119, 2014.
[31] B. Yang, W.-t. Yih, X. He, J. Gao, and
L. Deng. Learning multi-relational semantics using neural-embedding models. arXiv preprint
arXiv:1411.4072, 2014.
arXiv:1803.01665v1 [] 5 Mar 2018
Expected length of post-model-selection
confidence intervals conditional on
polyhedral constraints
Danijel Kivaranovic
Department of Statistics and Operations Research, University of Vienna,
and
Hannes Leeb
Department of Statistics and Operations Research, University of Vienna
March 6, 2018
Abstract
Valid inference after model selection is currently a very active area of research.
The polyhedral method, pioneered by Lee et al. (2016), allows for valid inference
after model selection if the model selection event can be described by polyhedral
constraints. In that reference, the method is exemplified by constructing two valid
confidence intervals when the Lasso estimator is used to select a model. We here
study the expected length of these intervals. For one of these confidence intervals, which is easier to compute, we find that its expected length is always infinite. For
the other of these confidence intervals, whose computation is more demanding, we
give a necessary and sufficient condition for its expected length to be infinite. In
simulations, we find that this condition is typically satisfied.
Keywords: Lasso, inference after model selection, hypothesis test.
1
Introduction
Lee et al. (2016) recently introduced a new technique for valid inference after model selection, the so-called polyhedral method. Using this method, and using the Lasso for model
selection in linear regression, Lee et al. (2016) derived two new confidence sets that are
valid conditional on the outcome of the model selection step. More precisely, let m̂ denote the model containing those regressors that correspond to non-zero coefficients of the
Lasso estimator, and let ŝ denote the sign-vector of those non-zero Lasso coefficients. Then
Lee et al. (2016) constructed confidence intervals [Lm̂,ŝ , Um̂,ŝ ] and [Lm̂ , Um̂ ] whose coverage
probability is 1 − α, conditional on the events {m̂ = m, ŝ = s} and {m̂ = m}, respectively
(provided that the probability of the conditioning event is positive). The computational
effort in constructing these intervals is considerably lighter for [Lm̂,ŝ , Um̂,ŝ ]. In simulations,
Lee et al. (2016) noted that this latter interval can be quite long in some cases; cf. Figure 10
in that reference. We here analyze the (conditional) expected length of these intervals.
1.1
Overview of findings
Throughout, we use the same setting and assumptions as Lee et al. (2016). In particular,
we assume that the response vector is distributed as N (µ, σ 2 In ) with unknown mean µ ∈ Rn
and known variance σ 2 > 0 (our results carry over to the unknown-variance case; see the
end of Section 3), and that the non-stochastic regressor matrix has columns in general
position. Write Pµ,σ2 and Eµ,σ2 for the probability measure and the expectation operator,
respectively, corresponding to N (µ, σ 2 In ).
For the interval [Lm̂,ŝ , Um̂,ŝ ], we find the following: Fix a non-empty model m, a signvector s, as well as µ ∈ Rn and σ 2 > 0. If Pµ,σ2 (m̂ = m, ŝ = s) > 0, then
    E_{µ,σ^2}[U_{m̂,ŝ} − L_{m̂,ŝ} | m̂ = m, ŝ = s]  =  ∞.     (1)
Obviously, this statement continues to hold if the event m̂ = m, ŝ = s is replaced by the
larger event m̂ = m throughout (because there are only finitely many possible values for
the sign-vector s). And this statement continues to hold if the condition Pµ,σ2 (m̂ = m, ŝ =
s) > 0 is dropped and the conditional expectation in (1) is replaced by the unconditional
one.
For the interval [Lm̂ , Um̂ ], we derive a necessary and sufficient condition for its expected
length to be infinite, conditional on the event m̂ = m. That condition depends on the
regressor matrix, on the model m and also on a linear contrast that defines the quantity
of interest, and is daunting to verify in all but the most basic examples. We also provide
a sufficient condition for infinite expected length that is easy to verify. In simulations, we
find that this sufficient condition for infinite expected length is typically satisfied when the
model m excludes a significant portion of all the available regressors (e.g., if the selected
model is ‘sparse’). And even if the model m is not sparse, we find that this condition is still
satisfied for a sizable fraction of the linear contrasts that define the quantity of interest.
See Table 1 and the attending discussion for more detail.
The methods developed in this paper can also be used if the Lasso, as the model
selector, is replaced by any other procedure that allows for application of the polyhedral
method. In particular, we see that confidence intervals based on the polyhedral method
in Gaussian regression can have infinite expected length. Our findings suggest that the
expected length of confidence intervals based on the polyhedral method should be closely
scrutinized, in Gaussian regression but also in non-Gaussian settings and in other variations
of the polyhedral method.
The rest of the paper is organized as follows: We conclude this section by discussing a
number of related results that put our findings in context. Section 2 describes the confidence
intervals of Lee et al. (2016) in detail and introduces some notation. Section 3 contains
two core results, Propositions 1 and 2, which entail our main findings, the simulation study
mentioned earlier, as well as a discussion of the unknown variance case. The appendix
contains the proofs of our core results and some auxiliary lemmas.
1.2
Context and related results
There are currently several exciting ongoing developments based on the polyhedral method,
not least because it proved to be applicable to more complicated settings, and there are
several generalization of this framework. See, among others, Tibshirani et al. (2016), Taylor
& Tibshirani (2017), Tian & Taylor (2015). Certain optimality results of the method of
Lee et al. (2016) are given in Fithian et al. (2017). Using a different approach, Berk et al.
(2013) proposed the so-called PoSI-intervals which are unconditionally valid. A benefit of
the PoSI-intervals is that they are valid after selection with any possible model selector,
instead of a particular one like the Lasso; however, as a consequence, the PoSI-intervals are
typically very conservative (that is, the actual coverage probability is above the nominal
level). Nonetheless, Bachoc et al. (2016) showed in a Monte Carlo simulation that, in
certain scenarios, the PoSI-intervals can be shorter than the intervals of Lee et al. (2016).
The results of the present paper are based on the first author’s master’s thesis.
It is important to note that all confidence sets discussed so far are non-standard, in
the sense that the parameter to be covered is not the true parameter in an underlying correct model (or components thereof), but instead is a model-dependent quantity of interest.
(See Section 2 for details and the references in the preceding paragraph for more extensive
discussions.) An advantage of this non-standard approach is that it does not rely on the
assumption that any of the candidate models is correct. Valid inference for an underlying
true parameter is a more challenging task, as demonstrated by the impossibility results in
Leeb & Pötscher (2006a,b, 2008). There are several proposals of valid confidence intervals
after model selection (in the sense that the actual coverage probability of the true parameter is at or above the nominal level) but these are rather large compared to the standard
confidence intervals from the full model (supposing that one can fit the full model); see
Pötscher (2009), Pötscher & Schneider (2010), Schneider (2016). In fact, Leeb & Kabaila
(2017) showed that the usual confidence interval obtained by fitting the full model is admissible also in the unknown variance case; therefore, one cannot obtain uniformly smaller
valid confidence sets for a component of the true parameter by any other method.
2
Assumptions and confidence intervals
Let Y denote the N (µ, σ 2 In )-distributed response vector, n ≥ 1, where µ ∈ Rn is unknown
and σ 2 > 0 is known. Let X = (x1 , . . . , xp ), p ≥ 1, with xi ∈ Rn for each i = 1, . . . , p,
be the non-stochastic n × p regressor matrix. We assume that the columns of X are in
general position (this mild assumption is further discussed in the following paragraph).
The full model {1, . . . , p} is denoted by mF . All subsets of the full model are collected in
M, that is, M = {m : m ⊆ mF }. The cardinality of a model m is denoted by |m|. For
any m = {i1 , . . . , ik } ∈ M \ ∅ with i1 < · · · < ik , we set Xm = (xi1 , . . . , xik ). Analogously,
for any vector v ∈ Rp , we set vm = (vi1 , . . . , vik )0 . If m is the empty model, then Xm is to
be interpreted as the zero vector in Rn and vm as 0.
The Lasso estimator, denoted by β̂(y), is a minimizer of the least squares problem with
an additional penalty on the absolute size of the regression coefficients (Frank & Friedman
1993, Tibshirani 1996):
    min_{β ∈ R^p}  (1/2)‖y − Xβ‖_2^2 + λ‖β‖_1,        y ∈ R^n, λ > 0.
The Lasso has the property that some coefficients of β̂(y) are zero with positive probability.
A minimizer of the Lasso objective function always exists, but it is not necessarily unique.
Uniqueness of β̂(y) is guaranteed here by our assumption that the columns of X are in
general position (Tibshirani 2013). This assumption is relatively mild; e.g., if the entries
of the matrix X are drawn from a (joint) distribution that has a Lebesgue density, then
the columns of X are in general position with probability 1 (Tibshirani 2013). The model
m̂(y) selected by the Lasso and the sign-vector ŝ(y) of non-zero Lasso coefficients can now
formally be defined through

    m̂(y) = {j : β̂_j(y) ≠ 0}   and   ŝ(y) = sign(β̂_{m̂(y)}(y)).
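The following sketch shows one way to obtain m̂(y) and ŝ(y) in practice; it uses scikit-learn's Lasso as an assumed implementation (note that its objective is scaled by 1/(2n), so alpha = λ/n reproduces the penalty level λ used here), and exact-zero tests on floating-point coefficients are a simplification.

    import numpy as np
    from sklearn.linear_model import Lasso

    def lasso_model_and_signs(X, y, lam):
        n = X.shape[0]
        fit = Lasso(alpha=lam / n, fit_intercept=False).fit(X, y)
        beta = fit.coef_
        m_hat = np.flatnonzero(beta != 0)      # selected model  m̂(y)
        s_hat = np.sign(beta[m_hat])           # sign vector     ŝ(y)
        return m_hat, s_hat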
Recall that M denotes the set of all possible submodels and set Sm = {−1, 1}|m| for each
m ∈ M. For later use we also denote by M⁺ and S⁺_m the collection of non-empty models and the collection of corresponding sign-vectors, that occur with positive probability, i.e.,

    M⁺ = {m ∈ M \ ∅ : P_{µ,σ^2}(m̂(Y) = m) > 0},
    S⁺_m = {s ∈ S_m : P_{µ,σ^2}(m̂(Y) = m, ŝ(Y) = s) > 0}     (m ∈ M \ ∅).
These sets do not depend on µ and σ 2 as the measure Pµ,σ2 is equivalent to Lebesgue
measure with respect to null sets. Also, our assumption that the columns of X are in
general position guarantees that M⁺ only contains models m for which X_m has column rank |m| (Tibshirani 2013).
Inference is focused on a non-standard, model dependent, quantity of interest. Throughout the following, fix m ∈ M+ and let
    β^m = E_{µ,σ^2}[(X_m′X_m)^{-1}X_m′Y] = (X_m′X_m)^{-1}X_m′µ.
For γ^m ∈ R^{|m|} \ {0}, the goal is to construct a confidence interval for γ^{m′}β^m with conditional coverage probability 1 − α on the event {m̂ = m}. Clearly, the quantity of interest can also be written as γ^{m′}β^m = η^{m′}µ for η^m = X_m(X_m′X_m)^{-1}γ^m. For later use, write P_{η^m} for the orthogonal projection on the space spanned by η^m.
At the core of the polyhedral method lies the observation that the event where m̂ = m and where ŝ = s describes a convex polytope in sample space R^n (up to a Lebesgue null set): For each m ∈ M⁺ and each s ∈ S_m, we have

    {y : m̂(y) = m, ŝ(y) = s}  =  {y : A_{m,s} y < b_{m,s}}   a.s.,     (2)

cf. Theorem 3.3 in Lee et al. (2016) (explicit formulas for the matrix A_{m,s} and the vector b_{m,s} are also repeated in Appendix C in our notation). Fix z ∈ R^n orthogonal to η^m. Then the set of y satisfying (I_n − P_{η^m})y = z and A_{m,s} y < b_{m,s} is either empty or a line segment. In either case, that set can be written as {z + η^m w : V^-_{m,s}(z) < w < V^+_{m,s}(z)}. The endpoints satisfy −∞ ≤ V^-_{m,s}(z) ≤ V^+_{m,s}(z) ≤ ∞ (see Lemma 4.1 of Lee et al. 2016; formulas for these quantities are also given in Appendix C in our notation). Now decompose Y into the sum of two independent Gaussians P_{η^m}Y and (I_n − P_{η^m})Y, where the first one is a linear function of η^{m′}Y ∼ N(η^{m′}µ, σ^2 η^{m′}η^m). With this, the conditional distribution of η^{m′}Y, conditional on the event that m̂(Y) = m, ŝ(Y) = s and (I_n − P_{η^m})Y = z, is the conditional N(η^{m′}µ, σ^2 η^{m′}η^m)-distribution, conditional on the set (V^-_{m,s}(z), V^+_{m,s}(z)) (in the sense that the latter conditional distribution is a regular conditional distribution if one starts with the conditional distribution of η^{m′}Y given m̂ = m and ŝ = s – which is always well-defined – and if one then conditions on the random variable (I_n − P_{η^m})Y).
To use these observations for the construction of confidence sets, consider first the conditional distribution of a random variable W ∼ N(θ, ς^2) conditional on the event W ∈ T, where θ ∈ R, where ς^2 > 0, and where T ⊆ R is the union of finitely many non-empty open intervals. Write F^T_{θ,ς^2}(·) for the cumulative distribution function (c.d.f.) of W given W ∈ T. The corresponding law can be viewed as a 'truncated normal' distribution and will be denoted by TN(θ, ς^2, T) in the following. A confidence interval for θ with coverage probability 1 − α conditional on the event W ∈ T is obtained by the usual method of collecting all values θ_0 for which a hypothesis test of H_0 : θ = θ_0 against H_1 : θ ≠ θ_0 does not reject. In particular, for w ∈ R, define L(w) and U(w) through

    F^T_{L(w),ς^2}(w) = 1 − α/2   and   F^T_{U(w),ς^2}(w) = α/2

(these quantities are well-defined in view of Lemma A.2). With this, we have P(θ ∈ [L(W), U(W)] | W ∈ T) = 1 − α, irrespective of θ ∈ R.
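A minimal numerical sketch of this construction is given below; the truncated-normal c.d.f. follows the formula recalled in Appendix A, while the root-finding bracket of ±10ς around w is an assumption and can fail (brentq then raises an error) when w lies very close to the boundary of a bounded truncation set, which is exactly the regime in which the interval becomes extremely long (cf. Proposition 1).

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def F_T(w, theta, sigma, intervals):
        # c.d.f. at w of N(theta, sigma^2) truncated to T = union of open intervals.
        num, den = 0.0, 0.0
        for a, b in intervals:
            lo, hi = norm.cdf((a - theta) / sigma), norm.cdf((b - theta) / sigma)
            den += hi - lo
            num += max(0.0, min(norm.cdf((w - theta) / sigma), hi) - lo)
        return num / den

    def conditional_ci(w, sigma, intervals, alpha=0.05, search=10.0):
        # Solve F_T(w; L) = 1 - alpha/2 and F_T(w; U) = alpha/2 in theta.
        lo, hi = w - search * sigma, w + search * sigma
        L = brentq(lambda t: F_T(w, t, sigma, intervals) - (1 - alpha / 2), lo, hi)
        U = brentq(lambda t: F_T(w, t, sigma, intervals) - alpha / 2, lo, hi)
        return L, U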
Fix m ∈ M⁺ and s ∈ S⁺_m, and let σ^2_m = σ^2 η^{m′}η^m and T_{m,s}(z) = (V^-_{m,s}(z), V^+_{m,s}(z)) for z orthogonal to η^m. With this, we have

    η^{m′}Y | {m̂ = m, ŝ = s, (I_n − P_{η^m})Y = z}  ∼  TN(η^{m′}µ, σ^2_m, T_{m,s}(z))

for each z ∈ {(I_n − P_{η^m})y : A_{m,s} y < b_{m,s}}. Now define L_{m,s}(y) and U_{m,s}(y) through

    F^{T_{m,s}((I_n − P_{η^m})y)}_{L_{m,s}(y), σ^2_m}(η^{m′}y) = 1 − α/2   and   F^{T_{m,s}((I_n − P_{η^m})y)}_{U_{m,s}(y), σ^2_m}(η^{m′}y) = α/2

for each y so that A_{m,s} y < b_{m,s}. Then, by construction, the random interval [L_{m,s}(Y), U_{m,s}(Y)] covers η^{m′}µ with probability 1 − α conditional on the event that m̂ = m and ŝ = s.
In a similar fashion, fix m ∈ M⁺, set T_m(z) = ∪_{s ∈ S⁺_m} T_{m,s}(z) for z orthogonal to η^m, and define L_m(y) and U_m(y) through

    F^{T_m((I_n − P_{η^m})y)}_{L_m(y), σ^2_m}(η^{m′}y) = 1 − α/2   and   F^{T_m((I_n − P_{η^m})y)}_{U_m(y), σ^2_m}(η^{m′}y) = α/2.

Again by construction, the random interval [L_m(Y), U_m(Y)] covers η^{m′}µ with probability 1 − α conditional on the event that m̂ = m.
Remark. (i) If m̃ = m̃(y) is any other model selection procedure, so that the event
{y : m̃ = m} is the union of a finite number of polyhedra (up to null sets), then the
polyhedral method can be applied to obtain a confidence set for η m0 µ with conditional
coverage probability 1 − α, conditional on the event {m̃ = m}, if that event has positive
probability. Indeed, for such a model selection procedure, the arguments following (1) also
apply, mutatis mutandis.
(ii) So far, we have defined confidence intervals only on the events W ∈ T, m ∈ M⁺ and s ∈ S⁺_m, and m ∈ M⁺, respectively. In the remaining cases, the interval endpoints (and
the corresponding quantity of interest) can be chosen arbitrarily (measurable) without
affecting our results. It is easy to choose constructions so that one obtains meaningful
confidence intervals that are defined everywhere in sample space.
(iii) In Theorem 3.3 of Lee et al. (2016), relation (2) is stated as an equality, not as an
equality up to null sets, and with the right-hand side replaced by {y : Am,s y ≤ bm,s } (in
our notation). Because (2) differs from this only on a Lebesgue null set, the difference is
inconsequential for the purpose of the present paper. The statement in Lee et al. (2016) is
based on the fact that m̂ was defined as the equicorrelation set (Tibshirani 2013) in that
paper. But if m̂ is the equicorrelation set, then there can exist vectors y ∈ {m̂ = m} such
that some coefficients of β̂(y) are zero, which clashes with the idea that m̂ contains those
variables whose Lasso coefficients are non-zero. However, for any m ∈ M+ , the set of such
ys is a Lebesgue null set.
3
Core results
We first analyze the simple confidence set [L(W ), U (W )] that was introduced in the preceding section, which covers θ with probability 1 − α, conditional on W ∈ T , where
W ∼ N(θ, ς^2). By assumption, T is of the form T = ∪_{i=1}^{K}(a_i, b_i) where K < ∞ and −∞ ≤ a_1 < b_1 < · · · < a_K < b_K ≤ ∞. Figure 1 exemplifies the length of [L(w), U(w)]
when T is bounded (left panel) and when T is unbounded (right panel). The dashed line
is the length of the standard (unconditional) confidence interval for θ. In the left panel,
we see that the length of [L(w), U (w)] diverges as w approaches the far left or the far right
boundary point of the truncation set (i.e., -3 and 3). On the other hand, in the right
panel we see that the length of [L(w), U (w)] is bounded and converges to the length of the
standard interval as |w| → ∞.
[Figure 1: Length U(w) − L(w) of the interval [L(w), U(w)] as a function of w, for the case where T = (−3, −2) ∪ (−1, 1) ∪ (2, 3) (left panel) and the case where T = (−∞, −2) ∪ (−1, 1) ∪ (2, ∞) (right panel). In both cases, we took ς^2 = 1 and α = 0.05.]
Write Φ(w) and φ(w) for the c.d.f. and p.d.f. of the standard normal distribution,
respectively, where we adopt the usual convention that Φ(−∞) = 0 and Φ(∞) = 1.
Proposition 1. If T is bounded either from above or from below, then

    E[U(W) − L(W) | W ∈ T]  =  ∞.

If T is unbounded from above and from below, then

    (U(W) − L(W))/ς  ≤  2Φ^{-1}(1 − p_*α/2)  ≤  2Φ^{-1}(1 − α/2) + (a_K − b_1)/ς   a.s.,

where p_* = inf_{ϑ∈R} P(N(ϑ, ς^2) ∈ T) and where a_K − b_1 is to be interpreted as 0 in case K = 1. [The first inequality trivially continues to hold if T is bounded, as then p_* = 0.]
Intuitively, one expects confidence intervals to be wide if one conditions on a bounded set
because extreme values cannot be observed on a bounded set and the confidence intervals
have to take this into account. We find that the conditional expected length is infinite in
this case. If, for example, T is bounded from below, i.e., if −∞ < a_1, then the first statement in the proposition follows from two facts: First, the length U(w) − L(w) behaves like 1/(w − a_1) as w approaches a_1 from above; and, second, the p.d.f. of the truncated normal distribution at w is bounded away from zero as w approaches a_1 from above. See the
proof in Section B for a more detailed version of this argument. On the other hand, if
the truncation set is unbounded, extreme values are observable and confidence intervals,
therefore, do not have to be extremely wide. The second upper bound provided by the
proposition for that case will be useful later.
We see that the boundedness of the truncation set T is critical for the interval length.
When the Lasso is used as a model selector, this prompts the question whether the truncation sets T_{m,s}(z) and T_m(z) are bounded or not, because the intervals [L_{m,s}(y), U_{m,s}(y)] and [L_m(y), U_m(y)] are obtained from conditional normal distributions with truncation sets T_{m,s}((I_n − P_{η^m})y) and T_m((I_n − P_{η^m})y), respectively. For m ∈ M⁺, s ∈ S⁺_m, and z orthogonal to η^m, recall that T_{m,s}(z) = (V^-_{m,s}(z), V^+_{m,s}(z)), and that T_m(z) is the union of these intervals over s ∈ S⁺_m. Write [η^m]^⊥ for the orthogonal complement of the span of η^m.
Proposition 2. For each m ∈ M⁺ and each s ∈ S_m, we have

    ∀z ∈ [η^m]^⊥ : −∞ < V^-_{m,s}(z)     or     ∀z ∈ [η^m]^⊥ : V^+_{m,s}(z) < ∞

or both.
For the confidence interval [L_{m̂,ŝ}(Y), U_{m̂,ŝ}(Y)], the statement in (1) now follows immediately: If m is a non-empty model and s is a sign-vector so that the event {m̂ = m, ŝ = s} has positive probability, then m ∈ M⁺ and s ∈ S⁺_m. Now Proposition 2 entails that T_{m,s}((I_n − P_{η^m})Y) is almost surely bounded on the event {m̂ = m, ŝ = s}, and Proposition 1 entails that (1) holds.
For the confidence interval [L_m̂(Y), U_m̂(Y)], we obtain that its conditional expected length is finite, conditional on m̂ = m with m ∈ M⁺, if and only if its corresponding truncation set T_m((I_n − P_{η^m})Y) is almost surely unbounded from above and from below on that event. More precisely, for m ∈ M⁺, we have

    E_{µ,σ^2}[U_m̂(Y) − L_m̂(Y) | m̂ = m]  =  ∞     (3)

if and only if

    there exists an s ∈ S⁺_m and a vector y satisfying A_{m,s} y < b_{m,s}, so that
    T_m((I_n − P_{η^m})y) is bounded from above or from below.     (4)
Before proving this equivalence, recall that T_m((I_n − P_{η^m})y) is the union of the intervals (V^-_{m,s}((I_n − P_{η^m})y), V^+_{m,s}((I_n − P_{η^m})y)) with s ∈ S⁺_m. Inspection of the explicit formulas for the interval endpoints given in Appendix C now immediately reveals the following: The lower endpoint V^-_{m,s}((I_n − P_{η^m})y) is either constant equal to −∞ on the set {y : A_{m,s} y < b_{m,s}}, or it is the maximum of a finite number of linear functions of y (and hence finite and continuous) on that set. Similarly, the upper endpoint V^+_{m,s}((I_n − P_{η^m})y) is either constant equal to ∞ on that set, or it is the minimum of a finite number of linear functions of y (and hence finite and continuous) on that set.
To prove the equivalence, we first assume, for some s and y with s ∈ S⁺_m and A_{m,s} y < b_{m,s}, that the set in (4) is bounded from above (the case of boundedness from below is similar). Then there is an open neighborhood O of y, so that each point w ∈ O satisfies A_{m,s} w < b_{m,s} and also so that T_m((I_n − P_{η^m})w) is bounded from above. Because O has positive Lebesgue measure, (3) now follows from Proposition 1. To prove the converse, assume for each s ∈ S⁺_m and each y satisfying A_{m,s} y < b_{m,s} that T_m((I_n − P_{η^m})y) is unbounded from above and from below. Because the sets {y : A_{m,s} y < b_{m,s}} for s ∈ S⁺_m are disjoint by construction, the same is true for the sets T_{m,s}((I_n − P_{η^m})y) for s ∈ S⁺_m. Using Proposition 1, we then obtain that U_m̂(Y) − L_m̂(Y) is bounded by a linear function of

    max{V^-_{m,s}((I_n − P_{η^m})Y) : s ∈ S⁺_m}  −  min{V^+_{m,s}((I_n − P_{η^m})Y) : s ∈ S⁺_m}
Lebesgue almost everywhere on the event {m̂ = m}. (The maximum and the minimum in
the preceding display correspond to aK and b1 , respectively, in Proposition 1.) It remains
to show that the expression in the preceding display has finite conditional expectation on
the event {m̂ = m}. But this expression is the maximum of a finite number of Gaussians
minus the minimum of a finite number of Gaussians. Its unconditional expectation, and
hence also its conditional expectation on the event {m̂ = m}, is finite.
In order to infer (3) from (4), that latter condition needs to be checked for every point y
in a union of polyhedra. While this is easy in some simple examples like, say, the situation
depicted in Figure 1 of Lee et al. (2016), searching over polyhedra in Rn is hard in general.
In our simulations, we therefore use a simpler sufficient condition that implies (3): After
observing the data, i.e., after observing a particular value y ∗ of Y , and hence also observing
m̂(y ∗ ) = m and ŝ(y ∗ ) = s, we check whether Tm ((In − Pηm )y ∗ ) is bounded from above or
from below (and also whether A_{m,s} y* < b_{m,s}, which, if satisfied, entails that m ∈ M⁺ and that s ∈ S⁺_m). If this is the case, then it follows, ex post, that (3) holds. Note that these
computations occur naturally during the computation of [Lm (y ∗ ), Um (y ∗ )] and can hence
be performed as a safety precaution with little extra effort.
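As an illustration, the ex-post check of condition (4) reduces to a few comparisons; the sketch below assumes that the pairs (V^-_{m,s}(z), V^+_{m,s}(z)) have already been computed at z = (I_n − P_{η^m})y* for every sign vector s in S⁺_m (e.g., from the formulas in Appendix C), and the helper name is hypothetical.

    import numpy as np

    def condition_4_holds(limits):
        # limits: one (v_minus, v_plus) pair per sign vector s in S_m^+.
        # T_m(z) is the union of the intervals (v_minus, v_plus); it is bounded
        # from below iff every v_minus is finite, and bounded from above iff
        # every v_plus is finite.
        bounded_below = all(np.isfinite(v_minus) for v_minus, _ in limits)
        bounded_above = all(np.isfinite(v_plus) for _, v_plus in limits)
        return bounded_below or bounded_above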
Remark. If m̃ is any other model selection procedure, so that the event {y : m̃ = m} is
the union of a finite number of polyhedra (up to null sets), then the polyhedral method can
be applied to obtain a confidence set for η m0 µ with conditional coverage probability 1 − α,
conditional on the event {m̃ = m} if that event has positive probability. Clearly, for such a
model selection procedure, an equivalence similar to (3)–(4) holds. Indeed, the derivation
of this equivalence relies on Proposition 1 but not on the Lasso-specific Proposition 2.
3.1
Simulation results
To investigate whether condition (4) is restrictive or not, we perform an exploratory simulation exercise consisting of repeated samples of size n = 100 in the nine scenarios corresponding to the rows in Table 1, which cover all combinations of the cases p = 20, p = 50
and p = 200 (i.e., p small, moderate, large) with the cases λ = 0.1, λ = 1 and λ = 10 (i.e.,
λ small, moderate, large). For each of the nine scenarios, we generate an n × p regressor
matrix X whose rows follow a p-variate Gaussian distribution with mean zero, so that the
diagonal elements of the covariance matrix all equal 1 and the off-diagonal elements all
equal 0.2. We then generate an n-vector y ∗ whose entries are i.i.d. standard Gaussians,
compute the Lasso estimator β̂(y ∗ ) and the resulting selected model m = m̂(y ∗ ) (if m = ∅,
this process is repeated with a newly generated vector y*). Finally, we generate 2000 directions γ^m_j that are i.i.d. uniform on the unit sphere in R^{|m|} and set η^m_j = X_m(X_m′X_m)^{-1}γ^m_j, 1 ≤ j ≤ 2000. For each η^m_j we now check if the sufficient condition outlined in the preceding
paragraph is satisfied with ηjm replacing η m . If it is satisfied, the corresponding confidence
set [Lm̂ (Y ), Um̂ (Y )] (with ηjm replacing η m ) is guaranteed to have infinite expected length
conditional on the event that m̂ = m. The fraction of indices j, 1 ≤ j ≤ 2000, for which
this is the case, together with the number of parameters in the selected model are displayed
in the cells of Table 1, for 50 independent replications of y ∗ in each scenario (row). In each
of the nine scenarios, and for each of the 50 replications in each, this fraction estimates
a lower bound for the probability that the confidence interval [Lm̂ (Y ), Um̂ (Y )] has infinite
expected length conditional on m̂ = m if the direction of interest, i.e., γ m or, equivalently,
η m , is chosen at random.
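A minimal sketch of this simulation design is given below; the function names are illustrative, and the construction of the correlated design and of the random directions follows the description above under those naming assumptions.

    import numpy as np

    def simulate_design(n=100, p=20, rho=0.2, rng=None):
        # Rows of X are i.i.d. N(0, Sigma) with unit variances and all
        # off-diagonal correlations equal to rho; y* has i.i.d. standard
        # normal entries (i.e., mu = 0 in the simulations).
        rng = np.random.default_rng() if rng is None else rng
        Sigma = np.full((p, p), rho) + (1.0 - rho) * np.eye(p)
        X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
        y_star = rng.standard_normal(n)
        return X, y_star

    def random_directions(Xm, k=2000, rng=None):
        # Draw gamma uniformly on the unit sphere in R^{|m|} and map to eta.
        rng = np.random.default_rng() if rng is None else rng
        d = Xm.shape[1]
        G = rng.standard_normal((k, d))
        G /= np.linalg.norm(G, axis=1, keepdims=True)
        # eta_j = X_m (X_m' X_m)^{-1} gamma_j, returned as rows of an (k, n) array
        return (Xm @ np.linalg.solve(Xm.T @ Xm, G.T)).T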
As soon as the selected model excludes more than a few variables, we see that the
conditional expected length of [Lm̂ (Y ), Um̂ (Y )] is guaranteed to be infinite in a substantial
number of cases. In particular, this always occurs if p > n. (Also keep in mind that we
only check a sufficient condition for infinite expected length, not a necessary one.) Also,
within each row in the table, the number of cases with infinite conditional expected length
is roughly increasing as the number of parameters in the selected model decreases. Beyond
these observations, there appears to be no simple relation between the number of selected
variables in the model and the percentage of cases where the interval has infinite conditional
expected length. We also stress here that a simulation study can not be exhaustive and
that other simulation scenarios will give different results.
  p    λ        y*_1         y*_2         y*_3       ···      y*_48        y*_49        y*_50
  20   0.1      0% (20)      0% (20)      0% (20)    ···     80% (19)    100% (19)    100% (19)
  20   1        0% (20)      0% (20)      0% (20)    ···    100% (16)    100% (16)    100% (14)
  20   10      49% (4)      60% (7)      72% (8)     ···    100% (3)     100% (3)     100% (3)
  50   0.1      0% (50)      0% (50)      0% (50)    ···    100% (48)    100% (47)    100% (47)
  50   1       96% (47)     99% (44)    100% (47)    ···    100% (39)    100% (38)    100% (37)
  50   10     100% (18)    100% (18)    100% (17)    ···    100% (7)     100% (4)     100% (3)
  200  0.1    100% (100)   100% (100)   100% (100)   ···    100% (97)    100% (97)    100% (95)
  200  1      100% (96)    100% (94)    100% (94)    ···    100% (84)    100% (83)    100% (82)
  200  10     100% (41)    100% (37)    100% (36)    ···    100% (21)    100% (20)    100% (19)

Table 1: Percentage of cases where η^m is such that the confidence interval [L_m̂(Y), U_m̂(Y)] for η^{m′}µ is guaranteed to have infinite expected length conditional on m̂ = m, with m = m̂(y*_i) and, in parentheses, the number of parameters in the selected model, i.e., |m|. The entries in each row are ordered to improve readability, first by percentage (increasing) and second by number of parameters (decreasing).

3.2
The unknown variance case
Suppose here that σ^2 > 0 is unknown and that σ̂^2 is an estimator for σ^2. Fix m ∈ M⁺ and s ∈ S⁺_m. Note that the set A_{m,s} y < b_{m,s} does not depend on σ^2 and hence also V^-_{m,s}((I_n − P_{η^m})y) and V^+_{m,s}((I_n − P_{η^m})y) do not depend on σ^2. For each ς^2 > 0 and for each y so that A_{m,s} y < b_{m,s} define L_{m,s}(y, ς^2), U_{m,s}(y, ς^2), L_m(y, ς^2), and U_m(y, ς^2) like
L_{m,s}(y), U_{m,s}(y), L_m(y), and U_m(y) in Section 2 with ς^2 replacing σ^2 in the formulas. (Note that, say, L_{m,s}(y) depends on σ^2 through σ^2_m = σ^2 η^{m′}η^m.) The asymptotic coverage probability of the intervals [L_{m,s}(Y, σ̂^2), U_{m,s}(Y, σ̂^2)] and [L_m(Y, σ̂^2), U_m(Y, σ̂^2)], conditional on the events {m̂ = m, ŝ = s} and {m̂ = m}, respectively, is discussed in Lee et al. (2016). If σ̂^2 is independent of η^{m′}Y and positive with positive probability, then it is easy to see that (1) continues to hold with [L_{m,s}(Y, σ̂^2), U_{m,s}(Y, σ̂^2)] replacing [L_{m,s}(Y), U_{m,s}(Y)] for each m ∈ M⁺ and each s ∈ S⁺_m. And if, in addition, σ̂^2 has finite mean conditional on the event {m̂ = m} for m ∈ M⁺, then it is elementary to verify that the equivalence (3)–(4) continues to hold with [L_m(Y, σ̂^2), U_m(Y, σ̂^2)] replacing [L_m(Y), U_m(Y)] (upon repeating the arguments following (3)–(4) and upon using the finite conditional mean of σ̂^2 in the last step).
In the case where p < n, the usual variance estimator ‖Y − X(X′X)^{-1}X′Y‖^2/(n − p) is independent of η^{m′}Y, is positive with probability 1, and has finite unconditional (and
hence also conditional) mean. For variance estimators in the case where p ≥ n, we refer to
Lee et al. (2016) and the references therein.
Appendix A
Auxiliary results
In this section, we collect some properties of functions like F^T_{θ,ς^2}(w) that will be needed in
the proofs of Proposition 1 and Proposition 2. The following result will be used repeatedly
in the following and is easily verified using L’Hospital’s method.
Lemma A.1. For all a, b with −∞ ≤ a < b ≤ ∞, the following holds:
    lim_{θ→∞} Φ(a − θ)/Φ(b − θ) = 0.
Write F^T_{θ,ς^2}(w) and f^T_{θ,ς^2}(w) for the c.d.f. and p.d.f. of the TN(θ, ς^2, T)-distribution, where T = ∪_{i=1}^{K}(a_i, b_i) with −∞ ≤ a_1 < b_1 < a_2 < · · · < a_K < b_K ≤ ∞. For w ∈ T and for k so that a_k < w < b_k, we have

    F^T_{θ,ς^2}(w) = [ Φ((w − θ)/ς) − Φ((a_k − θ)/ς) + Σ_{i=1}^{k−1} ( Φ((b_i − θ)/ς) − Φ((a_i − θ)/ς) ) ]
                     / [ Σ_{i=1}^{K} ( Φ((b_i − θ)/ς) − Φ((a_i − θ)/ς) ) ];

if k = 1, the sum in the numerator is to be interpreted as 0. And for w as above, the density f^T_{θ,ς^2}(w) is equal to φ((w − θ)/ς)/ς divided by the denominator in the preceding display.
Lemma A.2. For each fixed w ∈ T, F^T_{θ,ς^2}(w) is continuous and strictly decreasing in θ, and

    lim_{θ→−∞} F^T_{θ,ς^2}(w) = 1   and   lim_{θ→∞} F^T_{θ,ς^2}(w) = 0.
Proof. Continuity is obvious and monotonicity has been shown in Lee et al. (2016) for the
case where T is a single interval, i.e., K = 1; it is easy to adapt that argument to also
cover the case K > 1. Next consider the formula for F^T_{θ,ς^2}(w). As θ → ∞, Lemma A.1 implies that the leading term in the numerator is Φ((w − θ)/ς) while the leading term in the denominator is Φ((b_K − θ)/ς). Using Lemma A.1 again gives lim_{θ→∞} F^T_{θ,ς^2}(w) = 0. Finally, it is easy to see that F^T_{θ,ς^2}(w) = 1 − F^{−T}_{−θ,ς^2}(−w) (upon using the relation Φ(t) = 1 − Φ(−t) and a little algebra). With this, we also obtain that lim_{θ→−∞} F^T_{θ,ς^2}(w) = 1.
For γ ∈ (0, 1) and w ∈ T, define Q_γ(w) through

    F^T_{Q_γ(w),ς^2}(w) = γ.

Lemma A.2 ensures that Q_γ(w) is well-defined. Note that L(w) = Q_{1−α/2}(w) and U(w) = Q_{α/2}(w).
Lemma A.3. For fixed w ∈ T, Q_γ(w) is strictly decreasing in γ on (0, 1). And for fixed γ ∈ (0, 1), Q_γ(w) is continuous and strictly increasing in w ∈ T, so that lim_{w↓a_1} Q_γ(w) = −∞ and lim_{w↑b_K} Q_γ(w) = ∞.
Proof. Fix w ∈ T . Strict monotonicity of Qγ (w) in γ follows from strict monotonicity of
F^T_{θ,ς^2}(w) in θ; cf. Lemma A.2.
Fix γ ∈ (0, 1) throughout the following. To show that Qγ (·) is strictly increasing on T ,
fix w, w′ ∈ T with w < w′. We get

    γ = F^T_{Q_γ(w),ς^2}(w) < F^T_{Q_γ(w),ς^2}(w′),

where the inequality holds because the density of F^T_{Q_γ(w),ς^2}(·) is positive on T. The definition of Q_γ(w′) and Lemma A.2 entail that Q_γ(w) < Q_γ(w′).
To show that Q_γ(·) is continuous on T, we first note that F^T_{θ,ς^2}(w) is continuous in (θ, w) ∈ R × T (which is easy to see from the formula for F^T_{θ,ς^2}(w) given after Lemma A.1). Now fix w ∈ T. Because Q_γ(·) is monotone, it suffices to show that Q_γ(w_n) → Q_γ(w) for any increasing sequence w_n in T converging to w from below, and for any decreasing sequence w_n converging to w from above. If the w_n increase towards w from below, the sequence Q_γ(w_n) is increasing and bounded by Q_γ(w) from above, so that Q_γ(w_n) converges to a finite limit Q. With this, and because F^T_{θ,ς^2}(w) is continuous in (θ, w), it follows that

    lim_n F^T_{Q_γ(w_n),ς^2}(w_n) = F^T_{Q,ς^2}(w).

In the preceding display, the sequence on the left-hand side is constant equal to γ by definition of Q_γ(w_n), so that F^T_{Q,ς^2}(w) = γ. It follows that Q = Q_γ(w). If the w_n decrease towards w from above, a similar argument applies.
To show that lim_{w↑b_K} Q_γ(w) = ∞, let w_n, n ≥ 1, be an increasing sequence in T that converges to b_K. It follows that Q_γ(w_n) converges to a (not necessarily finite) limit Q as n → ∞. If Q < ∞, we get for each b < b_K that

    lim inf_n F^T_{Q_γ(w_n),ς^2}(w_n) ≥ lim inf_n F^T_{Q_γ(w_n),ς^2}(b) = F^T_{Q,ς^2}(b).

In this display, the inequality holds because F^T_{Q_γ(w_n),ς^2}(·) is a c.d.f., and the equality holds because F^T_{θ,ς^2}(b) is continuous in θ. As this holds for each b < b_K, we obtain that lim inf_n F^T_{Q_γ(w_n),ς^2}(w_n) = 1. But in this equality, the left-hand side equals γ – a contradiction. By similar arguments, it also follows that lim_{w↓a_1} Q_γ(w) = −∞.
Lemma A.4. The function Q_γ(·) satisfies

    lim_{w↑b_K} (b_K − w)Q_γ(w) = −ς^2 log(γ)       if b_K < ∞,   and
    lim_{w↓a_1} (a_1 − w)Q_γ(w) = −ς^2 log(1 − γ)   if a_1 > −∞.
Proof. As both statements follow from similar arguments, we only give the details for the first one. As w approaches b_K from below, Q_γ(w) converges to ∞ by Lemma A.3. This observation, the fact that F^T_{Q_γ(w),ς^2}(w) = γ holds for each w, and Lemma A.1 together imply that

    lim_{w↑b_K} Φ((w − Q_γ(w))/ς) / Φ((b_K − Q_γ(w))/ς) = γ.

Because Φ(−x)/(φ(x)/x) → 1 as x → ∞ (cf. Feller 1957, Lemma VII.1.2.), we get that

    lim_{w↑b_K} φ((w − Q_γ(w))/ς) / φ((b_K − Q_γ(w))/ς) = γ.
The claim now follows by plugging-in the formula for φ(·) on the left-hand side, simplifying,
and then taking the logarithm of both sides.
Appendix B
Proof of Proposition 1
Proof of the first statement in Proposition 1. Assume that b_K < ∞ (the case where a_1 > −∞ is treated similarly). Lemma A.4 entails that lim_{w↑b_K} (b_K − w)(U(w) − L(w)) = ς^2 C, where C = log((1 − α/2)/(α/2)) is positive. Hence, there exists a constant ε > 0 so that

    U(w) − L(w) > (1/2) · ς^2 C/(b_K − w)

whenever w ∈ (b_K − ε, b_K) ∩ T. Set B = inf{f^T_{θ,ς^2}(w) : w ∈ (b_K − ε, b_K) ∩ T}. For w ∈ T, f^T_{θ,ς^2}(w) is a Gaussian density divided by a constant scaling factor, so that B > 0. Because U(w) − L(w) ≥ 0 in view of Lemma A.3, we obtain that

    E_{θ,ς^2}[U(W) − L(W) | W ∈ T] ≥ (ς^2 B C / 2) ∫_{(b_K − ε, b_K) ∩ T} 1/(b_K − w) dw = ∞.
Proof of the first inequality in Proposition 1. Define R_γ(w) through Φ((w − R_γ(w))/ς) = γ, i.e., R_γ(w) = w − ςΦ^{-1}(γ). Then, on the one hand, we have

    F^T_{R_γ(w),ς^2}(w) = P(N(R_γ(w), ς^2) ≤ w, N(R_γ(w), ς^2) ∈ T) / P(N(R_γ(w), ς^2) ∈ T)
                        ≤ P(N(R_γ(w), ς^2) ≤ w) / inf_ϑ P(N(ϑ, ς^2) ∈ T)  =  γ/p_*,

while, on the other,

    F^T_{R_γ(w),ς^2}(w) ≥ [ P(N(R_γ(w), ς^2) ≤ w) − P(N(R_γ(w), ς^2) ∉ T) ] / P(N(R_γ(w), ς^2) ∈ T)
                        ≥ [ P(N(R_γ(w), ς^2) ≤ w) − 1 + inf_ϑ P(N(ϑ, ς^2) ∈ T) ] / inf_ϑ P(N(ϑ, ς^2) ∈ T)
                        =  (γ − 1 + p_*)/p_*.

The inequalities in the preceding two displays, together with the fact that F^T_{θ,ς^2}(w) is decreasing in θ, imply that

    R_{1 − p_*(1−γ)}(w)  ≤  Q_γ(w)  ≤  R_{p_* γ}(w).
(Note that the inequality in the third-to-last display continues to hold with p∗ γ replacing
γ; in that case, the upper bound reduces to γ. And, similarly, the inequality in the
second-to-last display continues to hold with 1 − p∗ (1 − γ) replacing γ, in which case the
lower bound reduces to γ). In particular, we get that U (w) = Qα/2 (w) ≤ Rp∗ α/2 (w) =
w − ςΦ−1 (p∗ α/2) and that L(w) = Q1−α/2 (w) ≥ R1−p∗ α/2 (w) = w − ςΦ−1 (1 − p∗ α/2). The
last two inequalities, and the symmetry of Φ(·) around zero, imply the first inequality in the proposition.
Proof of the second inequality in Proposition 1. Note that p_* ≥ p° = inf_ϑ P(N(ϑ, ς^2) < b_1 or N(ϑ, ς^2) > a_K), because T is unbounded above and below. Setting δ = (a_K − b_1)/(2ς), we note that δ ≥ 0 and that it is elementary to verify that p° = 2Φ(−δ). Because Φ^{-1}(1 − p_*α/2) ≤ Φ^{-1}(1 − p°α/2), the inequality will follow if we can show that Φ^{-1}(1 − p°α/2) ≤ Φ^{-1}(1 − α/2) + δ or, equivalently, that Φ^{-1}(p°α/2) ≥ Φ^{-1}(α/2) − δ. Because Φ(·) is strictly increasing, this is equivalent to

    p°α/2 = Φ(−δ)α ≥ Φ(Φ^{-1}(α/2) − δ).

To this end, we set f(δ) = αΦ(−δ)/Φ(Φ^{-1}(α/2) − δ) and show that f(δ) ≥ 1 for δ ≥ 0. Because f(0) = 1, it suffices to show that f′(δ) is non-negative for δ > 0. The derivative can be written as a fraction with positive denominator and with numerator equal to

    −αφ(−δ)Φ(Φ^{-1}(α/2) − δ) + αΦ(−δ)φ(Φ^{-1}(α/2) − δ).

The expression in the preceding display is non-negative if and only if

    Φ(−δ)/φ(−δ) ≥ Φ(Φ^{-1}(α/2) − δ)/φ(Φ^{-1}(α/2) − δ).

This will follow if the function g(x) = Φ(−x)/φ(x) is decreasing in x ≥ 0. The derivative g′(x) can be written as a fraction with positive denominator and with numerator equal to

    −φ(x)^2 + xΦ(−x)φ(x) = xφ(x)( Φ(−x) − φ(x)/x ).

Using the well-known inequality Φ(−x) ≤ φ(x)/x for x > 0 (Feller 1957, Lemma VII.1.2.), we see that the expression in the preceding display is non-positive for x > 0.
Appendix C
Proof of Proposition 2
From Lee et al. (2016), we recall the formulas for the expressions on the right-hand side of (2), namely A_{m,s} = (A^0_{m,s}′, A^1_{m,s}′)′ and b_{m,s} = (b^0_{m,s}′, b^1_{m,s}′)′ with A^0_{m,s} and b^0_{m,s} given by

    A^0_{m,s} = (1/λ) [  X_{m^c}′(I_n − P_{X_m})
                        −X_{m^c}′(I_n − P_{X_m}) ]     and
    b^0_{m,s} = [ ι − X_{m^c}′X_m(X_m′X_m)^{-1}s
                  ι + X_{m^c}′X_m(X_m′X_m)^{-1}s ],

respectively, and with A^1_{m,s} = −diag(s)(X_m′X_m)^{-1}X_m′ and b^1_{m,s} = −λ diag(s)(X_m′X_m)^{-1}s (in the preceding display, P_{X_m} denotes the orthogonal projection matrix onto the column space spanned by X_m, ι denotes an appropriate vector of ones, and the two blocks are stacked vertically). Moreover, it is easy to see that the set {y : A_{m,s} y < b_{m,s}} can be written as {y : for z = (I_n − P_{η^m})y, we have V^-_{m,s}(z) < η^{m′}y < V^+_{m,s}(z) and V^0_{m,s}(z) > 0}, where

    V^-_{m,s}(z) = max( {(b_{m,s} − A_{m,s}z)_i /(A_{m,s}c_m)_i : (A_{m,s}c_m)_i < 0} ∪ {−∞} ),
    V^+_{m,s}(z) = min( {(b_{m,s} − A_{m,s}z)_i /(A_{m,s}c_m)_i : (A_{m,s}c_m)_i > 0} ∪ {∞} ),
    V^0_{m,s}(z) = min( {(b_{m,s} − A_{m,s}z)_i : (A_{m,s}c_m)_i = 0} ∪ {∞} ),

with c_m = η^m/‖η^m‖^2; cf. also Lee et al. (2016).
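These endpoints can be computed directly from the displayed formulas; a minimal sketch follows (the function name is hypothetical, and the exact-zero test on the floating-point entries of A c_m is a simplification — in practice a small tolerance would be used).

    import numpy as np

    def truncation_limits(A, b, z, c):
        # Given the polyhedron {y : A y < b}, a point z orthogonal to eta,
        # and c = eta/||eta||^2, compute V-(z), V+(z), V0(z).
        Ac = A @ c
        r = b - A @ z
        neg, pos, zero = Ac < 0, Ac > 0, Ac == 0
        v_minus = np.max(r[neg] / Ac[neg]) if neg.any() else -np.inf
        v_plus = np.min(r[pos] / Ac[pos]) if pos.any() else np.inf
        v_zero = np.min(r[zero]) if zero.any() else np.inf
        return v_minus, v_plus, v_zero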
Proof of Proposition 2. Set I_− = {i : (A_{m,s}c_m)_i < 0} and I_+ = {i : (A_{m,s}c_m)_i > 0}. In view of the formulas for V^-_{m,s}(z) and V^+_{m,s}(z) given earlier, it suffices to show that either I_− or I_+ is non-empty. Conversely, assume that I_− = I_+ = ∅. Then A_{m,s}c_m = 0 and hence also A^1_{m,s}c_m = 0. Using the explicit formula for A^1_{m,s} and the definition of η^m, i.e., η^m = X_m(X_m′X_m)^{-1}γ^m, it follows that γ^m = 0, which contradicts our assumption that γ^m ∈ R^{|m|} \ {0}.
References
Bachoc, F., Leeb, H. & Pötscher, B. M. (2016), ‘Valid confidence intervals for post-model-selection predictors’, arXiv preprint arXiv:1412.4605 .
Berk, R., Brown, L., Buja, A., Zhang, K. & Zhao, L. (2013), ‘Valid post-selection inference’,
Ann. Statist. 41, 802–837.
Feller, W. (1957), An Introduction to Probability Theory and its Applications, Vol. 1, 2nd
edn, Wiley, New York, NY.
Fithian, W., Sun, D. & Taylor, J. (2017), ‘Optimal inference after model selection’, arXiv
preprint arXiv:1410.2597 .
Frank, I. E. & Friedman, J. H. (1993), ‘A statistical view of some chemometrics regression
tools’, Technometrics 35, 109–135.
Lee, J. D., Sun, D. L., Sun, Y. & Taylor, J. E. (2016), ‘Exact post-selection inference, with
application to the Lasso’, Ann. Statist. 44, 907–927.
Leeb, H. & Kabaila, P. (2017), ‘Admissibility of the usual confidence set for the mean of a
univariate or bivariate normal population: The unknown variance case’, J. R. Stat. Soc.
Ser. B Stat. Methodol. 79, 801–813.
Leeb, H. & Pötscher, B. M. (2006a), ‘Can one estimate the conditional distribution of
post-model-selection estimators?’, Ann. Statist. 34, 2554–2591.
Leeb, H. & Pötscher, B. M. (2006b), ‘Performance limits for estimators of the risk or
distribution of shrinkage-type estimators, and some general lower risk-bound results’,
Econometric Theory 22, 69–97.
Leeb, H. & Pötscher, B. M. (2008), ‘Can one estimate the unconditional distribution of
post-model-selection estimators?’, Econometric Theory 24, 338–376.
Pötscher, B. M. (2009), ‘Confidence sets based on sparse estimators are necessarily large’,
Sankhyā Ser. A 71, 1–18.
Pötscher, B. M. & Schneider, U. (2010), ‘Confidence sets based on penalized maximum
likelihood estimators in Gaussian regression’, Electron. J. Statist. 4, 334–360.
Schneider, U. (2016), ‘Confidence sets based on thresholding estimators in high-dimensional
Gaussian regression models’, Econometric Rev. 35, 1412–1455.
Taylor, J. & Tibshirani, R. (2017), ‘Post-selection inference for l1 -penalized likelihood models’, Canad. J. Statist. forthcoming.
Tian, X. & Taylor, J. E. (2015), ‘Selective inference with a randomized response’, arXiv
preprint arXiv:1507.06739 .
Tibshirani, R. (1996), ‘Regression shrinkage and selection via the Lasso’, J. R. Stat. Soc.
Ser. B Stat. Methodol. 58, 267–288.
Tibshirani, R. J. (2013), ‘The Lasso problem and uniqueness’, Electron. J. Statist. 7, 1456–
1490.
Tibshirani, R. J., Taylor, J., Lockhart, R. & Tibshirani, R. (2016), ‘Exact post-selection
inference for sequential regression procedures’, J. Amer. Statist. Assoc. 111, 600–620.
Under review as a conference paper at ICLR 2018
BUILDING GENERALIZABLE AGENTS
WITH A REALISTIC AND RICH 3D ENVIRONMENT
Yi Wu
UC Berkeley
[email protected]
Yuxin Wu & Georgia Gkioxari & Yuandong Tian
Facebook AI Research
{yuxinwu, gkioxari, yuandong}@fb.com
arXiv:1801.02209v1 [cs.LG] 7 Jan 2018
ABSTRACT
Towards bridging the gap between machine and human intelligence, it is of utmost importance to introduce environments that are visually realistic and rich in
content. In such environments, one can evaluate and improve a crucial property of practical intelligent systems, namely generalization. In this work, we
build House3D, a rich, extensible and efficient environment that contains 45,622
human-designed 3D scenes of houses, ranging from single-room studios to multistoreyed houses, equipped with a diverse set of fully labeled 3D objects, textures
and scene layouts, based on the SUNCG dataset (Song et al., 2017). With an
emphasis on semantic-level generalization, we study the task of concept-driven
navigation, RoomNav, using a subset of houses in House3D. In RoomNav, an
agent navigates towards a target specified by a semantic concept. To succeed, the
agent learns to comprehend the scene it lives in by developing perception, understand the concept by mapping it to the correct semantics, and navigate to the
target by obeying the underlying physical rules. We train RL agents with both
continuous and discrete action spaces and show their ability to generalize in new
unseen environments. In particular, we observe that (1) training is substantially
harder on large house sets but results in better generalization, (2) using semantic
signals (e.g. segmentation mask) boosts the generalization performance, and (3)
gated networks on semantic input signal lead to improved training performance
and generalization. We hope House3D1 , including the analysis of the RoomNav
task, serves as a building block towards designing practical intelligent systems and
we wish it to be broadly adopted by the community.
1 INTRODUCTION
Recently, deep reinforcement learning has shown its strength on multiple games, such as Atari (Mnih
et al., 2015) and Go (Silver et al., 2016), vastly overpowering human performance. Underlying these
big achievements is a well-defined and efficient simulated environment for the agent to freely explore and learn. Till now, many proposed environments have encoded some aspects of human intelligence. This includes 3D understanding (DeepMind Lab (Beattie et al., 2016) and Malmo (Johnson
et al., 2016)), real-time strategy decision (TorchCraft (Synnaeve et al., 2016) and ELF (Tian et al.,
2017)), fast reaction (Atari (Bellemare et al., 2013)), long-term planning (Go, Chess), language and
communications (ParlAI (Miller et al., 2017) and (Das et al., 2017b)).
Nonetheless, it still remains an open problem whether and how the advances of deep reinforcement
learning in these simulated environments can be transferred to the real world. Towards this direction,
it is of utmost importance to build environments that emulate the real world, with its rich structure,
content and dynamic nature. To facilitate learning, these environments should respond in real-time
and should provide a large amount of diverse and complex data. While these properties are much
desired, they still don’t guarantee an important goal of artificial intelligence, namely generalization.
Generalization is the ability of an agent to successfully accomplish a task in new unseen scenarios.
This is of essence for practical applications; for example an in-home robot or a self-driving car
trained in a set of houses or cities, respectively, should be easily deployed in a new house or a new
city, which can be visually completely different from its training scenarios.
1 Available at http://github.com/facebookresearch/House3D
While generalization is thought of as being tied to learning, it is undoubtedly bounded by the diversity of the environments an agent is trained in. To facilitate generalization, an environment needs
to be able to provide a large amount of data that will allow the agent to test its ability to recognize
and act under novel conditions. The unbiased and unrestricted nature of the newly generated environments is necessary in order to verify whether an agent has developed intelligent skills rather than
just memorization (overfitting). Note that the connection of generalization and large-scale datasets
has shaped the recent advances in image and object recognition (Russakovsky et al., 2015; He et al.,
2015).
Contrary to supervised learning where generalization is formally defined and well studied (Vapnik,
2013), in reinforcement learning, generalization is interpreted in a variety of ways. In DeepMind
lab (Beattie et al., 2016; Higgins et al., 2017), diversity in the environment is introduced by pixel-level color or texture changes and maze layouts. Tobin et al. (2017) explore pixel-level generalization by introducing random noise to alter object colors. Finn et al. (2017b) study the generalization
skills of an agent across similar task configurations while rewards are provided only for a subset of
tasks. Pathak et al. (2017) test an agent on more difficult levels of the same game.
However, pixel-level variations (e.g. color and texture of objects) in the environment or levels of
difficulty yield very similar visual observations to the agent. In the real world, a human perceives
and understands complicated visual signals. For example, a human can easily recognize the kitchen
room when visiting a friend’s house even if the decoration and design are substantially new. For an
agent to succeed in the real world, it needs to interpret novel structure layouts and diverse object
appearances. The agent needs to be able to attach semantic meaning to a novel scene while generalizing beyond compositional visual changes. In this work, we focus on semantic-level generalization
in which training and test environments are visually distinct, but share the same high-level conceptual properties. Specifically, we propose House3D, a virtual 3D environment consisting of thousands
of indoor scenes equipped with a diverse set of scene types, layouts and objects. An overview of
House3D is shown in Figure 1a. House3D leverages the SUNCG dataset (Song et al., 2017) which
contains 45K human-designed real-world 3D house models, ranging from single studios to houses
with gardens, in which objects are fully labeled with categories. We convert the SUNCG dataset
to an environment, House3D, which is efficient and extensible for various tasks. In House3D, an
agent can freely explore the space while perceiving a large number of objects under various visual
appearances.
In our proposed House3D, we show that the agent can indeed learn high-level concepts and can
generalize to unseen scenarios in a new benchmark task, RoomNav. In RoomNav, an agent starts
at a random location in a house and is asked to navigate to a destination specified by a high-level
semantic concept (e.g. kitchen), following simple physics (e.g. no object penetration). An example
of RoomNav is shown in Figure 1b. From House3D, 270 houses are manually selected and split
into two training sets (20 and 200 houses) and a held-out test set (50 houses) for evaluation. These
selected houses are suitable and large enough for navigation tasks. We show that a large and diverse
training set leads to improved generalization in RoomNav when using gated-CNN and gated-LSTM
policies trained with standard deep reinforcement learning methods, i.e. A3C (Mnih et al., 2016) and
DDPG (Lillicrap et al., 2015). This is contrary to the small training set regime, where overfitting is
prominent. Furthermore, depth information plus semantic signal (e.g. segmentation) result in better
generalization. The former (depth) facilitates immediate action, while the latter (segmentation) aids
semantic understanding in the new environment. This empirical observation indicates the significance of building a practical vision system for real-world robots and also suggests the separation of
vision and robotic learning when handling complicated real-world tasks.
The remainder of the paper is structured as follows. Section 2 summarizes relevant work. Section 3
describes our environment, House3D, in detail and section 4 describes the task, RoomNav. Section 5
introduces our gated networks and the applied algorithms. Finally, experimental results are shown
in Section 6.
2 RELATED WORK
Environment: The development of environments largely pushes the frontier of reinforcement learning. Table 1 summarizes the attributes of some of the most relevant environments and compares them
to House3D. Other than this family of environments closely related to House3D, there are more
(a) House3D environment
(b) RoomNav task
Figure 1: An overview of House3D environment and RoomNav task. (a) We build an efficient and
interactive environment upon the SUNCG dataset (Song et al., 2017) that contains 45K diverse
indoor scenes, ranging from studios to two-storied houses with swimming pools and fitness rooms.
All 3D Objects are fully annotated into over 80 categories. Agents in the environment have access
to observations of multiple modalities (e.g., RGB images, Depth, Segmentation masks (from object
category), top-down 2D view, etc.). (b) We focus on the task of semantic-based navigation. Given a
high-level task description, the agent explores the environment to reach the target room.
simulated environments such as OpenAI Gym (Brockman et al., 2016), ParlAI (Miller et al., 2017) for
language communication as well as some strategic game environments (Synnaeve et al., 2016; Tian
et al., 2017; Vinyals et al., 2017). Most of these environments are pertinent to one particular aspect
of intelligence, such as dialogue or a single type of game, and make it hard to facilitate the study
of more problems. On the contrary, we focus on building a platform that intersects with multiple
research directions, such as object and scene understanding, 3D navigation, embodied question answering (Das et al., 2017a), while allowing users to customize the level of complexity to their needs.
Two concurrent works (Brodeur et al., 2017; Savva et al., 2017) introduce a very similar platform to
House3D, indicating the interest for large-scale interactive and realistic 3D environments.
We build on SUNCG (Song et al., 2017), a dataset that consists of thousands of diverse synthetic
indoor scenes equipped with a variety of objects and layouts. Its visual diversity and rich content naturally allow the study of semantic generalization for reinforcement learning agents. While SUNCG
is an appealing 3D scene dataset due to its large size and its complex and rich scenes, it is not the
only publicly available 3D dataset. Alternative options, yet smaller in size, include AI2-THOR (Zhu
et al., 2017), SceneNet RGB-D (McCormac et al., 2017), Stanford 3D (Armeni et al., 2016) and
Matterport 3D (Chang et al., 2017).
3D Navigation: There has been a prominent line of work on the task of navigation in real 3D
scenes (Leonard & Durrant-Whyte, 1992). Classical approaches decompose the task into two subtasks by building a 3D map of the scene using SLAM and then planning in this map (Fox et al.,
2005). More recently, end-to-end learning methods were introduced to predict robotic actions from
raw pixel data (Levine et al., 2016). Some of the most recent works on navigation show the effectiveness of end-to-end learning. Gupta et al. (2017) learn to navigate via mapping and planning using
shortest path supervision. Sadeghi & Levine (2017) teach an agent to fly using solely simulated data
and deploy it in the real world. Dhiraj et al. (2017) collect a dataset of drones crashing into objects
and train self-supervised agents on this data in order to avoid obstacles.
A number of recent works also use deep reinforcement learning for navigation in simulated 3D
scenes. Mirowski et al. (2016); Jaderberg et al. (2016) improve an agent’s navigation ability in
mazes by introducing auxiliary tasks. Parisotto & Salakhutdinov (2017) propose a new architecture
which stores information of the environment on a 2D map. However, these works only evaluate the
Table 1: A summary of popular environments. The environments compared are Atari (Bellemare et al., 2013), OpenAI Universe (Shi et al., 2017), Malmo (Johnson et al., 2016), DeepMind Lab (Beattie et al., 2016), VizDoom (Kempka et al., 2016), AI2-THOR (Zhu et al., 2017), Stanford2D-3D (Armeni et al., 2016), Matterport3D (Chang et al., 2017) and House3D. The attributes include 3D: 3D nature of the rendered objects, Realistic: resemblance to the real world, Large-scale: a large set of environments, Fast: fast rendering speed, and Customizable: flexibility to be customized to other applications.
agent’s generalization ability on pixel-level variations or small mazes. We argue that a much richer
environment is crucial for evaluating semantic-level generalization.
Gated Modules: In our work, we focus on the task of RoomNav, where the goal is communicated to
the agent as a high-level instruction selected from a set of predefined concepts. To modulate the behavior of the agent in RoomNav, we encode the instruction as an embedding vector which gates the
visual signal. The idea of gated attention has been used in the past for language grounding (Chaplot
et al., 2017), and transfer learning by language grounding (Narasimhan et al., 2017). Similar to
those works, we use concept grounding as an attention mechanism. We believe that our gated reinforcement learning models serve as a strong baseline for the task of semantic based navigation in
House3D. Furthermore, our empirical results allow us to draw conclusions on the models’ efficacy
when training agents in a large-scale, diverse dataset with an emphasis on generalization.
Generalization: There is a recent trend in reinforcement learning focusing on the problem of generalization, ranging from learning to plan (Tamar et al., 2016), meta-learning (Duan et al., 2016;
Finn et al., 2017a) to zero-shot learning (Andreas et al., 2016; Oh et al., 2017; Higgins et al., 2017).
However, these works either focus on over-simplified tasks or test on environments which are only
slightly varied from the training ones. In contrast, we use a more diverse set of environments and
show that the agent can work well in unseen scenes. House3D provides the agent with scenes that
are both visually and structurally different thus forcing it to perceive in order to accomplish its task.
In this work, we show improved generalization performance in complex 3D scenes when using depth
and segmentation masks on top of the raw visual input. This observation is similar to other works
which use a diverse set of input modalities (Mirowski et al., 2016; Tai & Liu, 2016). Our result
suggests that it is possible to decouple real-world robotics from vision via a vision API. The vision
API trained in the desired scenes, e.g. an object detection or semantic segmentation system, can
bridge the gap between simulated environment and real-world (Tobin et al., 2017; Rusu et al., 2016;
Christiano et al., 2016).
3 HOUSE3D: AN EXTENSIBLE ENVIRONMENT OF 45K 3D HOUSES
Towards building an ultimately practical AI system, there is a need for a realistic environment that
is rich in content and structure, and closely resembles the real world. Such an environment can
serve as the testbed for various problems which require visual understanding, language grounding,
concept learning and more. For the environment to be of value, it is important that it carries complex
visual signals and is of large enough scale to enable semantic learning and generalization. For this,
we develop an efficient and flexible environment of thousands of indoor scenes, which we call
House3D. An overview of House3D is shown in Figure 1a. Our environment allows the agent to
navigate inside complex scenes which consist of a diverse set of layouts, objects and rooms and are
accompanied by useful semantic labels.
3.1 DATASET
The 3D scenes in House3D are sourced from the SUNCG dataset (Song et al., 2017), which consists
of 45,622 human-designed 3D scenes ranging from single-room studios to multi-floor houses. The
SUNCG dataset was designed to encourage research on large-scale 3D object recognition problems
and thus carries a variety of objects, scene layouts and structures. On average, there are 8.9 rooms
and 1.3 floors per scene with the max being 155 rooms and 3 floors, respectively. There is a diverse
set of room and object types in each scene. In total, there are over 20 different room types, such as
bedroom, living room, kitchen, bathroom etc., with over 80 different object categories. On average,
there are 14 different objects in each room. In total, the SUNCG dataset contains 404,508 different
rooms and 5,697,217 object instances drawn from 2644 unique object meshes.
3.2 ANNOTATIONS
SUNCG includes a diverse set of labels for each scene. Based on these labels, at every time step an
agent can have access to the following signals in our environment: a) the visual signal of its current
first person view, consisting of RGB values, b) semantic segmentation masks for all the objects
visible in its current view, and c) depth information. These signals are greatly desired since they
enable thorough exploration to determine their practical value for different tasks or can serve as a
predictive target when learning models. For example, for the task of navigation one can easily swap
the RGB values with depth information or with semantic masks in order to quantify the importance
of these different input signals.
Other than the visual input, each scene in SUNCG is fully annotated with the 3D location and
size of all rooms and object instances in the form of a 3D bounding box. Rooms and objects are
also marked with their type, e.g. bedroom or shoe cabinet respectively. This information allows
for a detailed mapping from each 3D location (x, y, z) to an object instance (or None if the space
is unoccupied), as well as the room type. Furthermore, more features can be built on top of these
existing annotations, such as top-down 2D occupancy maps, connectivity analysis and shortest paths,
or any arbitrary physics, all potentially helpful for a variety of applications.
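To make the use of these annotations concrete, the following sketch (function and variable names are our own, not part of the House3D API) rasterizes the projected 2D extents of annotated object bounding boxes into a top-down occupancy grid of the kind mentioned above:

    import numpy as np

    def build_occupancy_map(boxes_2d, x_range, y_range, resolution=0.1):
        # boxes_2d: iterable of (xmin, ymin, xmax, ymax) ground-plane extents of objects
        nx = int(np.ceil((x_range[1] - x_range[0]) / resolution))
        ny = int(np.ceil((y_range[1] - y_range[0]) / resolution))
        grid = np.zeros((nx, ny), dtype=bool)  # True marks occupied cells
        for xmin, ymin, xmax, ymax in boxes_2d:
            i0 = max(0, int((xmin - x_range[0]) / resolution))
            i1 = min(nx, int(np.ceil((xmax - x_range[0]) / resolution)))
            j0 = max(0, int((ymin - y_range[0]) / resolution))
            j1 = min(ny, int(np.ceil((ymax - y_range[0]) / resolution)))
            grid[i0:i1, j0:j1] = True
        return grid

Such a grid can then feed connectivity analysis or shortest-path computations as described above.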
3.3 BUILDING AN INTERACTIVE ENVIRONMENT
3.3.1 RENDERER
To build a realistic 3D environment, we develop a renderer for the 3D scenes from SUNCG, and
define actions that obey simple physical rules for an agent to navigate in the scenes. The renderer
is based on OpenGL and can run on both Linux and MacOS. The renderer provides RGB images,
segmentation masks and depth maps.
As highlighted above, the environment needs to be efficient to be used for large-scale reinforcement
learning. On an NVIDIA Tesla M40 GPU, our implementation can render 120×90-sized frames at
over 600 fps, while multiple renderers can run in parallel on one or more GPUs. When rendering
multiple houses simultaneously, one M40 GPU can be fully utilized to render at a total of 1800 fps.
The default simple physics, currently written in Python, adds a negligible overhead to the rendering,
especially when used with multi-processing. The high throughput of our implementation enables
efficient learning for a variety of interactive tasks, such as on-policy reinforcement learning.
The environment along with a python API for easy use is available at http://github.com/
facebookresearch/House3D.
3.3.2 INTERACTION
In House3D, an agent can live in any location (x, y, z) within a 3D scene, as long as it does not
collide with any object instance (including walls) within a small distance range, i.e. robot’s radius.
Doors, gates and arches are considered passage ways for the agent, meaning that an agent can walk
through those structures freely. These default design choices add negligible runtime overhead and
closely follow the behaviour of a real robot navigating inside a house: a robot cannot walk through
walls or other objects but can pass freely through free space, including doors and gates, in order to
reach different parts of the scene. Note that more complex interaction rules can be incorporated (e.g.
manipulating objects) within House3D using our flexible API, depending on the task at hand.
4 ROOMNAV: A BENCHMARK TASK FOR CONCEPT-DRIVEN NAVIGATION
To test whether an agent can learn semantic concepts, we consider a natural use case of a home
robot in Figure 1b. A human may give a high level instruction to the robot, for example, “Go to the
kitchen”, so that one can later ask the robot to turn on the oven. The robot needs to behave appropriately conditioned on the house it is located in as well as the instruction, e.g. the semantic concept
“kitchen”. Moreover, a practical commercial home robot also needs the ability of generalization:
the robot can be intensively trained for an arbitrarily long time by its manufacturer but it needs to be
immediately and successfully deployed to the user’s house, even if the new house is of a completely
different design and consists of different objects than the training environments.
The key challenges for building real-world home robots are: (1) understanding of the scene from
its visual sensors; (2) execution of high-level conceptual instructions; (3) safe navigation and exploration in complex indoor environments; (4) generalization to unseen scenes under the desired
concepts.
To study the aforementioned abilities of an agent, we develop a benchmark task, Concept-Driven
Navigation (RoomNav), based on our House3D environment. For our experimentation, we restrict
the instruction to be of the form “go to RoomType”, where RoomType denotes a pre-defined room
type. In RoomNav, the room types define the semantic concepts that an agent needs to interpret
across a variety of scenes of distinct visual appearances. To ensure fast experimentation cycles,
we perform experiments on a subset of House3D. We manually select 270 houses suitable for a
navigation task and split them into a small set (20 houses), a large set (200 houses) and a test set (50
houses), where the test set is used to evaluate the generalization of the trained agents.
4.1 TASK FORMULATION
Suppose we have a set of episodic environments E = {E1 , . . . , En } and a set of semantic concepts
I = {I1 , . . . , Im }. During each episode, the agent is interacting with one environment E ∈ E and is
given an instruction I ∈ I. In the beginning of an episode, the agent is randomly placed somewhere
in E. At each time step t, the agent receives a visual signal Xt from E via its first person view
sensor. Let st = {X1 , . . . , Xt , I} denote the state of the agent at time t. The agent needs to propose
action at to navigate and rotate its sensor given st . The environment will give out a reward signal rt
and terminates when it succeeds or times out.
The objective of this task is to learn an optimal policy π(at |st , I) that leads to the targeted room
according to I. During training, we train the agent in a training set of environments Etrain . For
measuring the generalization of the learned policy, we will also evaluate the policy in another set of
environments Etest , such that Etest ∩ Etrain = ∅.
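Concretely, this protocol amounts to a standard episodic interaction loop. The sketch below is a schematic illustration with placeholder objects (env, agent), not the actual RoomNav API:

    def run_episode(env, agent, instruction, max_steps=100):
        # Generic episodic loop: the agent conditions on both its observations and the instruction I.
        obs = env.reset(instruction)              # agent placed at a random location in E
        agent.reset()
        total_reward, success = 0.0, False
        for t in range(max_steps):
            action = agent.act(obs, instruction)  # a_t ~ pi(a | s_t, I)
            obs, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                success = info.get('success', False)
                break
        return total_reward, success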
4.2 DETAILS
Here we provide further details for the task, RoomNav. For more specifications see the Appendix.
Environment Statistics: The selected 270 houses are manually verified to be suitable for the
task of navigation; e.g. they are well connected, contain many targets, and are large enough for
exploration (studios excluded). We split them into 3 disjoint sets, denoted by Esmall , Elarge and
Etest respectively. For the semantic concepts, we select the five most common room types, i.e.
kitchen, living room, dining room, bedroom and bathroom. The detailed statistics are shown in
Table 2. Note that we define a small set of room types as we emphasize exploring the problem
of concept-driven navigation. One could extend this set to include objects or even subareas within
rooms.
Observations: We utilize three different kinds of visual input signals for Xt , including (1) raw pixel
values; (2) segmentation mask of the pixel input; and (3) depth information, and experiment with
different combinations of them. We encode each concept I with a one-hot vector representation.
           |E|   avg. #targets   kitchen%   dining room%   living room%   bedroom%   bathroom%
Esmall      20       3.9           0.95         0.60           0.60          0.95        0.80
Elarge     200       3.7           1.00         0.35           0.63          0.94        0.80
Etest       50       3.7           1.00         0.48           0.58          0.94        0.70
Table 2: Statistics of the selected environment sets for RoomNav. RoomType% denotes the percentage of houses containing at least one target room of type RoomType.
Action Space: Similar to existing navigation works, we define a fixed set of actions, here 12 in
number including different scales of rotations and movements. Due to the complexity of the indoor
scenes as well as the fact that real robots may navigate towards any direction with any (bounded)
velocity, we also explore a continuous action space similar to Lowe et al. (2017). See the Appendix
for more details. In all cases, if the agent hits an obstacle it remains still.
Success Measure and Reward Function: To define success for the task, we want to ensure that
the agent identifies the room due to its unique properties (e.g. presence of appropriate objects in the
room such as pan and knives for kitchen and bed for bedroom) instead of merely reaching there by
luck. An episode is considered successful if the agent achieves both of the following two criteria:
(1) the agent is located inside the target room; (2) the agent consecutively “sees” a designated object
category associated with that target room type for at least 2 time steps. We assume that an agent
“sees” an object if there are at least 4% of pixels in Xt belonging to that object.
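This criterion can be checked directly from the segmentation mask; the following sketch is illustrative only (the category ids and helper names are assumptions, not part of the environment code):

    import numpy as np

    def sees_object(seg_mask, target_category_ids, threshold=0.04):
        # seg_mask: HxW array of per-pixel category ids from the segmentation signal.
        frac = np.isin(seg_mask, list(target_category_ids)).mean()
        return frac >= threshold

    def check_success(consecutive_seen, in_target_room, seen_now, min_consecutive=2):
        # consecutive_seen: number of consecutive steps the designated object has been seen so far.
        consecutive_seen = consecutive_seen + 1 if seen_now else 0
        return consecutive_seen, (in_target_room and consecutive_seen >= min_consecutive)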
Regarding the reward function, ideally two signals are enough to reflect the task requirement: (1) a
collision penalty when hitting obstacles; and (2) a success reward when completing the instruction.
However, this basic reward function makes it too difficult for an RL agent to learn, as the positive
reward is too sparse. Thus, during training we resort to an informative reward shaping in order
to provide additional supervision: we compute the approximate shortest distance from the target
room to each location in the house and adopt the difference of shortest distances between the agent’s
movement as an additional reward signal. Note that our ultimate goal is to learn a policy that could
generalize to unseen houses. Our strong reward shaping supervises the agent at train time and is
not available to the agent at test time. We empirically observe that stronger reward shaping leads to
better performances on both training and testing.
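A minimal sketch of this reward shaping, assuming a precomputed 2D occupancy grid and unit-step grid movement (the actual implementation may approximate the shortest distance differently); the constants follow the reward details in Appendix A:

    from collections import deque

    def shortest_distance_map(occupied, target_cells):
        # Multi-source BFS from all cells of the target room over free grid cells.
        dist = {cell: 0 for cell in target_cells}
        queue = deque(target_cells)
        nx, ny = len(occupied), len(occupied[0])
        while queue:
            x, y = queue.popleft()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (x + dx, y + dy)
                if (0 <= nxt[0] < nx and 0 <= nxt[1] < ny
                        and not occupied[nxt[0]][nxt[1]] and nxt not in dist):
                    dist[nxt] = dist[(x, y)] + 1
                    queue.append(nxt)
        return dist

    def shaped_reward(dist, prev_cell, curr_cell, collided, succeeded, in_target_room):
        r = dist.get(prev_cell, 0) - dist.get(curr_cell, 0)  # progress towards the target room
        if collided:
            r -= 0.3                                         # collision penalty (Appendix A)
        if succeeded:
            r += 10.0                                        # success reward (Appendix A)
        elif not in_target_room:
            r -= 0.1                                         # time penalty outside the target room
        return r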
5 GATED-ATTENTION NETWORKS FOR MULTI-TARGET LEARNING
The RoomNav task can be considered as a multi-target learning problem: the policy needs to condition on both the input st and the target I, namely the instruction concept. For policy representations
which incorporate the target I, we propose two baseline models with a gated-attention architecture,
similar to Dhingra et al. (2016) and Chaplot et al. (2017): a gated-CNN network for continuous
actions and a gated-LSTM network for discrete actions. We train the gated-CNN policy using the
deep deterministic policy gradient (DDPG) (Lillicrap et al., 2015), while the gated-LSTM policy is
trained using the asynchronous advantage actor-critic algorithm (A3C) (Mnih et al., 2016).
5.1 DDPG WITH GATED-CNN POLICY
5.1.1 DEEP DETERMINISTIC POLICY GRADIENT
DDPG is a widely used off-policy learning algorithm for continuous action spaces (Lillicrap et al., 2015).
Suppose we have a deterministic policy µ(st |θ) (actor) and the Q-function Q(st , a|θ) (critic) both
parametrized by θ. DDPG optimizes the policy µ(st |θ) by maximizing
Lµ (θ) = Est [Q(st , µ(st |θ)|θ)] ,
and updates the Q-function by minimizing
LQ (θ) = E_{st, at, rt} [ (Q(st, at |θ) − γ Q(st+1, µ(st+1 |θ)|θ) − rt)^2 ] .
Here, we use a shared network for both actor and critic, which leads to the final loss function
LDDPG (θ) = −Lµ (θ) + αDDPG LQ (θ), where αDDPG is a constant balancing the two objectives.
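The combined objective can be sketched in PyTorch as follows. The policy() and q_value() methods on the shared network are placeholders, and the target network follows the soft-update scheme described in Appendix B.2; γ = 0.95 and αDDPG = 100 are taken from the same appendix:

    import torch

    def ddpg_loss(net, target_net, batch, gamma=0.95, alpha_ddpg=100.0):
        s, a, r, s_next = batch                        # tensors sampled from the replay buffer
        with torch.no_grad():                          # critic target uses the slowly-updated target network
            q_next = target_net.q_value(s_next, target_net.policy(s_next))
            q_target = r + gamma * q_next
        critic_loss = ((net.q_value(s, a) - q_target) ** 2).mean()   # L_Q
        actor_loss = -net.q_value(s, net.policy(s)).mean()           # -L_mu
        return actor_loss + alpha_ddpg * critic_loss                 # L_DDPG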
5.1.2 GATED-CNN NETWORK FOR CONTINUOUS POLICY
State Encoding:
Given state st , we first stack the most recent k frames X =
[Xt , Xt−1 , . . . , Xt−k+1 ] channel-wise and apply a convolutional neural network to derive an image representation x = fcnn (X|θ) ∈ RdX . We convert the target I into an embedding vector
y = fembed (I|θ) ∈ RdI . Subsequently, we apply a fusion module M (x, y|θ) to derive the final
encoding hs = M (x, y|θ).
Gated-Attention for Feature Fusion: For the fusion module M (x, y|θ), the straightforward version is concatenation, namely Mcat (x, y|·) = [x, y]. In our case, x is always a high-dimensional
feature vector (i.e., image feature) while y is a simple low-dimensional conditioning vector (e.g.,
instruction). Thus, simple concatenation may result in optimization difficulties. For this reason, we propose to use a gated-attention mechanism. Suppose x ∈ Rdx and y ∈ Rdy where
dy < dx . First, we transform y to y′ ∈ R^{dx} via an MLP, namely y′ = fmlp (y|θ), and then perform a Hadamard product between x and sigmoid(y′), which leads to our final gated fusion module
M (x, y|θ) = x ⊙ sigmoid(fmlp (y|θ)). This gated fusion module could also be interpreted as an attention mechanism over the feature vector which could help better shape the feature representation.
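A minimal PyTorch sketch of this gated fusion module (layer sizes are illustrative):

    import torch
    import torch.nn as nn

    class GatedFusion(nn.Module):
        # M(x, y) = x * sigmoid(f_mlp(y)): the low-dimensional conditioning vector y
        # (e.g. the instruction embedding) gates the high-dimensional feature x.
        def __init__(self, d_y, d_x, d_hidden=128):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(d_y, d_hidden), nn.ReLU(),
                                     nn.Linear(d_hidden, d_x))
        def forward(self, x, y):
            return x * torch.sigmoid(self.mlp(y))   # Hadamard product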
Policy Representation: For the policy, we apply an MLP layer to the state representation hs , followed by a softmax operator (for bounded velocity) to produce the continuous action. Moreover, in
order to produce a stochastic policy for better exploration, we apply the Softmax-Gumbel trick (Jang
et al., 2016), resulting in the final policy µ(st |θ) = Gumbel-Softmax(fmlp (hs |θ)). Note that since
we add randomness to µ(st |θ), our DDPG formulation can also be interpreted as the SVG(0) algorithm (Heess et al., 2015).
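The stochastic policy head can be sketched as follows; this is a straightforward Gumbel-Softmax implementation (recent PyTorch versions also ship one as torch.nn.functional.gumbel_softmax):

    import torch
    import torch.nn.functional as F

    def gumbel_softmax_sample(logits, tau=1.0, eps=1e-10):
        # Differentiable sample from the categorical distribution defined by logits;
        # the output sums to 1, which keeps the produced action bounded.
        u = torch.rand_like(logits)
        gumbel = -torch.log(-torch.log(u + eps) + eps)
        return F.softmax((logits + gumbel) / tau, dim=-1)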
Q-function: The Q-function Q(s, a) conditions on both state s and action a. We again apply a
gated fusion module to the feature vector x and the action vector a to derive a hidden representation
hQ = M (x, a|θ). We eventually apply another MLP to hQ to produce the final value Q(s, a).
A model demonstration is shown in the top part of Fig. 2, where each block has its own parameters.
Figure 2: Overview of our proposed models. Bottom part demonstrates the gated-LSTM model for
discrete action while the top part shows the gated-CNN model for continuous action. The “Gated
Fusion” module denotes the gated-attention architecture.
5.2 A3C WITH GATED-LSTM POLICY
5.2.1 ASYNCHRONOUS ADVANTAGE ACTOR-CRITIC
A3C is a variant of the policy gradient algorithm introduced by Mnih et al. (2016), which reduces the
variance of policy gradient by jointly updating a value function as the baseline. Suppose we have a
discrete policy π(a; s|θ) and a value function v(s|θ). A3C optimizes the policy by minimizing the
loss function
Lpg (θ) = −E_{st, at, rt} [ Σ_{t=1}^{T} (Rt − v(st)) log π(at; st |θ) ],
where Rt is the discounted accumulative reward defined by Rt = Σ_{i=0}^{T−t} γ^i r_{t+i} + v(s_{T+1}).
The value function is updated by minimizing the loss
Lv (θ) = E_{st, rt} [ (Rt − v(st))^2 ] .
Finally the overall loss function for A3C is
LA3C = Lpg (θ) + αA3C Lv (θ)
where αA3C is a constant coefficient.
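For one unrolled trajectory, the A3C objective can be sketched as below (γ = 0.95 and αA3C = 1.0 follow Appendix B.2; the entropy bonus used during training is omitted for brevity):

    def a3c_loss(log_probs, values, rewards, bootstrap_value, gamma=0.95, alpha_a3c=1.0):
        # log_probs, values, rewards: per-step tensors for one unrolled trajectory;
        # bootstrap_value is v(s_{T+1}), used to complete the discounted return.
        R = bootstrap_value
        policy_loss, value_loss = 0.0, 0.0
        for t in reversed(range(len(rewards))):
            R = rewards[t] + gamma * R                   # R_t = r_t + gamma * R_{t+1}
            advantage = R - values[t]
            policy_loss = policy_loss - advantage.detach() * log_probs[t]
            value_loss = value_loss + advantage.pow(2)
        return policy_loss + alpha_a3c * value_loss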
5.2.2 GATED-LSTM NETWORK FOR DISCRETE POLICY
State Encoding: Given state st , we first apply a CNN module to extract image feature xt for
each input frame Xt . Similarly, for the target, we apply a gated fusion module to derive a state
representation ht = M (xt , I|θ) at each time step t. Then, we concatenate ht with the target I and
the result is fed into the LSTM module (Hochreiter & Schmidhuber, 1997) to obtain a sequence of
LSTM outputs {ot }t , so that the LSTM module has direct access to the target other than the attended
visual feature.
Policy and Value Function: For each time step t, we concatenate the state vector ht with the
output of the LSTM ot to obtain a joint hidden vector hjoint = [ht , ot ]. Then we apply two MLPs to
hjoint to obtain the policy distribution π(a; st |θ) as well as the value function v(st |θ).
A visualization of the model is shown in the bottom part of Fig. 2. Note that the parameters of CNN
module are shared across time.
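A schematic forward pass of the gated-LSTM policy is sketched below; the class and argument names are our own, and the CNN and gated fusion modules are assumed to follow the descriptions in Appendix B.1:

    import torch
    import torch.nn as nn

    class GatedLSTMPolicy(nn.Module):
        def __init__(self, cnn, fusion, d_feat=256, d_instr=25, n_actions=12):
            super().__init__()
            self.cnn, self.fusion = cnn, fusion                 # shared CNN and gated fusion modules
            self.lstm = nn.LSTM(d_feat + d_instr, 256, batch_first=True)
            self.policy_head = nn.Sequential(nn.Linear(d_feat + 256, 128), nn.ReLU(),
                                             nn.Linear(128, 64), nn.ReLU(),
                                             nn.Linear(64, n_actions))
            self.value_head = nn.Sequential(nn.Linear(d_feat + 256, 64), nn.ReLU(),
                                            nn.Linear(64, 32), nn.ReLU(),
                                            nn.Linear(32, 1))

        def forward(self, frames, instr_embed, hidden=None):
            # frames: (B, T, C, H, W); instr_embed: (B, d_instr)
            B, T = frames.shape[:2]
            x = self.cnn(frames.flatten(0, 1)).view(B, T, -1)   # per-frame image features x_t
            y = instr_embed.unsqueeze(1).expand(B, T, -1)       # instruction repeated over time
            h = self.fusion(x, y)                               # h_t = M(x_t, I)
            o, hidden = self.lstm(torch.cat([h, y], dim=-1), hidden)
            joint = torch.cat([h, o], dim=-1)                   # h_joint = [h_t, o_t]
            return self.policy_head(joint), self.value_head(joint), hidden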
6 EXPERIMENTAL RESULTS
We report experimental results for our models on the task of RoomNav under different sized training
sets. We compare models with discrete and continuous action spaces and empirically show the effect
of using different input modalities.
6.1 DETAILS
We train our two baseline models on both the small set (Esmall , 20 houses) and the large set (Elarge ,
200 houses), and examine both the training performance (success rate) on the training set as well as
the generalization performance on the test set (Etest , 50 houses). All the success rate evaluations use
a fixed random seed for a fair comparison. In all the cases, if the agent cannot accomplish the goal
within 100 steps2 , we terminate the episode and declare failure. We use gated-CNN and gated-LSTM
to denote the networks with gated-attention, and concat-CNN and concat-LSTM for models with
simple concatenation. We also experiment with different visual signals to the agents, including RGB
image (RGB Only), RGB image with depth information (RGB+Depth) and semantics mask with
depth information (Mask+Depth). To preserve the richness of the input visual signal, the resolution
of the input image is 120 × 90.
2 This is enough for success evaluation. The average number of steps for success runs is much smaller than 100. Refer to Appendix B.4 for details.
(a) Training performance
(b) Generalization performance on the test set
Figure 3: Performance of various models trained on Esmall (20 houses) with different input signals:
RGB Only, RGB+Depth and Mask+Depth. In each group, the bars from left to right correspond to
gated-LSTM, concat-LSTM, gated-CNN, concat-CNN and random policy respectively.
During each simulated episode, we randomly select a house from the environment set and randomly
pick an available target from the house to instruct the agent. Each episode will be forced to terminate
after 100 time steps. During training, we add an entropy bonus term for both models in addition to
the original loss function. For evaluation, we keep the final model for DDPG due to its stable learning curve, while for A3C, we take the model with the highest training success rate. Optimization is
performed via Adam (Kingma & Ba, 2014) while the implementation is done in PyTorch (Paszke
et al., 2017). More model and experiment details can be found in appendix.
6.2 TRAINING ON THE SMALL SET WITH 20 HOUSES
Training Performance: For each of the models, we run 2000 evaluation episodes on Esmall and
measure the overall success rate. The results are summarized in Fig. 3a. Our empirical results show
that the gated models achieve a higher success rate than the models without the gated-attention architecture. Especially in the DDPG case, the gated-CNN model outperforms the concat-CNN model by
a large margin. We believe this is due to the fact that there are two gated-fusion modules utilized in
the gated-CNN model. Notably, the simpler CNN model with DDPG has a stable learning curve and even
achieves a higher success rate than LSTM with all different input signals, which suggests that simpler
CNN models can be an appropriate candidate for fitting a small set of environments.
Regarding the different combinations of input signals, adding a depth channel to the RGB channel
generally improves the training performance. Furthermore, changing the RGB signal to semantic
signal significantly boosts the performance for all the models.
Generalization Performance on Etest : Here, the models trained on Esmall are evaluated on the test
set Etest to measure their generalization ability. We run 2000 episodes and measure the success rate
of each model. The results are shown in Fig. 3b.
Regarding different types of visual signals, we draw the same conclusions as at training time: depth
improves the performance and semantic information contributes to generalization ability the most.
More importantly, we observe that the generalization performance is drastically worse than the training performance, especially for the cases with RGB input: the gated-LSTM models achieve even
lower success rate than concat-LSTM models at test time despite the fact that they have much better training performance. This indicates that the neural model tends to overfit to the environments
with a small training set, while having a semantic input signal could slightly alleviate the issue of
overfitting.
6.3 TRAINING ON THE LARGE SET WITH 200 HOUSES
Here we train our models on the large set Elarge containing 200 different houses. For visual signals,
we focus on “RGB + Depth” and “Mask + Depth”. For evaluation, we measure the training success
(a) Training performance
(b) Generalization performance on the test set
Figure 4: Performance of various models trained on Elarge (200 houses) with input signals of
RGB+Depth and Mask+Depth. In each group, the bars from left to right correspond to gated-LSTM,
concat-LSTM, gated-CNN, concat-CNN and random policy respectively.
rate over 5000 evaluation episodes and the test success rate over 2000 episodes. Both train and test
results are summarized in Fig. 4.
Training Performance: Compared to the performance on the small training set, the training performance on the large set drops significantly, especially for the models using RGB signal. We also
notice that in the case of RGB input, gated models perform similarly to concat models. This suggests
fundamental difficulties for reinforcement learning agents to learn within a large set of diverse and
visually rich environments. Whereas, for models with semantic signals, we observe a huge gain on
training performance as well as the benefits by having a gated-attention architecture. This implies
that a semantic vision system can be potentially an effective component for building a real-world
robotic system.
In addition, we observe that on the large set, even with a relatively unstable algorithm, such as
A3C, the models with larger capacity, i.e. LSTMs, considerably outperform the simpler reactive
models, i.e., CNNs. We believe this is due to the larger scale and the high complexity of the training
set, which makes it almost impossible for an agent to “remember” the optimal actions for every
scenario. Instead, an agent needs to develop high-level abstract strategies, such as exploration, and
memory. This also suggests a potential future direction of encouraging the agent to learn generalizable
abstractions by introducing more inductive bias into the model.
Generalization Performance: Regarding generalization, as we can see from Fig. 4b, after training
on a large number of environments, every model now has a much smaller gap between its training
performance and test performance. For the models with semantic input signals, the test performance is largely improved compared to those trained on the small set. This again emphasizes the
importance of having a large set of diverse environments for training generalizable agents.
Lastly, we also analyze the detailed success rate with respect to each target instruction and provide
the results in the appendix.
7 CONCLUSION
In this paper, we developed a new environment, House3D, which contains 45K houses with a
rich set of objects as well as natural layouts resembling the real-world. House3D is an efficient and
extensible environment that could be used for a variety of applications.
Based on our House3D environment, we further introduce a concept-driven navigation task, RoomNav, which tests an agent's intelligence, including understanding the given semantic concept, interpreting the comprehensive visual signal, navigating, and, most importantly, generalizing. In
addition to our work, House3D has to date been used for other tasks, such as question
answering (Das et al., 2017a).
We develop two baseline models for RoomNav, gated-CNN and gated-LSTM, with the gated-attention architecture, which show promising results at train and test time. In our experiments,
we observe that using the semantic signal as the input considerably enhances the agent’s generalization ability. Increasing the size of the training environments is important but at the same time
introduces fundamental bottlenecks when training agents to accomplish the RoomNav task due to
the higher complexity of the underlying task.
We believe our environment will benefit the community and facilitate the efforts towards building
better AI agents. We also hope that our initial attempts towards addressing semantic generalization
ability in reinforcement learning will serve as an important step towards building real-world robotic
systems.
REFERENCES
Jacob Andreas, Dan Klein, and Sergey Levine. Modular multitask reinforcement learning with
policy sketches. arXiv preprint arXiv:1611.01796, 2016.
Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio
Savarese. 3D semantic parsing of large-scale indoor spaces. CVPR, 2016.
Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler,
Andrew Lefrancq, Simon Green, Vı́ctor Valdés, Amir Sadik, et al. Deepmind lab. arXiv preprint
arXiv:1612.03801, 2016.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR), 47:253–279,
2013.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and
Wojciech Zaremba. OpenAI gym. arXiv preprint arXiv:1606.01540, 2016.
Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub,
Jean Rouat, Hugo Larochelle, and Aaron Courville. HoME: a household multimodal
environment. arXiv preprint arXiv:1711.11017, 2017.
Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva,
Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from RGB-D data in indoor
environments. International Conference on 3D Vision (3DV), 2017.
Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. Gated-attention architectures for task-oriented language
grounding. arXiv preprint arXiv:1706.07230, 2017.
Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter
Abbeel, and Wojciech Zaremba. Transfer from simulation to real world through learning deep
inverse dynamics model. arXiv preprint arXiv:1610.03518, 2016.
Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied Question Answering. arXiv preprint arXiv:1711.11543, 2017a.
Abhishek Das, Satwik Kottur, Stefan Lee, José M. F. Moura, and Dhruv Batra. Learning cooperative
visual dialog agents with deep reinforcement learning. ICCV, 2017b.
Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016.
Gandhi Dhiraj, Pinto Lerrel, and Gupta Abhinav. Learning to fly by crashing. IROS, 2017.
Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2 : Fast
reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation
of deep networks. arXiv preprint arXiv:1703.03400, 2017a.
Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, and Sergey Levine. Generalizing skills with
semi-supervised reinforcement learning. ICLR, 2017b.
Dieter Fox, Sebastian Thrun, and Wolfram Burgard. Probabilistic Robotics. MIT press, 2005.
Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, and Jitendra Malik. Cognitive
mapping and planning for visual navigation. CVPR, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning
continuous control policies by stochastic value gradients. In Advances in Neural Information
Processing Systems, pp. 2944–2952, 2015.
Irina Higgins, Arka Pal, Andrei A Rusu, Loic Matthey, Christopher P Burgess, Alexander Pritzel,
Matthew Botvinick, Charles Blundell, and Alexander Lerchner. Darla: Improving zero-shot transfer in reinforcement learning. arXiv preprint arXiv:1707.08475, 2017.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):
1735–1780, 1997.
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David
Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv
preprint arXiv:1611.05397, 2016.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax.
arXiv preprint arXiv:1611.01144, 2016.
Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The Malmo platform for artificial intelligence experimentation. In IJCAI, pp. 4246–4247, 2016.
Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. Vizdoom: A doom-based AI research platform for visual reinforcement learning. In Computational
Intelligence and Games (CIG), 2016 IEEE Conference on, pp. 1–8. IEEE, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
John J. Leonard and Hugh F. Durrant-Whyte. Directed Sonar Sensing for Mobile Robot Navigation.
Kluwer Academic Publishers, Norwell, MA, USA, 1992. ISBN 0792392426.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. JMLR, 2016.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa,
David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv
preprint arXiv:1509.02971, 2015.
Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actorcritic for mixed cooperative-competitive environments. arXiv preprint arXiv:1706.02275, 2017.
John McCormac, Ankur Handa, Stefan Leutenegger, and Andrew J Davison. SceneNet RGB-D: Can
5m synthetic images beat generic ImageNet pre-training on indoor segmentation? In Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2678–2687, 2017.
Alexander H Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh,
and Jason Weston. Parlai: A dialog research software platform. arXiv preprint arXiv:1705.06476,
2017.
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha
Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, et al. Learning to navigate in complex
environments. arXiv preprint arXiv:1611.03673, 2016.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level
control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim
Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement
learning. In International Conference on Machine Learning, pp. 1928–1937, 2016.
Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. Deep transfer in reinforcement learning
by language grounding. arXiv preprint arXiv:1708.00133, 2017.
Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. Zero-shot task generalization with
multi-task deep reinforcement learning. arXiv preprint arXiv:1706.05064, 2017.
Emilio Parisotto and Ruslan Salakhutdinov. Neural map: Structured memory for deep reinforcement
learning. arXiv preprint arXiv:1702.08360, 2017.
Adam Paszke, Sam Gross, and Soumith Chintala. Pytorch, 2017. URL http://pytorch.org/.
Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. Curiosity-driven exploration
by self-supervised prediction. ICML, 2017.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng
Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.
ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision
(IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.
Andrei A Rusu, Matej Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell.
Sim-to-real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286,
2016.
Fereshteh Sadeghi and Sergey Levine. CAD2RL: Real single-image flight without a single real
image. RSS, 2017.
Manolis Savva, Angel X. Chang, Alexey Dosovitskiy, Thomas Funkhouser, and Vladlen Koltun. MINOS: Multimodal indoor simulator for navigation in complex environments. arXiv:1712.03931,
2017.
Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of Bits: An
open-domain platform for web-based agents. In International Conference on Machine Learning,
pp. 3135–3144, 2017.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche,
Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering
the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
Shuran Song, Fisher Yu, Andy Zeng, Angel X Chang, Manolis Savva, and Thomas Funkhouser.
Semantic scene completion from a single depth image. CVPR, 2017.
Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming
Lin, Florian Richoux, and Nicolas Usunier. Torchcraft: a library for machine learning research
on real-time strategy games. arXiv preprint arXiv:1611.00625, 2016.
Lei Tai and Ming Liu. Towards cognitive exploration through deep reinforcement learning for mobile robots. arXiv preprint arXiv:1610.01733, 2016.
Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks.
In Advances in Neural Information Processing Systems, pp. 2154–2162, 2016.
Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and Larry Zitnick. ELF: An extensive, lightweight and flexible research platform for real-time strategy games. arXiv preprint
arXiv:1707.01067, 2017.
Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. arXiv
preprint arXiv:1703.06907, 2017.
Vladimir Vapnik. The nature of statistical learning theory. Springer science & business media,
2013.
Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets,
Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, et al.
Starcraft ii: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782, 2017.
Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J Lim, Abhinav Gupta, Li Fei-Fei, and Ali
Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning.
In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 3357–3364.
IEEE, 2017.
A ROOMNAV TASK DETAILS
The location information of an agent can be represented by 4 real numbers: the 3D location (x, y, z)
and the rotation degree ρ of its first person view sensor, which indicates the front direction of the
agent. Note that in RoomNav, the agent is not allowed to change its height z, hence the overall
number of degrees of freedom is 3.
An action can be in the form of a triple a = (δx , δy , δρ ). After taking the action a, the agent will
move to a new 3D location (x + δx , y + δy , z) with a new rotation ρ + δρ . The physics in House3D
will detect collisions with objects under action a and in RoomNav, the agent will remain still in case
of a collision. We also restrict the velocity of the agent such that |δx |, |δy | ≤ 0.5 and |δρ | ≤ 30 to
ensure a smooth movement.
Continuous Action: A continuous action a consists of two parts a = [m, r] where m =
(m1 , . . . , m4 ) is for movement and r = (r1 , r2 ) is for rotation. Since the velocity of the agent
should be bounded, we require m, r to be a valid probability distribution. Suppose the original location of robot is (x, y, z) and the angle of camera is ρ, then after executing a, the new 3D location
will be (x + (m1 − m2 ) ∗ 0.5, y + (m3 − m4 ) ∗ 0.5, z) and the new angle is ρ + (r1 − r2 ) ∗ 30.
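A sketch of how such a continuous action updates the agent's pose (collision handling is left to the environment's physics):

    def apply_continuous_action(x, y, rho, action):
        # action = [m1, m2, m3, m4, r1, r2], with m and r each forming a probability distribution.
        m1, m2, m3, m4, r1, r2 = action
        new_x = x + (m1 - m2) * 0.5
        new_y = y + (m3 - m4) * 0.5
        new_rho = rho + (r1 - r2) * 30.0
        return new_x, new_y, new_rho   # the height z is unchanged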
Discrete Action: We define 12 different action triples in the form of ai = (δx , δy , δρ ) satisfying the
velocity constraints. There are 8 actions for movement: left, forward, right with two scales and two
diagonal directions; and 4 actions for rotation: clockwise and counter-clockwise with two scales. In
the discrete action setting, we do not allow the agent to move and rotate simultaneously.
Reward Details: In addition to the reward shaping of difference of shortest distances, we have
the following rewards. When hitting an obstacle, the agent receives a penalty of 0.3. In the case
of success, the winning reward is +10. In order to encourage exploration (or to prevent eternal
rotation), we add a time penalty of 0.1 to the agent for each time step outside the target room. Note
that since we restrict the velocity of the agent, the difference of shortest path after an action will be
no more than 0.5 × √2 ≈ 0.7.
B EXPERIMENT DETAILS
B.1 NETWORK ARCHITECTURES
We apply a batch normalization layer after each layer in the CNN module. The activation function
used is ReLU. The embedding dimension of concept instruction is 25.
Gated-CNN: In the CNN part, we have 4 convolution layers of 64, 64, 128, 128 channels respectively, with kernel size 5 and stride 2, as well as a fully-connected layer of 512 units. We use a
linear layer to transform the concept embedding to a 512-dimension vector for gated fusion. The
MLP for policy has two hidden layers of 128 and 64 units, and the MLP for Q-function has a single
hidden layer of 64 units.
Gated-LSTM: In the CNN module, we have 4 convolution layers of 64, 64, 128, 128 channels
each and with kernel size 5 and stride 2, as well as a fully-connected layer of 256 units. We use a
linear layer to convert the concept embedding to a 256-dimension vector. The LSTM module has
256 hidden dimensions. The MLP module for policy contains two layers of 128 and 64 hidden units,
and the MLP for value function has two hidden layers of 64 and 32 units.
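For concreteness, the CNN module described above can be sketched in PyTorch as follows; the input channel count depends on the chosen signals and frame stacking, and the absence of padding plus a 90×120 input resolution are assumptions used to size the fully-connected layer:

    import torch.nn as nn

    def make_cnn(in_channels, fc_units=256):
        # 4 conv layers (64, 64, 128, 128 channels, kernel 5, stride 2), each followed by
        # batch normalization and ReLU, then a fully-connected layer (512 units for the
        # gated-CNN model, 256 for the gated-LSTM model).
        layers, prev = [], in_channels
        for ch in (64, 64, 128, 128):
            layers += [nn.Conv2d(prev, ch, kernel_size=5, stride=2),
                       nn.BatchNorm2d(ch), nn.ReLU()]
            prev = ch
        # 128 x 2 x 4 is the flattened feature size for 90 x 120 inputs with no padding.
        return nn.Sequential(*layers, nn.Flatten(), nn.Linear(128 * 2 * 4, fc_units), nn.ReLU())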
B.2 TRAINING PARAMETERS
We normalize each channel of the input frame to [0, 1] before feeding it into the neural network.
Each of the training procedures includes a weight decay of 10−5 and a discounted factor γ = 0.95.
DDPG: We stack k = 5 recent frames and use a learning rate of 10^-4 with batch size 128. We choose
αDDPG = 100 for all the settings except for the case with input signal of “RGB+Depth” on Elarge ,
where we choose αDDPG = 10. We use an entropy bonus term with coefficient 0.001 on Esmall and
0.01 on Elarge . We use exponential average to update the target network with rate 0.001. A training
update is performed every 10 time steps. The replay buffer size is 7 × 10^5. We run training for
80000 episodes in all. We use a linear exploration strategy in the first 30000 episodes.
A3C: We clip the reward to the range [−1, 1] and use a learning rate of 1e−3 with batch size 64.
We launch 120 processes on Esmall and 200 on Elarge . During training we estimate the discounted
accumulative rewards and back-propagate through time for every 30 time steps unrolled. We perform
a gradient clipping of 1.0 and decay the learning rate by a factor of 1.5 when the difference of KLdivergence becomes larger than 0.01. For training on Esmall , we use a entropy bonus term with
coefficient 0.1; while on Elarge , the coefficient is 0.05. αA3C is 1.0. We perform 105 training updates
and keep the best model with the highest training success rate.
B.3 GENERALIZATION OVER DIFFERENT CONCEPTS
             test succ.   kitchen%   dining room%   living room%   bedroom%   bathroom%
gated-LSTM      35.8        37.9         50.4           48.0          33.5        21.2
gated-CNN       29.7        31.6         42.5           54.3          27.6        17.4
Table 3: Detailed test success rates for gated-CNN model and gated-LSTM model with
“Mask+Depth” as input signal across different instruction concepts.
We illustrate in Table 3 the detailed test success rates of our models trained on Etrain with respect to
each of the 5 concepts. Note that both models have similar behaviour across concepts. In particular,
“dining room” and “living room” are the easiest while “bathroom” is the hardest. We suspect that
this is because dining rooms and living rooms often have large room space and the best
connectivity to other places. By contrast, a bathroom is often very small and harder to find in big
houses.
Lastly, we also experiment with adding auxiliary tasks of predicting the current room type during
training. We found that this helps neither the training performance nor the test performance. We believe
it is because our reward shaping has already provided strong supervision signals.
B.4  AVERAGE STEPS TOWARDS SUCCESS
We also measure the number of steps required for an agent in RoomNav. For all the successful
episodes, we evaluate the average number of steps towards the final target. The numbers are shown
in Table 4. A random agent can only succeed when it is initially spawned very close to the target,
and therefore has a very small number of steps towards the target. Our trained agents, on the other hand,
can explore the environment and reach the target after a reasonable number of steps. Generally, our
DDPG models take fewer steps than our A3C models thanks to their continuous action space. But
in all the settings, the number of steps required for a success is still far less than 100, namely the
horizon length.
                  random   concat-LSTM   gated-LSTM   concat-CNN   gated-CNN
Avg. #steps towards targets on Esmall with different input signals
RGB+D (train)      14.2       35.9          41.0         31.7         33.8
RGB+D (test)       13.3       27.1          29.8         26.1         25.3
Mask+D (train)     14.2       38.4          40.9         34.9         36.6
Mask+D (test)      13.3       31.9          34.3         26.2         30.4
Avg. #steps towards targets on Elarge with different input signals
RGB+D (train)      16.0       36.4          35.6         31.0         32.4
RGB+D (test)       13.3       34.0          33.8         24.4         25.7
Mask+D (train)     16.0       40.1          38.8         34.6         36.2
Mask+D (test)      13.3       34.8          34.3         30.6         30.9
Table 4: Averaged number of steps towards the target in all success trials for all the evaluated models
with various input signals and different environments.
GEOMETRY OF CERTAIN FINITE COXETER GROUP ACTIONS
arXiv:1707.03137v1 [] 11 Jul 2017
M. J. DYER AND G. I. LEHRER
Abstract. We determine a fundamental domain for the diagonal action of a
finite Coxeter group W on V ⊕n , where V is the reflection representation. This is
used to give a stratification of V ⊕n , which is respected by the group action, and we
study the geometry, topology and combinatorics of this stratification. These ideas
are used to obtain results on the classification of root subsystems up to conjugacy,
as well as a character formula for W .
Introduction
Let Φ be a finite root system in the Euclidean space V = Rℓ , whose inner product
we shall denote h −, − i. Let W = W (Φ) be the corresponding Weyl group. This is
a finite reflection group on V , and the choice of a simple subsystem of Φ defines a
fundamental region, known as the fundamental chamber, for the action of W on V .
The group W acts diagonally on V n := V ⊕n = V ⊕ · · · ⊕ V , and our first main
(n)
result, Theorem 2.2 in §2 below, is the determination of a fundamental region CW
for this W -action. This turns out to be a locally closed, convex subset of V n . In §3
we show how our fundamental region may be used to obtain a stratification of V n
by locally closed subsets which are transformed into each other under the W -action,
and are such that the closure of any one is a union of such sets. This leads to a
combinatorial structure which significantly generalises the Coxeter complex of W .
We study the topology and combinatorics of this stratification in §4.
As applications of these results, we give in §3.18 a character formula for W , which
generalises the usual Solomon formula for the alternating character of W .
Then, in §5, we show how our fundamental region may be used to study W -orbits
of finite subsets of V , both ordered and unordered. In §6, we apply the results of
§5 to show how the conjugacy classes of type A-subsystems of an arbitrary root
system Φ may be determined by inspection of the Dynkin diagram of Φ. Finally, in
§7, we indicate, partly without proof, how the results of §5–6 may be used to study
conjugacy classes of root subsystems of Φ and thus conjugacy classes of reflection
subgroups of W . Related results on conjugacy of subsystems of root systems may
be found in [14], [12], [5] and [16].
1. Preliminaries
1.1. Let V be a real Euclidean space, i.e. a finite dimensional real vector space
equipped with a symmetric, positive definite bilinear form h −, − i : V × V → R.
For non-zero α ∈ V , let sα : V → V denote the orthogonal reflection in α; it is the
2010 Mathematics Subject Classification. Primary: 20F55: Secondary: 17B22.
R-linear map defined by sα(v) = v − h v, α∨ i α, where α∨ := (2/h α, α i) α. In this paper,
by a root system Φ in V , we shall mean a subset Φ of V satisfying the following
conditions (i)–(iii):
(i) Φ is a finite subset of V \ {0}.
(ii) If α, β ∈ Φ, then sα(β) ∈ Φ.
(iii) If α, cα ∈ Φ with c ∈ R, then c ∈ {±1}.
The subgroup W of End(V ) generated by { sα | α ∈ Φ } is a finite (real) reflection
group i.e. a finite Coxeter group. A simple system Π of Φ is a linearly independent
subset Π ⊆ Φ such that Φ = Φ+ ⊔Φ− where Φ+ := Φ∩R≥0 Π and Φ− = −Φ+ (we use
the symbol ⊔ to indicate a disjoint union). Fix a simple system Π (it is well known
that such simple systems exist). Then Φ+ is the corresponding positive system of Φ
and S := { sα | α ∈ Π } ⊆ W is called the set of simple reflections of W . It is well
known that (W, S) is a Coxeter system. The subset T := { sα | α ∈ Φ } = { wsw −1 |
w ∈ W, s ∈ S } of W is called the set of reflections of W .
1.2. Dual root system. If Φ is a root system in V , then Φ∨ := { α∨ | α ∈ Φ } is
also a root system, called the dual root system of Φ; it has a system of simple roots
Π∨ := { α∨ | α ∈ Φ } with corresponding positive roots Φ∨+ := { α∨ | α ∈ Φ+ } and
associated finite Coxeter system (W, S).
1.3. Weyl groups. The root system Φ is said to be crystallographic if for all α, β ∈
Φ, one has h α, β ∨ i ∈ Z. In that case, W is a finite Weyl group and one defines
the root lattice Q(Φ) := ZΠ and weight lattice P (Φ) := { λ ∈ V | h λ, Φ∨ i ⊆ Z }.
The corresponding lattices Q(Φ∨ ) and P (Φ∨ ) for Φ∨ are called the coroot lattice and
coweight lattice of Φ respectively.
1.4. A subgroup W′ of W generated by a subset of T is called a reflection subgroup.
It has a root system ΦW ′ = { α ∈ Φ | sα ∈ W ′ }. We call ΦW ′ a (root) subsystem of
Φ. A simple system (resp., positive system) of a root subsystem of Φ will be called
a simple (resp., positive) subsystem of Φ. It is well known that ΦW ′ has a unique
simple system ΠW ′ contained in the set of positive roots Φ+ of W ; the corresponding
positive system is ΦW ′ ,+ := Φ+ ∩ ΦW ′ .
The reflection subgroups WI := h I i generated by subsets I of S are called standard parabolic subgroups and their conjugates are called parabolic subgroups. If
W ′ = WI , then ΠW ′ = { α ∈ Π | sa ∈ I } and ΦW ′ ,+ = Φ ∩ R≥0 ΠW ′ .
1.5. Fundamental chamber for the W -action on V . The subset C = CW :=
{ v ∈ V | h v, Π i ⊆ R≥0 } of V is called the fundamental chamber of W . In the
following Lemma, we collect several standard facts concerning this situation, which
may be found in [2, 15].
1.6. Lemma.
(a) Every W orbit on V contains a unique point of C .
(b) For v ∈ C , the stabiliser StabW (v) := { w ∈ W | w(v) = v } is equal to the
standard parabolic subgroup WI where I := { s ∈ S | s(v) = v } = { sα | α ∈
Π, h α, v i = 0 }.
(c) The set of parabolic subgroups is equal to the set of the stabilisers of points
of V . It is also equal to the set of pointwise stabilisers of subsets of V .
(d) The intersection of any set of parabolic subgroups of W is a parabolic subgroup
of W . In particular, we have for I, J ⊆ S, WI ∩ WJ = WI∩J .
It follows that if α ∈ Φ is any root, then the W -orbit W α contains a unique
element of CW . A root α in Φ ∩ CW is said to be a dominant root. If Φ is irreducible,
there are at most two such roots (cf. [6, Lemma 2]). We note that if V were a
complex vector space, (c) above would not be true.
1.7. Determination of W -orbit representatives. An efficient algorithm for computing the unique element in W v ∩ C for an arbitrary element v ∈ V is as follows.
For v ∈ V , let Φv := { β ∈ Φ+ | h β, v i < 0 } and nv := |Φv |. If Φv ∩ Π = ∅,
then Φv = ∅ and v ∈ C . Otherwise, there exists some β ∈ Φv ∩ Π; one then
has Φsβ (v) = sβ (Φv \ {β}) and nsβ (v) = nv − 1. Continuing thus, one obtains
sβ1 . . . sβr v ∈ C , where r = nv .
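The procedure just described is easy to implement once the simple roots are available as vectors. The following is a minimal sketch; the type A2 simple system embedded in R^3 is only an illustrative choice, not something fixed by the text:

```python
import numpy as np

def reflect(v, alpha):
    """Orthogonal reflection s_alpha(v) = v - <v, alpha^vee> alpha."""
    return v - 2 * np.dot(v, alpha) / np.dot(alpha, alpha) * alpha

def dominant_representative(v, simple_roots):
    """Return the unique point of the W-orbit of v lying in the fundamental
    chamber C = { u : <u, alpha> >= 0 for every simple root alpha }."""
    v = np.array(v, dtype=float)
    while True:
        for alpha in simple_roots:
            if np.dot(v, alpha) < 0:      # some beta in Phi_v ∩ Pi exists
                v = reflect(v, alpha)     # n_v strictly decreases, so this terminates
                break
        else:
            return v                      # Phi_v ∩ Pi is empty, hence v lies in C

# Type A2 example: simple roots e1 - e2 and e2 - e3 inside R^3.
simple = [np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, -1.0])]
print(dominant_representative([0.0, 2.0, -1.0], simple))   # -> [ 2.  0. -1.]
```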
1.8. Dynkin diagrams of simple systems and np subsets. Suppose Φ is crystallographic. Define a np subset Γ of Φ to be a subset such that for distinct elements
α, β ∈ Γ, one has h α, β ∨ i ≤ 0. Then a subset Γ of Φ is a simple subsystem if and
only if it is a linearly independent np subset (see [6]).
Define the diagram of a np subset Γ of Φ to be the Dynkin diagram (as in [10,
§4.7]) of the generalised Cartan matrix (mα,β )α,β∈Γ where mα,β := h α∨, β i. This is
a graph with vertex set Γ such that if α, β ∈ Γ are distinct with |mα,β | ≥ |mβ,α |,
the vertices α, β are connected by |mα,β | lines and these lines are equipped with an
arrow pointing towards α if |mα,β | > 1. Note that the arrow points towards the
shorter root if α, β are of different length, and that if α = −β, then α, β are joined
by two lines equipped with a pair of arrows in opposite directions. It is well known
that the connected components of the diagram of Γ are certain Dynkin diagrams of
finite or affine type. Further, the np subset Γ of Φ is a simple subsystem of Φ if and
only if the irreducible components of its diagram are all Dynkin diagrams of finite
type.
By an ordered simple subsystem (resp., ordered np subset) of Φ, we mean a tuple
b = (β1 , . . . , βn ) of pairwise distinct roots whose underlying set [b] := {β1 , . . . , βn }
is a simple subsystem (resp., np subset) of Φ.
2. Orbits under the diagonal action
2.1. Fundamental domain for the diagonal W -action. For each n ∈ N, let
V n := V ⊕. . .⊕V (n factors) with the diagonal W -action defined by w(v1 , . . . , vn ) :=
(wv1 , . . . , wvn ). (For n = 0, V n := {0} with trivial W -action, by convention). We
identify V n × V m with V n+m in the natural way. For v := (v1 , . . . , vn ) ∈ V n and
any m ≤ n in N, define the truncation τm (v) := (v1 , . . . , vm ) and
(2.1.1)
Wv,m := StabW (τm (v)) = StabW (v1 ) ∩ . . . ∩ StabW (vm ).
Thus, Wv,0 = W and each Wv,m is a parabolic subgroup of Wv,m−1 . In particular,
Wv,m is a reflection group acting on V , and since its root system has a natural
positive subsystem given by its intersection with Φ+ , it has a well defined fundamental chamber CWv,m ⊆ V . Note that for each m, Wv,m ⊆ Wv,m−1 , whence
CWv,m ⊇ CWv,m−1 .
Let
(2.1.2)    C_W^{(n)} := { v = (v1, . . . , vn) ∈ V^n | for m = 1, . . . , n, vm ∈ C_{W_{v,m−1}} }.
Let v = (v1, . . . , vn) ∈ V^n and 1 ≤ m ≤ n − 1. Set W′ := W_{v,m}, v′ := (v1, . . . , vm)
and v′′ := (vm+1, . . . , vn). The definitions immediately imply that
(2.1.3)    v ∈ C_W^{(n)} if and only if v′ ∈ C_W^{(m)} and v′′ ∈ C_{W′}^{(n−m)}.
The next result will be crucial for the investigation of the action of W on subsets
of V . It identifies a fundamental region for the action of W on V n .
2.2. Theorem.
(a) If v = (v1, . . . , vn) ∈ C_W^{(n)}, then Stab_W(v) = W_{v,n} is the standard parabolic subgroup W_{I_v} of W, where
    I_v = { sα | α ∈ Π, h α, v_i i = 0 for i = 1, . . . , n }.
(b) Every W-orbit on V^n contains a unique point of C_W^{(n)}.
(c) C_W^{(n)} is a convex (and hence connected) subset of V^n.
Proof. If n = 0, (a) is trivial. Let n > 0. Assume by way of induction that Wv,n−1
is the standard parabolic subgroup WIτn−1 (v) of W . Clearly StabW (v) ⊇ WIv , and
so it suffices to show that any element of StabW (v) lies in WIv . Let w ∈ StabW (v);
then evidently w ∈ StabW (τn−1 v) = WIτn−1 v . Moreover vn ∈ CWIτn−1 v implies
that Stab_{W_{I_{τ_{n−1}(v)}}}(v_n) is the standard parabolic subgroup of W_{I_{τ_{n−1}(v)}} generated by
{ sα ∈ Π_{I_{τ_{n−1}(v)}} | h α, v_n i = 0 }, which is precisely the set I_v. This proves (a).
To prove (b), assume by induction that every W-orbit on V^{n−1} contains a unique point of C_W^{(n−1)}. By induction, there is an element w′ ∈ W such that w′(τ_{n−1}(v)) ∈ C_W^{(n−1)}. Let v′ := w′(v) = (v′_1, . . . , v′_n). Then τ_{n−1}(v′) ∈ C_W^{(n−1)}. Let W′_{n−1} := Stab_W(τ_{n−1}v′); by (a), this is the standard parabolic subgroup W_{I′} of W, where I′ = { sα | α ∈ Π, h α, v′_i i = 0 for i = 1, . . . , n − 1 }. Now there is an element w′′ ∈ W′_{n−1} such that w′′(v′_n) ∈ C_{W′_{n−1}}. Let w := w′′w′ ∈ W. Then since w′′ stabilises (v′_1, . . . , v′_{n−1}), it is clear that v′′ := w(v) = w′′(v′) ∈ C_W^{(n)}. This shows that every element of V^n may be moved into C_W^{(n)} by W. We show that the intersection of a W-orbit on V^n with C_W^{(n)} is a single point, that is, that no two distinct points of C_W^{(n)} are in the same W-orbit.
This will be done by induction on n. The case n = 1 is clear. Suppose that
v ∈ C_W^{(n)} and that w ∈ W is such that w(v) ∈ C_W^{(n)}. Then w(τ_{n−1}(v)) ∈ C_W^{(n−1)}. By
induction, w ∈ StabW (τn−1 v), which by (a) is equal to WI , where I = { sα | α ∈
Π, h α, vi i = 0 for i = 1, . . . , n − 1 }. Since w(vn ) ∈ CWI and vn ∈ CWI , it follows
that wvn = vn and (b) follows.
To prove (c), let u = (u1, . . . , un) and v = (v1, . . . , vn) be points in C_W^{(n)}, and for
t ∈ R, 0 ≤ t ≤ 1, write c(t) = tu + (1 − t)v. We wish to show that c(t) ∈ C_W^{(n)}. By
induction, we may assume that tu_m + (1 − t)v_m ∈ C_{W_{c(t),m−1}} for m < n, and we require
the corresponding statement for m = n. We shall prove first that for t ≠ 0, 1, we
have, for all non-negative integers m,
(2.2.1)
Wc(t),m = Wu,m ∩ Wv,m .
The inclusion of the right hand side in the left is clear. We prove the reverse
inclusion. For m = 0, this is trivial. In general, suppose by way of induction (using
(a)) that Wc(t),m−1 = WI with m ≥ 1, where I = I1 ∩ I2 , and I1 = { sα | α ∈
Π, h α, ui i = 0 for i = 1, . . . , m − 1 } and I2 = { sα | α ∈ Π, h α, vi i = 0 for i =
1, . . . , m − 1 }. Then um ∈ CWI1 , whence h um, α i ≥ 0 for α ∈ ΠI1 , and vm ∈ CWI2 .
Now if α ∈ Π_I, then h α, u_m i ≥ 0 and h α, v_m i ≥ 0. If h α, u_m i ≠ 0 or h α, v_m i ≠ 0,
then h α, tum + (1 − t)vm i > 0, and sα 6∈ Wc(t),m . This proves the assertion (2.2.1).
It follows that for t ≠ 0, 1, C_{W_{c(t),n−1}} = { v ∈ V | h v, α i ≥ 0 for all α ∈ I },
where I = I1 ∩ I2 is as in the previous paragraph with m = n. But un ∈ CWI1 and
vn ∈ CWI2 , whence for α ∈ ΠI1 ∩ ΠI2 , we have h α, tun + (1 − t)vn i ≥ 0, and (c) is
proved.
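The inductive construction in the proof of (b) translates directly into a procedure for moving a tuple into C_W^{(n)}: bring v1 into the fundamental chamber, restrict to the simple roots orthogonal to it, and repeat with v2, and so on. Below is a small sketch using the same vector representation of simple roots as in the snippet after 1.7; the helper names are illustrative, not from the paper:

```python
import numpy as np

def reflect(v, alpha):
    # s_alpha(v) = v - <v, alpha^vee> alpha
    return v - 2 * np.dot(v, alpha) / np.dot(alpha, alpha) * alpha

def canonical_tuple(vs, simple_roots, tol=1e-9):
    """Move (v1, ..., vn) into the fundamental region C_W^(n) of Theorem 2.2."""
    vs = [np.array(v, dtype=float) for v in vs]
    active = [np.array(a, dtype=float) for a in simple_roots]   # simple roots of the current stabiliser
    for i in range(len(vs)):
        moved = True
        while moved:
            moved = False
            for a in active:
                if np.dot(vs[i], a) < -tol:
                    # reflections in roots of the current stabiliser fix v_1, ..., v_{i-1}
                    vs = [reflect(u, a) for u in vs]
                    moved = True
                    break
        # simple roots of the stabiliser of (v_1, ..., v_i), cf. Theorem 2.2(a)
        active = [a for a in active if abs(np.dot(vs[i], a)) <= tol]
    return vs
```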
2.3. Total orderings on V . A vector space total order ≤ of V is a total order of
V such that the set { v ∈ V | v > 0 } of positive elements is closed under addition
and under multiplication by positive real scalars. One way such orderings arise is as
follows. Take any (ordered) basis {α1, . . . , αp} of V, and define the total order ≤ by
declaring that u < v if v − u = Σ_{i=1}^{p} c_i α_i and there is some j with c_i = 0 for i < j
and cj > 0. It is well known that in fact, all total orders arise in this way (since V
is finite dimensional).
When V contains a root system Φ, fix a vector space total order ≤ of V such that
every positive root of W is positive in that order i.e. Φ+ ⊆ { v ∈ V | v > 0 }. One
such order is the one described above, taking the simple system Π = {α1 , . . . , αl } to
be the initial segment of a basis of V as above.
Given such an ordering of V , we endow V n with the lexicographic total order
induced by ≤ i.e. (v1 , . . . , vn ) ≺ (u1 , . . . , un ) if there is some j such that vi = ui for
i < j and vj < uj .
2.4. Proposition. Let v ∈ C_W^{(n)}. Then v is the maximum element of the orbit Wv
in the lexicographic total order on V^n induced by ≤.
Proof. Let w ∈ W and u := wv ∈ V n . If w ∈ Wv,n , then u = v. Otherwise, there
is some j with 0 ≤ j ≤ n − 1 such that w ∈ Wv,j \ Wv,j+1 . Write v = (v1 , . . . , vn )
and u = (u1 , . . . , un ). Then vi = ui for i ≤ j, vj+1 ∈ CWv,j , and uj+1 = w(vj+1).
Since w ∈ W_{v,j} \ W_{v,j+1} = W_{v,j} \ Stab_W(v_{j+1}), it follows that 0 ≠ v_{j+1} − u_{j+1} =
v_{j+1} − w(v_{j+1}) ∈ R≥0 Π_{W_{v,j}}. Moreover Π_{W_{v,j}} ⊆ Φ+ = { v ∈ Φ | v > 0 }, and so
v_{j+1} > u_{j+1}. Since v_i = u_i for i < j + 1, one has u ≺ v as required.
2.5. Corollary. Given any total order ≤ on V such that 0 ≤ Φ+, let ≼ be the induced
lexicographic total order on V^n. Then the region C_W^{(n)} is precisely the set of maximal
elements in the W-orbits on V^n.
Proof. This is immediate from Theorem 2.2 and Proposition 2.4.
2.6. Ordered and unordered sets. For a group H acting on the left on a set U,
we denote the set of H-orbits on U by U/H. We use frequently below the simple fact
that if H = H1 × H2 is a product of two groups, then H1 (resp., H2 ) acts naturally
on U/H2 (resp., U/H1) and there are canonical bijections
(2.6.1)    U/H ≅ (U/H1)/H2 ≅ (U/H2)/H1.
We record the following elementary observation.
2.7. Lemma. Let U be a totally ordered set, and let H be a finite group acting on
U. For any finite subset A of U, denote by max(A) the maximum element of A.
Then
(a) The map Hu 7→ max(Hu) (u ∈ U) defines a bijection from the set U/H to
a subset of U, which we denote by MH = MH (U).
(b) If H = H1 × H2 is a direct product of groups H1 , H2 , then for u ∈ U,
max(Hu) = maxh∈H1 (max(hH2 u)).
(c) The set { H2u | u ∈ MH1 ×H2 } is a set of distinct representatives for the
H1 -orbits on U/H2 .
2.8. Let n ∈ N. We regard the symmetric group Symn as the group of all permutations of {1, . . . , n} (acting on the left) and often write its elements in cycle notation. Take U = V n , and let G be a subgroup of the symmetric group
Symn . Then G has a natural left action on V n by place permutations, defined by
σ(v1 , . . . , vn ) = (vσ−1 (1) , . . . , vσ−1 (n) ), which commutes with the diagonal W -action
and induces a W × G-action on V n . Assume chosen, a total order ≤ on V , with
corresponding total order on U = V n , as in Corollary 2.5. Write H = W × G, and
recall that for any H-action on the ordered space U, MH (U) is the set of elements
of U which are maximal in their H-orbit.
2.9. Corollary.
(a) For v ∈ V n , max(Hv) = max{ max(W σ(v)) | σ ∈ G }.
(b) { Gv | v ∈ MW ×G (V n ) } is a set of orbit representatives for W acting on
V n /G.
(c) We have M_W(V^n) = C_W^{(n)} ⊇ M_{W×G}(V^n).
Proof. Parts (a) and (b) are immediate from Lemma 2.7. Part (c) follows from parts
(a) and (b) and Corollary 2.5.
2.10. Proposition. Let Ŝ be a subset of V^n which is stable under the action of the group H = W × G as above.
(a) The set R̂ := Ŝ ∩ C_W^{(n)} is in canonical bijection with Ŝ/W.
(b) The set of W-orbits on Ŝ/G is in canonical bijection with the set Ŝ/H of H-orbits on Ŝ and also with the set of G-orbits on Ŝ/W.
(c) Define a left “dot” G-action (g, b) ↦ g · b on R̂ by transferring the G-action on Ŝ/W via the bijection of (a). This action is determined by either condition {g · b} = W gb ∩ R̂ or g · b = max(W gb) for b ∈ R̂ and g ∈ G.
(d) The G-orbits in the dot action on R̂ are the equivalence classes for the equivalence relation ≃ on R̂ defined by stipulating (for b, b′ ∈ R̂) that b ≃ b′ ⇐⇒ b′ ∈ Hb. Hence the W-orbits on Ŝ/G are in canonical bijection with R̂/≃.
(e) If η is the natural map Ŝ −→ Ŝ/G, then the number of W-orbits on Ŝ/G is at most |η(R̂)|.
Proof. Part (a) follows from Theorem 2.2. Part (b) was already observed more generally in 2.6. We prove (c). Let φ : R̂ → Ŝ/W be the canonical bijection of (a). By definition, φ(b) = W b for b ∈ R̂ and {φ^{−1}(Γ)} = Γ ∩ R̂ for any W-orbit Γ ∈ Ŝ/W. Hence the dot action is determined by {g · b} = W gb ∩ R̂ for g ∈ G and b ∈ R̂. But by Corollary 2.5, W gb ∩ R̂ = {max(W gb)} and (c) follows.
Now we prove (d). Let ≃ be the equivalence relation on R̂ for which the equivalence classes are the G-orbits in the dot action. Then by (c), for b, b′ ∈ R̂ one has b ≃ b′ ⇐⇒ b′ ∈ { max(W gb) | g ∈ G }. Certainly b ≃ b′ implies b′ ∈ Hb. But if b, b′ ∈ R̂ with b′ ∈ Hb, one has b′ ∈ W gb for some g ∈ G and then b′ = max(W gb) by Corollary 2.5 since b ∈ R̂. This proves the first assertion of (d), and the second then follows from (a).
To see (e), we need only note that the fibres of the restriction of η to R̂ are subsets of the ≃-equivalence classes, whence the number of equivalence classes on R̂ is at most the number of such fibres.
2.11. Taking Ŝ = V^n and G = Sym_n in Proposition 2.10(c) defines a dot action (g, b) ↦ g · b = g ·_n b of Sym_n on C_W^{(n)} for each n (of which all possible dot actions as in Proposition 2.10(c) are various restrictions). We record the following trivial but useful relations between these ·_n actions for varying n.
2.12. Proposition. Let notation be as above. Let b = (β1, . . . , βn) ∈ C_W^{(n)}, σ ∈ Sym_n and m ∈ N with 0 ≤ m ≤ n. Set b′ := τ_m(b) = (β1, . . . , βm) ∈ C_W^{(m)}, W′ := W_{b′} and b′′ := (βm+1, . . . , βn) ∈ C_{W′}^{(n−m)}. Denote the · action of Sym_{n−m} on C_{W′}^{(n−m)} by ·′_{n−m}.
(a) If σb ∈ C_W^{(n)}, then σ · b = σb.
(b) Suppose {1, . . . , m} is σ-stable. Let σ′ be the restriction of σ to a permutation of {1, . . . , m}. Then τ_m(σ ·_n b) = σ′ ·_m (τ_m b).
(c) Suppose that σ fixes i for i = 1, . . . , m. Let σ′ ∈ Sym_{n−m} be defined by σ′(i) := σ(i + m) − m for i = 1, . . . , n − m. Then σ ·_n b = σ ·_n (b′, b′′) = (b′, σ′ ·′_{n−m} b′′), i.e. σ ·_n b = (β1, . . . , βm, β′_{m+1}, . . . , β′_n) where (β′_{m+1}, . . . , β′_n) := σ′ ·′_{n−m} b′′.
Proof. This is a straightforward consequence of the definitions. Details are left to
the reader.
2.13. Automorphisms. Denote the group of all (linear) isometries of (V, h −, − i)
which restrict to a permutation of the simple roots Π by D. Then D acts diagonally on V^n and it is easily seen that C_W^{(n)} is D-stable. It is well known that if
span(Φ) = V and the W -invariant inner product h −, − i is chosen suitably (by
rescaling if necessary its restrictions to the linear spans of the components of Φ),
then D identifies with the group D ′ of all diagram automorphisms of Π (i.e. the
group of automorphisms of the Coxeter system (W, S)). It follows that in general,
C_W^{(n)} ∩ Φ^n is invariant under the diagonal action of D′ on Φ^n (even if span(Φ) ≠ V
or h −, − i is not chosen in this way).
3. Stratification of V n
3.1. Cones. Recall that a subset of a topological space V is locally closed if it is
open in its closure, or equivalently, if it is the intersection of an open subset and a
closed subset of V . A subset of V is said to be constructible if it is a finite union
of locally closed subsets of V . By a cone in a real vector space, we mean a subset
which is closed under addition and closed under multiplication by positive scalars.
Note that cones may or may not be constructible.
3.2. Facets. The fundamental chamber C = CW and certain notions below depend
not only on W and Φ, but also on the simple system Π; this dependence will be
made explicit in notation to be introduced presently.
For J ⊆ I ⊆ S define
(3.2.1)
CI,J := { v ∈ V | h α, v i = 0 for α ∈ ΠJ and h α, v i > 0 for α ∈ ΠI \ J }
This is a non-zero locally closed cone in V . From [2],
(3.2.2)    C_{W_I} = ⊔_{J ⊆ I} C_{I,J},    C̄_{I,J} = ⊔_{K: J ⊆ K ⊆ I} C_{I,K}.
For J ⊆ I ⊆ S, let W^J := { w ∈ W | w(Π_J) ⊆ Φ+ } and W_I^J := W_I ∩ W^J. From [2],
one has W_I = ⊔_{w ∈ W_I^J} wW_J and each element w ∈ W_I^J satisfies l(wx) = l(w) + l(x)
for all x ∈ W_J.
Fix I ⊆ S. The sets w(CI,J ) for w ∈ WI and J ⊆ I are called the facets (of WI
acting on V ). They are locally closed cones in V , and the closure of any facet is a
union of facets. It is well known that any two facets either coincide or are disjoint.
The setwise stabiliser in WI of CI,J coincides with the stabiliser in WI of any point
of CI,J and is equal to WJ . It follows that for I ⊆ S, one has the decomposition
(3.2.3)    V = ⊔_{J ⊆ I} ⊔_{w ∈ W_I^J} w(C_{I,J})
of V as a union of pairwise disjoint facets.
3.3. Strata. The family of subsets w(C_{S,J}) for J ⊊ S and w ∈ W, or rather the
complex of spherical simplices cut out by their intersections with the unit sphere in
V , is known as the Coxeter complex of W on V (see [3], [9], [11]); there are also
other closely related meanings of the term Coxeter complex in the literature, for
instance (cf. [4]) where the Coxeter complex is an (abstract) simplicial complex or
chamber complex with vertices which are cosets of proper parabolic subgroups. We
shall now define a similar “complex” for W acting on V n , where n ∈ N≥1 , and show
that it has significant similarities to, and differences from, the Coxeter complex.
Let
(3.3.1)
I = I (n) := { (I0 , . . . , In ) | S = I0 ⊇ I1 ⊇ . . . ⊇ In }
denote the set of all weakly decreasing sequences of n + 1 subsets of S with S as
first term. For I = (I0 , . . . , In ) ∈ I , define
(3.3.2)
XI := CI0 ,I1 × CI1 ,I2 × . . . × CIn−1 ,In ⊆ V n .
The sets w(XI ) for I ∈ I (n) and w ∈ W are non-empty locally closed cones in V n
which will be called the strata of V n (for (W, S) or W ). Their basic properties are
listed in the Theorem below.
3.4. Theorem.
(a) If I ∈ I (n) then XI ⊆ C (n) .
(b) If I ∈ I^{(n)}, w ∈ W and v ∈ w(X_I), then Stab_W(v) = wW_{I_n}w^{−1}.
(c) Let v, w ∈ W and I, J ∈ I^{(n)}. Then the following conditions are equivalent:
(i) v(X_I) ∩ w(X_J) ≠ ∅.
(ii) v(XI ) = w(XJ ).
(iii) I = J and v −1 w ∈ WIn .
(d) If I ∈ I (n) and w ∈ W , then StabW (w(XI )) = wWIn w −1 .
(e) The sets C (n) and V n are the disjoint unions of the strata they contain.
(f) The topological closure of any stratum of V n is a union of strata.
3.5. Remarks. (1) The fundamental region C_W^{(n)} is constructible (in fact, it is a finite union of locally closed cones).
(2) If n = 1, the fundamental region C_W^{(1)} = C is closed in V, but in general, C_W^{(n)} is a constructible subset of V^n which need not be locally closed; moreover the closure of a stratum in C_W^{(n)} may contain strata outside C_W^{(n)}.
(3) If n = 1, then the facets (here called strata) of W on V are the maximal
connected subsets of V , all points of which have the same stabiliser. For n > 1,
the stratum containing v is precisely the connected component containing v of the
space of all u ∈ V n such that StabW (τi (u)) = StabW (τi (v)) for i = 1, . . . , n. Recall
that here τi is the truncation map V n → V i given by τi (u1, . . . , un ) = (u1, . . . , ui ).
3.6. An example. Before giving its proof, we illustrate Theorem 3.4 and its following remarks in the simplest non-trivial situation.
Let W = {1, s}, and S = {s}, regarded as Coxeter group of type A1 acting as
reflection group on V := R with s acting by multiplication by −1, with root system
Φ = {±1} and unique positive root α := 1.
Then C_W = R≥0, C_{W_∅} = R, C_{S,S} = {0}, C_{S,∅} = R>0 and C_{∅,∅} = R. If n ∈ N≥1,
then C_W^{(n)} is the set of all (λ1, . . . , λn) ∈ R^n such that if λj ≠ 0 and λi = 0 for
all i < j, then λj > 0. In other words, it consists of zero and all non-zero vectors
in Rn with their first non-zero coordinate positive. Note that this is the cone of
non-negative vectors of a vector space total ordering of V .
A typical stratum in C_W^{(n)} is of the form Xi := X_{S,...,S,∅,...,∅} for some 0 ≤ i ≤ n,
where there are n + 1 subscripts on X , of which the first n − i + 1 are equal to S
and the last i are equal to ∅. One readily checks that X0 = s(X0 ) = {(0, . . . , 0)}
and that for i > 0,
Xi = {0} × . . . × {0} × R>0 × R × . . . × R = {0}n−i × R>0 × Ri−1 .
Thus, there are 2n + 1 distinct strata of W on V n , namely X0 and Xi , s(Xi ) for
i = 1, . . . , n. One readily checks that the closure of a stratum is given by its union
with the strata below it in the following Hasse diagram:
[Hasse diagram: X0 at the bottom; X1 and s(X1) each cover X0; for 2 ≤ i ≤ n, Xi and s(Xi) each cover both Xi−1 and s(Xi−1), with Xn and s(Xn) at the top.]
3.7. Proof of Theorem 3.4(a)–(e). Let I ∈ I (n) and v ∈ XI . Then vi ∈ CIi−1 ,Ii
for i = 1, . . . , n, so StabWIi−1 (vi ) = WIi . Since Wv,i = StabWv,i−1 (vi ), it follows by
induction that Wv,i = WIi . By definition, v ∈ C (n) and StabW (v) = WIn . This
proves (a) and (b). In (c), (iii) implies (ii) by (b) and it is trivial that (ii) implies
(i). We show that (i) implies (iii). Suppose that (i) holds: i.e. v(X_I) ∩ w(X_J) ≠ ∅.
That is, for i = 1, . . . , n, C_{I_{i−1},I_i} ∩ v^{−1}w(C_{J_{i−1},J_i}) ≠ ∅. We have I0 = J0 = S and
v −1 w ∈ WI0 = W . The properties of facets in §3.2 imply by induction on i that
for i = 0, . . . , n, Ii = Ji and v −1 w ∈ WIi . This shows that (i) implies (iii), and
completes the proof of (c). Part (d) follows immediately from (c) and (b).
For (e), we claim that
(3.7.1)    C^{(n)} = ⋃_{I ∈ I^{(n)}} X_I,    V^n = ⋃_{I ∈ I^{(n)}} ⋃_{w ∈ W^{I_n}} w(X_I).
To prove the assertion about C (n) , note first that the right hand side is included
in the left. To prove the converse, let v ∈ C (n) . Using Theorem 2.2, write Wv,i =
WIi where Ii ⊆ S, for i = 0, . . . , n. Then clearly I := (I0 , . . . , In ) ∈ I (n) . Since
vi ∈ CWv,i−1 and StabWv,i−1 (vi ) = Wv,i , it follows by induction on i that vi ∈ CIi−1 ,Ii .
Hence v ∈ XI , proving the above formula for C (n) . Since V n = ∪w∈W w(C (n) ), the
above assertion concerning V n follows using (c), which also implies that the facets
in the unions 3.7.1 are pairwise distinct and disjoint.
3.8. Distinguished coset representatives. The proof of Theorem 3.4(f) (which
is given in 3.14) involves relationships between closures of facets with respect to
different parabolic subgroups of W . These results actually apply to arbitrary reflection subgroups, and we prove them in that generality (there is no simplification in
the formulations or proofs of the results for parabolic subgroups alone). In order to
formulate the results, we shall need additional background on reflection subgroups
and more detailed notation which indicates dependence of notions such as facets,
coset representatives etc on the chosen simple systems for the reflection subgroups
involved. The results needed are simple extensions (given in [8]) of facts from [2]
which are well known in the case of standard parabolic subgroups.
Recall that a simple subsystem Γ of Φ is defined to be a simple system Γ of some
root subsystem of Φ. For such a simple subsystem Γ, let SΓ := { sα | α ∈ Γ } and
WΓ = h SΓ i. Then (WΓ , SΓ ) is a Coxeter system, the length function of which we
denote as lΓ . Denote the set of roots of WΓ as ΦΓ = { α ∈ Φ | sα ∈ WΓ } = WΓ Γ
and the set of positive roots of ΦΓ with respect to its simple system Γ as ΦΓ,+
Let Γ, Γ′ be simple subsystems of Φ such that WΓ′ ⊆ WΓ . Let
(3.8.1)    W_Γ^{Γ′} := { w ∈ W_Γ | w(Γ′) ⊆ Φ_{W_Γ,+} }.
Evidently one has
(3.8.2)    W_Γ^{u(Γ′)} = W_Γ^{Γ′} u^{−1} for all u ∈ W_Γ.
It is known from [8] that under the additional assumption that Φ_{Γ′,+} ⊆ Φ_{Γ,+}, W_Γ^{Γ′} is a set of coset representatives for W_Γ/W_{Γ′} and that each element w ∈ W_Γ^{Γ′} is the unique element of minimal length in the coset wW_{Γ′} of W_Γ with respect to the length function l_Γ. Moreover,
(3.8.3)    W_Γ^{Γ′} = { w ∈ W_Γ | l_Γ(ws_α) > l_Γ(w) for all α ∈ Γ′ },    if Φ_{Γ′,+} ⊆ Φ_{Γ,+}.
Now in general if W_{Γ′} ⊆ W_Γ, there is a unique simple system Γ′′ for Φ_{Γ′} such that Γ′′ ⊆ Φ+ and a unique u ∈ W_{Γ′} such that Γ′′ = u(Γ′). It follows from (3.8.2) and the preceding comments that in this more general situation, it is still true that W_Γ^{Γ′} is a set of coset representatives for W_Γ/W_{Γ′}.
Similarly, define
(3.8.4)    ^{Γ′}W_Γ := (W_Γ^{Γ′})^{−1} = { w ∈ W_Γ | w^{−1}(Γ′) ⊆ Φ_{W_Γ,+} }.
This is a set of coset representatives in W_{Γ′}\W_Γ, each of minimal length in its coset if Φ_{Γ′,+} ⊆ Φ_{Γ,+}. Further,
(3.8.5)    ^{u(Γ′)}W_Γ = u ^{Γ′}W_Γ for u ∈ W_Γ
and
(3.8.6)    if Φ_{Γ′,+} ⊆ Φ_{Γ,+}, then ^{Γ′}W_Γ = { w ∈ W_Γ | l_Γ(s_α w) > l_Γ(w) for all α ∈ Γ′ }.
3.9. Further notation. We now expand the notation of §3.2 to include the possibility of non-parabolic reflection subgroups. For any simple subsystem Γ of Φ,
let
(3.9.1)
CΓ := { v ∈ V | h v, Γ i ⊆ R≥0 } = { v ∈ V | h v, ΦΓ,+ i ⊆ R≥0 }.
denote the corresponding closed fundamental chamber for (WΓ , SΓ ) acting on V .
For any ∆ ⊆ Γ, let
(3.9.2)
CΓ,∆ := { v ∈ V | h v, Γ \ ∆ i ⊆ R>0 , h v, ∆ i = 0 }
denote the (unique) facet of WΓ on V which is open in CΓ ∩ ∆⊥ . One easily checks
that for w ∈ W ,
(3.9.3)
Cw(Γ) = w(CΓ ),
Cw(Γ),w(∆) = w(CΓ,∆ ).
The setwise stabiliser of CΓ,∆ in WΓ coincides with the stabiliser in WΓ of any point
of CΓ,∆ , which is W∆ . Moreover,
(3.9.4)    C_Γ = ⊔_{∆ ⊆ Γ} C_{Γ,∆},    C̄_{Γ,∆} = ⊔_{∆′: ∆ ⊆ ∆′ ⊆ Γ} C_{Γ,∆′}.
3.10. Lemma. Let Γ, Γ′ be simple subsystems of Φ with Φ_{Γ′,+} ⊆ Φ_{Γ,+}. Then
(a) C_Γ ⊆ C_{Γ′}.
(b) C_{W_Γ}^{(n)} ⊆ C_{W_{Γ′}}^{(n)}, where C_{W_Γ}^{(n)} and C_{W_{Γ′}}^{(n)} are the fundamental domains we have defined for (W_Γ, S_Γ) and (W_{Γ′}, S_{Γ′}) respectively acting on V^n.
Proof. For (a), observe that
    C_Γ = { v ∈ V | h v, Φ_{Γ,+} i ⊆ R≥0 } ⊆ { v ∈ V | h v, Φ_{Γ′,+} i ⊆ R≥0 } = C_{Γ′}.
To prove (b), let v ∈ V^n. Let W′ := W_Γ and W′′ := W_{Γ′}. Recall that W_{v,i} = Stab_W(τ_i(v)). Similarly define W′_{v,i} := Stab_{W′}(τ_i(v)) = W′ ∩ W_{v,i} and W′′_{v,i} := Stab_{W′′}(τ_i(v)) = W′′ ∩ W_{v,i} ⊆ W′_{v,i}; they are parabolic subgroups of (W_Γ, S_Γ) and of (W_{Γ′}, S_{Γ′}) respectively, with standard positive systems
    Φ_{W′′_{v,i},+} = Φ_{W′′_{v,i}} ∩ Φ_{Γ′,+} ⊆ Φ_{W′_{v,i}} ∩ Φ_{Γ,+} = Φ_{W′_{v,i},+}
for all i, by the assumption that Φ_{Γ′,+} ⊆ Φ_{Γ,+}. Hence by (a), C_{W′_{v,i}} ⊆ C_{W′′_{v,i}} for all i. If v ∈ C_{W′}^{(n)}, then for all i = 1, . . . , n, v_i ∈ C_{W′_{v,i−1}} ⊆ C_{W′′_{v,i−1}} and so v ∈ C_{W′′}^{(n)} by definition.
3.11. The main lemma. We now prove the main technical lemma required for the
proof of Theorem 3.4(f).
3.12. Lemma. Let Γ, Γ′ be simple subsystems of Φ with WΓ′ ⊆ WΓ .
(a) C_{Γ′} = ⋃_{w ∈ ^{Γ′}W_Γ} w(C_Γ).
(b) If ∆′ ⊆ Γ′, then
    C̄_{Γ′,∆′} = ⋃_{w ∈ ^{Γ′}W_Γ} ⋃_{∆ ⊆ Γ, W_{w(∆)} ⊇ W_{∆′}} w(C_{Γ,∆}).
Proof. Suppose that (a) holds for Γ and Γ′ . Then it also holds for Γ and u(Γ′ ) for
any u ∈ WΓ . For by (3.9.3) and (3.8.5), one would have
    C_{u(Γ′)} = u(C_{Γ′}) = u ⋃_{w ∈ ^{Γ′}W_Γ} w(C_Γ) = ⋃_{w ∈ ^{Γ′}W_Γ} uw(C_Γ) = ⋃_{w′ ∈ ^{u(Γ′)}W_Γ} w′(C_Γ).
A similar argument shows that if (b) is true for Γ, ∆′ and Γ′ , it is true for Γ, u(∆′ )
and u(Γ′ ) for any u ∈ WΓ . Since there is u ∈ WΓ with u(Γ′ ) ⊆ ΦΓ,+ , we may and do
assume for the proofs of (a)–(b) that ΦΓ′ ,+ ⊆ ΦΓ,+ .
To prove (a), note that if w ∈ W_Γ^{Γ′}, then (3.9.3) and Lemma 3.10(a) imply that
w(C_{Γ′}) = C_{w(Γ′)} ⊇ C_Γ. Hence ⋃_{w ∈ ^{Γ′}W_Γ} w(C_Γ) = ⋃_{w ∈ W_Γ^{Γ′}} w^{−1}(C_Γ) ⊆ C_{Γ′}. To prove the
reverse inclusion, let v ∈ CΓ′ . Write v = w(v ′) where v ′ ∈ CΓ and w ∈ WΓ is of
minimal length lΓ (w). By (3.8.3)–(3.8.4), it will suffice to show that lΓ (sα w) ≥ lΓ (w)
for all α ∈ Γ′. Suppose first that h α, v i = 0. Then v = s_α(v) = (s_α w)(v′) with
s_α w ∈ W_Γ. By choice of w, l_Γ(s_α w) ≥ l_Γ(w). On the other hand, suppose h α, v i ≠ 0.
Since v ∈ CΓ′ , this forces 0 < h v, α i = h w(v ′), α i = h v ′, w −1(α) i. Since v ′ ∈ CΓ
and w −1 (α) ∈ ΦWΓ , it follows that w −1(α) ∈ ΦWΓ ,+ and so lΓ (sα w) > lΓ (w) as
required.
Now we prove (b). Let w ∈ ^{Γ′}W_Γ, ∆ ⊆ Γ with W_{w(∆)} ⊇ W_{∆′}. Let v ∈ C_{Γ,∆}. Then
v ∈ C_Γ, so by (a), w(v) ∈ C_{Γ′}. Since h v, ∆ i = 0, it follows that h w(v), w(∆) i = 0
and therefore h w(v), ∆′ i = 0 since W_{w(∆)} ⊇ W_{∆′}. Hence w(v) ∈ C̄_{Γ′,∆′} by (3.9.4).
Thus the right hand side of (b) is included in the left hand side. For the reverse
inclusion, let v ∈ C̄_{Γ′,∆′} ⊆ C_{Γ′}. By (a), there exists w ∈ ^{Γ′}W_Γ with v′ := w^{−1}(v) ∈ C_Γ.
Thus, v′ ∈ C_{Γ,∆} for some ∆ ⊆ Γ. It remains to prove that W_{w^{−1}(∆′)} ⊆ W_∆. Let
α ∈ ∆′ ⊆ Γ′. Since w ∈ ^{Γ′}W_Γ, it follows that w^{−1}(α) ∈ Φ_{W_Γ,+}. Note that
0 = h α, v i = h w −1(α), v ′ i.
Since v ′ ∈ CΓ,∆ , one has sw−1 (α) ∈ StabWΓ (v ′ ) = W∆ . Therefore Ww−1 (∆′ ) ⊆ W∆
since the elements sw−1 (α) for α ∈ ∆′ generate the left hand side. This completes
the proof of (b).
3.13. Remarks. One may show that in the union in (b), ^{Γ′}W_Γ may be replaced by
^{Γ′}W_Γ^{∆} := ^{Γ′}W_Γ ∩ W_Γ^{∆}, which is a set of (W_{Γ′}, W_∆) double coset representatives in
WΓ′ \WΓ /W∆ , and is the set of all double coset representatives which are of minimal
(and minimum) length in their double coset if ΦΓ′ ,+ ⊆ ΦΓ,+ . (This uses the fact that
standard facts on double coset representatives with respect to standard parabolic
subgroups on both sides generalise to double coset representatives with respect to
an arbitrary reflection subgroup on one side and a standard parabolic subgroup on
the other side.) After this replacement, the union in (b) is one of pairwise disjoint
facets. This leads to a similar refinement in (3.15.1).
3.14. Proof of Theorem 3.4(f). For a simple subsystem Γ of Φ, and n ∈ N, let
(3.14.1)    I_Γ = I_Γ^{(n)} := { Γ = (Γ0, . . . , Γn) | Γ = Γ0 ⊇ Γ1 ⊇ . . . ⊇ Γn }
denote the set of all weakly decreasing sequences Γ of n + 1 subsets of Γ with Γ as
first term. For Γ = (Γ0 , . . . , Γn ) ∈ IΓ , define
(3.14.2)
XΓ := CΓ0 ,Γ1 × CΓ1 ,Γ2 × . . . × CΓn−1 ,Γn ⊆ V n .
The sets w(X_Γ) for Γ ∈ I_Γ^{(n)} and w ∈ W_Γ are the (W_Γ, S_Γ)-strata of V^n. If
I = (I0, . . . , In) ∈ I^{(n)}, then Γ := (Π_{I_0}, . . . , Π_{I_n}) ∈ I_Π^{(n)} and X_I = X_Γ. It is easy
to see that the collection of strata of V n with respect to (WΓ , SΓ ) depends only on
the reflection subgroup WΓ and not on the chosen simple system Γ.
There is a left action of W on ⋃_Γ I_Γ^{(n)}, where the union is over simple subsystems
Γ of Φ, defined as follows: for w ∈ W and Γ = (Γ0, . . . , Γn) ∈ I_Γ^{(n)}, one has
w(Γ) := (w(Γ0), . . . , w(Γn)) ∈ I_{w(Γ)}^{(n)}. By (3.9.3), this action satisfies
(3.14.3)
w(XΓ ) = Xw(Γ) , for all w ∈ W .
The setwise stabiliser of XΓ in WΓ is equal to the stabiliser in WΓ of any point of
XΓ , which is WΓn .
Theorem 3.4(f) follows from the special case Γ = Λ = Π and W = WΓ = WΛ
of the following (superficially stronger but actually equivalent) result, which has a
simpler inductive proof because of the greater generality of its hypotheses.
3.15. Theorem. Let Γ and Λ be simple subsystems of Φ with WΓ ⊆ WΛ . Then for all
n ∈ N≥1 , the closure of any (WΓ , SΓ )-stratum F ′ of V n is a union of (WΛ , SΛ )-strata
F of V n .
Proof. A typical stratum F ′ of V n for WΓ is, by the definitions and (3.14.3), of the
form F′ = u(X_Γ) = X_{u(Γ)} for some u ∈ W_Λ and Γ ∈ I_Γ^{(n)}. Replacing Γ by u(Γ),
we may assume without loss of generality that u = 1. It will therefore suffice to
establish the following formula: for Γ = (Γ = Γ0, . . . , Γn) ∈ I_Γ^{(n)}:
(3.15.1)    X̄_Γ = ⋃_{w ∈ W^n, Λ ∈ I_Λ^{(n)}, P(w,Λ)} w_n · · · w_1(X_Λ).
The union in (3.15.1) is taken over certain sequences w = (w1 , . . . , wn ) ∈ W n and
Λ = (Λ = Λ0, . . . , Λn) ∈ I_Λ^{(n)} satisfying the conditions P(w, Λ)(i)–(ii) below:
(i) For i = 1, . . . , n, w_i ∈ ^{Γ_{i−1}}W_{w_{i−1}···w_1(Λ_{i−1})}.
(ii) For i = 1, . . . , n, W_{w_i···w_1(Λ_i)} ⊇ W_{Γ_i}.
For fixed P (w, Λ), we denote these conditions as (i)–(ii). Note that the condition (ii)
with i = n implies that for F ′ := XΓ and F := wn · · · w1 (XΛ ), one has StabWΛ (F ) =
Wwn ···w1 (Λn ) ⊇ WΓn = StabWΓ (F ′ ).
We shall prove (3.15.1) (and that the conditions (i)–(ii) make sense) by induction
on n. If n = 1, then (3.15.1) reduces to Lemma 3.12(b). Now assume by way of
induction that (3.15.1) holds and consider Γ′ = (Γ, Γn+1 ) = (Γ0 , . . . , Γn , Γn+1 ) ∈
I_Γ^{(n+1)}. Then
    X̄_{Γ′} = X̄_Γ × C̄_{Γ_n,Γ_{n+1}} = ⋃_{w ∈ W^n, Λ ∈ I_Λ^{(n)}, P(w,Λ)} w_n · · · w_1(X_Λ) × C̄_{Γ_n,Γ_{n+1}}.
Fix w ∈ W^n and Λ ∈ I_Λ^{(n)} satisfying P(w, Λ) and write w := w_n · · · w_1. Then since
W_{w(Λ_n)} ⊇ W_{Γ_n}, Lemma 3.12(b) gives
    w_n · · · w_1(X_Λ) × C̄_{Γ_n,Γ_{n+1}} = X_{w(Λ)} × C̄_{Γ_n,Γ_{n+1}} = X_{w(Λ)} × ⋃_{w′,Σ} w′(C_{w(Λ_n),Σ})
where the union is over all w ′ ∈ ΓnWw(Λn ) and Σ ⊆ w(Λn ) with Ww′ (Σ) ⊇ WΓn+1 .
Writing w ′ = wn+1 and Σ = w(Λn+1) gives
    X_{w(Λ)} × ⋃_{w′,Σ} w′(C_{w(Λ_n),Σ}) = ⋃_{w_{n+1},Λ_{n+1}} X_{w(Λ)} × w_{n+1}(C_{w(Λ_n),w(Λ_{n+1})})
where the union on the right is taken over all wn+1 ∈ ΓnWw(Λn ) and Λn+1 ⊆ Λn
with Wwn+1 w(Λn+1 ) ⊇ WΓn+1 . Since wn+1 ∈ Ww(Λn ) = StabWΛ (Xw(Λ) ), it follows using
(3.14.3) and (3.9.3) that
Xw(Λ) × wn+1 (Cw(Λn ),w(Λn+1 ) ) = wn+1 (Xw(Λ) ) × wn+1 (Cw(Λn ),w(Λn+1 ) )
= wn+1 w(XΛ × CΛn ,Λn+1 ) = wn+1 w(XΛ′ )
where Λ′ := (Λ0, . . . , Λ_{n+1}) ∈ I_Λ^{(n+1)}. Observe that the conditions imposed on
w_{n+1}, Λ_{n+1} in the last union are precisely those which ensure that w_{n+1}, Λ_{n+1} satisfy
the conditions on w_i, Λ_i in (i), (ii) with i = n + 1, and that Λ′ := (Λ0, . . . , Λ_{n+1}) ∈ I_Λ^{(n+1)}. Combining the last four displayed equations with this observation establishes the validity of (3.15.1) with n replaced by n + 1. This completes the inductive
step, and the proof of Theorem 3.15.
The proof of Theorem 3.4 is now complete.
3.16. Geometry. Define the dimension dim(C) of any non-empty cone C in a finite-dimensional real vector space by dim(C) = dim_R(RC), where RC is the subspace
spanned by C. It is well known that the dimension of any cone contained in C̄ \ C
is strictly smaller than the dimension of C.
3.17. Corollary. Maintain the hypotheses of Theorem 3.15, so that in particular
WΓ ⊆ WΛ .
(a) The closure of any (WΓ , SΓ )-facet F of V n is the disjoint union of F and of
(WΓ , SΓ )-strata of V n whose dimension is strictly less than dim(F ).
(b) Any (WΓ , SΓ )-stratum F ′ of V n is a union of certain (WΛ , SΛ )-strata F of
V n.
Proof. Write d := dim V . Let Γ be a simple subsystem and ∆ ⊆ Γ. Then one has
dim(CΓ,∆ ) := dim(RCΓ,∆ ) = d − |∆|. In fact, there is an isomorphism of vector
spaces V ≅ R^d which induces homeomorphisms C_{Γ,∆} ≅ R^{d−|Γ|} × R_{>0}^{|Γ\∆|} and C̄_{Γ,∆} ≅ R^{d−|Γ|} × R_{≥0}^{|Γ\∆|}, where R_{≥0}^0 = R^0 := {0} and R^m is identified with R^{m−i} × R^i for 0 ≤ i ≤ m.
It follows from the above and the definitions that for Γ ∈ I_Γ^{(n)}, one has dim(X_Γ) = Σ_{i=1}^{n} dim C_{Γ_{i−1},Γ_i} = nd − Σ_{i=1}^{n} |Γ_i|. Also, there is an isomorphism V^n ≅ R^{nd} inducing homeomorphisms
(3.17.1)    X_Γ ≅ R^{nd − Σ_{i=1}^{n} |Γ_{i−1}|} × R_{>0}^{|Γ_0 − Γ_n|},    X̄_Γ ≅ R^{nd − Σ_{i=1}^{n} |Γ_{i−1}|} × R_{≥0}^{|Γ_0 − Γ_n|}.
Note that any cone in X̄_Γ \ X_Γ has dimension strictly smaller than that of X_Γ
(either by the general fact mentioned above or by a direct check in this case). Part
(a) follows readily from this fact and the special case of Theorem 3.15 in which
Λ = Γ. Given (a), (b) follows from Theorem 3.15 by induction on dim(F ′ ) as
follows: (b) holds vacuously for strata F ′ of negative dimension (since there are
none). In general, F̄′ is a union of (W_Λ, S_Λ)-strata by Theorem 3.15, F̄′ \ F′ is
a union of (W_Γ, S_Γ)-strata of dimension less than dim(F′) by (a) and hence is a
union of (W_Λ, S_Λ)-strata by induction, and therefore F′ = F̄′ \ (F̄′ \ F′) is a union
of (WΛ , SΛ )-strata as asserted.
3.18. Character formulae. We finish this section with a character-theoretic application of Theorem 3.4. Assume for simplicity that RΠ = V . The Coxeter complex
{ w(C_{S,J}) for J ⊊ S, w ∈ W } provides a subdivision of the unit sphere S(V) in
V ≅ R^ℓ into spherical simplices. Applying the Hopf trace formula to the resulting
chain complex, and recalling that dim(w(CS,J ) ∩ S(V )) = ℓ − 1 − |J|, one obtains
the familiar character formula (due to Solomon)
(3.18.1)    det_V(w) = Σ_{J ⊆ S} (−1)^{|J|} Ind_{W_J}^{W}(1)(w) for w ∈ W.
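As a quick illustration of (3.18.1) in the smallest case (not part of the original text): for W of type A1 with S = {s} and ℓ = 1, the J = ∅ term is the regular character (values 2 at w = 1 and 0 at w = s) and the J = S term is minus the trivial character, so the right-hand side gives 2 − 1 = 1 = det_V(1) and 0 − 1 = −1 = det_V(s), as expected.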
It follows from Theorem 3.4 and (3.17.1) that the intersections of the strata
w(X_I), where w ∈ W and I ≠ (S, S, . . . , S), with the unit sphere S(V^n) give
a subdivision of S(V n ) into spherical cells (homeomorphic to open balls), and one
may again apply the Hopf trace formula to the resulting chain complex. A straightforward computation, using the fact that for any u ∈ W , dim(uXI ∩ S(V n )) =
nℓ − (|I1 | + |I2 | + · · · + |In |) − 1, then shows that, given the formula (3.18.1), we have
the following formula for w ∈ W .
(3.18.2)    det_{V^n}(w) = det_V(w)^n = Σ_{I=(I0,I1,...,In) ∈ I^{(n)}} (−1)^{|I1|+|I2|+···+|In|} Ind_{W_{I_n}}^{W}(1)(w).
3.19. Remark. It is an easy exercise to show that for fixed In ⊆ S, we have
    Σ_{I=(I0,I1,...,In) ∈ I^{(n)}} (−1)^{|I1|+|I2|+···+|In|} =
        (−1)^{n|S|}   if In = S,
        0             if In ⊊ S and n is even,
        (−1)^{|In|}   if In ⊊ S and n is odd.
It follows that when n is even, (3.18.2) amounts to the statement that det^n = 1_W, while if n is odd, the right side of (3.18.2) reduces to the right side of (3.18.1), and therefore amounts to the statement that for n odd, det^n = det.
4. Topological and combinatorial properties of the stratification.
Maintain the assumptions of §3.3. Write d := dim(V). Let F = F^{(n)} := { w(X_I) | I ∈ I_W^{(n)}, w ∈ W } denote the set of all W-strata of V^n, partially ordered by inclusion of closures of strata; i.e. for F, G ∈ F^{(n)}, we say that F ≤ G
if F ⊆ G. The fact that this defines a partial order follows from Corollary 3.17(a).
Note that W acts naturally on F (n) as a group of order preserving automorphisms.
Let F̄^{(n)} := { (w, I) | I ∈ I_W^{(n)}, w ∈ W^{I_n} }. The map (w, I) ↦ w(X_I) : F̄^{(n)} → F^{(n)} is a bijection, by Theorem 3.4(b). We use this bijection to transfer the partial order and the W-action on F^{(n)} to a partial order and W-action on F̄^{(n)}. Using (3.15.1), one sees that this partial order and W-action on F̄^{(n)}
have a purely
combinatorial description in terms of the Coxeter system (W, S); this is in analogy
with the description of the Coxeter complex in terms of cosets of standard parabolic
subgroups. In particular, F̄^{(n)} would be unchanged (as poset with W-action) if it
had been defined using the diagonal W-action on (RΦ)^n instead of that on V^n.
4.1. Lemma. Let I− := (S, S, . . . , S) ∈ I_W^{(n)} and I+ := (S, ∅, . . . , ∅) ∈ I_W^{(n)}.
(a) The poset F (n) has a minimum element 0̂ := XI− .
(b) The elements w(XI+ ) for w ∈ W are the distinct maximal elements of F (n) .
Proof. Note that by Theorem 3.4, 0̂ is fixed by the W -action. To show that 0̂ is
the minimum element of F^{(n)}, it therefore suffices to show that if I ∈ I_W^{(n)}, one has
0̂ ⊆ X̄_I. This is clear since by (3.3.2) and (3.2.2)
    X̄_I = C̄_{I_0,I_1} × . . . × C̄_{I_{n−1},I_n} ⊇ C_{I_0,I_0} × . . . × C_{I_{n−1},I_{n−1}} ⊇ C_{S,S} × . . . × C_{S,S} = X_{I−}
since CI,I is the set of all points of V fixed by WI . This proves (a).
A similar argument using C_{∅,∅} = V shows that X̄_{I+} ⊇ X_I for all I ∈ I_W^{(n)}. Since by
definition F^{(n)} = { w(X_I) | w ∈ W, I ∈ I_W^{(n)} }, this implies that any maximal stratum
in F is of the form w(XI+ ) for some w ∈ W . But W acts simply transitively on
the set of these strata and there is at least one maximal element, so (b) follows.
4.2. Topology. In this subsection, we discuss basic topological facts about the
stratification of V n . For m ∈ N, let Bm denote the standard m-ball in Rm and Sm−1
its boundary, the standard m-sphere (with S−1 := ∅).
Let U be a finite-dimensional real vector space. A ray in U is a subset of U of the
form R>0 v for some non-zero v ∈ U. Let R = RU := { R>0 v | v ∈ U, v 6= 0 } denote
the set of all rays in U. Topologise R as follows. Let K be a convex body (i.e. a
compact convex set with non-empty interior) in U with 0 in its interior, so that K
contains a small ball with centre 0. Let ∂(K) denote the boundary of K i.e. the set
of all non-interior points of K. The map v 7→ R>0 v : ∂(K) → R is a bijection and
we topologise R so this map is a homeomorphism. A compactness argument shows
the resulting topology is independent of choice of K. Taking K as the unit sphere
in U (with respect to some Euclidean space structure on U) gives R ∼
= Sdim(U )−1 .
There is a map C 7→ [C] := { R>0 v | v ∈ C \ {0} } taking convex cones C in U
to subsets of R. Clearly, [C] = [C ′ ] if and only if C ′ ∪ {0} = C ∪ {0}. This map
satisfies [C] = [C] and [Int(C)] = Int([C]) where X and Int(X) denote respectively
the closure and interior of a subspace X of the ambient topological space (U or R).
We apply the above considerations with U = V n . Recall that here dim(V ) = d.
4.3. Lemma.
(a) [0̂] = [X_{I−}] ≅ S_{n(d−|S|)−1}.
(b) If F = w(X_I) ∈ F^{(n)} \ {0̂}, then [F̄] ≅ B_N where N = nd − 1 − Σ_{i=1}^{n} |I_i|, and [F] ≅ B_N \ ∂(B_N) := B_N \ S_{N−1}.
Proof. Note that for I ∈ I_W^{(n)}, one has X_I = X_{I−} if and only if I0 = In. Using
(3.17.1) and the independence of the topology on RV n from the choice of compact
body K in its definition, it suffices to verify that for m ≤ n ≤ M in N with n ≥ 1,
the following equations hold in RM = Rm × Rn−m × RM −n :
    (R^m × R_{≥0}^{n−m}) ∩ S_{M−1} ≅ S_{n−1} if m = n, and ≅ B_{n−1} if m < n,
and (R^m × R_{>0}^{n−m}) ∩ S_{M−1} ≅ B_{n−1} \ ∂(B_{n−1}) if m < n. Details are omitted.
4.4. Regular cell decompositions. We shall use below notions of regular cell
complexes and their face posets, and shellability of posets and complexes. A convenient reference for the definitions and facts required is [1, 4.7].
4.5. Proposition.
(a) Suppose that V = RΠ. Then { [F] | F ∈ F^{(n)} \ {0̂} } is (the set of open cells of) a regular cell decomposition of R_{V^n} ≅ S_{nd−1}, where d := dim V = |S|.
(b) The poset F^{(n)} \ {0̂} is the face poset of a regular cell decomposition of S_{n|S|−1}.
Proof. First we prove (a). Let Ω := F^{(n)} \ {0̂}, regarded as a poset. For F ∈ Ω, call
[F] ⊆ R an open cell and [F̄], which equals the closure of [F] in R, a closed cell. By Corollary 3.17 and
Lemma 4.3 (and the discussion preceding its statement), each closed cell [F̄]
is a ball in R with [F] as its interior and with boundary
    [F̄] \ [F] = ⋃_{G ∈ Ω, G < F} [G] = ⋃_{G ∈ Ω, G < F} [Ḡ]
equal to a union of closed cells. Since R is Hausdorff, (a) follows by the definition
in [1]. By the discussion immediately preceding Lemma 4.1, F^{(n)} \ {0̂} is the face
poset of the regular cell complex in (a), and (b) follows.
4.6. Proposition 4.5 has a number of combinatorial and topological consequences
listed in [1, 4.7]. In particular, the finite poset F̂^{(n)} := F^{(n)} ∪ {1̂} obtained by
formally adjoining a maximum element 1̂ to F^{(n)} is graded (i.e. it has a minimum element and a maximum element, and all its maximal chains (totally ordered subsets)
have the same cardinality). Note that F̂^{(n)} is called the extended face poset of the
regular cell complex in Proposition 4.5(a).
regular cell complex in Proposition 4.5(a).
We conclude with the remark that significant parts of the above results (though
not the regular cell subdivisions of spheres in Proposition 4.5, for example) extend
mutatis mutandis to the diagonal action of infinite Coxeter groups on powers U n of
their Tits cone U.
5. Applications to conjugacy of sets of roots and vectors
5.1. Let Rm×n denote the set of real m × n matrices and A 7→ At denote the matrix
transpose. Identify Rn = Rn×1 .
We shall be addressing the classification of certain tuples (v1 , . . . , vn ) ∈ V n up to
order, under the action of W . Evidently the W -action leaves invariant the set of
inner products hvi , vj i, 1 ≤ i, j ≤ n. Arranging these inner products in a matrix
motivates the following definition.
5.2. Definition. A genus of rank n is a n × n matrix σ = (ai,j )ni,j=1 of real numbers.
The symmetric group Symn acts on the set G (n) = Rn×n of genera of rank n in
such a way that, for ρ ∈ Symn and σ ∈ G (n) , one has ρσ := σ ′ = (a′i,j )ni,j=1 where
a′i,j = aρ−1 (i),ρ−1 (j) . The automorphism group Gσ of the genus σ is the stabiliser of
σ in this action. Two genera of the same rank n are said to be of the same type if
they are in the same Symn orbit on G (n) . More formally, we define a type of genera
of rank n to be a Symn orbit on G (n) and write I(σ) := Symn σ (the Symn -orbit of
σ) for the type of the genus σ.
For example, the Cartan matrices of fixed rank n can be regarded as genera, and
two of them have the same type (in the above sense) if and only if they have the
same type (of form A_1^{n_{A_1}} × · · · × G_2^{n_{G_2}}) in the usual sense. Similar remarks apply
to generalised Cartan matrices. Thus, we may regard types, in the usual sense, of
(generalised) Cartan matrices as (special) types of genera in the above sense.
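Whether two small genera have the same type can be tested by brute force over Sym_n (simultaneous row and column permutations); the following sketch is for illustration only and is feasible only for small n:

```python
from itertools import permutations
import numpy as np

def same_type(sigma, sigma_prime):
    """Return True if the two rank-n genera lie in the same Sym_n-orbit."""
    sigma, sigma_prime = np.asarray(sigma), np.asarray(sigma_prime)
    n = sigma.shape[0]
    return any(np.array_equal(sigma[np.ix_(p, p)], sigma_prime)
               for p in (list(q) for q in permutations(range(n))))
```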
5.3. For a natural number n, an ordered system of rank n in V is by definition, an
n-tuple b = (β1 , . . . , βn ) ∈ V n . The Gram genus of b = (β1 , . . . , βn ) is defined to be
the Gram matrix C ′ (b) := (ci,j )i,j=1,...,n ∈ Rn×n where ci,j := h βi , βj i. This defines
a Symn -equivariant map C ′ : V n → G (n) .
Note that σ := C ′ (b) is a positive semidefinite matrix, and is positive definite
(equivalently, it is invertible) if and only if [b] is linearly independent. More generally, the space { a = (a1, . . . , an)^t ∈ R^n | Σ_{i=1}^{n} a_i β_i = 0 } of linear relations on b
identifies with the radical of the quadratic form a 7→ at C ′ (b)a : Rn → R with matrix
C ′ (b) with respect to the standard basis of Rn . It readily follows that the non-empty
fibres of the map C ′ are the orbits of the orthogonal group OV := O(V, h −, − i) acting diagonally on V n (and the set of matrices over which the fibres are non-empty
is precisely the set of positive semidefinite, symmetric matrices in Rn×n ). Also, for
1 ≤ i < j ≤ n, one has βi = βj if and only if the i-th and j-th columns (or rows)
of C ′ (b) are equal. In particular, the columns of σ are pairwise distinct if and only
if β1 , . . . , βn are pairwise distinct. In that case, Gσ is isomorphic to the group of
isometries of the metric space [b] (with metric induced by the norm from h −, − i).
5.4. Suppose that Φ is crystallographic. If b ∈ Φn is an ordered system of
rank n consisting of roots in Φ, its Cartan genus is the matrix of inner products
C ′′ (b) := (h βi∨, βj i)ni,j=1. This defines a Symn -equivariant map C ′′ : Φn → G (n) .
Again, one has βi = βj if and only if the i-th and j-th columns of C ′′ (b) are equal.
Note that b is an ordered simple system if and only if its Cartan genus C ′′ (b) is a
Cartan matrix. Similarly, b is an ordered np subset of rank n, if and only if C ′′ (b)
is a generalised Cartan matrix (which then necessarily has only finite and affine
components). Clearly, the Cartan genus C ′′ (b) of b ∈ Φn is completely determined
by the Gram genus C ′ (b).
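For concreteness, both genera are easy to compute once the roots are given as vectors; a small illustrative sketch (not from the paper):

```python
import numpy as np

def gram_genus(b):
    """C'(b): the matrix of inner products <beta_i, beta_j>."""
    B = np.array(b, dtype=float)
    return B @ B.T

def cartan_genus(b):
    """C''(b): the matrix <beta_i^vee, beta_j> with beta^vee = 2*beta/<beta, beta>."""
    B = np.array(b, dtype=float)
    coroots = 2 * B / (B * B).sum(axis=1, keepdims=True)
    return coroots @ B.T

# Simple roots of type A2 embedded in R^3 give the Cartan matrix [[2, -1], [-1, 2]].
print(cartan_genus([[1, -1, 0], [0, 1, -1]]))
```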
5.5. Remark. If b is an ordered simple system in Φ, the automorphism group (see
Definition 5.2) of C ′′ (b) is known as the group of diagram automorphisms of Φ.
5.6. In 5.6–5.10, fix a subgroup W ′ of OV and a W ′ -stable subset Ψ of V . The
main situation of interest is that in which (W ′ , Ψ) = (W, Φ), but other cases such
as when (W ′ , Ψ) is (W, V ) or (OV , V ) are also of interest. Fix a natural number n.
Let
(5.6.1)    \binom{Ψ}{n} := { Γ ⊆ Ψ | |Γ| = n } ⊆ P(Ψ)
be the “configuration space” of n distinct unordered points of Ψ. This has a natural
W ′ action given by (w, Γ) 7→ w(Γ) := { wγ | γ ∈ Γ } for w ∈ W ′ and Γ ∈ Ψn .
Our main interest will be the study of W′-orbits on \binom{Ψ}{n}. With this in mind, define the configuration space
    Ψ^{(n)} := { b = (β1, . . . , βn) ∈ Ψ^n | β_i ≠ β_j if i ≠ j }
of n ordered distinct points of Ψ. Then Ψ(n) admits the diagonal W ′ -action and a
commuting Symn -action by place permutations, hence a natural W ′ × Symn -action.
Moreover, there is a natural W ′ -equivariant surjection π̂ : b 7→ [b] : Ψ(n) → Ψn .
The fibres of π̂ are the Symn -orbits on Ψ(n) , and Symn acts simply transitively on
each fibre. As in (2.6.1), we may canonically identify the set of W′-orbits on \binom{Ψ}{n} as
(5.6.2)    \binom{Ψ}{n}/W′ ≅ (Ψ^{(n)}/Sym_n)/W′ ≅ Ψ^{(n)}/(W′ × Sym_n) ≅ (Ψ^{(n)}/W′)/Sym_n.
5.7. There is a natural Symn -equivariant map b 7→ C ′ (b) : Ψn → G (n) which assigns to b ∈ Ψn its Gram genus. As a variant in the case (W ′ , Ψ) = (W, Φ), one
can consider instead the map b 7→ C ′′ (b) : Ψn → G (n) which assigns to b its Cartan
genus; C ′′ is also Symn -equivariant. Let C : Ψn → G (n) be one of the maps C ′ or C ′′ .
Let G_0^{(n)} be the subset of G^{(n)} consisting of matrices with pairwise distinct columns.
As remarked already, for b = (β1, . . . , βn) ∈ Ψ^n, one has b ∈ Ψ^{(n)} (i.e. β_i ≠ β_j for
i ≠ j) if and only if one has C(b) ∈ G_0^{(n)}.
For a genus σ of rank n and type τ := I(σ), we write S (σ) := C −1 (σ) for the
fibre of C over σ and
    T(τ) := { b ∈ Ψ^{(n)} | C(b) ∈ τ } = ⋃_{ρ ∈ Sym_n} ρ(S(σ))
for the union of the fibres of C over all genera of the same type as σ. Let
U (τ ) := { [b] | b ∈ T (τ ) } = { [b] | b ∈ S (σ) } ⊆ P(Ψ).
One has U(τ) ⊆ \binom{Ψ}{n} if and only if σ ∈ G_0^{(n)} (or equivalently, τ ⊆ G_0^{(n)}). If it is
necessary to indicate dependence on C, we write SC (τ ) etc.
For example, if (W ′ , Ψ, C) = (W, Φ, C) and τ is a type of Cartan matrices, then
T (τ ) (resp., U (τ )) is the set of all ordered (resp., unordered) simple subsystems of
Φ of that type. Similarly if τ is a type of generalised Cartan matrices, then T (τ )
(resp., U (τ )) is the set of ordered (resp., unordered) np subsets of that type (this
set being empty unless all components of τ are of finite or affine type).
In general, each set S (σ), T (τ ) and U (τ ) is W ′ -stable. In particular, the classification of W ′ -orbits on Ψn is reduced to the classification of W ′ -orbits on U (τ )
for each type of genus τ ⊆ G_0^{(n)} (with U(τ) ≠ ∅). We describe an approach to this
classification along lines similar to (5.6.2), by study of S(σ) and T(τ).
5.8. Let σ ∈ G (n) and τ := I(σ). One has a commutative diagram
[commutative square of W′-sets: the inclusion S(σ) ↪ T(τ) along the top, the maps πσ : S(σ) → U(τ) and π : T(τ) → U(τ) as the vertical arrows, and the identity U(τ) = U(τ) along the bottom]
of W ′ -sets in which the top horizontal map is an inclusion, and π and πσ are the
indicated restrictions of the map π̂ : b 7→ [b]. Since C is Symn -equivariant and
τ = I(σ) is the Symn -orbit of σ, Symn acts naturally on T (τ ), commuting with
its W ′ -action. In this way, T (τ ) acquires a natural structure of W ′ × Symn -set.
By restriction, S (σ) has a natural structure of W ′ × Gσ -set. This W ′ × Gσ -set
S (σ) depends only on the type of σ up to natural identifications. More precisely,
let σ ′ = (a′i,j )i,j=1,...,n be another genus of type τ , say σ ′ = ρσ where ρ ∈ Symn .
Then Gσ′ = ρGσ ρ−1 and S (σ ′ ) = ρS (σ); in fact, the map p : S (σ) → S (σ ′ )
given by b 7→ ρb is a W ′ -equivariant bijection and satisfies p(νb) = (ρνρ−1 )p(b) for
b ∈ S (σ) and ν ∈ Gσ .
5.9. Assume henceforth that σ ∈ G0(n) , so τ ⊆ G0(n) . Then the Symn -orbits on T (τ ) (resp., Gσ -orbits on S (σ)) are precisely the fibres of π (resp., πσ ) and Symn (resp., Gσ ) acts simply transitively on each fibre of π (resp., πσ ). (There is even a natural isomorphism of W ′ × Symn -sets Symn ×Gσ S (σ) ∼= T (τ ).) Hence we may naturally identify U (τ ) ∼= T (τ )/ Symn ∼= S (σ)/Gσ as W ′ -sets. From (2.6.1), one gets canonical identifications
(5.9.1)    U (τ )/W ′ ∼= (T (τ )/W ′ )/ Symn ∼= (S (σ)/W ′ )/Gσ .
In the case (W ′ , Ψ, C) = (OV , V, C ′ ), each non-empty set S (σ) for σ ∈ G0(n) is a single W ′ -orbit on Ψ(n) (see 5.3) and thus U (τ ) is a W ′ -orbit on Ψn . The above
equations and the discussion of 5.3 therefore give rise to the following closely related
parameterisations of the set of OV -orbits of unordered sets of n (distinct) points in
V : the set of such orbits corresponds bijectively to the set of symmetric, positive
semidefinite real n × n matrices with distinct columns, modulo simultaneous row
and column permutation (resp., to a set of representatives of such matrices under
such permutations).
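As a concrete instance of this parameterisation in the smallest non-trivial case (n = 2, included purely for illustration): the OV -orbit of an unordered pair {v1 , v2 } of distinct points of V is determined by the 2 × 2 Gram matrix (h vi , vj i)i,j=1,2 , taken up to simultaneously interchanging its two rows and its two columns; and the requirement that this matrix have distinct columns is precisely the requirement that v1 6= v2 , since equal columns would give h v1 − v2 , v1 i = h v1 − v2 , v2 i = 0 and hence v1 = v2 .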
5.10. Remarks. For any groups G, H there is a natural analogue in the category of G-sets of the standard notion of principal H-bundle. Transferring standard terminology from the theory of principal bundles to this setting, the above shows that (assuming σ ∈ G0(n) ), π̂ and π are principal Symn -bundles and πσ is a principal Gσ -bundle
which affords a reduction of the structure group of the bundle π from Symn to
Gσ . Moreover, the bundle πσ depends (up to natural identifications of the various
possible structure groups Gσ induced by inner automorphisms of Symn ) only on the
type of σ and not on σ itself. Simple examples show that in general, these bundles
are not trivial bundles.
5.11. From now on, we restrict to the case W ′ = W , which is of primary interest
here. Recall the definition of the dot action of Symn on the fundamental region
CW(n) : for ρ ∈ Symn and b ∈ CW(n) , one has {ρ · b} = W ρb ∩ CW(n) . Recall also the
maps C ′ , C ′′ which associate to an n-tuple of roots its associated Gram genus and
Cartan genus respectively.
5.12. Proposition. Let σ ∈ G0(n) and τ := I(σ). The set U (τ )/W of W -orbits on U (τ ) may be canonically identified with RC (σ)/Gσ , where RC (σ) = R(σ) := S (σ) ∩ CW(n) (resp., with (T (τ ) ∩ CW(n) )/ Symn ), and the Gσ -action (resp., Symn -action) used to form the coset space is a restriction of the dot action.
Proof. This follows immediately from (5.9.1) (with W ′ = W ) on identifying T (τ )/W with CW(n) ∩ T (τ ) and S (σ)/W with CW(n) ∩ S (σ) (the corresponding actions by Symn and Gσ identify with restrictions of the dot action).
5.13. We now record some consequences of the bijection U (τ )/W ∼= R(σ)/Gσ in
the case (W ′ , Ψ) = (W, Φ), for the classification of W -orbits of simple subsystems
of Φ; similar results hold for W -orbits of arbitrary subsets of roots (or indeed, of
vectors in V ). For any b ∈ Φ(n) , we call I(C ′ (b)) (resp., I(C ′′ (b))) the Gram type
(resp., Cartan type) of b and of [b] ∈ Φn . Note that specifying the Gram type of a
(unordered or ordered) simple system amounts to specifying its Cartan type (i.e. its
Cartan matrix up to reindexing) together with the function taking each irreducible
component to the maximal root length in that component.
By a Cartan (resp., Gram) genus of simple systems we mean the Cartan (resp.,
Gram) genus of some ordered simple system of some crystallographic (resp., arbitrary) finite root system. Thus, a Cartan genus of ordered simple systems is just a
Cartan matrix (ai,j )i,j=1,...,n (with a specified indexing by 1, . . . , n).
5.14. Proposition. Let σ be a Cartan (resp., Gram) genus of ordered simple systems
and C := C ′′ (resp., C := C ′ ). Let τ := I(σ).
(a) There is a natural bijection between W -orbits of simple systems of Cartan
(resp., Gram) type τ in Φ and Gσ -orbits for the dot action on RC (σ).
(b) The conjugacy classes of simple subsystems of Cartan (resp., Gram) type τ
are exactly the conjugacy classes of simple systems [b] for b ∈ RC (σ).
(c) The number of W -orbits of simple systems of Cartan (resp., Gram) type τ
in Φ is at most the number |{ [b] | b ∈ RC (σ) }| of simple systems [b] which
underlie some ordered simple system b in RC (σ).
(d) If |RC (σ)| = 1, or, more generally, RC (σ) 6= ∅ and all ordered simple systems
b in RC (σ) have the same underlying simple system [b], then there is a single
W -conjugacy class of simple systems in Φ of Cartan (resp., Gram) type τ .
Proof. Since SC (σ) is the set of ordered simple systems of Φ with Cartan (resp.,
Gram) matrix σ, part (a) follows from Proposition 5.12. We prove (b). Note that
if ∆ is a simple subsystem of Cartan (resp. Gram) type τ , then ∆ = [b] for some
b ∈ S (σ). We have wb ∈ R(σ) for some w ∈ W . Thus, w∆ = [wb]. So the W -orbit of any simple subsystem ∆ of Cartan (resp., Gram) type τ has a representative
w∆ in RC (σ). On the other hand, for any b ∈ R(σ), one has [b] ∈ T (τ ). Part (b)
follows. Part (c) follows directly from (b), and part (d) follows from (c).
5.15. Remarks. For fixed Φ and any Cartan genus σ of ordered simple systems of type
τ , there are finitely many Gram genera σ1 , . . . , σn of types τ1 , . . . , τn respectively,
such that SC ′′ (σ) = ⊔i=1,...,n SC ′ (σi ), RC ′′ (σ) = ⊔i=1,...,n RC ′ (σi ), TC ′′ (τ ) = ∪i=1,...,n TC ′ (τi ) and UC ′′ (τ ) = ∪i=1,...,n UC ′ (τi ). Gram genera are convenient in that there are typically fewer
simple systems of a specified Gram genus than a corresponding Cartan genus; on
the other hand, it may be inconvenient to specify the several Gram genera necessary to describe all (ordered) simple systems of a specified Cartan genus (e.g. an
isomorphism type of simple subsystems).
As an application, we shall show that if L is the length of some root of Φ, there
is a unique conjugacy class of simple subsystems of Gram type I(L · Idn ) with n
maximal.
5.16. Proposition. Let L ∈ R>0 . Consider simple subsystems ∆ of Φ satisfying the
three conditions below:
(i) ∆ is of type An1 for some n (that is, ∆ consists of n pairwise orthogonal roots).
(ii) All roots in ∆ have the same length L.
(iii) ∆ is inclusion maximal subject to (i) and (ii).
Then any two subsystems of Φ satisfying (i)–(iii) are conjugate under W .
Proof. Following Proposition 2.10, we begin by describing all the ordered simple subsystems b = (β1 , . . . , βn ) in CW(n) satisfying (i)–(ii). First, β1 is a dominant
root of length L of some component of Φ. The possibilities for β2 are identified by
deleting from the affine Dynkin diagram of Π ⊔ {−β1 } the root −β1 and all roots
attached to −β1 . One then obtains a simple system for a new root (sub)system and
repeats the above process. At each stage, the only choice is in which component
(containing roots of length L) of the relevant root system one takes the dominant
root of length L. The condition (iii) will be satisfied if and only if none of the
components at that stage contain roots of length L. By the maximality assumption
(iii), the integer n and the set [b] = { β1 , . . . , βn } are clearly independent of the order
in which these operations are done.
This implies that all ordered simple systems which satisfy (i)–(iii) and which
lie in the fundamental region are (certain) permutations of each other. It follows
by Proposition 5.14(c) that there is just one conjugacy class of unordered subsystems
satisfying (i)–(iii).
5.17. Action of the longest element. For any simple subsystem ∆ of Φ, we
denote by ω∆ the longest element of the Coxeter system (W∆ , S∆ ) where S∆ =
{ sα | α ∈ ∆ } and W∆ := h S∆ i. It is well known that there is some permutation
ρ∆ of ∆ (which may and often will be regarded as a diagram automorphism of the
Dynkin diagram of ∆) such that ω∆ (α) = −ρ∆ (α) for all α ∈ ∆. In case ∆ = Π,
we write ωΠ = ω and ρΠ = ρ. Extending the permutation action on Π linearly,
we sometimes regard ρΠ as an automorphism of Φ which fixes the set Π. If b is an
ordered simple subsystem of Φ, then [b] is a simple subsystem of Φ and so ωb := ω[b]
and ρb := ρ[b] are defined as above.
Recall from Proposition 5.12 that the set RC (σ) of ordered np sets of genus σ
in the fundamental region has a Symn dot action. In general, this action may not
be easy to describe explicitly. However, the following useful special result gives
a simple description of the effect of applying diagram automorphisms induced by
longest elements, to simple subsystems in the fundamental region for W .
5.18. Proposition. Let b := (β1 , . . . , βn ) ∈ Φn ∩ CW(n) be an ordered simple system (for some root subsystem Ψ of Φ) which lies in the fundamental region for W on V n . Then W (ρb (β1 ), . . . , ρb (βn )) ∩ CW(n) = {(ρ(β1 ), . . . , ρ(βn ))}, i.e. (ρ(β1 ), . . . , ρ(βn )) is
the representative, in the fundamental region for W on V n , of the ordered simple
subsystem (ρb (β1 ), . . . , ρb (βn )) of Ψ.
Proof. One has ωb (β1 , . . . , βn ) = −(ρb (β1 ), . . . , ρb (βn )) and therefore (β1 , . . . , βn ) = −ωb (ρb (β1 ), . . . , ρb (βn )) ∈ CW(n) since ωb is either an involution or the identity. By 2.13, CW(n) ∩ Φn is −ω-invariant. Hence
−ω(β1 , . . . , βn ) = (ωωb )(ρb (β1 ), . . . , ρb (βn )) ∈ CW(n) ∩ Φn .
Since ωωb ∈ W and −ω(βi ) = ρ(βi ), the proposition follows from Theorem 2.2.
6. Simple subsystems of type A
In this section, Φ denotes a crystallographic root system. We shall define a standard genus σn of type An and determine all ordered simple subsystems of Φ of
Cartan genus σn in the fundamental region for W . The following result of Oshima
plays an important role in the proof of this and related results in arbitrary type.
6.1. Lemma. [14] Suppose that W is an irreducible finite Weyl group with crystallographic root system Φ. Fix ∆ ⊆ Π, and scalars cβ ∈ R for β ∈ ∆, not all zero,
and l ∈ R. Let γ[α] denote the coefficient of α ∈ Π in γ, expressed as a linear
combination of Π, and write
X := { γ ∈ Φ | h γ, γ i1/2 = l and γ[β] = cβ for all β ∈ ∆ }.
If X is non-empty, then it is a single WΠ \ ∆ -orbit of roots. Equivalently, |X ∩
CΠ \ ∆ | ≤ 1.
This is proved in [14, Lemma 4.3] using facts in [13] from the representation theory
of semisimple complex Lie algebras. We shall give a direct proof of a more general
version of Lemma 6.1 in a future work [7], but in this paper make use of it only as
stated.
6.2. Definition. The standard genus σn of Cartan type An is the Cartan matrix of type An in its standard ordering as in [2], i.e. σn = (ai,j )i,j=1,...,n where for 1 ≤ i ≤ j ≤ n one has ai,j = 2 if i = j, ai,j = aj,i = −1 if j − i = 1 and ai,j = aj,i = 0 if j − i > 1.
Thus, for b := (β1 , . . . , βn ) ∈ Φn , one has b ∈ SC ′′ (σn ) if and only if one has
(h βi∨ , βj i)i,j=1,...,n = σn . Clearly, the automorphism group Gσn of σn is trivial if
n = 1 and is cyclic of order 2 if n > 1, with non-trivial element the longest element
of Symn in the latter case. Recall that RC ′′ (σn ) := SC ′′ (σn ) ∩ CW(n) .
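For concreteness (a small explicit instance, written out for illustration), for n = 3 the definition gives, in LaTeX notation,
\sigma_3 \;=\; \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix},
the Cartan matrix of type A3 in its standard ordering, and Gσ3 = {1, θ} where θ ∈ Sym3 is the order-reversing permutation i 7→ 4 − i.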
6.3. Proposition. Let b = (β1 , . . . , βn ) ∈ RC ′′ (σn ) where n ≥ 1. Then β1 is a
dominant root of Φ and β2 , . . . , βn ∈ −Π.
Proof. The proof is by induction on n. Since h βi∨, βi+1 i = −1 for i = 1, . . . , n − 1,
β2 , . . . , βn lie in the same component of Φ as β1 . Without loss of generality, we may
replace Φ by the component containing β1 and assume that Φ is irreducible. For
n = 1, one has β1 ∈ Φ ∩ CW and the result is clear. Now suppose that n > 1. Since
b′ := τn−1 (b) = (β1 , . . . , βn−1 ) ∈ RC ′′ (σn−1 ), induction gives β2 , . . . , βn−1 ∈ −Π. Let
Γ := Π ∩ {β1 , . . . , βn−1 }⊥ and ∆ := Π ∩ {β1 , . . . , βn−2 }⊥ ⊇ Γ. Then WΓ = Wb,n−1
and W∆ = Wb,n−2 by Theorem 2.2. One has sβn ∈ Wb,n−2 = W∆ since h βn , βi i = 0
for i < n − 1, so Σ := supp(βn ) ⊆ ∆. Define
Λ := { γ ∈ Σ | h γ, βn−1 i 6= 0 }.
Since 0 6= h βn , βn−1 i = ∑γ∈Σ βn [γ]h γ, βn−1 i, it is clear that |Λ| ≥ 1.
We claim that |Λ| = 1. Suppose to the contrary that |Λ| ≥ 2. We have βn−1 6∈ ∆.
If n ≥ 3, then the full subgraph of the Dynkin diagram on vertex set Σ∪{−βn−1 } ⊆ Π
contains a cycle, contrary to finiteness of W . Hence n = 2, βn−1 ∈ Φ ∩ CW , ∆ = Π
and |{γ ∈ Π | h βn−1 , γ i > 0}| ≥ |Λ| ≥ 2. The classification implies that Φ is of type
Am with m > 1. Regard Π ∪ {−β1 } as the vertex set of the affine Dynkin diagram
corresponding to Φ. Then the full subgraph on {−β1 } ∪ Σ contains a cycle, so its
vertex set is equal to Π ∪ {−β1 }. This implies that Σ = Π and β2 = βn = ±β1
contrary to h βn−1, βn∨ i = −1. This proves the claim.
Write Λ = {α} where α ∈ Π and abbreviate c := βn [α] 6= 0. We have
h βn , βn−1 i = ∑γ∈Σ βn [γ]h γ, βn−1 i = βn [α]h α, βn−1 i = c h α, βn−1 i.
Writing h βn , βn i = h βn−1 , βn−1 i = d h α, α i, it follows that −1 = h βn , βn−1∨ i = c h α, βn−1∨ i. Since c, h α, βn−1∨ i ∈ Z, this implies c = −h α, βn−1∨ i ∈ {±1}. We have h α, βn−1∨ i ≥ 0 since α ∈ Π and βn−1 is a dominant root if n = 2, and −βn−1 6= α are both in Π if n > 2. Hence c = h α, −βn−1∨ i = −1. Also, −1 = h βn−1 , βn∨ i = (c/d) h βn−1 , α∨ i so h βn−1 , α∨ i = −dc−1 = d ∈ Z≥1 . From
βn∨ = 2βn /h βn , βn i = (2/h βn , βn i) ∑γ∈Π βn [γ]γ = ∑γ∈Π (h γ, γ i/h βn , βn i) βn [γ]γ ∨
one sees that Z ∋ βn∨ [α∨ ] = (h α, α i/h βn , βn i) βn [α] = c/d = −d−1 , so d = 1.
We have now established that h α, α i = h βn−1 , βn−1 i = h βn , βn i. This implies h βn−1 , α∨ i = h α, βn−1∨ i = 1. To complete the proof, it will suffice to show that βn = −α. Taking Σ = supp(βn ) as above, let Ψ := WΣ Σ; this is an irreducible crystallographic root system. From above, one has βn , −α ∈ Ψ, βn [α] = −1 = (−α)[α] and h βn , βn i = h α, α i. By Lemma 6.1, it follows that βn and −α are in the same WΣ′ -orbit on ΦΣ where Σ′ := Σ \ {α}. But Σ′ = Σ \ Λ = Σ ∩ βn−1⊥ ⊆ ∆ ∩ βn−1⊥ = Γ. Hence βn ∈ CWΓ ⊆ CWΣ′ . We also have −α ∈ CWΣ′ . From Lemma 1.6, it follows that βn = −α as required.
The following Corollary 6.4 and Theorem 6.7 are closely related to the type An
case of the result [14, Theorem 3.5(i)] of Oshima, while Remark 6.5 is related to
that result in some other classical types.
6.4. Corollary. Let Pn denote the set of all pairs (β, Γ) such that β is a dominant
root of Φ, Γ is the vertex set of a type An subdiagram of the Dynkin diagram of
{β} ∪ −Π and β is a terminal vertex of Γ. Then there is a bijection RC ′′ (σn ) → Pn
defined by b = (β1 , . . . , βn ) 7→ (β1 , [b]) for all b ∈ RC ′′ (σn ).
Proof. Let b ∈ RC ′′ (σn ). Then Proposition 6.3 implies that [b] is the vertex set of a
subdiagram of the Dynkin diagram of {β1 } ∪ −Π. By definition of σn , this diagram
is of type An with β1 terminal and βi joined to βi+1 for i = 1, . . . , n − 1. Conversely,
suppose (β, Γ) ∈ Pn . Since Γ is of type An with β terminal, we may uniquely write
Γ = [b] for some b = (β1 , . . . , βn ) ∈ S (σn ) with β1 = β. We have to show that
b ∈ CW(n) , i.e. for i = 1, . . . , n, βi ∈ CWΠi , where Πi := Π ∩ {β1 , . . . , βi−1 }⊥ . One has β1 ∈ CW by the assumption that β1 is dominant. To show βi ∈ CWΠi for 2 ≤ i ≤ n, we have to show that h βi , γ i ≥ 0 for γ ∈ Πi . But γ 6= −βi , since h βi , βi−1 i < 0 while γ ⊥ βi−1 . Hence h γ, βi i ≥ 0, since γ ∈ Π and βi ∈ −Π.
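(For instance, in the degenerate case n = 1 the corollary simply recovers the fact that RC ′′ (σ1 ) consists of the dominant roots of Φ, one for each root length in each irreducible component, each paired with its singleton vertex set {β}; compare the base case n = 1 in the proof of Proposition 6.3.)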
6.5. Remark. An argument similar to that in the last part of the proof shows the
following: Let β1 ∈ Φ be a dominant root and β2 , . . . , βn ∈ −Π be pairwise distinct
and such that for each i = 2, . . . , n, there is some j < i with h βi , βj i 6= 0. Then (β1 , . . . , βn ) is an ordered np subset of Φ which lies in CW(n) .
6.6. Conjugacy of simple subsystems of type A. Let ρ denote the automorphism ρ = ρΠ = −ωΠ of Φ induced by the action of the negative of the longest
element ωΠ of W (see 5.17). One has ρ2 = IdΦ and ρ fixes each dominant root.
Hence H := h ρ i = {1, ρ} acts naturally on Pn , where Pn is defined in Corollary 6.4.
6.7. Theorem. The set of W -conjugacy classes of simple systems of type An in Φ
is in bijection with Pn /H.
Proof. Let σ = σn and G = Gσ . By Proposition 5.14, conjugacy classes of simple
systems of type An in Φ correspond bijectively to G-orbits in the dot action of G on
RC ′′ (σ). Now G = {1, θ} ⊆ Symn where θ(i) = n − i + 1 for i = 1, . . . , n. For b =
(β1 , . . . , βn ) ∈ RC ′′ (σn ), one has θ(β1 , . . . , βn ) = (βn , . . . , β1 ) = (ρb (β1 ), . . . , ρb (βn ))
for the diagram automorphism ρb defined as in 5.17. For b ∈ R(σ), Proposition
5.18 implies that θ · b = ρ(b) and so G · b = Hb. This shows that conjugacy
classes of simple systems of type An in Φ correspond bijectively to H-orbits on
RC ′′ (σ). Finally, the theorem follows on observing that the bijection RC ′′ (σ) → Pn of Corollary 6.4 is H-equivariant.
7. Simple subsystems of arbitrary types
In this section, we indicate, without proofs, how Propositions 5.14 and 6.3 provide an approach to studying conjugacy of simple subsystems of arbitrary types
in finite crystallographic root systems. We confine the discussion to Cartan genera, though similar results apply to Gram genera and may be more convenient in
practice. Throughout this section, Φ is a (possibly reducible) crystallographic root
system.
7.1. Standard genera of irreducible type. Choose for each type Xn of finite irreducible crystallographic root system a standard genus σXn = (ai,j )i,j=1,...,n as follows. Let l be the maximal rank of a type Al standard parabolic subsystem of a simple system of type Xn (one has l = n − 1 unless Xn = An , when l = n, or Xn = F4 , when l = n − 2 = 2). We require first that σXn be a Cartan matrix of type Xn , that (ai,j )i,j=1,...,l = σl and that for each i = l + 1, . . . , n, there is some (necessarily unique) pi < i such that api ,i 6= 0. Second, we require that σXn be chosen so that (pl+1 , . . . , pn ) is lexicographically greatest among all possible Cartan matrices satisfying these requirements (i.e. the index of any branch vertex is taken as large as possible subject to the previous conditions). Finally, if σXn is not uniquely determined by the previous conditions (that is, if Xn is B2 = C2 , F4 or G2 ), we require that al,l+1 = −1 (i.e. the root corresponding to index l is longer than that corresponding to index l + 1). In types An with
n ≥ 1, Bn with n ≥ 2, Cn with n ≥ 3, Dn with n ≥ 4 and F4 , σXn is the Cartan
matrix of type Xn indexed exactly as in [2]; in other types the indexing differs.
7.2. The above conditions are chosen on heuristic grounds to make determination of RC ′′ (σXn ) as computationally simple as possible. They assure that if b =
(β1 , . . . , βn ) ∈ RC ′′ (σXn ) then at most one of −β2 , . . . , −βn is not a simple root
(except for Xn = F4 which requires a separate check, this follows from Proposition
6.3 since τl b = (β1 , . . . , βl ) ∈ RC ′′ (σAl ) and l ≥ n − 1).
If such a non-simple root amongst −β2 , . . . , −βn does not exist, then b must arise
as in Remark 6.5 and the possible such b may be determined by inspection of the
Dynkin diagram. It turns out that if there is a non-simple root in −β2 , . . . , −βn ,
then, with two types of exceptions, it is a dominant root for an irreducible standard
parabolic subsystem of the component of Φ containing β1 (and thus containing [b]).
(The exceptions occur when Xn = G2 and that component is of type G2 , or when
Xn = E6 and that component is of type E8 .) In fact, general results (depending on
Lemma 6.1) can be given which reduce the determination of RC ′′ (σXn ) in all cases to
inspection of Dynkin diagrams (alternatively, the results could be listed as tables).
Moreover, the action of Gσ , where σ = σXn can be explicitly determined (in most
cases it is either trivial or given by Proposition 5.18).
7.3. We give some examples. Suppose that Φ is of type Bn with n ≥ 3. Enumerate
the simple roots α1 , . . . , αn exactly as in [2], so α1 and αn are terminal in the Coxeter
graph and αn is short. Denote the highest long (resp., short) root of Φ as α0l (resp., α0s ). We write, for example, αsm,...,n (resp., αlm,...,n ) for the highest short (resp., long) root of the subsystem with simple roots {αm , . . . , αn }. Then for 2 ≤ m ≤ n one has
RC ′′ (σBm ) = {(α0l , −α2 , . . . , −αm−1 , −αsm,...,n )}.
Proposition 5.14 implies there is a single conjugacy class of subsystems of type Bm .
For 4 < m ≤ n, one has
RC ′′ (σDm ) = {bm := (α0l , −α2 , . . . , −αm−1 , −αlm−1,...,n )}.
For m = 4 < n, one has RC ′′ (σD4 ) = {b4 } ∪ {di := (α0l , −α2 , −αi , −α4−i ) | i = 1, 3}.
Observe that [b4 ], [d1 ] = [d3 ] are simple systems for the same subsystem of Φ (of
type D4 ). Hence they are conjugate. We conclude there is a unique conjugacy class
of simple systems of type Dm in Φ if 4 ≤ m ≤ n.
These results may also be seen by (or used in) calculating the dot actions. For example, consider the dot action of G := GσD4 = h (1, 3), (3, 4) i ∼= Sym3 on RC ′′ (σD4 )
where 4 < n. One obviously has (3, 4) · d1 = d3 (noting (3, 4)d1 = d3 and using
Proposition 2.12(a)) and hence (3, 4) · b4 = b4 . A more tedious calculation shows
(1, 3) · d3 = b4 and that (1, 3) · d1 = d1 . This can also be seen more indirectly as
follows. By Propositions 2.12 and 5.18, (1, 3) · τ3 (u) = τ3 (u) for all u ∈ RC ′′ (σD4 ).
This implies {d3 , b4 } must be stable under the dot action of (1, 3). If the restricted
dot action of (1, 3) on this set is trivial, then (1, 3) would fix RC ′′ (σD4 ) pointwise.
But from above, G acts transitively on RC ′′ (σD4 ), so (1, 3) can’t act trivially.
Note that the dot action provides finer information than just the description of conjugacy classes. For example, one may determine from the above which diagram automorphisms of, say, [d1 ] = [d3 ] are induced by the action of an element of W .
Although the above results on conjugacy in classical type are well known, the same
techniques can be applied in general to determine even by hand the conjugacy classes
of irreducible subsystems for Φ of exceptional type. We conclude by indicating how
the methods can be extended to the study of possibly reducible subsystems.
7.4. Standard genera of reducible type. The next Lemma permits some reductions in the study of conjugacy of simple subsystems Γ of Φ to the case when both
Φ and Γ are irreducible.
7.5. Lemma. Let a := (α1 , . . . , αn ) ∈ Φn where n ≥ 1.
(a) Let Ψ be a union of components of Φ such that [a] ⊆ Ψ and let W ′ := WΨ . Then a ∈ CW(n) if and only if a ∈ CW ′(n) .
(b) Suppose that 1 ≤ m < n and that h αi , αj i = 0 for all 1 ≤ i ≤ m and m + 1 ≤ j ≤ n. Suppose also that for each 1 < j ≤ m, there is some i < j with h αi , αj i 6= 0. Let W ′ := Wα,m , a′ := (α1 , . . . , αm ) and a′′ := (αm+1 , . . . , αn ). Let Ψ be the component of Φ containing α1 . Then a ∈ CW(n) if and only if a′ ∈ CWΨ(m) and a′′ ∈ CW ′(n−m) .
(c) In (b), [a] is a simple subsystem of Φ if and only if [a′ ] is a simple subsystem of Ψ and [a′′ ] is a simple subsystem of ΦW ′ .
Proof. Note that W ′a,i = W ′ ∩ Wa,i for i = 1, . . . , n. Since Ψ ⊥ (Φ \ Ψ), it follows that CW ′a,i ∩ RΨ = CWa,i ∩ RΨ. By the definitions, this implies (a). Part (b) follows from (2.1.3) and (a), and (c) holds since [a] = [a′ ] ⊔ [a′′ ] with [a′ ] ⊥ [a′′ ].
7.6. We refer to the genera σAn = σn with n ≥ 1, σBn with n ≥ 3, σCn with n ≥ 2,
σDn with n ≥ 4, σEn with n = 6, 7 or 8, σF4 or σG2 as the standard irreducible genera.
If σ is one of these standard irreducible genera, of rank n, say, and b = (β1 , . . . , βn ) ∈
RC ′′ (σ) then [b] is an irreducible simple system and hence it is entirely contained
in one component Ψ of Φ. In fact, b ∈ SC ′′ (σ) ∩ Ψn . The stabiliser of b in W is
the standard parabolic subgroup generated by Π ∩ [b]⊥ = (Π ∩ Ψ ∩ [b]⊥ ) ⊔ (Π \ Ψ),
which may be readily determined by inspection of Dynkin diagrams since (with two
types of exceptions which may be easily dealt with), each root β ∈ [b] is, up to sign,
a dominant root for an irreducible subsystem of Φ.
7.7. Given genera ρi of ranks ni for i = 1, . . . , k, define the genus ρ = (ρ1 , . . . , ρk ) of rank n = n1 + · · · + nk as follows. Let Ni := n1 + . . . + ni for i = 0, . . . , k. Say that b := (β1 , . . . , βn ) ∈ Φn is of genus ρ if
b := (β1 , . . . , βn ) ∈ Φn is of genus ρ if
(i) For i = 1, . . . , k, (βNi−1 +1 , . . . , βNi ) is of genus ρi .
(ii) For 1 ≤ i < j ≤ k and p, q ∈ N with Ni−1 + 1 ≤ p ≤ Ni and Nj−1 + 1 ≤ q ≤
Nj , one has h βp , βq i = 0.
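To illustrate the definition in a small case (with Cartan genera; the particular constituents chosen here are just for illustration): take k = 2, ρ1 = σ2 (the standard genus of Cartan type A2 ) and ρ2 = σ1 , so N0 = 0, N1 = 2, N2 = 3. Then b = (β1 , β2 , β3 ) ∈ Φ3 is of genus (σ2 , σ1 ) precisely when h β1∨ , β2 i = h β2∨ , β1 i = −1 and h β1 , β3 i = h β2 , β3 i = 0, i.e. when the Cartan matrix (h βi∨ , βj i)i,j=1,2,3 is the block diagonal matrix with diagonal blocks σ2 and σ1 .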
7.8. Fix an arbitrary total order ≤ of the standard genera (e.g. order them by
decreasing rank, with genera of equal rank ordered in the order in which they were
listed at the start of 7.6). We say that a genus ρ is standard if it is of the form
ρ = (ρ1 , . . . , ρk ) where each ρi is a standard irreducible genus and ρ1 ≤ ρ2 ≤ . . . ≤ ρk .
Then for every isomorphism type X of crystallographic root system of rank n, there
is a unique standard genus σX of that type. Using Lemma 7.5, for any root system
Φ, the elements b of RC ′′ (σX ) can be readily determined. Every unordered simple
subsystem of Φ of type X is conjugate to one or more of the subsystems [b] for
b ∈ RC ′′ (σX ). The determination of the conjugacy classes of simple systems of
type X then reduces to the description of the action of GσX on RC ′′ (σX ); this
is trivial in many cases (e.g. when |RC ′′ (σX )| = 1 or |GσX | = 1) but can require
significant computational effort (if calculated by hand) in other situations.
7.9. Conclusion. The process outlined above provides an alternative procedure to
the well known one (see e.g. [6]) which is based on the results of Borel and de
Siebenthal for determining the isomorphism classes of unordered simple subsystems
of Φ. However, the results here include additional information such as explicit representatives of the simple subsystems, which can be used in the study of more refined
questions such as conjugacy of subsystems and whether an isomorphism between two
subsystems can be realised as a composite of a specified (diagram) automorphism
of Φ with the action of an element of W . The classification of conjugacy classes
of simple subsystems has been completed computationally; for instance, it can be
found in [5] (see also [14]). In classical types, even more refined information is readily accessible (e.g. from the explicit list in [8] of all simple subsystems contained in
the positive roots together with standard descriptions of the actions of the groups
involving (signed) permutations). However, the techniques discussed in this paper
provide a uniform conceptual approach, applicable to all types. Moreover their geometric underpinning, our results on fundamental domains and stratification, have
more general applicability.
References
[1] Anders Björner, Michel Las Vergnas, Bernd Sturmfels, Neil White, and Günter M. Ziegler.
Oriented matroids, volume 46 of Encyclopedia of Mathematics and its Applications. Cambridge
University Press, Cambridge, 1999.
[2] N. Bourbaki. Éléments de mathématique. Fasc. XXXIV. Groupes et algèbres de Lie. Chapitre
IV: Groupes de Coxeter et systèmes de Tits. Chapitre V: Groupes engendrés par des réflexions.
Chapitre VI: systèmes de racines. Actualités Scientifiques et Industrielles, No. 1337. Hermann,
Paris, 1968.
[3] Roger W. Carter. Simple groups of Lie type. Wiley Classics Library. John Wiley & Sons Inc.,
New York, 1989.
[4] C. W. Curtis and G. I. Lehrer. A new proof of a theorem of Solomon-Tits. Proc. Amer. Math.
Soc., 85(2):154–156, 1982.
[5] J. Matthew Douglass, Götz Pfeiffer, and Gerhard Röhrle. On reflection subgroups of finite
Coxeter groups. Comm. Algebra, 41(7):2574–2592, 2013.
[6] M. J. Dyer and G. I. Lehrer. Reflection subgroups of finite and affine Weyl groups. Trans.
Amer. Math. Soc., 363(11):5971–6005, 2011.
[7] M. J. Dyer and G. I. Lehrer. On Oshima’s lemma on orbits of parabolic subgroups of finite
root systems. In preparation, 2017.
[8] Matthew Dyer. Reflection subgroups of Coxeter systems. J. Algebra, 135(1):57–73, 1990.
[9] James E. Humphreys. Reflection groups and Coxeter groups, volume 29 of Cambridge Studies
in Advanced Mathematics. Cambridge University Press, Cambridge, 1990.
[10] Victor G. Kac. Infinite-dimensional Lie algebras. Cambridge University Press, Cambridge,
1990.
[11] G. I. Lehrer. The spherical building and regular semisimple elements. Bull. Austral. Math.
Soc., 27(3):361–379, 1983.
[12] Gustav I. Lehrer and Donald E. Taylor. Unitary reflection groups, volume 20 of Australian
Mathematical Society Lecture Series. Cambridge University Press, Cambridge, 2009.
[13] Hiroshi Oda and Toshio Oshima. Minimal polynomials and annihilators of generalized Verma
modules of the scalar type. J. Lie Theory, 16(1):155–219, 2006.
[14] Toshio Oshima. A classification of subsystems of a root system. arXiv:math/0611904
[math.RT], 2006.
[15] Robert Steinberg. Lectures on Chevalley groups. Yale University, New Haven, Conn., 1968.
Notes prepared by John Faulkner and Robert Wilson.
[16] D. E. Taylor. On reflection subgroups of unitary reflection groups. Unpublished Manuscript,
2011.
Department of Mathematics, 255 Hurley Building, University of Notre Dame,
Notre Dame, Indiana 46556, U.S.A.
E-mail address: [email protected]
Department of Mathematics, University of Sydney, Sydney. NSW. 2006
E-mail address: [email protected]
COUNTING CONJUGACY CLASSES IN Out(FN )
arXiv:1707.07095v1 [] 22 Jul 2017
MICHAEL HULL AND ILYA KAPOVICH
Abstract. We show that if a f.g. group G has a non-elementary WPD action on a hyperbolic
metric space X, then the number of G-conjugacy classes of X-loxodromic elements of G coming
from a ball of radius R in the Cayley graph of G grows exponentially in R. As an application we
prove that for N ≥ 3 the number of distinct Out(FN )-conjugacy classes of fully irreducibles ϕ from
an R-ball in the Cayley graph of Out(FN ) with log λ(ϕ) on the order of R grows exponentially in
R.
1. Introduction
The study of the growth of the number of periodic orbits of a dynamical system is an important
theme in dynamics and geometry. A classic and still incredibly influential result of Margulis [19]
computes the precise asymptotics of the number of closed geodesics of length ≤ R (that is, of
periodic orbits of geodesic flow of length ≤ R) for a compact hyperbolic manifold. A recent famous
result of Eskin and Mirzakhani [11], which served as a motivation for this note, shows that for
the moduli space Mg of a closed oriented surface Sg of genus g ≥ 2, the number N (R) of closed
hR
Teichmuller geodesics in Mg of length ≤ R grows as N (R) ∼ ehR as R → ∞ where h = 6g −6. Note
that in the context of a group action, counting closed geodesics amounts to counting conjugacy
classes of elements rather than counting elements displacing a basepoint by distance ≤ R. The
problems about counting conjugacy classes with various metric restrictions are harder than those
about counting group elements and many group-theoretic tricks (e.g. embeddings of free subgroups
or of free subsemigroups) do not, a priori, give any useful information about the growth of the
number of conjugacy classes. In the context of the Eskin-Mirzakhani result mentioned above, a
closed Teichmuller geodesic of length ≤ R in Mg comes from an axis L(ϕ) in the Teichmüller
space Tg of a pseudo-Anosov element ϕ ∈ Mod(Sg ) with translation length τ (ϕ) ≤ R. Note that τ (ϕ) = log λ(ϕ), where λ(ϕ) is the dilatation of ϕ. Thus N (R) is the number of Mod(Sg )-conjugacy classes of pseudo-Anosov elements ϕ ∈ Mod(Sg ) with log λ(ϕ) ≤ R, where Mod(Sg ) is the mapping
class group.
In this note we study a version of this question for Out(FN ) where N ≥ 3. The main analog of
being pseudo-Anosov in the Out(FN ) context is the notion of a fully irreducible element. Every fully
irreducible element ϕ ∈ Out(FN ) has a well-defined stretch factor λ(ϕ) > 1 (see [5]) which is an
invariant of the Out(FN )-conjugacy class of ϕ. For specific ϕ one can compute λ(ϕ) via train track
methods, but counting the number of distinct λ(ϕ) with log(λ(ϕ)) ≤ R (where ϕ ∈ Out(FN ) is
fully irreducible) appears to be an unapproachable task. Other Out(FN )-conjugacy invariants such
as the index, the index list and the ideal Whitehead graph of fully irreducibles [6, 7, 14, 21], also do
not appear to be well suited for counting problems. Unlike the Teichmüller space, the Outer space
2010 Mathematics Subject Classification. Primary 20F65, Secondary 57M, 37B, 37D.
The second author was supported by the individual NSF grants DMS-1405146 and DMS-1710868. Both authors
acknowledge the support of the conference grant DMS-1719710 “Conference on Groups and Computation”.
CVN does not have a nice local analytic/geometric structure similar to the Teichmuller geodesic
flow. Moreover, as we explain in Remark 3.2 below, it is reasonable to expect that (for N ≥ 3)
the number of Out(FN )-conjugacy classes of fully irreducibles ϕ ∈ Out(FN ) with log λ(ϕ) probably
grows as a double exponential in R (rather than as a single exponential in R, as in the case of Mg ).
Therefore, to get an exponential growth of the number of conjugacy classes one needs to consider
more restricted context, namely where the elements come from an R-ball in the Cayley graph of
Out(FN ).
Here we obtain:
Theorem 1.1. Let N ≥ 3. Let S be a finite generating set for Out(FN ). There exist constants
σ > 1 and C2 > C1 > 0, R0 ≥ 1 such that the following holds. For R ≥ 0 let BR be the ball of radius
R in the Cayley graph of Out(FN ) with respect to S.
Denote by cR the number of distinct Out(FN )-conjugacy classes of fully irreducible elements
ϕ ∈ BR such that C1 R ≤ log λ(ϕ) ≤ C2 R. Then
cR ≥ σ R
for all R ≥ R0 .
In the process of the proof of Theorem 1.1 we establish the following general result:
Theorem 1.2. Suppose G has a cobounded WPD action on a hyperbolic metric space X and L is
a non-elementary subgroup of G for this action. Let S be a generating set of G. For R ≥ 1 let bR
be the number of distinct G-conjugacy classes of elements g ∈ L that act loxodromically on X
and such that |g|S ≤ R. Then there exist R0 ≥ 1 and c > 1 such that for all R ≥ R0
bR ≥ cR .
This statement is mostly relevant in the case where G is finitely generated and S is finite, but the
conclusion of Theorem 1.2 is not obvious even if S is infinite. Theorem 1.2 is a generalization of [16,
Theorem 1.1], which states (under a different but equivalent hypothesis, see [20]) that such G has
exponential conjugacy growth. The proofs of Theorem 1.2 and [16, Theorem 1.1] are similar and are
both based on the theory of hyperbolically embedded subgroups developed in [9], and specifically
the construction of virtually free hyperbolically embedded subgroups in [9, Theorem 6.14].
Both Theorem 1.1 and Theorem 1.2 are derived using:
Theorem 1.3. Suppose G has a cobounded WPD action on a hyperbolic metric space X and L
is a non-elementary subgroup of G for this action. Then for every n ≥ 2 there exists a subgroup
H ≤ L and a finite group K ≤ G such that:
(1) H ∼= Fn .
(2) H × K ֒→h G.
(3) The orbit map H → X is a quasi-isometric embedding.
In particular, every nontrivial element of H is loxodromic with respect to the action on X and two elements
of H are conjugate in G if and only if they are conjugate in H.
Here for a subgroup A of a group G writing A ֒→h G means that A is hyperbolically embedded
in G.
The proof of Theorem 1.1 also uses, as an essential ingredient, a result of Dowdall and Taylor
[10, Theorem 4.1] about quasigeodesics in the Outer spaces and the free factor complex. Note
that it is still not known whether the action of Out(FN ) on the free factor complex is acylindrical,
and, in a sense, the use of Theorem 1.2 provides a partial workaround here. Note that in the
proof of Theorem 1.1 instead of using Theorem 1.3 we could have used an argument about stable
subgroups having finite width. It is proved in [2] that convex cocompact (that is, finitely generated
and with the orbit map to the free factor complex being a quasi-isometric embedding) subgroups
of Out(FN ) are stable in Out(FN ). In turn, it is proved in [1] that if H is a stable subgroup of a
group G, then H has finite width in G (see [1] for the relevant definitions). Having finite width is a
property sufficiently close to being malnormal to allow counting arguments with conjugacy classes
to go through. The proof given here, based on using Theorem 1.3 above, is different in flavor and
proceeds from rather general assumptions. Note that in the conclusion of Theorem 1.3 the fact
that H × K ֒→h G implies that H and H × K are stable in G.
Another possible approach to counting conjugacy classes involves the notion of “statistically
convex cocompact actions” recently introduced and studied by Yang, see [22, 23] (particularly
[23, Theorem B] about genericity of conjugacy classes of strongly contracting elements). However,
Yang’s results only apply to actions on proper geodesic metric spaces with finite critical exponent
for the action, such as, for example, the action of the mapping class group on the Teichmüller space.
For essentially the same reasons as explained in Remark 3.2 below, the action of Out(FN ) (where
N ≥ 3) on CVN , with either the asymmetric or symmetrized Lipschitz metric, has infinite critical
exponent. Still, it is possible that the statistical convex cocompactness methods may be applicable
to the actions on CVN of some interesting subgroups of Out(FN ).
The first author would like to thank the organizers of the conference “Groups and Computation”
at Stevens Institute of Technology, June 2017. The second author is grateful to Spencer Dowdall,
Samuel Taylor, Wenyuan Yang and Paul Schupp for helpful conversations.
2. Conjugacy classes of loxodromics for WPD actions
We assume throughout that all metric spaces under consideration are geodesic and all group
actions on metric spaces are by isometries. Given a generating set A of a group G, we let Cay(G, A)
denote the Cayley graph of G with respect to A. In order to directly apply results from [9] it is
more convenient to consider actions on Cayley graphs. By the well known Milnor-Svarc Lemma
this is equivalent to considering cobounded actions.
Lemma 2.1 (Milnor-Svarc). If G acts coboundedly on a geodesic metric space X, then there exists
A ⊆ G such that Cay(G, A) is G-equivariantly quasi-isometric to X.
For a subgroup H ≤ G and a subset S ⊆ G, we use H ֒→h (G, S) to denote that H is hyperbolically
embedded in G with respect to S, or simply H ֒→h G if H ֒→h (G, S) for some S ⊆ G. We refer to
[9] for the definition of a hyperbolically embedded subgroup. The only property of a hyperbolically
embedded subgroup H that we use is that H is almost malnormal, that is for g ∈ G \ H, the
intersection of H and H g is finite [9, Proposition 4.33]. Note that this implies that any two infinite
order elements of H are conjugate in G if and only if they are conjugate in H.
For a metric space X and an isometry g of X, the asymptotic translation length ||g||X is defined as ||g||X = limi→∞ d(g i x, x)/i, where x ∈ X. It is well-known that this limit always exists and is independent of x ∈ X. If ||g||X > 0 then g is called loxodromic. For a group G acting on X, a loxodromic element is called a WPD element if for all ε > 0 and all x ∈ X, there exists N ∈ N such that
|{h ∈ G | d(x, hx) < ε, d(g N x, hg N x) < ε}| < ∞.
We say the action of G on X is WPD if every loxodromic element is a WPD element.
Now we fix a subset A ⊆ G such that Cay(G, A) is hyperbolic and the action of G on Cay(G, A) is
WPD. We say g ∈ G is loxodromic if it is loxodromic with respect to the action of G on Cay(G, A).
Each such element is contained in a unique, maximal virtually cyclic subgroup E(g) [9, Lemma
6.5].
Lemma 2.2. [15, Corollary 3.17] If g1 , ..., gn are non-commensurable loxodromic elements, then
{E(g1 ), ..., E(gn )} ֒→h (G, A).
A subgroup L ≤ G is called non-elementary if L contains two non-commensurable loxodromic
elements. Let KG (L) denote the maximal finite subgroup of G normalized by L. When L is
non-elementary, this subgroup is well-defined by [15, Lemma 5.5].
The following lemma was proved in [15] under the assumption that the action is acylindrical,
but the proof only uses that the action is WPD.
Lemma 2.3. [15, Lemma 5.6] Let L be a non-elementary subgroup of G. Then there exist non-commensurable loxodromic elements g1 , ..., gn contained in L such that E(gi ) = hgi i × KG (L).
Proof of Theorem 1.3. First we note that (3) implies that every nontrivial element of H is loxodromic with
respect to the action on X. Also, the fact that two elements of H are conjugate in G if and only
if they are conjugate in H follows from the fact that H × K is almost malnormal in G and K acts
trivially on H by conjugation.
We use the construction from [9, Theorem 6.14]. As in [9, Theorem 6.14], we let n = 2 since the construction from this case can be easily modified for any n.
By Lemma 2.1, we can assume X = Cay(G, A) for some A ⊆ G. Let g1 , ..., g6 be elements of
L given by Lemma 2.3. Then each E(gi ) = hgi i × KG (L) and furthermore {E(g1 ), ..., E(g6 )} ֒→h
(G, A) by Lemma 2.2. Let
E = ⋃i=1,...,6 E(gi ) \ {1}.
Let H = hx, yi where x = g1n g2n g3n and y = g4n g5n g6n for sufficiently large n. It is shown in [9] that x
and y generate a free subgroup of G and this subgroup is quasi-convex in Cay(G, A ∪ E). Hence H
(with the natural word metric) is quasi-isometrically embedded in Cay(G, A∪E), and since the map
Cay(G, A) → Cay(G, A ∪ E) is 1-Lipschitz, H is also quasi-isometrically embedded in Cay(G, A).
Let K = KG (L). Since x and y both commute with K, hH, Ki ∼= H × K. Finally, we can apply [9, Theorem 4.42] to get that H × K ֒→h G. Verifying the assumptions of [9, Theorem 4.42] is identical to the proof of [9, Theorem 6.14].
Note that Theorem 1.2 is an immediate consequence of Theorem 1.3.
3. The case of Out(FN ).
We assume familiarity of the reader with the basics related to Out(FN ) and Outer space. For
the background information on these topics we refer the reader to [4, 5, 8, 12].
In what follows we assume that N ≥ 2 is an integer, CVN is the (volume-one normalized) Culler–Vogtmann Outer space, FN is the free factor complex for FN , dC is the asymmetric Lipschitz metric on CVN and dsym C is the symmetrized Lipschitz metric on CVN . When we talk about the Hausdorff
distance dHaus in CVN , we always mean the Hausdorff distance with respect to dsym C . For K ≥ 1,
by a K-quasigeodesic in CVN we mean a function γ : I → CVN (where I ⊆ R is an interval) such that for all s, t ∈ I, s ≤ t, we have
(1/K)(t − s) − K ≤ dC (γ(s), γ(t)) ≤ K(t − s) + K.
For ǫ > 0 we denote by CVN,ǫ the ǫ-thick part of CVN .
We recall a portion of one of the main technical results of Dowdall and Taylor, [10, Theorem
4.1]:
Proposition 3.1. Let K ≥ 1 and let γ : R → CVN be a K-quasigeodesic such that its projection
π ◦ γ : R → FN is also a K-quasigeodesic. There exist constants D > 0, ǫ > 0, depending only on
K and N , such that the following holds:
If ρ : R → CVN is any geodesic with the same endpoints as γ then:
(1) We have γ(R), ρ(R) ⊂ CVN,ǫ .
(2) We have dHaus (γ(R), ρ(R)) ≤ D.
Here saying that γ and ρ have the same endpoints means that supt∈R dsym C (γ(t), ρ(t)) < ∞.
Proof of Theorem 1.1. By [4], the action of Out(FN ) on FN satisfies the hypothesis of Theorem 1.3.
Let H be the subgroup provided by Theorem 1.3 with L = Out(FN ).
We fix a free basis A = {a, b} for the free group H, and let dA be the corresponding word metric
on H.
Note that the assumptions on H imply that every nontrivial element of H is fully irreducible.
Moreover, if we pick a base-point p in FN , then there is K ≥ 1 such that the image of every geodesic
in the Cayley graph Cay(H, A) in FN under the orbit map is a (parameterized) K-quasigeodesic.
Pick a basepoint G0 ∈ CVN . Since the projection π : (CVN , dC ) → FN is coarsely Lipschitz [4],
and since the orbit map H → FN is a quasi-isometric embedding, it follows that the orbit map
(H, dA ) → (CVN , dC ), u 7→ uG0 is a K1 -quasi-isometric embedding for some K1 ≥ 1. Moreover,
the image of this orbit map lives in an ǫ0 -thick part of CVN (where ǫ0 is the injectivity radius of
G0 ). Since on CVN,ǫ0 the metrics dC and dsym C are bi-Lipschitz equivalent, it follows that the orbit map (H, dA ) → (CVN , dsym C ), u 7→ uG0 is a K2 -quasi-isometric embedding for some K2 ≥ 1. For
every c ∈ A±1 fix a dC -geodesic τc from G0 to cG0 .
Now let γ : I → Cay(H, A) be a geodesic such that γ −1 (H) = I ∩ Z and such that the endpoints
of I (if any) are integers. We then define a path γ̄ : I → CVN as follows. Whenever n ∈ Z is such that [n, n + 1] ⊆ I, then γ(n) = g and γ(n + 1) = gc for some c ∈ A±1 . In this case we define γ̄|[n,n+1] to be gτc . Then for every geodesic γ : I → Cay(H, A) as above the path γ̄ : I → CVN is K3 -quasigeodesic, with respect to both dC and dsym C , for some K3 ≥ 1 independent of γ. Moreover, γ̄(I) ⊂ CVN,ǫ1 for some ǫ1 > 0 independent of γ.
Let w be a cyclically reduced word of length n ≥ 1 in H. Consider the bi-infinite w−1 -periodic
geodesic γ : R → Cay(H, A) with γ(0) = 1 and γ(n) = w−1 . Thus the path γ̄ : R → CVN is K3 -quasigeodesic, with respect to both dC and dsym C , and γ̄(R) ⊂ CVN,ǫ1 . Since 1 6= w ∈ H,
it follows that w is fully irreducible as an element of Out(FN ). Hence w can be represented by
an expanding irreducible train-track map f : G → G with the Perron-Frobenius eigenvalue λ(f )
equal to the stretch factor λ(w) of the outer automorphism w ∈ Out(FN ). There exists a volume-one “eigenmetric” df on G with respect to which f is a local λ(f )-homothety. Then, if we view
(G, df ) as a point of CVN , then dC (G, Gw) = log λ(w). Moreover, in this case we can construct
a w-periodic dC -geodesic folding line ρ : R → CVN with the property that for any integer i,
ρ(i) = Gwi , hence for integers i < j dC (Gwi , Gwj ) = (j − i) log λ(w). Thus, for any i > 0 we have
dC (G, w−i G) = i log λ(w).
The bi-infinite lines ρ and γ̄ are both w−1 -periodic (in the sense of the left action of w−1 ) and therefore supt∈R dsym C (γ̄(t), ρ(t)) < ∞.
Hence, by Proposition 3.1, there exist constants D > 0 and ǫ > 0 (independent of w) such that ρ ⊂ CVN,ǫ and that dHaus (ρ, γ̄) ≤ D. The fact that ρ ⊂ CVN,ǫ implies that ρ is a K4 -quasigeodesic with respect to dsym C for some constant K4 ≥ 1 independent of w. Now for the asymptotic translation length ||w||CV , where w is viewed as an isometry of (CVN , dsym C ), we get on one hand (using the line ρ) that
(1/K4 ) log λ(w−1 ) ≤ ||w||CV ≤ K4 log λ(w−1 )
and on the other hand (using the line γ̄) that
(1/K3 ) n ≤ ||w||CV ≤ K3 n.
Therefore
(1/(K3 K4 )) n ≤ log λ(w−1 ) ≤ K3 K4 n.
Recall also that, by a result of Handel and Mosher [13], there exists a constant M = M (N ) ≥ 1 such that for every fully irreducible ϕ ∈ Out(FN ) we have 1/M ≤ log λ(ϕ)/ log λ(ϕ−1 ) ≤ M . Therefore for w as above we have
(1/(K3 K4 M )) n ≤ log λ(w) ≤ K3 K4 M n.
Recall that A = {a, b} and that S is a finite generating set for Out(FN ). Put M ′ =
max{|a|S , |b|S }, so that for every freely reduced word w in H we have |w|S ≤ M ′ |w|A .
For R ≫ 1 put n = ⌊R/M ′ ⌋. The number of distinct H-conjugacy classes represented by cyclically reduced words w of length n is ≥ 2n for n big enough. Recall that two elements of H are conjugate in H if and only if they are conjugate in Out(FN ). Therefore we get
≥ 2n ≥ 2R/M ′ −1
distinct Out(FN )-conjugacy classes from such words w. As we have seen above, each such w gives
us a fully irreducible element of Out(FN ) with
(1/(K3 K4 M )) · (R/(2M ′ )) ≤ log λ(w) ≤ K3 K4 M n ≤ K3 K4 M · (R/M ′ ),
and the statement of Theorem 1.1 is verified.
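(For the reader's convenience, we recall the elementary estimate behind the count used above; this is a standard free-group computation and involves no additional hypotheses. There are at least 3^{n−1} cyclically reduced words of length n in H = F (a, b): the first letter may be chosen in 4 ways, each of the next n − 2 letters in 3 ways, and the final letter in at least 2 ways. Two cyclically reduced words are conjugate in a free group if and only if they are cyclic permutations of each other, so each H-conjugacy class contains at most n cyclically reduced words of length n. Hence the number of such classes is at least 3^{n−1}/n, which exceeds 2^n for all sufficiently large n.)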
Remark 3.2. As noted in the introduction, unlike in the mapping class group case, we expect
that for N ≥ 3 the number of Out(FN )-conjugacy classes of fully irreducibles ϕ ∈ Out(FN ) with
log λ(ϕ) ≤ R grows as a double exponential in R. A double exponential upper bound follows from
general Perron-Frobenius considerations. Every fully irreducible ϕ ∈ Out(FN ) can be represented
by an expanding irreducible train track map f : G → G, where G is a finite connected graph with
b1 (G) = N and with all vertices in G of degree ≤ 3. The number of possibilities for such G is finite
in terms of N . By [18, Proposition A.4], if f is as above and λ := λ(f ), then max mij ≤ kλk+1
(where k is the number of edges in G and M = (mij )ij is the transition matrix of f ). If log λ ≤ R,
we get max log mij ≤ log k + (k + 1)R and max mij ≤ ke(k+1)R . Thus we get exponentially many
(in terms of R) possibilities for the transition matrix M of f . Since for a given length L there
are exponentially many paths of length L in G, this yields an (a priori) double exponential upper
bound for the number of train track maps representing fully irreducible elements of Out(FN ) with
log λ ≤ R.
For the prospective double exponential lower bound we give the following explicit construction
for the case of F3 . Let w ∈ F (b, c) be a nontrivial positive word containing the subwords b2 , c2 ,
bc and cb. Consider the automorphism ϕw of F3 = F (a, b, c) defined as ϕw (a) = b, ϕw (b) = c,
ϕw (c) = aw(b, c). We can also view ϕw as a graph map fw : R3 → R3 where R3 is the 3-rose with
the petals marked by a, b, c. Then fw is an expanding irreducible train track map representing
ϕw . Moreover, a direct check shows that, under the assumptions made on w, the Whitehead graph
of fw is connected. Additionally, for a given n ≥ 1, “most” positive words of length n in F (b, c)
satisfy the above conditions and define fully irreducible automorphisms ϕw . To see this, observe
that the free-by-cyclic group Gw = F3 ⋊ϕw Z can be rewritten as a one-relator group:
Gw = ha, b, c, t|t−1 at = b, t−1 bt = c, t−1 ct = aw(b, c)i =
ha, t|t−3 at3 = aw(t−1 at, t−2 at2 )i.
Moreover, one can check that if w was a C ′ (1/20) word, then the above one-relator presentation
of Gw satisfies the C ′ (1/6) small cancellation condition, and therefore Gw is word-hyperbolic and
the automorphism ϕw ∈ Out(F3 ) is atoroidal. Since, as noted above, ϕw admits an expanding
irreducible train track map on the rose with connected Whitehead graph, a result of Kapovich [17]
implies that ϕw is fully irreducible. Moreover, if |w| = L, then it is not hard to check that log λ(fw ) =
log λ(ϕw ) grows like log L.
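To make the above construction concrete, here is one small illustrative instance (the particular word w below is our choice; any positive word with the stated properties would serve equally well). Take w = b2 cbc2 , which is positive and contains b2 , c2 , bc and cb as subwords. Then ϕw (a) = b, ϕw (b) = c, ϕw (c) = ab2 cbc2 , and the transition matrix of the rose map fw (counting how many times the fw -image of each petal crosses each petal; transposing the matrix, as some conventions do, does not change the eigenvalue) is, in LaTeX notation,
M_{f_w} \;=\; \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 3 \\ 0 & 1 & 3 \end{pmatrix},
with rows and columns indexed by a, b, c. Its characteristic polynomial is x^3 − 3x^2 − 3x − 1, whose Perron–Frobenius root is approximately 3.85, so log λ(fw ) ≈ 1.35 for this w of length |w| = 6.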
Since “random” positive words w ∈ F (b, c) are C ′ (1/20) and contain b2 , c2 , cb, bc as subwords, for
sufficiently large R ≥ 1, the above construction produces doubly exponentially many atoroidal fully
irreducible automorphisms ϕw with |w| = eR and log λ(ϕw ) on the order of R. We conjecture that
in fact most of these elements are pairwise non-conjugate in Out(F3 ) and that this method yields
doubly exponentially many fully irreducible elements ϕ of Out(F3 ) with log λ(ϕ) ≤ R. However,
verifying this conjecture appears to require some new techniques and ideas beyond the reach of this
paper.
References
[1] Y. Antolin, M. Mj, A. Sisto, and S. J Taylor, Intersection properties of stable subgroups and bounded cohomology,
preprint; arXiv:1612.07227
[2] T. Aougab, M. G. Durham, S. J. Taylor, Middle recurrence and pulling back stability, preprint; arXiv:1609.06698
[3] M. Bestvina, Geometry of outer space. Geometric group theory, 173–206, IAS/Park City Math. Ser., 21, Amer.
Math. Soc., Providence, RI, 2014
[4] M. Bestvina and M. Feighn, Hyperbolicity of the complex of free factors, Adv. Math. 256 (2014), 104–155
[5] M. Bestvina and M. Handel, Train tracks and automorphisms of free groups, Ann. of Math. 135 (1992), 1–51
[6] T. Coulbois and A. Hilion, Botany of irreducible automorphisms of free groups, Pacific J. Math. 256 (2012), no. 2
[7] T. Coulbois, and M. Lustig, Index realization for automorphisms of free groups. Illinois J. Math. 59 (2015), no.
4, 1111–1128
[8] M. Culler, K. Vogtmann, Moduli of graphs and automorphisms of free groups, Invent. Math. 84 (1986), no. 1.
91–119
[9] F. Dahmani, V. Guirardel, D. Osin, Hyperbolically embedded subgroups and rotating families in groups acting
on hyperbolic spaces, arXiv:1111.7048, to appear in Memoirs. Amer. Math. Soc.
[10] S. Dowdall and S. J. Taylor, Hyperbolic extensions of free groups, Geometry & Topology, to appear;
arXiv:1406.2567
[11] A. Eskin, M. Mirzakhani, Counting closed geodesics in moduli space. J. Mod. Dyn. 5 (2011), no. 1, 71–105
[12] S. Francaviglia and A. Martino, Metric properties of outer space, Publ. Mat. 55 (2011), no. 2, 433–473
[13] M. Handel, L. Mosher, The expansion factors of an outer automorphism and its inverse, Trans. Amer. Math.
Soc. 359 (2007), 3185–3208
[14] M. Handel and L. Mosher, Axes in outer space, Mem. Amer. Math. Soc. 213 (2011), no. 1004
[15] M. Hull, Small cancellation in acylindrically hyperbolic groups, Groups, Geom., & Dynam. 10 (2016), no. 4,
1077-1119
[16] M. Hull, D. Osin, Conjugacy growth of finitely generated groups. Adv. Math. 235 (2013), 361-389
[17] I. Kapovich, Algorithmic detectability of iwip automorphisms. Bull. Lond. Math. Soc. 46 (2014), no. 2, 279–290
[18] I. Kapovich and M. Bell, Detecting fully irreducible automorphisms: a polynomial time algorithm. Experimental
Mathematics, to appear; arXiv:1609.03820
[19] G. A. Margulis, On Some Aspects of the Theory of Anosov Systems, With a survey by Richard Sharp: Periodic orbits of hyperbolic flows, Translated from the Russian by Valentina Vladimirovna Szulikowska, Springer
Monographs in Mathematics, Springer-Verlag, Berlin, 2004
[20] D. Osin Acylindrically hyperbolic groups, Trans. Amer. Math. Soc. 368 (2016), no. 2, 851-888.
[21] C. Pfaff, Ideal Whitehead graphs in Out(Fr ). I. Some unachieved graphs. New York J. Math. 21 (2015), 417–463
[22] W. Yang, Statistically convex-cocompact actions of groups with contracting elements, preprint, arXiv:1612.03648
[23] W. Yang, Genericity of contracting elements in groups, preprint; arXiv:1707.06006
Department of Mathematics, PO Box 118105, University of Florida, Gainesville, FL 32611-8105
https://people.clas.ufl.edu/mbhull/
E-mail address: [email protected]
Department of Mathematics, University of Illinois at Urbana-Champaign, 1409 West Green Street,
Urbana, IL 61801, USA
http://www.math.uiuc.edu/~kapovich/
E-mail address: [email protected]
Semigroups of rectangular matrices under a sandwich operation
Igor Dolinka
Department of Mathematics and Informatics
University of Novi Sad, Trg Dositeja Obradovića 4, 21101 Novi Sad, Serbia
dockie @ dmi.uns.ac.rs
James East
Centre for Research in Mathematics; School of Computing, Engineering and Mathematics
arXiv:1503.03139v2 [] 15 Aug 2016
Western Sydney University, Locked Bag 1797, Penrith NSW 2751, Australia
J.East @ WesternSydney.edu.au
August 16, 2016
Abstract
Let Mmn = Mmn (F) denote the set of all m × n matrices over a field F, and fix some n × m
matrix A ∈ Mnm . An associative operation ? may be defined on Mmn by X ? Y = XAY for all X, Y ∈ Mmn , and the resulting sandwich semigroup is denoted MAmn = MAmn (F). These semigroups
are closely related to Munn rings, which are fundamental tools in the representation theory of finite semigroups. In this article, we study MAmn as well as its subsemigroups Reg(MAmn ) and EAmn (consisting of all regular elements and products of idempotents, respectively), as well as the ideals of Reg(MAmn ).
Among other results, we: characterise the regular elements; determine Green’s relations and preorders;
calculate the minimal number of matrices (or idempotent matrices, if applicable) required to generate each
semigroup we consider; and classify the isomorphisms between finite sandwich semigroups MAmn (F1 ) and MBkl (F2 ). Along the way, we develop a general theory of sandwich semigroups in a suitably defined class of partial semigroups related to Ehresmann-style “arrows only” categories; we hope this framework will be
useful in studies of sandwich semigroups in other categories. We note that all our results have applications
to the variants MAn of the full linear monoid Mn (in the case m = n), and to certain semigroups of linear
transformations of restricted range or kernel (in the case that rank(A) is equal to one of m, n).
Keywords: Matrix semigroups, sandwich semigroups, variant semigroups, idempotents, generators,
rank, idempotent rank, Munn rings, generalised matrix algebras.
MSC: 15A30; 20M20; 20M10; 20M17.
1 Introduction
In the classical representation theory of finite semigroups, a key role is played by the so-called Munn rings.
These are rings of m×n matrices (where m and n need not be equal) with the familiar addition operation but
with a sandwich multiplication defined by X ?Y = XAY , where A is a fixed n×m matrix. These rings are so
named, because of Douglas Munn’s 1955 paper [68], in which it was shown that: (1) the representation theory
of a finite semigroup is determined by the representations of certain completely 0-simple semigroups arising
from its ideal structure, and (2) the semigroup algebra of such a finite completely 0-simple semigroup is
isomorphic to an appropriate Munn ring over the group algebra of a naturally associated maximal subgroup;
conditions were also given for such a Munn ring to be semisimple. (Here, the sandwich matrix A arises from
the celebrated Rees structure theorem [81] for completely 0-simple semigroups.) Since their introduction
in [68], Munn rings have been studied by numerous authors, and continue to heavily influence the theory of
semigroup representations: for classical studies, see [12–14,37,54,62–64,68–70,76]; for modern accounts, see
for example [1, 29, 47, 75, 78, 79, 85, 86], and especially the monographs [73, 74, 77, 82, 87].
In the same year as Munn’s article [68] was published, William Brown introduced the so-called generalised
matrix algebras [5], motivated by a connection with classical groups [3, 6, 95]. These generalised matrix
algebras are again rings of m × n matrices over a field, with multiplication determined by a fixed n × m
sandwich matrix. Whereas the sandwich matrix in a Munn ring is taken to be the structure matrix of a
completely 0-simple semigroup (and so has a certain prescribed form), Brown considered arbitrary sandwich
matrices. As with Munn rings, these generalised matrix algebras have influenced representation theory to
this day, and have been studied by numerous authors; see for example [21, 30, 35, 51, 52, 55, 92, 96, 97].
Shortly after the Munn and Brown articles [5, 68] appeared, Evgeny Lyapin’s early monograph on semigroups [57] was published. In [57, Chapter VII], we find a number of interesting semigroup constructions,
including the following. Let V and W be arbitrary non-empty sets, and let θ : W → V be an arbitrary (but
fixed) function. Then the set T (V, W ) of all functions V → W forms a semigroup, denoted T θ (V, W ), under
the operation ?θ defined by f ?θ g = f ◦ θ ◦ g. If it is assumed that V and W are vector spaces (over the
same field) and θ a linear transformation, then the subset L(V, W ) ⊆ T (V, W ) of all linear transformations
V → W is a subsemigroup of T θ (V, W ). This subsemigroup, denoted Lθ (V, W ) and referred to as a linear
sandwich semigroup, is clearly isomorphic to the underlying multiplicative semigroup of an associated generalised matrix algebra [5]. As noted above, the addition on a generalised matrix algebra is just the usual
operation, so these linear sandwich semigroups capture and isolate (in a sense) the more complex of the
operations on the algebras.
The sandwich semigroups T θ (V, W ) were first investigated in a series of articles by Magill and Subbiah
[58–60], and more recent studies may be found in [7, 65, 90, 93]; most of these address structural concerns
such as (von Neumann) regularity, Green’s relations, ideals, classification up to isomorphism, and so on.
The linear sandwich semigroups Lθ (V, W ) have received less attention, though they have also been studied
by a number of authors [9, 48, 49, 66], with studies again focusing on basic structural properties. This is
regrettable, because these semigroups display a great deal of algebraic and combinatorial charm, as we
hope to show in the current article. It is therefore our purpose to carry out a systematic investigation of
the linear sandwich semigroups, bringing their study up to date, and focusing on modern themes, especially
combinatorial invariant theory. As does Brown [5], we focus on the case that V and W are finite dimensional;
in fact, we study the equivalent sandwich semigroups M^A_mn = M^A_mn (F) consisting of all m × n matrices over the field F under the operation ?A defined by X ?A Y = XAY , where A is a fixed n × m matrix.
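As a quick concrete illustration (our own sketch, not from the article), the following Python snippet builds the sandwich product ?A on small matrices over Z5 and checks associativity on random triples; the helper names (mat_mul, rand_mat, sandwich) are ours.

```python
# Minimal sketch of the sandwich product X ?_A Y = XAY on M_mn over Z_5.
import random

p = 5          # the field F = Z_5
m, n = 2, 3    # X, Y, Z are m x n; the sandwich matrix A is n x m

def mat_mul(X, Y):
    """Matrix product over Z_p; X is a x b, Y is b x c."""
    b, c = len(Y), len(Y[0])
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(b)) % p
                       for j in range(c)) for i in range(len(X)))

def rand_mat(rows, cols):
    return tuple(tuple(random.randrange(p) for _ in range(cols)) for _ in range(rows))

A = rand_mat(n, m)                      # the fixed sandwich matrix A in M_nm

def sandwich(X, Y):
    """X ?_A Y = X A Y, an associative product on M_mn."""
    return mat_mul(mat_mul(X, A), Y)

for _ in range(100):
    X, Y, Z = rand_mat(m, n), rand_mat(m, n), rand_mat(m, n)
    assert sandwich(sandwich(X, Y), Z) == sandwich(X, sandwich(Y, Z))
print("associativity of ?_A checked on 100 random triples")
```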
We speculate that the difficulty (until now) of systematically investigating the linear sandwich semigroups
may be due to the lack of a consistent theoretical framework for studying sandwich semigroups in more
generality. In the case that V = W , the sets T (V, W ) and L(V, W ) are themselves semigroups (under
composition); these are the full transformation semigroup TV [23, 28, 32, 34, 42, 43, 45, 65] and the general
linear monoid LV [2, 15, 16, 18, 20, 25, 33, 53, 74, 80, 94], respectively. In turn, the semigroups T θ (V, V ) and
Lθ (V, V ) are special cases of the semigroup variant construction. The variant of a semigroup S with respect
to an element a ∈ S is the semigroup S a = (S, ?a ), with operation defined by x ?a y = xay. Variants
were first explicitly studied by Hickey in the 1980s [38, 39], though (as noted above) the idea goes back to
Lyapin’s monograph [57]; a more recent study may be found in [50]. The current authors developed the
general theory of variants further in [19], and then used this as a starting point to explore the variants of
the finite full transformation semigroups, obtaining a great deal of algebraic and combinatorial information
about these semigroups. Unfortunately, the theory of semigroup variants does not help with studying
the more general sandwich semigroups T θ (V, W ) and Lθ (V, W ), since the underlying sets T (V, W ) and
L(V, W ) are not even semigroups if V ≠ W . One of the main goals of the current article, therefore, is to
develop an appropriate general framework for working with arbitrary sandwich semigroups. Namely, if V
and W are objects in a (locally) small category C , and if θ ∈ Hom(W, V ) is some fixed morphism, then
the set Hom(V, W ) becomes a semigroup under the sandwich operation defined by f ?θ g = f ◦ θ ◦ g, for
f, g ∈ Hom(V, W ). (In the case that V = W and θ is the identity morphism, this construction reduces to
the usual endomorphism monoid End(V ).) The semigroups T θ (V, W ) and Lθ (V, W ) arise when C is the
category of sets (and mappings) or vector spaces (and linear transformations), respectively. In order to
develop a general theory of sandwich semigroups in such categories, we first explain how many important
semigroup theoretical techniques extend to the more general categorical setting; we note that there is only a
little overlap with the theory of Green’s relations in categories developed in [56], which focuses on issues more
relevant to representation theory. In order to avoid any confusion arising from terminology conflicts between
semigroup and category theory, rather than speak of (locally small) categories, we focus on the equivalently
defined class of partial semigroups, which are related to Ehresmann-style “arrows only” categories [24]. We
hope that the general theory we develop will prove to be a useful starting point for future studies of sandwich
semigroups in other categories.
The article is organised as follows. In Section 2, we develop a general theory of sandwich semigroups in partial
semigroups (i.e., locally finite categories), extending certain important semigroup theoretic notions (such as
Green’s relations, regularity and stability, the definitions of which are given in Section 2) to the more general
context. In Section 3, we gather results on the partial semigroup M = M(F) of all (finite dimensional)
matrices over the field F, mainly focusing on regularity, stability and Green’s relations, and we state some
well-known results on (idempotent) generation and ideals of the general linear monoids Mn . We begin our
investigation of the linear sandwich semigroups M^A_mn in Section 4, the main results of this section being: a characterisation of the regular elements (Proposition 4.3); a description of Green's relations (Theorem 4.5) and the ordering on D-classes (Propositions 4.6, 4.7 and 4.10); a classification of the isomorphism classes of sandwich semigroups over Mmn (Corollary 4.8); and the calculation of rank(M^A_mn ) (Theorems 4.12 and 4.14). (Recall that the rank of a semigroup S, denoted rank(S), is the minimum size of a generating set for S.) Section 5 explores the relationship between a sandwich semigroup M^A_mn and various (non-sandwich) matrix semigroups, the main structural results being Theorem 5.7 and Propositions 5.8 and 5.11. We then focus on the regular subsemigroup P = Reg(M^A_mn ) in Section 6, where we: calculate the size of P and various Green's classes (Proposition 6.2 and Theorem 6.4); classify the isomorphism classes of finite linear sandwich semigroups (Theorem 6.5); and calculate rank(P ) (Theorem 6.10). In Section 7, we investigate the idempotent generated subsemigroup E^A_mn of M^A_mn , where we: enumerate the idempotents of M^A_mn (Proposition 7.2); show that E^A_mn consists of P \ D and the idempotents from D, where D is the maximal D-class (Theorem 7.3); and calculate rank(E^A_mn ) and idrank(E^A_mn ), showing in particular that these are equal (Theorem 7.5). (The idempotent rank of an idempotent generated semigroup S, denoted idrank(S), is defined similarly to the rank, but with respect to idempotent generating sets for S.) Finally, in Section 8, we classify the proper ideals of P , showing that these are idempotent generated, and calculating their ranks and idempotent ranks, which are again equal (Theorem 8.1). We note that all our results have applications to the variants M^A_n of the full linear monoid Mn (in the case m = n), and to certain semigroups of linear transformations of restricted range or kernel (in the case that rank(A) is equal to one of m, n; see Remarks 4.2 and 5.3).
2 Sandwich semigroups from partial semigroups

Recall that our main interest is in the linear sandwich semigroups M^A_mn = M^A_mn (F). The underlying set of M^A_mn is Mmn , the set of all m × n matrices over the field F, which is not itself a semigroup (unless m = n). However, Mmn is contained in M, the set of all (finite dimensional) matrices over F. While M is still not a semigroup, it does have the structure of a (small) category. As we will see, in order to understand the linear sandwich semigroups M^A_mn , we need to move beyond just m × n (and n × m) matrices, and gain a fuller understanding of the whole category M. Some (but not all) of what we need to know about M is true in a larger class of categories, and more general structures we call partial semigroups, so we devote this section to the development of the general theory of these structures. We begin with the definitions.
Definition 2.1. A partial semigroup is a 5-tuple (S, ·, I, λ, ρ) consisting of a set S, a partial binary operation
(x, y) 7→ x · y (defined on some subset of S × S), a set I, and functions λ, ρ : S → I, such that, for all
x, y, z ∈ S,
(i) x · y is defined if and only if ρ(x) = λ(y),
(ii) if x · y is defined, then λ(x · y) = λ(x) and ρ(x · y) = ρ(y),
(iii) if x · y and y · z are defined, then (x · y) · z = x · (y · z).
We say that a partial semigroup (S, ·, I, λ, ρ) is monoidal if in addition to (i–iii),
(iv) there exists a function I → S : i 7→ ei such that, for all x ∈ S, x · eρ(x) = x = eλ(x) · x.
We say that a partial semigroup (S, ·, I, λ, ρ) is regular if in addition to (i–iii),
(v) for all x ∈ S, there exists y ∈ S such that x = x · y · x and y = y · x · y.
Remark 2.2. We note that conditions (i–iv) amount to one of several equivalent ways to define (small)
categories in an “arrows only” fashion. See for example Ehresmann’s monograph [24], and also [41] for a
historical discussion of the connections between category theory and (inverse) semigroup theory.
For a partial semigroup (S, ·, I, λ, ρ), and for i, j ∈ I, we write
Sij = {x ∈ S : λ(x) = i, ρ(x) = j}   and   Si = Sii .
So S = ∪_{i,j∈I} Sij . Note that if x ∈ S, then x · x is defined if and only if λ(x) = ρ(x). It follows that Si
is a semigroup with respect to the induced binary operation (the restriction of · to Si × Si ) for each i ∈ I,
but that Sij is not if i ≠ j. We will often slightly abuse notation and refer to “the partial semigroup S” if
the rest of the data (S, ·, I, λ, ρ) is clear from context. We also note that in what follows, we could allow S
and I to be classes (rather than insist on them being sets); but we would still require Sij to be a set for
each i, j ∈ I.
Note that, as is the case with semigroups, condition (v) is equivalent to the (ostensibly) weaker condition:
(v)′ for all x ∈ S, there exists z ∈ S such that x = x · z · x.
Indeed, with z as in (v)′, one easily checks that y = z · x · z satisfies the condition of (v).
If S is monoidal, then Si is a monoid with identity ei ∈ Si for each i. If S is not monoidal, then S may be
embedded in a monoidal partial semigroup S (1) as follows: for each i ∈ I we adjoin an element ei to Si and
declare that x · ei = x and ei · y = y for all x, y ∈ S with ρ(x) = i and λ(y) = i, if such an element ei ∈ Si
does not already exist. In particular, if S is monoidal, then S = S (1) .
Obviously any semigroup is a partial semigroup (with |I| = 1); in particular, all results we prove in this
section concerning partial semigroups hold for semigroups. A great number of non-semigroup examples
exist, but we will limit ourselves to describing just a few.
Example 2.3. As a trivial example, let {Si : i ∈ I} be any set of pairwise disjoint semigroups. Then S = ∪_{i∈I} Si is a partial semigroup where we define λ, ρ : S → I by λ(x) = ρ(x) = i for each i ∈ I and x ∈ Si , and x · y is defined if and only if x, y ∈ Si for some i, in which case x · y is just the product of x, y in Si . Note that this S is regular (resp., monoidal) if and only if each Si is regular (resp., a monoid).
Example 2.4. Let X be some set, and P(X ) = {A : A ⊆ X } the power set of X . The set TX =
{(B, f, A) : A, B ⊆ X , f is a function A → B} is a regular monoidal partial semigroup. We define I =
P(X ), and λ(B, f, A) = B and ρ(B, f, A) = A, with (D, g, C) · (B, f, A) defined if and only if B = C, in
which case (D, g, C) · (B, f, A) = (D, g ◦ f, A).
The previous example may be extended in a number of ways, by replacing functions f : A → B by other
objects such as binary relations [8, 91], partial functions [11, 88], partial bijections [10], block bijections [26],
partial braids [22], partitions [61], Brauer diagrams [3], etc., or by assuming the functions f : A → B preserve
some kind of algebraic or geometric structure on the sets A, B. The main example we will concentrate on
in this article is as follows.
Example 2.5. Let F be a field, and write M = M(F) for the set of all (finite dimensional, non-empty)
matrices over F. Then M has the structure of a regular monoidal partial semigroup. We take I = N =
{1, 2, 3, . . .} to be the set of all natural numbers and, for X ∈ M, we define λ(X) (resp., ρ(X)) to be the
number of rows (resp., columns) of X. For m, n ∈ N, Mmn = Mmn (F) denotes the set of all m × n matrices
over F, and forms a semigroup if and only if m = n. (Of course, M is isomorphic to a certain partial
semigroup of linear transformations; we will have more to say about this later.)
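To make Example 2.5 concrete, here is a small Python sketch (ours, written under the conventions just stated): λ counts rows, ρ counts columns, and the product X · Y is defined exactly when ρ(X) = λ(Y ).

```python
# The partial semigroup M(Z_3): the product is only defined when shapes match.
p = 3  # F = Z_3

def lam(X): return len(X)      # lambda(X) = number of rows
def rho(X): return len(X[0])   # rho(X)    = number of columns

def partial_product(X, Y):
    """X . Y over Z_p, defined if and only if rho(X) = lambda(Y)."""
    if rho(X) != lam(Y):
        return None            # product undefined
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(rho(X))) % p
                       for j in range(rho(Y))) for i in range(lam(X)))

X = ((1, 2, 0), (0, 1, 1))     # 2 x 3
Y = ((1, 0), (2, 1), (1, 1))   # 3 x 2
Z = ((1, 1, 1), (0, 2, 1))     # 2 x 3

print(partial_product(X, Y))   # defined: a 2 x 2 matrix
print(partial_product(X, Z))   # None: rho(X) = 3 but lambda(Z) = 2
```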
For the remainder of this section, we fix a partial semigroup (S, ·, I, λ, ρ), and we write xy for the product
x · y (whenever it is defined). Note that we may define a second partial binary operation • on S by
x•y =y·x
for each x, y ∈ S with ρ(y) = λ(x).
We see then that (S, •, I, ρ, λ) is a partial semigroup (note the swapping of λ and ρ), and we call this the
dual partial semigroup to (S, ·, I, λ, ρ). As is frequently the case in semigroup theory, this duality will allow
us to shorten several proofs.
Green’s relations and preorders are crucial tools in semigroup theory (for general background on semigroups,
see [40, 44]), and we will need to extend these to the partial semigroup setting. If x, y ∈ S, then we say
• x ≤R y if x = ya for some a ∈ S (1) ,
• x ≤L y if x = ay for some a ∈ S (1) ,
• x ≤J y if x = ayb for some a, b ∈ S (1) .
Note that if x ≤R y (resp., x ≤L y), then λ(x) = λ(y) (resp., ρ(x) = ρ(y)). Note also that if x ≤R y, then
ux ≤R uy for any u ∈ S with ρ(u) = λ(x); a dual statement holds for the ≤L relation. Finally, note that
the use of S (1) is merely for convenience since, for example, x ≤R y means that x = y or x = ya for some
a ∈ S. All three of the above relations are preorders (i.e., they are reflexive and transitive). If K is one
of R, L , J , we write K = ≤K ∩ ≥K for the equivalence relation on S induced by K . So, for example,
xRy if and only if x = ya and y = xb for some a, b ∈ S (1) . We also define equivalence relations
H = R ∩ L   and   D = R ∨ L .
(The join ε ∨ η of two equivalences ε and η is the transitive closure of ε ∪ η, and is itself an equivalence.) It
is easy to see that D ⊆ J . The duality mentioned above means that x ≤R y in (S, ·, I, λ, ρ) if and only if
x ≤L y in (S, •, I, ρ, λ), and so on.
Analogously to the definition for semigroups [83, Definition A.2.1], we say that the partial semigroup S is
stable if for all x, y ∈ S,
xJ xy ⇔ xRxy
and
xJ yx ⇔ xL yx.
The following simple but crucial observation is proved in analogous fashion to the corresponding results for
semigroups; see for example [44, Proposition 2.1.3] and [83, Corollary A.2.5].
Lemma 2.6. We have D = R ◦ L = L ◦ R. If S is stable, then D = J .
2
If x ∈ Sij and K is one of R, L , J , D, H , we write
[x]K = {y ∈ S : xK y}
and
Kx = [x]K ∩ Sij = {y ∈ Sij : xK y}.
We call [x]K (resp., Kx ) the K -class of x in S (resp., in Sij ). The next result is reminiscent of Green’s
Lemma, and may be proved in virtually identical fashion to [44, Lemma 2.2.1].
Lemma 2.7. Let x, y ∈ S.
(i) Suppose xRy, and that x = ya and y = xb where a, b ∈ S (1) . Then the maps [x]L → [y]L : w 7→ wb
and [y]L → [x]L : w 7→ wa are mutually inverse bijections. These maps restrict to mutually inverse
bijections [x]H → [y]H and [y]H → [x]H .
(ii) Suppose xL y, and that x = ay and y = bx where a, b ∈ S (1) . Then the maps [x]R → [y]R : w 7→ bw
and [y]R → [x]R : w 7→ aw are mutually inverse bijections. These maps restrict to mutually inverse
bijections [x]H → [y]H and [y]H → [x]H .
(iii) If xDy, then [x]R = [y]R , [x]L = [y]L and [x]H = [y]H .
2
Note that if x, y ∈ S are such that xH y, then λ(x) = λ(y) and ρ(x) = ρ(y). It follows that [x]H = Hx for
all x ∈ S.
Lemma 2.8. Let x, y ∈ Sij .
(i) Suppose xRy, and that x = ya and y = xb where a, b ∈ S (1) . Then the maps Lx → Ly : w 7→ wb and
Ly → Lx : w 7→ wa are mutually inverse bijections. These maps restrict to mutually inverse bijections
Hx → Hy and Hy → Hx .
(ii) Suppose xL y, and that x = ay and y = bx where a, b ∈ S (1) . Then the maps Rx → Ry : w 7→ bw and
Ry → Rx : w 7→ aw are mutually inverse bijections. These maps restrict to mutually inverse bijections
Hx → Hy and Hy → Hx .
(iii) If xDy, then |Rx | = |Ry |, |Lx | = |Ly | and |Hx | = |Hy |.
Proof. Suppose xRy, and that x = ya and y = xb where a, b ∈ S (1) . We first show that the map
f : Lx → S : w 7→ wb does indeed map Lx into Ly . With this in mind, let w ∈ Lx . We already know that
wb ∈ [y]L , by Lemma 2.7(i). Also, w = ux for some u ∈ S (1) , since wL x. Now, λ(wb) = λ(w) = i, and
also ρ(wb) = ρ(uxb) = ρ(uy) = ρ(y) = j, showing that wb ∈ [y]L ∩ Sij = Ly , as required. By symmetry, it
follows that g : Ly → S : w 7→ wa maps Ly into Lx . By Lemma 2.7(i), we see that f ◦ g and g ◦ f are the
identity maps on their respective domains. This completes the proof of (i).
Next, note that (ii) follows from (i) by duality. Now suppose xDy. So xRzL y for some z ∈ S. Since xRz,
it follows that λ(z) = λ(x) = i; similarly, ρ(z) = j, so in fact, z ∈ Sij . In particular, Rx = Rz and Ly = Lz .
The statement about cardinalities then follows from parts (i) and (ii).
2
As is the case for semigroups [40, 44], Lemma 2.6 means that the elements of a D-class D of S or Sij may
be grouped together in a rectangular array of cells, which (for continuity with semigroup theory) we call an
eggbox. We place all elements from D in a box in such a way that R-related (resp., L -related) elements are
in the same row (resp., column), and H -related elements in the same cell. An example is given in Figure 2
below for a D-class of the linear partial semigroup M(Z3 ).
We now come to the definition of the main objects of our study, the sandwich semigroups.
Definition 2.9. Let (S, ·, I, λ, ρ) be a partial semigroup. Fix some a ∈ Sji , where i, j ∈ I. Define a binary operation ?a on Sij by x ?a y = xay for each x, y ∈ Sij . It is easily checked that ?a is associative. We denote by S^a_ij = (Sij , ?a ) the semigroup obtained in this way, and call S^a_ij the sandwich semigroup of Sij with respect to a. (Note that when i = j, S^a_ij = S^a_i is the well-known variant [38, 39, 50] of Si with respect to a ∈ Si .)
Recall that an element x of a semigroup T is regular if x = xyx and y = yxy for some y ∈ T (or, equivalently,
if x = xzx for some z ∈ T ). The set of all regular elements of T is denoted by Reg(T ), and we say T is
regular if T = Reg(T ). (In general, Reg(T ) need not even be a subsemigroup of T .) Of crucial importance is
that if any element of a D-class D of a semigroup T is regular, then every element of D is regular, in which
case every element of D is L -related to at least one idempotent (and also R-related to a possibly different
idempotent); the H -class He of an idempotent e ∈ E(T ) = {x ∈ T : x = x^2 } is a group, and He ≅ Hf for
any two D-related idempotents e, f ∈ E(T ). When drawing eggbox diagrams, group H -classes are usually
shaded grey (see for example Figure 3). See [40, 44] for more details.
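The following brute-force Python sketch (ours, not from the article) illustrates the last point in the small monoid M2 (Z3 ): it computes Green's R and L via principal ideals and confirms that the H -class of the idempotent E = diag(1, 0) is a group.

```python
# Brute-force check that the H-class of an idempotent in M_2(Z_3) is a group.
from itertools import product as iproduct

p = 3
def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

M2 = [((a, b), (c, d)) for a, b, c, d in iproduct(range(p), repeat=4)]

# x R y iff xM2 = yM2, and x L y iff M2x = M2y (M2 is a monoid, so S^1 = S).
right_ideal = {X: frozenset(mul(X, A) for A in M2) for X in M2}
left_ideal  = {X: frozenset(mul(A, X) for A in M2) for X in M2}

def H_class(X):
    return [Y for Y in M2 if right_ideal[Y] == right_ideal[X]
                         and left_ideal[Y] == left_ideal[X]]

E = ((1, 0), (0, 0))
assert mul(E, E) == E                 # E is an idempotent
H = H_class(E)
assert all(mul(X, Y) in H for X in H for Y in H)                    # closed
assert all(any(mul(X, Y) == E == mul(Y, X) for Y in H) for X in H)  # inverses
print("H-class of E has size", len(H), "and is a group")            # size 2 here
```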
If S is a regular partial semigroup, then the sandwich semigroups S^a_ij need not be regular themselves (although all of the semigroups Si are), but the set Reg(S^a_ij ) of all regular elements of S^a_ij forms a subsemigroup, as we now show.

Proposition 2.10. Let (S, ·, I, λ, ρ) be a regular partial semigroup. Then Reg(S^a_ij ) is a subsemigroup of S^a_ij for all i, j ∈ I and a ∈ Sji .

Proof. Let x, y ∈ Reg(S^a_ij ), so x = xauax and y = yavay for some u, v ∈ Sij . Since S is regular, there exists w ∈ S such that (auaxayava)w(auaxayava) = auaxayava. Then
(xay)a(vawau)a(xay) = (xauaxay)a(vawau)a(xayavay) = x(auaxayava)w(auaxayava)y = x(auaxayava)y = xay,
showing that (x ?a y) ?a (v ?a w ?a u) ?a (x ?a y) = x ?a y, and x ?a y ∈ Reg(S^a_ij ). □
In order to say more about the regular elements and Green's relations of the sandwich semigroup S^a_ij , we define the sets
P^a_1 = {x ∈ Sij : xa R x},   P^a_2 = {x ∈ Sij : ax L x},   P^a_3 = {x ∈ Sij : axa J x},   P^a = P^a_1 ∩ P^a_2 .
The next result explains the relationships that hold between these sets; the various inclusions are pictured
in Figure 1.
Figure 1: Venn diagrams illustrating the various relationships between the sets P^a_1 , P^a_2 , P^a_3 , P^a = P^a_1 ∩ P^a_2 and Reg(S^a_ij ) in the general case (left) and the stable case (right); for clarity, we have written R = Reg(S^a_ij ). [Diagrams omitted in this text version.]
Proposition 2.11. Let (S, ·, I, λ, ρ) be a partial semigroup, and fix i, j ∈ I and a ∈ Sji . Then
(i) Reg(S^a_ij ) ⊆ P^a ⊆ P^a_3 ,
(ii) P^a = P^a_3 if S is stable.
Proof. If x ∈ Reg(S^a_ij ), then x = xayax for some y ∈ Sij , giving x R xa and x L ax, so that x ∈ P^a_1 ∩ P^a_2 = P^a . Next, suppose x ∈ P^a = P^a_1 ∩ P^a_2 , so x = xav = uax for some u, v ∈ S (1) . It follows that x = uaxav, so x J axa and x ∈ P^a_3 . This completes the proof of (i).
Now suppose S is stable, and let x ∈ P^a_3 . So x = uaxav for some u, v ∈ S (1) . It then follows that x J xa and x J ax. By stability, it follows that x R xa and x L ax, so that x ∈ P^a_1 ∩ P^a_2 = P^a , completing the proof of (ii). □
Remark 2.12. The assumption of regularity (resp., stability) could be greatly weakened in Proposition 2.10
(resp., Proposition 2.11(ii)). However, because the linear partial semigroup M is regular and stable (see
Lemmas 3.1 and 3.2), we will not pursue this thought any further.
We now show how the sets P^a_1 , P^a_2 , P^a_3 and P^a = P^a_1 ∩ P^a_2 may be used to relate Green's relations on the sandwich semigroups S^a_ij to the corresponding relations on S. To avoid confusion, if K is one of R, L , J , D, H , we write K^a for the Green's K -relation on S^a_ij . So, for example, if x, y ∈ Sij , then
• x R^a y if and only if [x = y] or [x = y ?a u = yau and y = x ?a v = xav for some u, v ∈ Sij ].
It is then clear that R^a ⊆ R, and the analogous statement is true for all of the other Green's relations. If x ∈ Sij , we write K^a_x = {y ∈ Sij : x K^a y} for the K^a -class of x in S^a_ij . Since K^a ⊆ K , it follows that K^a_x ⊆ Kx for all x ∈ Sij .
Theorem 2.13. Let (S, ·, I, λ, ρ) be a partial semigroup, and let a ∈ Sji where i, j ∈ I. If x ∈ Sij , then
(i) R^a_x = Rx ∩ P^a_1 if x ∈ P^a_1 , and R^a_x = {x} if x ∈ Sij \ P^a_1 ,
(ii) L^a_x = Lx ∩ P^a_2 if x ∈ P^a_2 , and L^a_x = {x} if x ∈ Sij \ P^a_2 ,
(iii) H^a_x = Hx if x ∈ P^a , and H^a_x = {x} if x ∈ Sij \ P^a ,
(iv) D^a_x = Dx ∩ P^a if x ∈ P^a ; D^a_x = L^a_x if x ∈ P^a_2 \ P^a_1 ; D^a_x = R^a_x if x ∈ P^a_1 \ P^a_2 ; and D^a_x = {x} if x ∈ Sij \ (P^a_1 ∪ P^a_2 ),
(v) J^a_x = Jx ∩ P^a_3 if x ∈ P^a_3 , and J^a_x = D^a_x if x ∈ Sij \ P^a_3 .
Further, if x ∈ Sij \ P^a , then H^a_x = {x} is a non-group H^a -class of S^a_ij .
Proof. The proof of [19, Proposition 3.2] may easily be adapted to prove (i–iv) and the final statement
about H a -classes. We now prove (v). Let x ∈ Sij .
Suppose y ∈ Jxa \ {x}. So one of (a–c) and one of (d–f) holds:
(a) y = sax for some s ∈ Sij ,
(d) x = uay for some u ∈ Sij ,
(b) y = xat for some t ∈ Sij ,
(e) x = yav for some v ∈ Sij ,
(c) y = saxat for some s, t ∈ Sij ,
(f) x = uayav for some u, v ∈ Sij .
Suppose first that (a) and (d) hold. Then xL a y. Since x ≠ y, we deduce that x ∈ P2a by (ii). Since
Lax = Lay , we also have y ∈ P2a . Similarly, if (b) and (e) hold, then xR a y and x, y ∈ P1a . One may check that
any other combination of (a–c) and (d–f) implies x, y ∈ P3a . For example, if (a) and (e) hold, then
y = sax = s(aya)v
and
x = yav = s(axa)v.
In particular, we have shown that |Jxa | ≥ 2 implies x ∈ P1a ∪ P2a ∪ P3a . By the contrapositive of this last
statement, if z ∈ Sij \ (P1a ∪ P2a ∪ P3a ), then Jza = {z} = Dza , with the last equality following from (iv).
Next, suppose x ∈ P1a \ P3a . In particular, x ∉ P2a since P1a ∩ P2a ⊆ P3a by Proposition 2.11(i). Since
D a ⊆ J a , we have Dxa ⊆ Jxa . Conversely, suppose y ∈ Jxa . We must show that y ∈ Dxa . If y = x, then
we are done, so suppose y ≠ x. As above, one of (a–c) and one of (d–f) holds. If (b) and (e) hold, then
y ∈ Rxa = Dxa , the second equality holding by (iv). If any other combination of (a–c) and (d–f) holds then, as
explained in the previous paragraph, x (and y) would belong to P2a or P3a , a contradiction. This completes
the proof that Jxa ⊆ Dxa . A dual argument shows that Jxa = Dxa if x ∈ P2a \ P3a .
Finally, suppose x ∈ P^a_3 . Let z ∈ Jx ∩ P^a_3 . So we have
x = s′axat′,   z = s″azat″,   z = u′xv′,   x = u″zv″
for some s′, s″, t′, t″, u′, u″, v′, v″ ∈ S (1) .
We then calculate z = u′xv′ = u′s′axat′v′ = u′s′a(s′axat′)at′v′ = (u′s′as′) ?a x ?a (t′at′v′), and similarly x = (u″s″as″) ?a z ?a (t″at″v″), showing that z J^a x, and Jx ∩ P^a_3 ⊆ J^a_x . To prove the reverse inclusion, since we have already observed that J^a_x ⊆ Jx , it suffices to show that J^a_x ⊆ P^a_3 . So suppose y ∈ J^a_x . If y = x, then y ∈ P^a_3 , so suppose y ≠ x. Then one of (a–c) and one of (d–f) above holds. If (a) and (d) hold, then
y = sax = sas′axat′ = sas′auayat′,
showing that y ∈ P^a_3 . A similar argument covers the case in which (b) and (e) hold. As we observed above, any other combination of (a–c) and (d–f) implies that y ∈ P^a_3 . This completes the proof. □
For a pictorial understanding of Theorem 2.13, Figures 4 and 5 below give eggbox diagrams of various linear sandwich semigroups. Next, we show that stability of S entails stability of all sandwich semigroups S^a_ij .

Proposition 2.14. Let (S, ·, I, λ, ρ) be a stable partial semigroup. Then S^a_ij is stable for all i, j ∈ I and a ∈ Sji .

Proof. Let x, y ∈ Sij . We must show that
x J^a x ?a y ⇔ x R^a x ?a y   and   x J^a y ?a x ⇔ x L^a y ?a x.
By duality, it suffices to prove the first of these. Clearly, x R^a x ?a y ⇒ x J^a x ?a y. Conversely, suppose x J^a x ?a y. Then one of the following holds:
(i) x = xay,
(ii) x = xayav for some v ∈ Sij ,
(iii) x = uaxay for some u ∈ Sij ,
(iv) x = uaxayav for some u, v ∈ Sij .
Clearly, (i) or (ii) implies x R^a x ?a y. Next, suppose (iv) holds. Then x J xaya, so that x R xaya by stability. In particular, (a) x = xaya or (b) x = xayaw for some w ∈ Sij . If (a) holds, then x = (xaya)aya, so (b) holds with w = aya. In particular, x = (x ?a y) ?a w, completing the proof that x R^a x ?a y. Finally, if (iii) holds, then x = ua(uaxay)ay, so that case (iii) reduces to case (iv). The proof is therefore complete. □
We conclude this section with a result that shows how regularity of the sandwich element implies close relationships between certain sandwich semigroups S^a_ij and S^b_ji and certain (non-sandwich) subsemigroups of Si and Sj .

Theorem 2.15. Let (S, ·, I, λ, ρ) be a partial semigroup and let i, j ∈ I. Let a ∈ Sji and b ∈ Sij be such that a = aba and b = bab. Then
(i) Sij a and aSij are subsemigroups of Si and Sj (respectively),
(ii) (aSij a, ?b ) and (bSji b, ?a ) are monoids with identities a and b (respectively), and are subsemigroups of S^b_ji and S^a_ij (respectively),
(iii) the maps aSij a → bSji b : x ↦ bxb and bSji b → aSij a : x ↦ axa define mutually inverse isomorphisms between (aSij a, ?b ) and (bSji b, ?a ),
(iv) a Reg(S^a_ij ) a is contained in Reg(S^b_ji ),
(v) the following diagrams commute, with all maps being homomorphisms:
[Two commutative diagrams, omitted in this text version, relating (Sij , ?a ), (Sij a, ·), (aSij , ·) and (aSij a, ?b ), together with their regular subsemigroups, via the maps ψ1 : x ↦ xa, ψ2 : x ↦ ax, Φ1 : y ↦ ay and Φ2 : y ↦ ya.]

Proof. Part (i) is clear, and parts (ii) and (iii) are easily checked. Next, suppose x ∈ Reg(S^a_ij ), so x = xauax for some u ∈ Sij . Then axa = axauaxa = axabauabaxa = (axa) ?b (aua) ?b (axa), giving (iv). Part (v) is all mostly easy to check. That Φ1 is a homomorphism follows from Φ1 ((xa)(ya)) = axaya = axabaya = Φ1 (xa) ?b Φ1 (ya). It is clear that ψ1 maps Reg(Sij , ?a ) into Reg(Sij a, ·). It follows from (iv) that Φ1 maps Reg(Sij a, ·) into Reg(S^b_ji ). □

Remark 2.16. Other relationships exist, such as (baSij a, ·) = (bSji ba, ·), but these will not be explored any further.
3 The linear partial semigroup

As noted earlier, to understand the linear sandwich semigroups M^A_mn , it is crucial to first understand the partial semigroup M. So in this section, we gather the required material on M, showing how the general framework of Section 2 applies in this case.

We fix a field F for the remainder of the article. For positive integers m, n ∈ N = {1, 2, 3, . . .}, we write Mmn = Mmn (F) for the set of all m × n matrices (i.e., all matrices with m rows and n columns) over F. We write M = M(F) = ∪_{m,n∈N} Mmn for the set of all (finite dimensional, non-empty) matrices over F. So M is a partial semigroup, as noted in Example 2.5. By convention, we consider there to be a unique m × 0 and 0 × n matrix for any m, n ≥ 0, namely the empty matrix, which we denote by ∅. So Mmn = {∅} if m = 0 or n = 0. But this is a matter of convenience, and we do not consider the empty matrix ∅ to be an element of M. We also write Mn = Mn (F) = Mnn for any n, and denote by Gn = Gn (F) the group of n × n invertible matrices over F. So Mn and Gn are the full linear monoid and general linear group of degree n. For background on the full linear monoids, the monograph [74] is highly recommended.
If V and W are vector spaces, we write Hom(V, W ) for the set of all linear transformations from V to W .
As usual, if α ∈ Hom(V, W ), we write im(α) = {α(v) : v ∈ V } and ker(α) = {v ∈ V : α(v) = 0} for the
image and kernel of α. We write End(V ) = Hom(V, V ) for the monoid of all endomorphisms of V (i.e.,
all linear transformations V → V ), and Aut(V ) for the group of all automorphisms of V (i.e., all invertible
endomorphisms of V ). For n ≥ 0, we write Vn = Fn for the vector space of all n×1 column vectors over F. We
will identify Mmn with Hom(Vn , Vm ) in the usual way. Namely, if X ∈ Mmn , we write λX ∈ Hom(Vn , Vm )
for the linear transformation λX : Vn → Vm defined by λX (v) = Xv for all v ∈ Vn . We will often prove
statements about Mmn by proving the equivalent statement about Hom(Vn , Vm ). When m = n, the map
X → λX determines an isomorphism of monoids Mn → End(Vn ), and its restriction to Gn ⊆ Mn determines
an isomorphism of groups Gn → Aut(Vn ). We write {en1 , . . . , enn } for the standard basis of Vn (eni has a 1
in position i and 0’s elsewhere). We also write Wns = span{en1 , . . . , ens } for each 0 ≤ s ≤ n. (We interpret
span ∅ = {0}, though the dimension of the ambient space must be understood from context.)
Our first aim is to characterise Green’s relations (R, L , J , D, H ) and preorders (≤R , ≤L , ≤J ) on M.
Because M is monoidal (see Definition 2.1), M = M(1) . So, for example, if X, Y ∈ M are two matrices
(not necessarily of the same size), then X ≤R Y if and only if X = Y A for some A ∈ M. Note that if
X ≤R Y (resp., X ≤L Y ), then X and Y must have the same number of rows (resp., columns).
Let X ∈ Mmn . For 1 ≤ i ≤ m and 1 ≤ j ≤ n, we write ri (X) and cj (X) for the ith row and jth column
of X, respectively. We write Row(X) = span{r1 (X), . . . , rm (X)} and Col(X) = span{c1 (X), . . . , cn (X)} for
the row space and column space of X, respectively, and we write rank(X) = dim(Row(X)) = dim(Col(X))
for the rank of X. Because of the transpose map M → M : A 7→ AT , which is a bijection and satisfies
(AB)T = B T AT , the linear partial semigroup M is self-dual (in the sense that it is anti-isomorphic to its own
dual). Since Row(X T ) = Col(X), any statement about row spaces implies a corresponding dual statement
about column spaces (and vice versa). (Without causing confusion, we will often blur the distinction between
row vectors and column vectors, and think of Row(X) and Col(X) as subspaces of Fn and Fm , respectively.)
The next result characterises Green’s relations and preorders on M in terms of the parameters introduced
above. An equivalent formulation in the special case of square matrices may be found in [74, Lemma 2.1].
Lemma 3.1. Let X, Y ∈ M. Then
(i) X ≤R Y ⇔ Col(X) ⊆ Col(Y ),
(iv) XRY ⇔ Col(X) = Col(Y ),
(ii) X ≤L Y ⇔ Row(X) ⊆ Row(Y ),
(v) XL Y ⇔ Row(X) = Row(Y ),
(iii) X ≤J Y ⇔ rank(X) ≤ rank(Y ),
(vi) XJ Y ⇔ rank(X) = rank(Y ).
Further, M is stable, so J = D.
Proof. Clearly, (iv–vi) follow from (i–iii). Note that (ii) is the dual of (i), which is true because
X ≤R Y ⇔ X = Y A for some A ∈ M
⇔ every column of X is a linear combination of the columns of Y
⇔ Col(X) ⊆ Col(Y ).
For (iii), if X ≤J Y , then X = AY B for some A, B ∈ M, giving rank(X) = rank(AY B) ≤ rank(Y ).
Conversely, suppose rank(X) ≤ rank(Y ), and say X ∈ Mmn and Y ∈ Mkl . It is sufficient to show
that λX = α ◦ λY ◦ β for some α ∈ Hom(Vk , Vm ) and β ∈ Hom(Vn , Vl ). Put r = rank(X) and s =
rank(Y ). Choose bases B1 = {u1 , . . . , un } and B2 = {v1 , . . . , vl } for Vn and Vl so that {ur+1 , . . . , un } and
{vs+1 , . . . , vl } are bases for ker(λX ) and ker(λY ), respectively. Extend (if necessary) the linearly independent
sets {λY (v1 ), . . . , λY (vr )} and {λX (u1 ), . . . , λX (ur )} arbitrarily to bases
B3 = {λY (v1 ), . . . , λY (vr ), wr+1 , . . . , wk }
and
B4 = {λX (u1 ), . . . , λX (ur ), xr+1 , . . . , xm }
for Vk and Vm . Now let α ∈ Hom(Vk , Vm ) and β ∈ Hom(Vn , Vl ) be chosen arbitrarily so that
α(λY (vi )) = λX (ui ) and α(wj ) ∈ span{xr+1 , . . . , xm } for all 1 ≤ i ≤ r and r + 1 ≤ j ≤ k,
β(ui ) = vi and β(uj ) ∈ span{vs+1 , . . . , vl } for all 1 ≤ i ≤ r and r + 1 ≤ j ≤ n.
One easily checks that α ◦ λY ◦ β = λX , by checking the respective actions on the basis B1 of Vn .
To prove stability, we must show that for all X, Y ∈ M,
XJ XY ⇔ XRXY
and
XJ Y X ⇔ XL Y X.
By duality, it suffices to prove the first equivalence. Since R ⊆ J , it is enough to prove that XJ XY ⇒
XRXY . Now, Col(XY ) ⊆ Col(X). But also XJ XY gives dim(Col(X)) = rank(X) = rank(XY ) =
dim(Col(XY )), so that Col(X) = Col(XY ), whence XRXY .
2
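As a sanity check on Lemma 3.1(i) (our own code, not part of the article), the next Python sketch verifies over Z2 that X ≤R Y , i.e. X = Y A for some A, holds precisely when Col(X) ⊆ Col(Y ); rank is computed by Gaussian elimination, which over Z2 needs no pivot scaling.

```python
# Verify:  X <=_R Y  iff  X = YA for some A  iff  Col(X) is contained in Col(Y),
# for all X in M_22(Z_2) and Y in M_23(Z_2).
from itertools import product as iproduct

p = 2
def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(len(Y))) % p
                       for j in range(len(Y[0]))) for i in range(len(X)))

def rank(X):
    """Rank over Z_2 by Gaussian elimination (pivots are automatically 1)."""
    M = [list(row) for row in X]
    rows, cols, rk = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(rk, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for i in range(rows):
            if i != rk and M[i][c]:
                M[i] = [(M[i][j] - M[rk][j]) % p for j in range(cols)]
        rk += 1
    return rk

def mats(rows, cols):
    for vals in iproduct(range(p), repeat=rows * cols):
        yield tuple(tuple(vals[i * cols + j] for j in range(cols)) for i in range(rows))

for X in mats(2, 2):
    for Y in mats(2, 3):
        divides = any(mul(Y, A) == X for A in mats(3, 2))               # X = YA ?
        col_sub = rank(tuple(Y[i] + X[i] for i in range(2))) == rank(Y)  # Col(X) <= Col(Y) ?
        assert divides == col_sub
print("Lemma 3.1(i) verified for all X in M_22(Z_2), Y in M_23(Z_2)")
```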
As we saw in Section 2, stability and regularity are very useful properties for a partial semigroup to have.
Now that we know M is stable, let us show that M is also regular.
Lemma 3.2. The linear partial semigroup M is regular.
Proof. Let X ∈ Mmn . It suffices to show that there exists α ∈ Hom(Vm , Vn ) such that λX = λX ◦ α ◦ λX .
Let B = {v1 , . . . , vn } be a basis of Vn such that {vr+1 , . . . , vn } is a basis of ker(λX ). Extend (if necessary)
the linearly independent set {λX (v1 ), . . . , λX (vr )} to a basis {λX (v1 ), . . . , λX (vr ), wr+1 , . . . , wm } of Vm . Let
α ∈ Hom(Vm , Vn ) be any linear transformation for which α(λX (vi )) = vi for each 1 ≤ i ≤ r. Then one easily
checks that λX = λX ◦ α ◦ λX by calculating the action on the basis B.
2
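The next few lines (ours) confirm Lemma 3.2 by brute force in a tiny case: every X ∈ M23 (Z2 ) admits some G ∈ M32 (Z2 ) with X = XGX.

```python
# Brute-force von Neumann inverses over Z_2: every X in M_23 has some G with X = XGX.
from itertools import product as iproduct

p, m, n = 2, 2, 3
def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(len(Y))) % p
                       for j in range(len(Y[0]))) for i in range(len(X)))

def mats(rows, cols):
    for vals in iproduct(range(p), repeat=rows * cols):
        yield tuple(tuple(vals[i * cols + j] for j in range(cols)) for i in range(rows))

for X in mats(m, n):
    assert any(mul(mul(X, G), X) == X for G in mats(n, m)), X
print("every X in M_23(Z_2) satisfies X = XGX for some G in M_32(Z_2)")
```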
As in Section 2, if X ∈ Mmn and K is one of R, L , J , D, H , we write KX = {Y ∈ Mmn : XK Y }, and
call KX the K -class of X in Mmn . Note that all matrices from KX have the same dimensions. (We will
have no need to consider the sets [X]K of all matrices K -related to X.) Recall that Gk denotes the group
of all invertible k × k matrices over F. The next result gives an alternative description of various Green’s
classes in M.
Lemma 3.3. Let X ∈ Mmn . Then
(i) RX = {Y ∈ Mmn : Col(X) = Col(Y )} = XGn ,
(ii) LX = {Y ∈ Mmn : Row(X) = Row(Y )} = Gm X,
(iii) DX = JX = {Y ∈ Mmn : rank(X) = rank(Y )} = Gm XGn .
Proof. For (i), note that clearly XGn ⊆ RX . By Lemma 3.1, it remains to show the reverse inclusion,
so suppose Y ∈ RX . In particular, XJ Y , so rank(X) = rank(Y ). Put r = rank(X). We show that
λY = λX ◦ α for some α ∈ Aut(Vn ). Since XRY , we already know that λY = λX ◦ β for some β ∈
End(Vn ). Let B1 = {u1 , . . . , un } be a basis of Vn such that {ur+1 , . . . , un } is a basis of ker(λY ). So
{λX (β(u1 )), . . . , λX (β(ur ))} = {λY (u1 ), . . . , λY (ur )} is a basis of im(λY ). It follows that {β(u1 ), . . . , β(ur )}
is linearly independent. We may therefore extend this set to a basis B2 = {β(u1 ), . . . , β(ur ), vr+1 , . . . , vn }
of Vn , where {vr+1 , . . . , vn } is a basis of ker(λX ). Now define α ∈ Aut(Vn ) by
α(ui ) = β(ui ) if 1 ≤ i ≤ r, and α(ui ) = vi if r < i ≤ n.
One easily checks that λY = λX ◦ α. This completes the proof of (i).
Part (ii) is dual to (i). For (iii), clearly Gm XGn ⊆ JX , and the converse follows quickly from (i) and (ii) and
the fact that J = D = L ◦ R. By Lemma 3.1, this completes the proof.
2
If K is one of R, L , D = J , then the set Mmn /K of all K -classes of Mmn inherits a partial order:
KX ≤K KY ⇔ X ≤K Y.
We typically write ≤ for the order ≤J on the D = J -classes. Of importance is the fact that these classes
form a chain:
D0 (Mmn ) < D1 (Mmn ) < · · · < Dl (Mmn ),
where Ds (Mmn ) = {X ∈ Mmn : rank(X) = s} for all 0 ≤ s ≤ l = min(m, n).
Figure 2 pictures an eggbox diagram (as explained in Section 2) of the D-class D1 (M23 (Z3 )) of all 2 × 3
matrices of rank 1 over the field F = Z3 = {0, 1, 2} (see Lemma 3.4 for an explanation of the number
and sizes of the R-, L - and H -classes). The reader need not yet worry about the subdivisions within
the eggbox; for now, it is enough to note that the matrices to the left (resp., top) of the vertical (resp.,
horizontal) divider satisfy the property that the first column (resp., row) spans the column space (resp., row
space) of the matrix.
Figure 2: An eggbox diagram of the D-class D1 (M23 (Z3 )). [Diagram omitted in this text version.]
So Mmn has min(m, n)+1 D-classes. It will also be convenient to have some more combinatorial information
about the number and size of certain K -classes. Recall that the q-factorials and q-binomial coefficients are
defined by
[s]_q! = 1 · (1 + q) · · · (1 + q + · · · + q^(s−1)) = (q − 1)(q^2 − 1) · · · (q^s − 1) / (q − 1)^s
and
[m s]_q = [m]_q! / ([s]_q! [m − s]_q!) = (q^m − 1)(q^m − q) · · · (q^m − q^(s−1)) / ((q^s − 1)(q^s − q) · · · (q^s − q^(s−1))) = (q^m − 1)(q^(m−1) − 1) · · · (q^(m−s+1) − 1) / ((q^s − 1)(q^(s−1) − 1) · · · (q − 1)).
It is easy to check (and well-known) that when |F| = q < ∞,
|Gs | = (q^s − 1)(q^s − q) · · · (q^s − q^(s−1)) = q^(s(s−1)/2) (q − 1)^s [s]_q!.
In what follows, a crucial role will be played by the matrices Jmns ∈ Mmn defined for s ≤ min(m, n) by
Jmns = [ Is       Os,n−s
         Om−s,s   Om−s,n−s ].
Here and elsewhere, we write Is and Okl for the s × s identity matrix and k × l zero matrix (respectively). If the dimensions are understood from context, we just write O = Okl . So Jmns is the m × n matrix with 1's in the first s positions on the leading diagonal and 0's elsewhere. Note that if s = m ≤ n (resp., s = n ≤ m), then the matrices Om−s,s and Om−s,n−s (resp., Os,n−s and Om−s,n−s ) are empty, and Jmns = [ Is  Os,n−s ] (resp., Jmns = [ Is ; Om−s,s ]).
Lemma 3.4. Suppose |F| = q < ∞, and let 0 ≤ s ≤ min(m, n). Then
(i) Ds (Mmn ) contains [m s]_q R-classes,
(ii) Ds (Mmn ) contains [n s]_q L -classes,
(iii) Ds (Mmn ) contains [m s]_q [n s]_q H -classes, each of which has size |Gs | = q^(s(s−1)/2) (q − 1)^s [s]_q!,
(iv) |Ds (Mmn )| = [m s]_q [n s]_q q^(s(s−1)/2) (q − 1)^s [s]_q!.
Proof. Parts (i) and (ii) follow immediately from parts (i) and (ii) of Lemma 3.3 and the well-known fact that [m s]_q is the number of s dimensional subspaces of an m dimensional vector space over F. The number of H -classes follows immediately from (i) and (ii). By Lemma 2.8, all the H -classes in Ds (Mmn ) have the same size, so it suffices to calculate the size of H = H_Jmns . Let X = [A B; C D] ∈ H, where A ∈ Ms , B ∈ Ms,n−s , and so on. Since Row(X) = Row(Jmns ), we see that B and D are zero matrices. Considering column spaces, we see that C is also a zero matrix. It follows that X = [A O; O O], and also rank(A) = rank(X) = rank(Jmns ) = s. Clearly every such matrix X = [A O; O O] with rank(A) = s belongs to H. The condition that rank(A) = s is equivalent to A ∈ Gs , so it follows that |H| = |Gs |. Finally, (iv) follows from (iii). □
Of course, by considering the size of Mmn when |F| = q < ∞, we obtain the identity
q^(mn) = Σ_{s=0}^{l} [m s]_q [n s]_q [s]_q! (q − 1)^s q^(s(s−1)/2).
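These counting formulas are easy to test numerically; the following short Python check (ours, not from the article) evaluates |Ds (Mmn )| from Lemma 3.4(iv) for q = 3, m = 2, n = 3 and confirms that the sizes sum to q^(mn).

```python
# Check |D_s(M_mn)| = [m s]_q [n s]_q q^C(s,2) (q-1)^s [s]_q!  and  sum_s |D_s| = q^(mn).
from math import comb

q, m, n = 3, 2, 3

def q_factorial(s):
    out = 1
    for i in range(1, s + 1):
        out *= (q**i - 1) // (q - 1)          # [i]_q = 1 + q + ... + q^(i-1)
    return out

def q_binomial(a, b):
    num = den = 1
    for i in range(b):
        num *= q**(a - i) - 1
        den *= q**(i + 1) - 1
    return num // den

def D_size(s):                                 # Lemma 3.4(iv)
    return (q_binomial(m, s) * q_binomial(n, s)
            * q**comb(s, 2) * (q - 1)**s * q_factorial(s))

sizes = [D_size(s) for s in range(min(m, n) + 1)]
print(sizes)                                   # [1, 104, 624]
assert sum(sizes) == q**(m * n)                # 729 = 3^6
```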
We conclude this section by stating some well-known results on the full linear monoids Mn and their
ideals that we will require in what follows. The set E(Mn ) = {X ∈ Mn : X = X^2 } of idempotents
of Mn is not a subsemigroup (unless n ≤ 1), but the subsemigroup En = ⟨E(Mn )⟩ of Mn generated by
these idempotents has a neat description. Namely, it was shown by Erdos [25] that any singular (i.e.,
non-invertible) matrix over F is a product of idempotent matrices. This result has been reproved by a
number of authors [2, 15, 18, 27, 53]. The minimal number of (idempotent) matrices required to generate En
was determined by Dawlings [16]. Recall that the rank (resp., idempotent rank ) of a semigroup (resp.,
idempotent generated semigroup) S, denoted rank(S) (resp., idrank(S)), is the minimal size of a generating
set (resp., idempotent generating set) for S. (The rank of a semigroup should not be confused with the rank
of a matrix.) If U is a subset of a semigroup S, we write E(U ) = E(S) ∩ U for the set of all idempotents
from U .
Theorem 3.5 (Erdos [25], Dawlings [15, 16]). We have
En = ⟨E(Mn )⟩ = (Mn \ Gn ) ∪ {In }   and   Mn \ Gn = ⟨E(Dn−1 (Mn ))⟩.
Further, if |F| = q < ∞, then
rank(Mn \ Gn ) = idrank(Mn \ Gn ) = (q n − 1)/(q − 1).
2
The previous result has been extended by Gray [33] to arbitrary ideals of Mn .
Theorem 3.6 (Gray [33]). The ideals of Mn are precisely the sets
Is (Mn ) = D0 (Mn ) ∪ · · · ∪ Ds (Mn ) = {X ∈ Mn : rank(X) ≤ s}
for 0 ≤ s ≤ n,
and they form a chain: I0 (Mn ) ⊆ · · · ⊆ In (Mn ). If 0 ≤ s < n, then Is (Mn ) = ⟨E(Ds (Mn ))⟩ is generated by the idempotents in its top D-class. Further, if |F| = q < ∞, then
rank(Is (Mn )) = idrank(Is (Mn )) = [n s]_q . □
Note that In (Mn ) = Mn , Dn (Mn ) = Gn and In−1 (Mn ) = Mn \ Gn , so Theorem 3.5 is a special case of Theorem 3.6 since [n n−1]_q = (q^n − 1)/(q − 1).
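For a concrete (and easily verified) instance of the Erdos/Dawlings result above, the sketch below (our own code) closes the idempotents of M2 (Z2 ) under multiplication and confirms that they generate exactly the singular matrices together with the identity.

```python
# E_2 = <E(M_2)> = (M_2 \ G_2) u {I_2} over Z_2, checked by brute force.
from itertools import product as iproduct

p, n = 2, 2
def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(n)) % p
                       for j in range(n)) for i in range(n))

def det(X):
    return (X[0][0] * X[1][1] - X[0][1] * X[1][0]) % p

Mn = [((a, b), (c, d)) for a, b, c, d in iproduct(range(p), repeat=4)]
idempotents = [X for X in Mn if mul(X, X) == X]

generated = set(idempotents)                  # close under multiplication
while True:
    new = {mul(X, Y) for X in generated for Y in generated} - generated
    if not new:
        break
    generated |= new

singular = {X for X in Mn if det(X) == 0}
identity = ((1, 0), (0, 1))
assert generated == singular | {identity}
print("idempotents of M_2(Z_2) generate the singular matrices together with I_2")
```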
On several occasions, we will need to make use of the fact that the general linear group Gn may be generated
by two matrices, as was originally proved by Waterhouse [94]; see also [31], where minimal generating sets
for Gn are explored in more detail. Probabilistic generation of matrix groups is considered in [4, 36], for
example, though the context is usually for classical groups.
Theorem 3.7 (Waterhouse [94]). If |F| < ∞, then
(i) rank(G1 ) = 1, and rank(Gn ) = 2 if n ≥ 2,
(ii) Mn = hGn ∪ {X}i for any X ∈ Dn−1 (Mn ),
(iii) rank(M1 ) = 2, and rank(Mn ) = 3 if n ≥ 2.
2
For convenience, eggbox diagrams are given for the full linear monoids Mn (Z2 ) for 0 ≤ n ≤ 3 in Figure 3
below. In the diagrams, group H -classes are shaded grey, and a label of k indicates that the group H -class
is isomorphic to Gk (Z2 ).
Figure 3: Eggbox diagrams of the full linear semigroups M0 , M1 , M2 , M3 , all over Z2 (left to right). [Diagrams omitted in this text version.]
4 Linear sandwich semigroups
Now that we have gathered the required material on M, we may begin our study of the linear sandwich
semigroups. From now on, we fix integers m, n ≥ 1 and an n × m matrix A ∈ Mnm . As in Section 2, we
denote by M^A_mn = M^A_mn (F) = (Mmn , ?A ) the sandwich semigroup of Mmn under the operation ?A defined by
X ?A Y = XAY   for X, Y ∈ Mmn .
We note that if m = n, then M^A_mn = M^A_n is a variant [38] of the full linear monoid Mn , so everything we prove about linear sandwich semigroups holds for such linear variants also. We begin with a simple observation.
Lemma 4.1.
(i) If A ∈ Mnm , then M^A_mn ≅ M^{A^T}_nm .
(ii) If A, B ∈ Mnm are such that rank(A) = rank(B), then M^A_mn ≅ M^B_mn .
Proof. It is clear that X ↦ X^T defines an isomorphism M^A_mn → M^{A^T}_nm , giving (i). Next, if rank(A) = rank(B), Lemma 3.3 gives A = U BV for some U ∈ Gm and V ∈ Gn . But then one may check that X ↦ V XU defines an isomorphism M^A_mn → M^B_mn , giving (ii). □
In particular, when studying the semigroup M^A_mn where rank(A) = r, we may choose any A ∈ Mnm of rank r. For the rest of the article, we will therefore study the semigroup M^J_mn , where
J = Jnmr = [ Ir       Or,m−r
             On−r,r   On−r,m−r ] ∈ Mnm .
From now on, unless otherwise specified, whenever a k × l matrix X (with k, l ∈ {m, n}) is written in 2 × 2 block form, X = [A B; C D], we will be tacitly assuming that A ∈ Mr (from which the dimensions of B, C, D may be deduced). So for example, we will usually just write J = [I O; O O]. For simplicity, we will write ? for the operation ?J on M^J_mn throughout. One easily verifies the rule
[A B; C D] ? [E F; G H] = [AE AF; CE CF].
Also note that if X = [A B; C D], then
XJ = [A O; C O] ∈ Mm ,   JX = [A B; O O] ∈ Mn ,   JXJ = [A O; O O] ∈ Mnm .
Remark 4.2. In the special case that r = m ≤ n, we have J = [I O], and the product in MJmn satisfies
[A B] ? [E F ] = [AE AF ]. But we just view this as a special case of the above rule, with the bottom
rows — i.e., [C D], [G H], [CE CF ] — containing empty blocks. A dual statement holds in the case
r = n ≤ m. In only one place will we need to consider the case in which r = min(m, n) separately (see
Theorems 4.12 and 4.14). If r = m = n, then MJmn is precisely the full linear monoid Mn ; since all
the problems we investigate have already been solved for Mn , we will typically assume that r = m = n
does not hold, though our results are true for the case r < m = n (corresponding to variants of the full
linear monoids Mn ). See Remark 5.3, where the above observations are used to show that the sandwich
semigroups MJmn are isomorphic to certain well-known (non-sandwich) matrix semigroups in the case that
r = min(m, n).
Green’s relations and the regular elements of the sandwich semigroup MJmn were calculated in [9, 49]. We
now show how these results may be recovered (and given a cleaner presentation) using the general theory
developed in Section 2. In particular, a crucial role is played by the sets
P^J_1 = {X ∈ Mmn : XJ R X},   P^J_2 = {X ∈ Mmn : JX L X},   P^J_3 = {X ∈ Mmn : JXJ J X},   P^J = P^J_1 ∩ P^J_2 .
For simplicity, we denote these sets simply by P1 , P2 , P3 , and P = P1 ∩ P2 .
Certain special matrices from Mmn will be very important in what follows. With this in mind, if A ∈ Mr ,
M ∈ Mm−r,r and N ∈ Mr,n−r , we write
[M, A, N ] = [ A    AN
               MA   MAN ] ∈ Mmn .
One may check that when matrices of this form are multiplied in M^J_mn , they obey the rule
[M, A, N ] ? [K, B, L] = [M, AB, L].
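This rule is easy to test numerically; the following Python sketch (ours, not from the article) checks it for random blocks over Z5 with m = 3, n = 4 and r = 2.

```python
# Check [M, A, N] ? [K, B, L] = [M, AB, L] in M^J_mn, where X ? Y = XJY.
import random

p, m, n, r = 5, 3, 4, 2

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(len(Y))) % p
                       for j in range(len(Y[0]))) for i in range(len(X)))

def rand(rows, cols):
    return tuple(tuple(random.randrange(p) for _ in range(cols)) for _ in range(rows))

def bracket(M_, A, N):
    """[M, A, N] = [A AN; MA MAN] as an m x n matrix."""
    AN, MA = mul(A, N), mul(M_, A)
    MAN = mul(MA, N)
    rows = [A[i] + AN[i] for i in range(r)] + [MA[i] + MAN[i] for i in range(m - r)]
    return tuple(rows)

J = tuple(tuple(1 if i == j and i < r else 0 for j in range(m)) for i in range(n))

A, B = rand(r, r), rand(r, r)
M_, K = rand(m - r, r), rand(m - r, r)
N, L = rand(r, n - r), rand(r, n - r)

lhs = mul(mul(bracket(M_, A, N), J), bracket(K, B, L))   # [M,A,N] ? [K,B,L]
rhs = bracket(M_, mul(A, B), L)                          # [M, AB, L]
assert lhs == rhs
print("[M, A, N] ? [K, B, L] == [M, AB, L] verified")
```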
Proposition 4.3.
(i) P1 = {X ∈ Mmn : rank(XJ) = rank(X)} = {X ∈ Mmn : Col(XJ) = Col(X)},
(ii) P2 = {X ∈ Mmn : rank(JX) = rank(X)} = {X ∈ Mmn : Row(JX) = Row(X)},
(iii) P3 = P = {X ∈ Mmn : rank(JXJ) = rank(X)} = {[M, A, N ] : A ∈ Mr , M ∈ Mm−r,r , N ∈ Mr,n−r },
(iv) P = Reg(M^J_mn ) is the set of all regular elements of M^J_mn , and is a subsemigroup of M^J_mn .
Proof. Parts (i) and (ii) follow quickly from Lemma 3.1 (making crucial use of stability). We now prove (iii). Since M is stable, Proposition 2.11 and Lemma 3.3 give P3 = P = {X ∈ Mmn : rank(JXJ) = rank(X)}. Now let X = [A B; C D] ∈ Mmn . First, note that
X ∈ P2 ⇔ Row(X) = Row(JX) = Row([A B; O O])
⇔ each row of [C D] is a linear combination of the rows of [A B]
⇔ [C D] = M [A B] = [M A  M B] for some M ∈ Mm−r,r .
Similarly,
X ∈ P1 ⇔ [B; D] = [A; C] N = [AN; CN ] for some N ∈ Mr,n−r .
Putting these together, we see that X ∈ P = P1 ∩ P2 if and only if X = [A AN; MA MAN] = [M, A, N ], completing the proof of (iii).
For (iv), Proposition 2.11 gives Reg(M^J_mn ) ⊆ P . Conversely, suppose X = [M, A, N ] ∈ P . If B ∈ Mr is such that A = ABA (see Lemma 3.2), then it is easy to check that X = X ? Y ? X where Y = [B C; D E] for any (appropriately sized) C, D, E, completing the proof that P = Reg(M^J_mn ). The fact that P is a subsemigroup follows immediately from Proposition 2.10 and Lemma 3.2 (or directly from the rule [M, A, N ] ? [K, B, L] = [M, AB, L]). □
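As a numerical cross-check of parts (iii) and (iv) (our own code, using the small parameters m = 2, n = 3, r = 1 over Z2 ), the sketch below computes Reg(M^J_mn ) directly from the definition of regularity and compares it with the rank(JXJ) = rank(X) criterion.

```python
# Reg(M^J_mn) = { X : rank(JXJ) = rank(X) }, verified by brute force over Z_2.
from itertools import product as iproduct

p, m, n, r = 2, 2, 3, 1

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(len(Y))) % p
                       for j in range(len(Y[0]))) for i in range(len(X)))

def rank(X):
    M = [list(row) for row in X]
    rows, cols, rk = len(M), len(M[0]), 0
    for c in range(cols):              # Gaussian elimination; pivots are 1 over Z_2
        piv = next((i for i in range(rk, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for i in range(rows):
            if i != rk and M[i][c]:
                M[i] = [(M[i][j] - M[rk][j]) % p for j in range(cols)]
        rk += 1
    return rk

def mats(rows, cols):
    for vals in iproduct(range(p), repeat=rows * cols):
        yield tuple(tuple(vals[i * cols + j] for j in range(cols)) for i in range(rows))

J = tuple(tuple(1 if i == j and i < r else 0 for j in range(m)) for i in range(n))
Mmn = list(mats(m, n))

regular = {X for X in Mmn
           if any(mul(mul(mul(mul(X, J), Y), J), X) == X for Y in Mmn)}   # X = X?Y?X
by_rank = {X for X in Mmn if rank(mul(mul(J, X), J)) == rank(X)}
assert regular == by_rank
print(len(regular), "regular elements (9 here), matching rank(JXJ) = rank(X)")
```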
Remark 4.4. Part (iv) of the previous proposition also follows from [9, Theorem 2.1], but the rest of
Proposition 4.3 appears to be new.
Now that we have described the sets P1 , P2 , P3 = P = P1 ∩ P2 , we may characterise Green’s relations
on MJmn . As in Section 2, if K is one of R, L , H , D, J , we will write K J for the Green’s K -relation
on MJmn . Since MJmn is not a monoid in general, these relations are defined, for X, Y ∈ Mmn , by
• XR J Y ⇔ [X = Y ] or [X = Y ? U and Y = X ? V for some U, V ∈ Mmn ],
and so on. Since M is stable, so too is MJmn , so we have J J = D J (see Proposition 2.14 and Lemmas 2.6
and 3.1). We will continue to write R, L , H , D, J for the relations on M defined in Section 3. As in
Section 2, if K is one of R, L , H , D = J , and if X ∈ Mmn , we will write
KX = {Y ∈ Mmn : X K Y }   and   K^J_X = {Y ∈ Mmn : X K^J Y }
for the K -class and K^J -class of X in Mmn , respectively. As noted in Section 2, K^J ⊆ K for each K , and so K^J_X ⊆ KX for each X. The next result follows immediately from Theorem 2.13. It also follows from Theorem 2.3, Lemma 2.4, and Corollaries 2.5–2.8 of [9], but we prefer the current succinct description.
Theorem 4.5. If X ∈ Mmn , then
(i) R^J_X = RX ∩ P1 if X ∈ P1 , and R^J_X = {X} if X ∈ Mmn \ P1 ,
(ii) L^J_X = LX ∩ P2 if X ∈ P2 , and L^J_X = {X} if X ∈ Mmn \ P2 ,
(iii) H^J_X = HX if X ∈ P , and H^J_X = {X} if X ∈ Mmn \ P ,
(iv) D^J_X = DX ∩ P if X ∈ P ; D^J_X = L^J_X if X ∈ P2 \ P1 ; D^J_X = R^J_X if X ∈ P1 \ P2 ; and D^J_X = {X} if X ∈ Mmn \ (P1 ∪ P2 ).
The sets P1 , P2 are described in Proposition 4.3, and the sets RX , LX , HX , DX in Lemma 3.3. In particular, R^J_X = L^J_X = H^J_X = D^J_X = {X} if rank(X) > r. If X ∈ Mmn \ P , then H^J_X = {X} is a non-group H^J -class of M^J_mn . □
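As with the earlier results, Theorem 4.5 is easy to confirm in small cases; the following brute-force Python check (ours, for m = 2, n = 3, r = 1 over Z2 ) verifies part (i) directly from the definitions. It takes a few seconds in pure Python.

```python
# Theorem 4.5(i) over Z_2, (m, n, r) = (2, 3, 1): for X in P1, R^J_X = R_X n P1;
# otherwise R^J_X = {X}.
from itertools import product as iproduct

p, m, n, r = 2, 2, 3, 1

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(len(Y))) % p
                       for j in range(len(Y[0]))) for i in range(len(X)))

def rank(X):
    M = [list(row) for row in X]
    rows, cols, rk = len(M), len(M[0]), 0
    for c in range(cols):              # pivots are automatically 1 over Z_2
        piv = next((i for i in range(rk, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for i in range(rows):
            if i != rk and M[i][c]:
                M[i] = [(M[i][j] - M[rk][j]) % p for j in range(cols)]
        rk += 1
    return rk

def mats(rows, cols):
    for vals in iproduct(range(p), repeat=rows * cols):
        yield tuple(tuple(vals[i * cols + j] for j in range(cols)) for i in range(rows))

J = tuple(tuple(1 if i == j and i < r else 0 for j in range(m)) for i in range(n))
Mmn = list(mats(m, n))
P1 = {X for X in Mmn if rank(mul(X, J)) == rank(X)}

def same_col_space(X, Y):              # X R Y in M, by Lemma 3.1(iv)
    XY = tuple(X[i] + Y[i] for i in range(m))
    return rank(XY) == rank(X) == rank(Y)

def RJ_class(X):                       # R^J-class, straight from the definition
    out = {X}
    for Y in Mmn:
        if any(mul(mul(Y, J), U) == X for U in Mmn) and \
           any(mul(mul(X, J), V) == Y for V in Mmn):
            out.add(Y)
    return out

for X in Mmn:
    expected = ({Y for Y in Mmn if same_col_space(X, Y) and Y in P1}
                if X in P1 else {X})
    assert RJ_class(X) == expected
print("Theorem 4.5(i) verified for all 64 matrices in M_23(Z_2)")
```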
Eggbox diagrams of some linear sandwich semigroups are given in Figures 4 and 5. As usual, grey boxes
indicate group H J -classes; a label of k on such a group H J -class indicates isomorphism to Gk . Note that
the bottom diagram from Figure 4 is of a variant of M3 (Z2 ) = M33 (Z2 ). The diagrams in the pdf version
of this article may be zoomed in a long way. The authors may be contacted for more such pictures.
Theorem 4.5 yields an intuitive picture of the internal structure of MJmn . Recall that the D-classes of Mmn
are the sets Ds (Mmn ) = {X ∈ Mmn : rank(X) = s} for 0 ≤ s ≤ l = min(m, n). If r < l, then each of
the D-classes Dr+1 (Mmn ), . . . , Dl (Mmn ) separates completely into singleton D J -classes in MJmn . (We will
study these classes in more detail shortly.) Next, note that D0 (Mmn ) = {O} ⊆ P (as the zero matrix
clearly belongs to both P1 and P2 ), so D0 (Mmn ) remains a (regular) D J -class of MJmn . Now fix some
Figure 4: Eggbox diagrams of the linear sandwich semigroups M^{J321}_23 (Z2 ) and M^{J332}_33 (Z2 ) (top and bottom, respectively). [Diagrams omitted in this text version.]
1 ≤ s ≤ r. The D-class Ds (Mmn ) is split into a single regular D J -class, namely Ds (Mmn ) ∩ P , and a
number of non-regular D J -classes. Some of these non-regular D J -classes are singletons, namely those of the
form D^J_X = {X} where X ∈ Ds (Mmn ) belongs to neither P1 nor P2 . Some of the non-regular D^J -classes consist of one non-singleton L^J -class, namely those of the form D^J_X = L^J_X = LX ∩ P2 , where X ∈ Ds (Mmn ) belongs to P2 \ P1 ; the H^J -classes contained in such a D^J -class are all singletons. The remaining non-regular D^J -classes contained in Ds (Mmn ) consist of one non-singleton R^J -class, namely those of the form D^J_X = R^J_X = RX ∩ P1 , where X ∈ Ds (Mmn ) belongs to P1 \ P2 ; the H^J -classes contained in such a D^J -class are all singletons. This is all pictured in Figure 6 for the D-class D1 (M23 ) where F = Z3 = {0, 1, 2} and J = J321 = [1 0; 0 0; 0 0]; cf. Figure 2.
It will be important to have a description of the partial order ≤ on the D J -classes of MJmn .
Proposition 4.6. Let X, Y ∈ Mmn . Then D^J_X ≤ D^J_Y in M^J_mn if and only if one of the following holds:
(i) X = Y ,
(ii) rank(X) ≤ rank(JY J),
(iii) Row(X) ⊆ Row(JY ),
(iv) Col(X) ⊆ Col(Y J).
Proof. Note that D^J_X ≤ D^J_Y if and only if one of the following holds:
(a) X = Y ,
(b) X = U JY JV for some U, V ∈ Mmn ,
(c) X = U JY for some U ∈ Mmn ,
(d) X = Y JV for some V ∈ Mmn .
The equivalences (b) ⇔ (ii), (c) ⇔ (iii), and (d) ⇔ (iv) all follow from Lemma 3.1. □
The description of the order on D J -classes of MJmn from Proposition 4.6 may be simplified in the case that
one of X, Y is regular.
Figure 5: Egg box diagrams of the linear sandwich semigroups M^J_23(Z2) with J = J322 and M^J_24(Z2) with J = J422 (left and right, respectively).
Figure 6: A D-class D1 (M23 (Z3 )) breaks up into D J -classes in MJ23 (Z3 ), where J = J321 . Group H J -classes
are shaded grey; the idempotent of such a group is the upper of the two matrices. (cf. Figure 2.)
Proposition 4.7. Let X, Y ∈ Mmn.
(i) If X ∈ P, then D^J_X ≤ D^J_Y ⇔ rank(X) ≤ rank(JY J).
(ii) If Y ∈ P, then D^J_X ≤ D^J_Y ⇔ rank(X) ≤ rank(Y).
The regular D^J-classes of M^J_mn form a chain: D^J_0 < · · · < D^J_r, where
D^J_s = Ds(Mmn) ∩ P = {X ∈ P : rank(X) = s}
for each 0 ≤ s ≤ r.
Proof. As in the proof of Proposition 4.6, D^J_X ≤ D^J_Y if and only if one of (a–d) holds. Suppose first that X ∈ P, so X = XJZJX for some Z ∈ Mmn. Then (a) implies X = XJZ(JY J)ZJX, (c) implies X = U(JY J)ZJX, and (d) implies X = XJZ(JY J)V. So, in each of cases (a–d), we deduce that rank(X) ≤ rank(JY J). So D^J_X ≤ D^J_Y implies rank(X) ≤ rank(JY J). Proposition 4.6 gives the reverse implication.
Next, suppose Y ∈ P. Now, each of (a–d) implies rank(X) ≤ rank(Y). Conversely, if rank(X) ≤ rank(Y), then Proposition 4.6 gives D^J_X ≤ D^J_Y, since rank(Y) = rank(JY J). The statement about regular D^J-classes follows quickly from (ii).
2
The linear ordering on the regular D J -classes may be seen by inspecting Figures 4 and 5; see also Figure 8.
As an immediate consequence of Proposition 4.7, we may classify the isomorphism classes of sandwich
semigroups on the set Mmn ; the m = n case of the next result was proved in [48].
Corollary 4.8. Let A, B ∈ Mnm. Then M^A_mn ≅ M^B_mn if and only if rank(A) = rank(B).
Proof. Put r = rank(A) and s = rank(B). By Proposition 4.7 and Lemma 4.1(ii), M^A_mn ≅ M^{Jnmr}_mn and M^B_mn ≅ M^{Jnms}_mn have r + 1 and s + 1 regular D^A- and D^B-classes, respectively. So M^A_mn ≅ M^B_mn implies r = s. The converse was proved in Lemma 4.1(ii).
2
Remark 4.9. It is possible to have M^A_mn ≅ M^B_kl even if (m, n) ≠ (k, l), although we would of course still need rank(A) = rank(B) by Proposition 4.7. For example, if O = Onm is the n × m zero matrix, then M^O_mn is a zero semigroup (X ⋆ Y = Omn for all X, Y ∈ Mmn). Two such zero semigroups M^O_mn and M^O_kl are isomorphic if and only if they have the same cardinality; that is, if and only if F is infinite or F is finite and mn = kl. We will return to the problem of distinguishing non-isomorphic M^A_mn and M^B_kl in Theorem 6.5. See Figure 7.
Figure 7: Egg box diagram of the linear sandwich semigroup M^O_22(Z2) or, equivalently, M^O_21(F4). Both are zero semigroups of size 16.
The next result describes the maximal D J -classes of MJmn . See also Figures 4 and 5.
Proposition 4.10. (i) If r = min(m, n), then DrJ = Dr ∩ P = {X ∈ P : rank(X) = r} is the unique
maximal D J -class of MJmn , and is a subsemigroup of MJmn .
(ii) If r < min(m, n), then the maximal D^J-classes of M^J_mn are those of the form D^J_X = {X} with rank(X) > r.
Proof. Part (i) follows immediately from Proposition 4.7(ii), the rule [M, A, N] ⋆ [K, B, L] = [M, AB, L], and the fact that Gr = Dr(Mr) is a subgroup of Mr.
For (ii), let X ∈ Mmn. Suppose first that rank(X) > r and that D^J_X ≤ D^J_Y. Then condition (ii) from Proposition 4.6 does not hold, since rank(JY J) ≤ rank(J) = r < rank(X). Similarly, rank(JY) < rank(X) and rank(Y J) < rank(X), so neither (iii) nor (iv) holds. Having eliminated (ii–iv), we deduce that (i) must hold; that is, X = Y, so D^J_X = {X} is indeed maximal. Conversely, suppose rank(X) ≤ r, and let Y = [Ir O; O D], where D ≠ O. Then rank(Y) > r, so D^J_Y = {Y} is maximal by the previous paragraph. But also JY J = J, and it follows that rank(X) ≤ r = rank(J) = rank(JY J), so that D^J_X < D^J_Y = {Y}, whence D^J_X is not maximal.
2
The description of the maximal D J -classes from Proposition 4.10 allows us to obtain information about
generating sets for MJmn and, in the case of finite F, about rank(MJmn ). In order to avoid confusion when
discussing generation, if Ω ⊆ Mmn , we will write hΩiJ for the subsemigroup of MJmn generated by Ω, which
consists of all products X1 ? · · · ? Xk , with k ≥ 1 and X1 , . . . , Xk ∈ Ω. If Σ ⊆ Mk for some k, we will
continue to write hΣi for the subsemigroup of Mk generated by Σ. For convenience, we will state two
separate results, according to whether r = min(m, n) or r < min(m, n). The next lemma will be useful as
the inductive step in the proofs of both Theorems 4.12 and 4.14. Recall that {em1 , . . . , emm } is the standard
basis of Vm = Fm .
Lemma 4.11. Suppose X ∈ Ds (Mmn ), where 0 ≤ s ≤ l − 1 and l = min(m, n). Then X = Y ? Z for some
Y ∈ Dl (Mmn ) and Z ∈ Ds+1 (Mmn ).
19
Proof. Let B = {v1, . . . , vn} be a basis of Vn such that {vs+1, . . . , vn} is a basis of ker(λX). Consider the linear transformation β ∈ Hom(Vn, Vm) defined by β(vi) = emi if 1 ≤ i ≤ s, β(vi) = 0 if s < i < n, and β(vn) = emm, noting that rank(β) = s + 1. The proof now breaks into two cases, depending on whether r < m or r = m.
Case 1. Suppose first that r < m. Let α ∈ Hom(Vn, Vm) be any linear transformation of rank l that extends the map eni ↦ λX(vi) (1 ≤ i ≤ s). One easily checks that α ◦ λJ ◦ β = λX.
Case 2. Now suppose r = m. Recall that we are assuming that r = m = n does not hold, so r = m < n. This time, we let α be any linear transformation of rank m = l that extends the map eni ↦ λX(vi) (1 ≤ i ≤ s), enr = enm ↦ 0. Then, again, one easily checks that α ◦ λJ ◦ β = λX.
2
Theorem 4.12. Suppose r < l = min(m, n). Then M^J_mn = hΩiJ, where Ω = {X ∈ Mmn : rank(X) > r}. Further, any generating set for M^J_mn contains Ω. If |F| = q < ∞, then
rank(M^J_mn) = |Ω| = Σ_{s=r+1}^{l} [m s]_q [n s]_q q^{s(s−1)/2} (q − 1)^s [s]_q!.
Proof. For convenience, we will assume that l = m ≤ n. The other case will follow by duality. We will also
denote Ds (Mmn ) simply by Ds for each 0 ≤ s ≤ m. Consider the statement:
H(s):
hΩiJ contains Ds ∪ · · · ∪ Dm = {X ∈ Mmn : rank(X) ≥ s}.
Note that Ω = Dr+1 ∪ · · · ∪ Dm , so H(s) is clearly true for r + 1 ≤ s ≤ m. Lemma 4.11 shows that H(s + 1)
implies H(s) for all 0 ≤ s ≤ m − 1. So we conclude that H(s) is true for all 0 ≤ s ≤ m. In particular, H(0)
says that MJmn = hΩiJ .
Since {X} is a maximal D J -class for any X ∈ Ω, it follows that any generating set of MJmn must contain Ω.
Thus, Ω is the minimal generating set with respect to both size and containment, so rank(MJmn ) = |Ω|. The
formula for |Ω| with |F| finite follows from Lemma 3.4.
2
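The counting in Theorem 4.12 is easy to check numerically. The following is a small added sketch (not part of the original text; helper names are ad hoc) that compares the q-binomial formula for |Ω| with a brute-force count of the m × n matrices of rank greater than r over Z2.

```python
from itertools import product

def q_int(k, q):                      # [k]_q = 1 + q + ... + q^(k-1)
    return sum(q**i for i in range(k))

def q_fact(k, q):                     # [k]_q!
    out = 1
    for i in range(1, k + 1):
        out *= q_int(i, q)
    return out

def q_binom(a, b, q):                 # Gaussian binomial [a b]_q
    return q_fact(a, q) // (q_fact(b, q) * q_fact(a - b, q))

def size_omega(m, n, r, q):
    # |Omega| = sum_{s=r+1}^{min(m,n)} [m s]_q [n s]_q q^{s(s-1)/2} (q-1)^s [s]_q!
    return sum(q_binom(m, s, q) * q_binom(n, s, q)
               * q**(s*(s - 1)//2) * (q - 1)**s * q_fact(s, q)
               for s in range(r + 1, min(m, n) + 1))

def rank_gf2(rows):                   # Gaussian elimination over GF(2)
    rows = [list(r) for r in rows]
    rank = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

m, n, r, q = 2, 3, 1, 2
brute = sum(1 for e in product([0, 1], repeat=m*n)
            if rank_gf2([e[i*n:(i+1)*n] for i in range(m)]) > r)
print(brute, size_omega(m, n, r, q))  # both should print 42
```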
In order to consider the case in which r = min(m, n), we first prove an intermediate result. There is a dual
version of the following lemma (dealing with the case in which r = n < m), but we will not state it.
Lemma 4.13. If r = m < n, then
(i) P2 = MJmn ,
(ii) P = P1 is a left ideal of MJmn ,
(iii) L J = L in MJmn .
Proof. Let X ∈ M^J_mn. As noted earlier, in the 2 × 2 block description X = [A B; C D] (where A ∈ Mr, and so on), the matrices C and D are empty (since r = m). So we write X = [A B]. Note that J = [Ir; O], so JX = [Ir; O][A B] = [A B; O O]. It follows that Row(JX) = Row(X) and, since X ∈ M^J_mn was arbitrary, this completes the proof of (i).
We immediately deduce P = P1 from (i). As in Proposition 4.3, the regular elements of M^J_mn are of the form [A AN] where A ∈ Mr and N ∈ Mr,n−r. We denote such a regular element by [A, N]. The proof of (ii) concludes with the easily checked observation that [A B] ⋆ [C, N] = [AC, N].
Part (iii) follows quickly from (i) and Theorem 4.5(ii).
2
Theorem 4.14. Suppose r = min(m, n) where m ≠ n. If |F| = q < ∞, then
rank(M^J_mn) = [L l]_q,
where L = max(m, n) and l = min(m, n).
Proof. Again, it suffices to assume that r = m < n, so l = m and L = n. We keep the notation of the
previous proof.
Let Ω be an arbitrary generating set for M^J_mn. Let X ∈ Dm(Mmn) be arbitrary. We claim that Ω must contain some element of L^J_X = LX. Indeed, consider an expression X = Y1 ⋆ · · · ⋆ Yk, where Y1, . . . , Yk ∈ Ω. If k = 1, then Y1 = X ∈ LX and the claim is established, so suppose k ≥ 2. Since Dm(Mmn) is a maximal D^J-class, we must have Yk ∈ Dm(Mmn). So Yk D^J X = (Y1 ⋆ · · · ⋆ Yk−1) ⋆ Yk, whence Yk L^J (Y1 ⋆ · · · ⋆ Yk−1) ⋆ Yk = X, by stability. By Lemma 4.13(iii), this completes the proof of the claim. In particular, |Ω| is bounded below by the number of L-classes contained in Dm(Mmn), which is equal to [n m]_q, by Lemma 3.4. Since Ω was an arbitrary generating set, it follows that rank(M^J_mn) ≥ [n m]_q = [L l]_q.
To complete the proof, it remains to check that there exists a generating set of the desired cardinality. For each N ∈ Mm,n−m, choose some AN ∈ Gm such that {AN : N ∈ Mm,n−m} generates Gm, and put XN = [AN, N] ∈ D^J_m. (This is possible since |Mm,n−m| = q^{m(n−m)} ≥ 2, and rank(Gm) ≤ 2 by Theorem 3.7.) It is easy to see that Ω1 = {XN : N ∈ Mm,n−m} is a cross-section of the L^J-classes in D^J_m. Also, choose some cross-section Ω2 = {Yi : i ∈ I} of the L^J-classes contained in Dm(Mmn) \ D^J_m. Then Ω = Ω1 ∪ Ω2 is a cross-section of the L^J-classes contained in Dm(Mmn). Since, therefore, |Ω| = [n m]_q, the proof will be complete if we can show that M^J_mn = hΩiJ. By Lemma 4.11, it suffices to show that hΩiJ contains Dm(Mmn). So suppose Z ∈ Dm(Mmn). Assume first that Z ∈ D^J_m, and write Z = [B, L], noting that B ∈ Gr. Choose N1, . . . , Nk ∈ Mm,n−m such that BA_L^{−1} = A_{N1} · · · A_{Nk}. Then one easily checks that Z = X_{N1} ⋆ · · · ⋆ X_{Nk} ⋆ XL. Now, suppose Z is not regular. Choose i ∈ I such that Z L Yi. By Lemma 3.3, Z = UYi for some U ∈ Gm. But then Z = [U V] ⋆ Yi for any V ∈ Mm,n−m. Since rank(U) = m, we have [U V] ∈ D^J_m ⊆ hΩiJ, whence Z ∈ hΩiJ, completing the proof.
2
Remark 4.15. By inspecting Figures 4 and 5, the reader may use Theorems 4.12 and 4.14 to locate the elements from a minimal generating set for M^J_mn.

5 Connection to (non-sandwich) matrix semigroups

Recall that J = Jnmr = [Ir O_{r,m−r}; O_{n−r,r} O_{n−r,m−r}] ∈ Mnm. Now let K = J^T = Jmnr = [Ir O_{r,n−r}; O_{m−r,r} O_{m−r,n−r}] ∈ Mmn. So Lemma 4.1 says that M^K_nm and M^J_mn are anti-isomorphic. Also, since J = JKJ and K = KJK, Theorem 2.15 says that we have the following commutative diagrams of semigroup homomorphisms where, for clarity, we write · for (non-sandwich) matrix multiplication: Ψ1 : X ↦ XJ and Ψ2 : X ↦ JX map (Mmn, ⋆J) onto (MmnJ, ·) and (JMmn, ·), respectively, and Φ1 : Y ↦ JY and Φ2 : Y ↦ Y J map these onto (JMmnJ, ⋆K), with Φ1 ◦ Ψ1 = Φ2 ◦ Ψ2; the second diagram is obtained from the first by replacing each semigroup by its set of regular elements.
In this section, we show that the various semigroups appearing in the above diagrams are all (equal to or isomorphic to) certain well-known (non-sandwich) matrix semigroups, and explore the consequences for the structure of the sandwich semigroups M^J_mn. First, we have a simple observation.
Lemma 5.1. We have (JMmnJ, ⋆K) = Reg(JMmnJ, ⋆K) ≅ (Mr, ·).
Proof. Let X = [A B; C D] ∈ Mmn. We have already observed that, whether or not X is regular, JXJ = [A O; O O] ∈ Mnm. The result follows quickly from the fact that [A O; O O] ⋆K [E O; O O] = [AE O; O O].
2
For integers k ≥ 1 and 0 ≤ l ≤ k, we write
Ck(l) = {X ∈ Mk : c_{l+1}(X) = · · · = c_k(X) = O},
Rk(l) = {X ∈ Mk : r_{l+1}(X) = · · · = r_k(X) = O}.
(As before, without causing confusion, we write O for any zero matrix when the dimensions are clear from
context.) These matrix semigroups have been studied in a number of contexts (see for example [72, 89]),
along with their associated isomorphic semigroups of linear transformations
Kk(l) = {α ∈ End(Vk) : ker(α) ⊇ W^⊥_kl},
Ik(l) = {α ∈ End(Vk) : im(α) ⊆ W_kl}.
Here we have written W^⊥_kl = span{e_{k,l+1}, . . . , e_{kk}}. Clearly, Ck(l) and Rk(l) are anti-isomorphic.
Lemma 5.2. We have Mmn J = Cm (r) and JMmn = Rn (r).
Proof. Let X = [A B; C D] ∈ Mmn. We have already observed that XJ = [A O; C O] ∈ Mm and JX = [A B; O O] ∈ Mn, and the result quickly follows.
2
Remark 5.3. A typical element X ∈ Rk(l) may be written as X = [A B; O O], where A ∈ Ml, B ∈ Ml,k−l and so on. One easily checks that multiplication of matrices in this form obeys the rule [A B; O O][E F; O O] = [AE AF; O O].
Comparing this to the discussion in Remark 4.2, we see that Rk (l) is isomorphic to the sandwich semigroup
MJlk where J = Jkll ∈ Mkl . (A dual statement holds for the matrix semigroups Ck (l).) Thus, every result
we obtain for linear sandwich semigroups leads to analogous results for the semigroups
Rk (l) and Ck (l). For
example, we deduce from Theorem 4.14 that rank(Ck(l)) = rank(Rk(l)) = [k l]_q if |F| = q < ∞. Note that
the sandwich semigroups MJmn pictured in Figure 5 satisfy r = min(m, n), so Figure 5 essentially pictures
eggbox diagrams of C3 (2) and R4 (2).
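The block multiplication rule of Remark 5.3 and the resulting identification with a sandwich product are easy to test numerically. The following is an added sketch (not from the original text; it assumes numpy and works over the integers, since the identity holds over any ring).

```python
import numpy as np

rng = np.random.default_rng(0)
k, l = 4, 2

A, B = rng.integers(0, 5, (l, l)), rng.integers(0, 5, (l, k - l))
E, F = rng.integers(0, 5, (l, l)), rng.integers(0, 5, (l, k - l))

Z = lambda a, b: np.zeros((a, b), dtype=int)
X = np.block([[A, B], [Z(k - l, l), Z(k - l, k - l)]])   # element of R_k(l)
Y = np.block([[E, F], [Z(k - l, l), Z(k - l, k - l)]])

# Multiplication rule from Remark 5.3: XY = [AE AF; O O].
assert np.array_equal(X @ Y, np.block([[A @ E, A @ F], [Z(k - l, k)]]))

# The same product computed as a sandwich product of the top l x k blocks,
# with J = J_{kll} = [I_l; O] in M_{kl}.
J = np.vstack([np.eye(l, dtype=int), Z(k - l, l)])
top_X, top_Y = np.hstack([A, B]), np.hstack([E, F])
assert np.array_equal(top_X @ J @ top_Y, np.hstack([A @ E, A @ F]))
print("Remark 5.3 multiplication rule verified")
```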
Remark 5.4. Similarly, one may think of an arbitrary linear sandwich semigroup M^J_mn itself as a (non-sandwich) matrix semigroup, as noted by Thrall in [92] and slightly adapted as follows. Consider the set M of all (m + n − r) × (m + n − r) matrices that may be written in 3 × 3 block form [O O O; B A O; D C O], where A ∈ Mr, D ∈ Mm−r,n−r (and from which the dimensions of the other sub-matrices may be derived). One easily checks that the matrices from M multiply according to the rule [O O O; B A O; D C O][O O O; F E O; H G O] = [O O O; AF AE O; CF CE O], so that [A B; C D] ↦ [O O O; B A O; D C O] determines an isomorphism (Mmn, ⋆J) → (M, ·). Note also that
M = R*_{m+n−r}(m) ∩ C_{m+n−r}(n),
where here we write R*_k(l) = {X ∈ Mk : r_1(X) = · · · = r_{k−l}(X) = O}. (It is easily seen that the map [A B; O O] ↦ [O O; B A] determines an isomorphism Rk(l) → R*_k(l).) Since using this isomorphic copy M of M^J_mn does not appear to confer any obvious advantage, we will make no further reference to it.
The regular elements of Ck (l) and Rk (l) (and also of Ik (l) and Kk (l)) were classified in [72]. The next result,
which gives a much simpler description of these regular elements, may be deduced from [72, Theorems 3.4
and 3.8] (and vice versa), but we include a simple proof for convenience.
Proposition 5.5. The regular elements of the semigroups Cm (r) = Mmn J and Rn (r) = JMmn are given
by
Reg(Cm (r)) = Reg(Mmn J) = P J = {X ∈ Mmn J : rank(JX) = rank(X)}
Reg(Rn (r)) = Reg(JMmn ) = JP = {X ∈ JMmn : rank(XJ) = rank(X)}.
Proof. We just prove the second statement as the other is dual. Let X = [A B; C D] ∈ Mmn, and put X′ = [A B; O O] ∈ Mmn. Then JX = JX′ = [A B; O O] ∈ Mn (where the zero matrices in the last expression have n − r rows). Since X′ clearly belongs to P2 (by Proposition 4.3), we have JMmn ⊆ JP2. Next, note that KJ = Jmmr, so that KJY = Y for all Y ∈ Mmn of the form Y = [A B; O O]. Now suppose X ∈ Mmn is such that JX ∈ Reg(JMmn). As above, we may assume that X = [A B; O O]. So (JX) = (JX)(JY)(JX) for some Y ∈ Mmn. But then X = K(JX) = K(JXJY JX) = XJY JX = X ⋆ Y ⋆ X, so that, in fact, X ∈ Reg(M^J_mn) = P. This completes the proof that Reg(JMmn) ⊆ JP. The reverse inclusion is easily checked.
Now suppose X = JY where Y = [A AN; O O] ∈ P. Then X = JY = [A AN; O O] (with appropriately sized zero matrices), so rank(X) = rank(JY) = rank(Y) = rank(JY J) = rank(XJ), where we have used Proposition 4.3. Conversely, suppose X ∈ JMmn is such that rank(XJ) = rank(X). As before, we may assume that X = JY where Y ∈ P2. Then rank(Y) = rank(JY) = rank(X) = rank(XJ) = rank(JY J), so that Y ∈ P. This completes the proof.
2
Remark 5.6. As always, the condition rank(JX) = rank(X), for X ∈ Mmn, is equivalent to saying that the rows r_{r+1}(X), . . . , r_m(X) belong to span{r_1(X), . . . , r_r(X)}, with a dual statement holding for the condition rank(XJ) = rank(X). The regular elements of the corresponding semigroups of linear transformations are given by
Reg(Km(r)) = {α ∈ Km(r) : im(α) ∩ W^⊥_mr = {0}},
Reg(In(r)) = {α ∈ In(r) : im(α) = α(Wnr)}.
Putting together all the above, we have proved the following. (In the following statement, we slightly abuse notation by still denoting the map Cm(r) = MmnJ → Mr by Φ1 and so on.)
Theorem 5.7. We have the following commutative diagrams of semigroup epimorphisms: the maps Ψ1 : [A B; C D] ↦ [A O; C O] and Ψ2 : [A B; C D] ↦ [A B; O O] are epimorphisms from M^J_mn onto Cm(r) and Rn(r), respectively, and the maps Φ1 : [A O; C O] ↦ A and Φ2 : [A B; O O] ↦ A are epimorphisms from Cm(r) and Rn(r) onto Mr, with Φ1 ◦ Ψ1 = Φ2 ◦ Ψ2; the second diagram is obtained by restricting to Reg(M^J_mn), Reg(Cm(r)) and Reg(Rn(r)), with the restricted maps written ψ1, ψ2, φ1 and φ2.
2
The remaining results of this section concern the regular subsemigroup P = Reg(M^J_mn). From now on, we denote by φ = φ1 ◦ ψ1 = φ2 ◦ ψ2 the induced epimorphism φ : P → Mr. Also, for X = [A B; C D] ∈ P, we write X̄ = φ(X) = A. The next result shows how the second commutative diagram from Theorem 5.7 may be used to identify Reg(M^J_mn) as a special kind of subdirect product of Reg(Cm(r)) and Reg(Rn(r)).
Proposition 5.8. There is an embedding
ψ : Reg(M^J_mn) → Reg(Cm(r)) × Reg(Rn(r)) : X ↦ (ψ1(X), ψ2(X)) = (XJ, JX).
As such, P = Reg(M^J_mn) is (isomorphic to) a pullback product of PJ = Reg(Cm(r)) and JP = Reg(Rn(r)). Namely,
P ≅ im(ψ) = {(ψ1(X), ψ2(X)) : X ∈ P} = {(Y, Z) ∈ PJ × JP : φ1(Y) = φ2(Z)}.
Proof. Clearly, ψ is a homomorphism. Now let X = [M, A, N] and Y = [K, B, L] be elements of P with ψ(X) = ψ(Y). Then
([A O; MA O], [A AN; O O]) = ψ(X) = ψ(Y) = ([B O; KB O], [B BL; O O]).
Comparing various coordinates, we deduce A = B, MA = KB and AN = BL, giving X = [A AN; MA MAN] = [B BL; KB KBL] = Y, completing the proof that ψ is injective.
To prove the statement about im(ψ), let X ∈ P and put Y = ψ1(X) = XJ and Z = ψ2(X) = JX. Then JY = JXJ = ZJ, so φ1(Y) = φ2(Z). Conversely, suppose (Y, Z) ∈ PJ × JP satisfies JY = ZJ. Say Y = UJ and Z = JV, where U = [M, A, N] and V = [K, B, L] belong to P. Then JY = JUJ = [A O; O O] and ZJ = JV J = [B O; O O], giving A = B. But then (Y, Z) = ψ(X), where X = [M, A, L] ∈ P.
2
Remark 5.9. We note that the previous result does not lift to a similar identification of M^J_mn as a pullback
product of Cm (r) and Rn (r) because the induced map
Ψ : MJmn → Cm (r) × Rn (r) : X 7→ (Ψ1 (X), Ψ2 (X)) = (XJ, JX)
is not injective. Indeed, if X = [A B; C D] ∈ Mmn, then Ψ(X) = ([A O; C O], [A B; O O]), with [A B; C E] mapping to the same pair for any other E ∈ Mm−r,n−r.
Remark 5.10. More generally, given a partial semigroup (S, ·, I, λ, ρ), the epimorphisms Ψ1 and Ψ2 from
Theorem 2.15(v) allow for the definition of a map
Ψ : (Sij , ?a ) → (Sij a, ·) × (aSij , ·) : x 7→ (xa, ax).
To say that Ψ is injective is to say that, for all x, y ∈ Sij , xa = ya and ax = ay together imply x = y.
Compare this to the notion of a weakly reductive semigroup S, in which, for every x, y ∈ S, the assumption
that xa = ya and ax = ay for all a ∈ S implies x = y. See for example [71, Definition 1.42].
We conclude this section with a simple but important observation that shows that P = Reg(MJmn ) is a homomorphic image of the direct product of a rectangular band by the (non-sandwich) matrix semigroup Mr .
(Recall that a rectangular band is a semigroup of the form S × T with product (s1 , t1 )(s2 , t2 ) = (s1 , t2 ).)
Its proof is routine, relying on Proposition 4.3 and the rule [M, A, N ] ? [K, B, L] = [M, AB, L]. For the
statement, recall that the kernel of a semigroup homomorphism φ : S → T (not to be confused with the
kernel of a linear transformation) is the congruence ker(φ) = {(x, y) ∈ S × S : φ(x) = φ(y)}. (A congruence
on a semigroup S is an equivalence relation ∼ for which x1 ∼ y1 and x2 ∼ y2 together imply x1 x2 ∼ y1 y2
for all x1 , x2 , y1 , y2 ∈ S; the quotient S/∼ of all ∼-classes is a semigroup under the induced operation. The
first homomorphism theorem for semigroups states that any semigroup homomorphism φ : S → T induces
an isomorphism S/ ker(φ) ∼
= im(φ).)
Proposition 5.11. Consider the semigroup U = Mm−r,r × Mr × Mr,n−r under the operation defined by
(M, A, N ) (K, B, L) = (M, AB, L).
Define an equivalence ∼ on U by
(M, A, N ) ∼ (K, B, L) ⇔ A = B, M A = KB and AN = BL.
Then ∼ is a congruence on U , and the map
ξ : U → P = Reg(M^J_mn) : (M, A, N) ↦ [M, A, N] = [A AN; MA MAN]
is an epimorphism with ker(ξ) = ∼. In particular, P ≅ U/∼.
2
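Proposition 5.11 is easy to illustrate computationally. The sketch below is an addition (not from the original text; names are ad hoc) which checks, over F = Z2 and with numpy, that ξ turns the product (M, A, N)(K, B, L) = (M, AB, L) into the sandwich product ⋆J on random samples.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 3, 4, 2
J = np.zeros((n, m), dtype=int)
J[:r, :r] = np.eye(r, dtype=int)                 # J = J_{nmr}

def xi(M, A, N):
    # [M, A, N] = [[A, AN], [MA, MAN]], an m x n matrix with entries mod 2
    top = np.hstack([A, A @ N]) % 2
    bot = np.hstack([M @ A, M @ A @ N]) % 2
    return np.vstack([top, bot])

for _ in range(100):
    M, K = rng.integers(0, 2, (m - r, r)), rng.integers(0, 2, (m - r, r))
    A, B = rng.integers(0, 2, (r, r)), rng.integers(0, 2, (r, r))
    N, L = rng.integers(0, 2, (r, n - r)), rng.integers(0, 2, (r, n - r))
    lhs = (xi(M, A, N) @ J @ xi(K, B, L)) % 2    # [M, A, N] * [K, B, L]
    rhs = xi(M, (A @ B) % 2, L)                  # [M, AB, L]
    assert np.array_equal(lhs, rhs)
print("xi respects the product on 100 random samples")
```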
6 The regular subsemigroup
In this section, we continue to study the subsemigroup P = Reg(MJmn ) consisting of all regular elements
of MJmn . Eggbox diagrams of P = Reg(MJ43 (Z2 )) are given in Figure 8 for values of 0 ≤ rank(J) ≤ 3;
more examples can be seen by inspecting the regular D J -classes in Figures 4 and 5. Comparing Figure 8
with Figure 3, which pictures the full linear monoids Mr (Z2 ) for 0 ≤ r ≤ 3, an interesting pattern seems
to emerge: namely, that P = Reg(MJ43 (Z2 )) appears to be a kind of “inflation” of Mr , where r = rank(J).
One of the goals of this section is to explain this phenomenon, and we do so by further exploring the map
φ : P → Mr : X = [M, A, N] ↦ X̄ = A
defined after Theorem 5.7. We also calculate |P|, rank(P), and the number and sizes of various Green's classes. As before, we assume that J = Jnmr = [Ir O; O O] ∈ Mnm. Since M^J_mn is just a zero semigroup if r = 0, we generally assume that r ≥ 1.
Figure 8: Egg box diagrams (drawn sideways) of the regular linear sandwich semigroups P = Reg(MJ43 (Z2 )),
where rank(J) = 0, 1, 2, 3 (top to bottom).
Now, Theorem 4.5 enables us to immediately describe Green’s relations on P = Reg(MJmn ). Since P is a
regular subsemigroup of MJmn , the R, L , H relations on P are just the restrictions of the corresponding
relations on MJmn (see for example [40, 44]), and it is easy to check that this is also true for the D = J
relation in this case. So if X ∈ P and K is one of R, L, H, D, we will continue to write K^J for the K relation on P, and K^J_X for the K^J-class of X in P. Parts (i–iv) of the next result also appear
in [9, Theorem 2.3].
Corollary 6.1. If X ∈ P , then
(i) R^J_X = R_X ∩ P = {Y ∈ P : Col(X) = Col(Y)},
(ii) L^J_X = L_X ∩ P = {Y ∈ P : Row(X) = Row(Y)},
(iii) H^J_X = H_X ∩ P = H_X = {Y ∈ P : Col(X) = Col(Y) and Row(X) = Row(Y)},
(iv) D^J_X = D_X ∩ P = {Y ∈ P : rank(X) = rank(Y)}.
The D^J-classes of P form a chain: D^J_0 < · · · < D^J_r, where D^J_s = {X ∈ P : rank(X) = s} for each 0 ≤ s ≤ r.
2
Also, the regularity of P means that P inherits the stability property from MJmn . The next result gives
some combinatorial information about the size of P , and of various Green’s classes in P , in the case that F
is finite. Recall that {ek1 , . . . , ekk } is the standard basis of Vk = Fk and that Wks = span{ek1 , . . . , eks } for
each 0 ≤ s ≤ k.
Proposition 6.2. Suppose |F| = q < ∞. Let X ∈ P with rank(X) = s. Then
(i) |R^J_X| = q^{s(n−r)} q^{s(s−1)/2} (q − 1)^s [s]_q! [r s]_q,
(ii) |L^J_X| = q^{s(m−r)} q^{s(s−1)/2} (q − 1)^s [s]_q! [r s]_q,
(iii) |H^J_X| = |Gs| = q^{s(s−1)/2} (q − 1)^s [s]_q!,
(iv) D^J_X = D^J_s is the union of:
(a) q^{s(m−r)} [r s]_q R^J-classes,
(b) q^{s(n−r)} [r s]_q L^J-classes,
(c) q^{s(m+n−2r)} [r s]_q^2 H^J-classes,
(v) |D^J_X| = |D^J_s| = q^{s(m+n−2r)} q^{s(s−1)/2} (q − 1)^s [s]_q! [r s]_q^2.
Consequently, |P| = |Reg(M^J_mn)| = Σ_{s=0}^{r} q^{s(m+n−2r)} q^{s(s−1)/2} (q − 1)^s [s]_q! [r s]_q^2.
Proof. We start with (i). Since |R^J_X| = |R^J_Y| for all Y ∈ D^J_X = D^J_s, we may assume X = Jmns. Now, Col(X) = Wms. By Proposition 4.3 and Corollary 6.1, we have
R^J_X = {Y ∈ P : Y R^J X} = {Y ∈ Mmn : Col(X) = Col(Y), Col(Y) = Col(Y J), Row(Y) = Row(JY)} = {Y ∈ Mmn : Col(Y) = Col(Y J) = Wms},
since if Col(Y) = Wms, then Y is of the form Y = [A B; O_{m−s,r} O_{m−s,n−r}] for some A ∈ Msr and B ∈ Ms,n−r, in which case JY = [A B; O_{n−s,r} O_{n−s,n−r}] automatically has the same row space as Y.
Now consider some Y ∈ R^J_X. As noted above, we must have Y = [A B; O O] for some A ∈ Msr and B ∈ Ms,n−r. Since Y J = [A O; O O], the condition Col(Y J) = Wms is equivalent to Col(A) = Vs. In particular, there is no restriction on the entries of B, so B may be chosen (arbitrarily, and independently of A) in q^{s(n−r)} ways. Also, dim(Row(A)) = dim(Col(A)) = s. So A may be specified by listing its rows (in order), which are s linearly independent row vectors from F^r. The number of possible choices for A is therefore (q^r − 1)(q^r − q) · · · (q^r − q^{s−1}) = q^{s(s−1)/2} (q − 1)^s [s]_q! [r s]_q. Multiplying these two values gives (i).
Part (ii) is dual to (i). Part (iii) follows directly from Corollary 6.1(iii) and Lemma 3.4(iii). Parts (a) and (b) of (iv) follow by dividing |L^J_X| and |R^J_X| by |H^J_X|, respectively. Part (c) follows from (a) and (b). Part (v) follows from (iii) and part (c) of (iv). The formula for |P| is obtained by adding the sizes of the D^J-classes.
D J -classes.
2
Recall that, for X = [M, A, N ] ∈ P , we write X = φ(X) = A ∈ Mr . We extend this notation to subsets
of P , so if Ω ⊆ P , we write Ω = {X : X ∈ Ω}. We now show how the epimorphism φ : P → Mr may be
used to relate Green’s relations on the semigroups P and Mr . If X, Y ∈ P and K is one of R, L , H , D,
b J = φ−1 (K ) = {Y ∈ P : X KbJ Y } the KbJ -class of X
we say X KbJ Y if XK Y (in Mr ). Denote by K
X
X
in P . We first need a technical result.
Lemma 6.3. Let X, Y ∈ P. If X D̂^J Y, then |R̂^J_X| = |R̂^J_Y| and |L̂^J_X| = |L̂^J_Y|.
Proof. By duality, it suffices to prove the statement about R̂^J-classes. Now, X D̂^J Y means that X L̂^J W R̂^J Y for some W ∈ P. Since R̂^J_W = R̂^J_Y, we may assume without loss of generality that X L̂^J Y. Write X = [M, A, N] and Y = [K, B, L]. By definition X L̂^J Y means that A L B in Mr, so A = UB for some U ∈ Gr by Lemma 3.3. Now let Z = [M′, A′, N′] ∈ R̂^J_X, and define α(Z) = [M′U, U^{−1}A′, N′]. It is easy to check that for any other representation Z = [M″, A″, N″], we have [M″U, U^{−1}A″, N″] = [M′U, U^{−1}A′, N′], so that α(Z) is well-defined. Also,
Z R̂^J X ⇒ A′ R A ⇒ U^{−1}A′ R U^{−1}A = B ⇒ α(Z) R̂^J Y.
Thus α is a map α : R̂^J_X → R̂^J_Y. It is easy to check that R̂^J_Y → R̂^J_X : [M′, A′, N′] ↦ [M′U^{−1}, UA′, N′] is the inverse mapping of α. We conclude that |R̂^J_X| = |R̂^J_Y|.
2
For the proof of the next result, we note that stability of M implies that A2 DA ⇔ A2 H A for all A ∈ Mr .
We also use the fact that an H -class H of a semigroup S is a group if and only if x2 ∈ H for some (and
hence for all) x ∈ H [44, Theorem 2.2.5]. Recall that a k × l rectangular band is a semigroup of the form
S × T with product (s1 , t1 )(s2 , t2 ) = (s1 , t2 ), where |S| = k and |T | = l. A k × l rectangular group with
respect to a group G is a direct product of a k × l rectangular band with G.
For the proof of the next result (and elsewhere), it will be convenient to define a number of equivalence
relations. For A ∈ Mr , we define equivalences ∼A and ≈A on Mm−r,r and Mr,n−r (respectively) by
M1 ∼A M2 ⇔ M1 A = M2 A
and
N1 ≈A N2 ⇔ AN1 = AN2 .
Theorem 6.4. Suppose |F| = q < ∞. Let X ∈ P = Reg(M^J_mn) and put s = rank(X).
(i) R̂^J_X is the union of q^{s(m−r)} R^J-classes of P.
(ii) L̂^J_X is the union of q^{s(n−r)} L^J-classes of P.
(iii) Ĥ^J_X is the union of q^{s(m+n−2r)} H^J-classes of P, each of which has size |Gs| = q^{s(s−1)/2} (q − 1)^s [s]_q!. The map φ : P → Mr is injective when restricted to any H^J-class of P.
(iv) If HX̄ is a non-group H-class of Mr, then each H^J-class of P contained in Ĥ^J_X is a non-group.
(v) If HX̄ is a group H-class of Mr, then each H^J-class of P contained in Ĥ^J_X is a group isomorphic to Gs; further, Ĥ^J_X is a q^{s(m−r)} × q^{s(n−r)} rectangular group with respect to Gs.
(vi) D̂^J = D^J, and D̂^J_X = D^J_X = D^J_s = {Y ∈ P : rank(Y) = s} is the union of:
(a) [r s]_q R̂^J-classes (and the same number of L̂^J-classes) of P,
(b) q^{s(m−r)} [r s]_q R^J-classes of P,
(c) q^{s(n−r)} [r s]_q L^J-classes of P,
(d) [r s]_q^2 Ĥ^J-classes of P,
(e) q^{s(m+n−2r)} [r s]_q^2 H^J-classes of P.
Proof. First observe that if ρ : S → T is an epimorphism of semigroups, and if K is a K -class of T where
K is one of R, L , H , then ρ−1 (K) is a union of K -classes of S. Throughout the proof, we write
A
AN
,
X = [M, A, N ] =
M A M AN
b J = D J immediately follows.
so A ∈ Mr satisfies rank(A) = rank(JXJ) = rank(X) = s. We note that D
bJ . Since |R
bJ | =
(i) By the first observation, it suffices to count the number of R J -classes contained in R
X
X
bJ | for all Y ∈ DsJ by Lemma 6.3, it follows that each RbJ -class of DsJ contains the same number
|R
Y
of R J -classes. By Lemma 3.4, Ds (Mr ) is the union of [ rs ]q R-classes (and the same number of L classes), so it follows that DsJ is the union of [ rs ]q RbJ -classes (and the same number of LbJ -classes).
By Proposition 6.2, DsJ is the union of q s(m−r) [ rs ]q R J -classes. Dividing these, it follows that each
RbJ -class of DsJ is the union of q s(m−r) R J -classes.
(ii) This is dual to (i).
b J follows immediately from (i)
(iii) The statement concerning the number of H J -classes contained in H
X
and (ii), and the size of these H J -classes was given in Proposition 6.2. Next, for any B ∈ Mr with
BH A, it is easy to check that [M, B, N ]H J X. So the set Ω = {[M, B, N ] : B ∈ HA } is contained
J |, we see that H J = Ω. For any Z = [M, B, N ] ∈ Ω = H J , we
in HXJ . Since |Ω| = |HA | = |Gs | = |HX
X
X
have φ(Z) = B, so it follows that φ|H J is injective.
X
b J be arbitrary. Since
(iv) Suppose HA = HX is a non-group H -class of Mr , and let Y = [K, B, L] ∈ H
X
J
Y Hb X, it follows that B = Y H X = A. Since HB = HA is not a group, we have B 2 6∈ HB , whence
B 2 6∈ DB and rank(B 2 ) < rank(B) = rank(A) = s. But then Y 2 = [K, B 2 , L] 6∈ DsJ = DYJ , so that
Y 2 6∈ HYJ , and we conclude that HYJ is not a group.
2
b J , so rank(Y ? Y ) = rank(Y 2 ) = rank(Y ) =
(v) Suppose HX is a group. Then Y ∈ HX for any Y ∈ H
X
rank(Y ), giving Y ? Y D J Y , so that Y ? Y ∈ HYJ and HYJ is a group. By (iii), the restriction of φ to
HYJ yields an isomorphism onto HY ∼
= Gs .
Let E ∈ Mr be the identity element of the group HA . Let ME ⊆ Mm−r,r (resp., NE ⊆ Mr,n−r ) be
a cross-section of the ∼E -classes (resp., ≈E -classes) in Mm−r,r (resp., Mr,n−r ). It is easy to check
b J may be uniquely represented as Y = [K, B, L] for some K ∈ ME , B ∈ HA and
that every Y ∈ H
X
L ∈ NE . It follows that the map
b JA : (K, B, L) 7→ [K, B, L]
ME × HA × NE → H
is a well-defined isomorphism, where the (rectangular group) product on ME × HA × NE is defined
by (K1 , B1 , L1 ) · (K2 , B2 , L2 ) = (K1 , B1 B2 , L2 ). We have already observed that HA ∼
= Gs , and the
dimensions of the rectangular band ME × NE follow from parts (i–iii) together with the observation
b J for each K ∈ ME and L ∈ NE .
that {[K, B, L] : B ∈ HA } is an H J -class contained in H
X
b J = D J . We proved (a) while proving (i), above. Parts (b), (c) and (e)
(vi) We have already noted that D
were proved in Proposition 6.2. Part (d) follows from (a).
2
The previous result explains the “inflation” phenomenon discussed at the beginning of this section; see also
Figure 8. As an immediate corollary of Theorem 6.4, we may now completely classify the isomorphism
classes of finite linear sandwich semigroups.
Theorem 6.5. Let F1 and F2 be two finite fields with |F1 | = q1 and |F2 | = q2 , let m, n, k, l ≥ 1, and let
A ∈ Dr (Mnm ) and B ∈ Ds (Mlk ). The following are equivalent:
(i) M^A_mn(F1) ≅ M^B_kl(F2),
(ii) one of the following holds:
(a) r = s = 0 and q1^{mn} = q2^{kl}, or
(b) r = s ≥ 1, (m, n) = (k, l), and q1 = q2.
Further, if r ≥ 1, then M^A_mn(F1) ≅ M^B_kl(F2) if and only if Reg(M^A_mn(F1)) ≅ Reg(M^B_kl(F2)).
B
∼
Proof. Again, if r 6= s, then counting the regular D A - and D B -classes shows that MA
mn (F1 ) 6= Mkl (F2 ).
For the remainder of the proof, we assume r = s. Suppose first that r = s = 0. Then MA
mn (F1 ) and
mn and q kl , are equal.
MB
(F
)
are
both
zero
semigroups
and
so
are
isomorphic
if
and
only
if
their
sizes,
q
2
1
2
kl
For the remainder of the proof, we assume r = s ≥ 1, and write DtA and DtB for the relevant regular D A B
and D B -classes in MA
mn (F1 ) and Mkl (F2 ) for each 0 ≤ t ≤ r = s.
∼ F× , the multiplicative
By Theorem 6.4(v), any group H A -class contained in D1A is isomorphic to G1 (F1 ) =
1
×
A
∼
6
group of F1 . Since |F×
1 | = q1 − 1 and |F2 | = q2 − 1, it follows that if q1 6= q2 , then Reg(Mmn (F1 )) =
B
A
B
∼
∼
Reg(Mkl (F2 )) and, hence, Mmn (F1 ) =
6 Mkl (F2 ). Now suppose q1 = q2 (so F1 = F2 ), and write q = q1 .
By Theorem 6.4(vi), D1A (resp., D1B ) contains q m−r [ 1r ]q R A -classes (resp., q k−r [ 1r ]q R B -classes). It follows
B
A
∼
∼ B
that if m 6= k (or, dually, if n 6= l), then Reg(MA
mn (F1 )) 6= Reg(Mkl (F2 )) and, hence, Mmn (F1 ) 6= Mkl (F2 ).
∼ B
Conversely, if (b) holds, then MA
mn (F1 ) = Mkl (F2 ) by Lemma 4.1(ii).
A
B
∼ B
∼
For the final statement, first note that MA
mn (F1 ) = Mkl (F2 ) clearly implies Reg(Mmn (F1 )) = Reg(Mkl (F2 )).
∼
In the previous paragraph, we showed that the negation of (b) implies Reg(MA
6 Reg(MB
mn (F1 )) =
kl (F2 )).
This completes the proof.
2
B
∼
Remark 6.6. Of course, if rank(A) = rank(B) = 0, then Reg(MA
mn (F1 )) = {Omn } = Reg(Mkl (F2 )) =
{Okl }, regardless of m, n, k, l, q1 , q2 . So the final clause of Theorem 6.5 does not hold for r = 0.
Remark 6.7. The infinite case is not as straight-forward, since |Mmn (F)| = |F| for all m, n ≥ 1, and
×
since it is possible for two non-isomorphic fields F1 , F2 to have isomorphic multiplicative groups F×
1 , F2 (for
example, Q and Z3 (x) both have multiplicative groups isomorphic to Z2 ⊕ F , where F is a free abelian group
of countably infinite rank). So we have the following isomorphisms:
(i) M^A_mn(F1) ≅ M^B_kl(F2) if m, n, k, l ≥ 1, |F1| = |F2|, and rank(A) = rank(B) = 0 — indeed, both sandwich semigroups are zero semigroups of size |F1| = |F2|;
(ii) M^A_mn(F1) ≅ M^B_mn(F2) if F1^× ≅ F2^× and rank(A) = rank(B) = 1 — indeed, when J = Jnm1 = [I1 O; O O], sandwich products X ⋆J Y involve only field multiplication and no addition: if X = (a_{ij}) and Y = (b_{ij}), then the (i, j) entry of X ⋆J Y is a_{i1}b_{1j}.
We leave it as an open problem to completely classify the isomorphism classes of linear sandwich semigroups
over infinite fields. But we make two simple observations:
(iii) as in the proof of Theorem 6.5, if M^A_mn(F1) ≅ M^B_kl(F2), then we must have rank(A) = rank(B);
(iv) if M^A_mn(F1) ≅ M^B_kl(F2) with rank(A) = rank(B) = r ≥ 2, we must have F1 ≅ F2 (since the maximal subgroups of M^A_mn(F1) are isomorphic to Gs(F1) for 0 ≤ s ≤ r, and since Gs(F1) ≅ Gs(F2) implies F1 ≅ F2 for s ≥ 2 [17]).
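The observation in Remark 6.7(ii) — that a rank-1 sandwich element makes ⋆J use only multiplication — can be seen concretely as follows. This is an added sketch using numpy; since the identity holds over any ring, plain integers are used in place of a field.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 4
J = np.zeros((n, m), dtype=int)
J[0, 0] = 1                                       # J = J_{nm1}

X = rng.integers(0, 7, (m, n))
Y = rng.integers(0, 7, (m, n))
lhs = X @ J @ Y                                   # X *_J Y
rhs = np.outer(X[:, 0], Y[0, :])                  # (i, j) entry is x_{i1} * y_{1j}
assert np.array_equal(lhs, rhs)
print("rank-1 sandwich product = first column of X times first row of Y")
```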
In what follows, the top D J -class of P = Reg(MJmn ) plays a special role. We write D for this D J -class, so
D = DrJ = φ−1 (Gr ) = {X ∈ P : rank(X) = r}.
As a special case of Theorem 6.4(v), D is a q r(m−r) ×q r(n−r) rectangular group with respect to Gr . Since D is
the pre-image of Gr under the map φ : P → Mr , we may think of D as a kind of “inflation” of Gr , the group
of units of Mr . In fact, more can be said along these lines. Recall again that the variant of a semigroup S
with respect to an element a ∈ S is the semigroup S a with underlying set S and operation ?a defined by
x ?a y = xay for all x, y ∈ S. Recall also that an element a ∈ S of a (necessarily regular) semigroup S is
regularity preserving if the variant S a is regular. The set RP(S) of all regularity preserving elements of S
was studied in [38,50]; we will not go into the details here, but it was explained in [50] that RP(S) is a useful
alternative to the group of units in the case that S is not a monoid (as with P when r = m = n does not
hold). Because of this, it is significant that D is equal to RP(P ), the set of all regularity preserving elements
of P = Reg(MJmn ), as we will soon see. We now state a result from [50] concerning regularity preserving
elements. Recall that an element u of a semigroup S is a mididentity if xuy = xy for all x, y ∈ S [98]; of
course for such an element, ?u is just the original semigroup operation.
Proposition 6.8 (Khan and Lawson [50]). Let S be a regular semigroup.
(i) An element a ∈ S is regularity preserving if and only if aH e for some regularity preserving idempotent
e ∈ E(S). (In particular, RP(S) is a union of groups.)
(ii) An idempotent e ∈ E(S) is regularity preserving if and only if f eRf L ef for all idempotents f ∈ E(S).
(iii) Any mididentity is regularity preserving.
2
In order to avoid confusion when discussing idempotents, if Ω ⊆ Mmn , we will write
EJ (Ω) = {X ∈ Ω : X = X ? X}
for the set of idempotents from Ω with respect to the ? operation on MJmn . If Σ ⊆ Mk for some k, we will
continue to write E(Σ) = {A ∈ Σ : A = A2 } for the set of idempotents from Σ with respect to the usual
matrix multiplication.
Lemma 6.9.
(i) EJ (MJmn ) = EJ (P ) = {[M, A, N ] : A ∈ E(Mr ), M ∈ Mm−r,r , N ∈ Mr,n−r }.
(ii) EJ (D) = {[M, Ir , N ] : M ∈ Mm−r,r , N ∈ Mr,n−r } is a q r(m−r) × q r(n−r) rectangular band.
(iii) Each element from EJ (D) is a mididentity for both MJmn and P .
(iv) D = RP(P ) is the set of all regularity-preserving elements of P .
Proof. Note that all idempotents are regular. If X = [M, A, N ] ∈ P , then X ? X = [M, A2 , N ], so
X = X ? X if and only if A = A2 , giving (i). Part (ii) follows from (i), since Ir is the only idempotent
from the group Gr = Dr (Mr ). Using (ii), it is easy to check by direct computation that X ? Y ? Z = X ? Z
for all X, Z ∈ Mmn and Y ∈ EJ (D), giving (iii). Finally, to prove (iv), note that by Proposition 6.8(i), it
suffices to show that EJ (RP(P )) = EJ (D). By (iii) and Proposition 6.8(iii), we have EJ (D) ⊆ EJ (RP(P )).
Conversely, suppose X ∈ EJ (RP(P )). Let Y ∈ EJ (D). By Proposition 6.8(ii), and the fact that L J ⊆ D J ,
X ?Y D J Y . It follows that r = rank(Y ) = rank(XJY ) ≤ rank(X) ≤ r, giving rank(X) = r, and X ∈ EJ (D).
This shows that EJ (RP(P )) ⊆ EJ (D), and completes the proof.
2
We may now calculate the rank of P = Reg(MJmn ) in the case of finite F. For the following proof, recall
from [46] that the relative rank rank(S : U ) of a semigroup S with respect to a subset U ⊆ S is defined to
be the minimum cardinality of a subset V ⊆ S such that S = hU ∪ V i.
Theorem 6.10. Suppose |F| = q < ∞. If 1 ≤ r ≤ min(m, n) and we do not have r = m = n, then
rank(P ) = rank(Reg(MJmn )) = q r(L−r) + 1,
where L = max(m, n).
Proof. Since D is a subsemigroup of P and P \ D is an ideal, it quickly follows that rank(P) = rank(D) + rank(P : D). It is well-known [84] that a rectangular group R = (S × T) × G satisfies rank(R) = max(|S|, |T|, rank(G)). Since D is a q^{r(m−r)} × q^{r(n−r)} rectangular group with respect to Gr, and since rank(Gr) ≤ 2 by Theorem 3.7, it immediately follows that rank(D) = q^{r(L−r)}. Since hDiJ = D ≠ P (as r ≥ 1), we have rank(P : D) ≥ 1, so the proof will be complete if we can show that P = hD ∪ {X}iJ for some X ∈ P. With this in mind, let X ∈ D^J_{r−1} be arbitrary. Note that D̄ = {Ȳ : Y ∈ D} = Gr, and X̄ ∈ Dr−1(Mr). It follows from Theorem 3.7 that Mr = hD̄ ∪ {X̄}i. Now let Y = [M, A, N] ∈ P be arbitrary. Choose Z1, . . . , Zk ∈ D ∪ {X} such that A = Z̄1 · · · Z̄k. Then Y = [M, Ir, N] ⋆ Z1 ⋆ · · · ⋆ Zk ⋆ [M, Ir, N], with [M, Ir, N] ∈ D.
2
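Theorem 6.10 can be verified exhaustively in the smallest nontrivial case. The sketch below is an addition (not from the original text): it computes P = Reg(M^J_22(Z2)) with rank(J) = 1 and finds the least size of a generating set by closing subsets under ⋆; the theorem predicts q^{r(L−r)} + 1 = 3.

```python
from itertools import combinations, product

def mul(X, Y):
    return tuple(tuple(sum(X[i][k]*Y[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

J = ((1, 0), (0, 0))
def star(X, Y):                       # X * Y = X J Y
    return mul(mul(X, J), Y)

mats = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
P = [X for X in mats if any(star(star(X, Y), X) == X for Y in mats)]

def generated(gens):                  # subsemigroup of (P, *) generated by gens
    S = set(gens)
    while True:
        new = {star(X, Y) for X in S for Y in S} - S
        if not new:
            return S
        S |= new

least = min(k for k in range(1, len(P) + 1)
            if any(generated(c) == set(P) for c in combinations(P, k)))
print(len(P), least)                  # expect: 5 3
```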
Remark 6.11. If r = 0, then P = {O}, while if r = m = n, then P = Mn . So rank(P ) is trivial
in the former case, and well-known in the latter (see Theorem 3.7). As in Remark 5.3, we deduce that
rank(Reg(Ck (l))) = rank(Reg(Rk (l))) = q l(k−l) + 1 for |F| = q < ∞.
7 The idempotent generated subsemigroup
In this section, we investigate the idempotent generated subsemigroup hEJ(M^J_mn)iJ of M^J_mn; we write E^J_mn for this idempotent generated subsemigroup. Our main results include a proof that E^J_mn = (P \ D) ∪ EJ(D) and a calculation of rank(E^J_mn) and idrank(E^J_mn); in particular, we show that these two values are equal. Since the solution to every problem we consider is trivial when r = 0, and well-known when r = m = n, we will continue to assume that r ≥ 1 and that r = m = n does not hold. To simplify notation, we will write E = EJ(M^J_mn) = EJ(P), so E^J_mn = hEiJ. We begin by calculating |E| in the case of finite F, for which we need the following formulae for |E(Ds(Mr))|. Although the next result might already be known, we are unaware of a reference and include a simple proof for convenience.
Lemma 7.1. Suppose |F| = q < ∞. If 0 ≤ s ≤ r, then |E(Ds (Mr ))| = q s(r−s) [ rs ]q . Consequently,
|E(Mr)| = Σ_{s=0}^{r} q^{s(r−s)} [r s]_q.
Proof. To specify an idempotent endomorphism α ∈ End(Vr ) of rank s, we first choose W = im(α), which
is a subspace of dimension s and may be chosen in [ rs ]q ways, and we note that α must map W identically.
If {v1 , . . . , vr } is an arbitrary basis for Vr , such that {v1 , . . . , vs } is a basis of W , then α may map each of
vs+1 , . . . , vr arbitrarily into W , and there are (q s )r−s ways to choose these images.
2
Proposition 7.2. Suppose |F| = q < ∞. If 0 ≤ s ≤ r, then |EJ (DsJ )| = q s(m+n−r−s) [ rs ]q . Consequently,
|EJ(M^J_mn)| = Σ_{s=0}^{r} q^{s(m+n−r−s)} [r s]_q.
Proof. Parts (iv) and (v) of Theorem 6.4 say that an H^J-class H^J_X ⊆ D^J_s is a group (so contains an idempotent) if and only if HX̄ is a group H-class of Mr, and that there are q^{s(m−r)} × q^{s(n−r)} idempotents of D^J_s corresponding to each rank s idempotent of Mr, of which there are q^{s(r−s)} [r s]_q by Lemma 7.1. The result quickly follows.
2
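Proposition 7.2 is also easy to confirm by brute force in a small case. The following added sketch (the case m = n = 2, r = 1, q = 2 is chosen arbitrarily) counts the matrices X with X ⋆ X = XJX = X; the formula gives 1 + q^2 = 5.

```python
from itertools import product

def mul(X, Y):
    return tuple(tuple(sum(X[i][k]*Y[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

J = ((1, 0), (0, 0))
mats = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
idempotents = [X for X in mats if mul(mul(X, J), X) == X]   # X = X * X
print(len(idempotents))     # expect 5
```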
We now describe the idempotent generated subsemigroup of MJmn .
Theorem 7.3. We have E^J_mn = hEJ(M^J_mn)iJ = (P \ D) ∪ EJ(D).
Proof. Suppose X1 , . . . , Xk ∈ E = EJ (MJmn ), and write Xi = [Mi , Ai , Ni ] for each i. So Ai ∈ E(Mr ) for
each i. Then X1 ? · · · ? Xk = [M1 , A1 · · · Ak , Nk ]. If any of A1 , . . . , Ak belongs to Mr \ Gr , then so too does
A1 · · · Ak , so that X1 ? · · · ? Xk ∈ P \ D. If all of A1 , . . . , Ak belong to Gr , then A1 = · · · = Ak = Ir , so
J ⊆ (P \ D) ∪ E (D). Conversely, it suffices to
X1 ? · · · ? Xk = [M1 , Ir , Nk ] ∈ EJ (D). This shows that Emn
J
J
show that P \ D ⊆ Emn , so suppose X ∈ P \ D, and write X = [M, A, N ]. Since X 6∈ D, we must have
rank(A) = rank(X) < r. But then A ∈ Mr \ Gr , so that A = B1 · · · Bl for some B1 , . . . , Bl ∈ E(Mr ) by
Theorem 3.5. It follows that X = [M, B1 , N ] ? · · · ? [M, Bl , N ], with all [M, Bi , N ] ∈ E.
2
Remark 7.4. Recall (see Theorem 3.5) that En = hE(Mn )i = (Mn \ Gn ) ∪ {In }. Theorem 7.3 is a pleasing
analogue of that result, since {In } = E(Gn ), where Gn is the top D-class of Mn . Also, Gn = G(Mn ) =
RP(Mn ) and, while P has no group of units as it is not a monoid, it is still the case that D = RP(P ).
J , the next natural task is to calculate its
Now that we have described the elements of the semigroup Emn
rank and idempotent rank.
Theorem 7.5. Suppose |F| = q < ∞. Then
rank(E^J_mn) = idrank(E^J_mn) = q^{r(L−r)} + (q^r − 1)/(q − 1),
where L = max(m, n).
Proof. As in the proof of Theorem 6.10, we have rank(E^J_mn) = rank(EJ(D)) + rank(E^J_mn : EJ(D)). Since EJ(D) is a q^{r(m−r)} × q^{r(n−r)} rectangular band (see Lemma 6.9(ii)), we again deduce from [84] that rank(EJ(D)) = idrank(EJ(D)) = q^{r(L−r)}. So it remains to show that:
(i) there exists a set Ω ⊆ E of size (q^r − 1)/(q − 1) such that E^J_mn = hEJ(D) ∪ ΩiJ, and
(ii) if Σ ⊆ P satisfies E^J_mn = hEJ(D) ∪ ΣiJ, then |Σ| ≥ (q^r − 1)/(q − 1).
By Theorem 3.5, we may choose some set Γ ⊆ E(Mr ) with hΓi = Mr \ Gr and |Γ| = (q r − 1)/(q − 1).
For each A ∈ Γ, choose any MA ∈ Mm−r,r and NA ∈ Mr,n−r , and put Ω = {[MA , A, NA ] : A ∈ Γ}. Since
J = (P \ D) ∪ E (D), the proof of (i) will be complete if we can show that P \ D ⊆ hE (D) ∪ Ωi . So
Emn
J
J
J
let X = [K, B, L] ∈ P \ D, and write B = A1 · · · Ak where A1 , . . . , Ak ∈ Γ. Then
X = [K, Ir , L] ? [MA1 , A1 , NA1 ] ? · · · ? [MAk , Ak , NAk ] ? [K, Ir , L] ∈ hEJ (D) ∪ ΩiJ ,
J = hE (D) ∪ Σi , where Σ ⊆ E J \ E (D) = P \ D. We will show that Σ
as required. Next, suppose Emn
J
J
J
mn
generates Mr \ Gr . Indeed, let A ∈ Mr \ Gr be arbitrary, and choose any X ∈ P such that X = A. Since
J . Consider an expression X = Y ? · · · ? Y , where
rank(X) = rank(A) < r, it follows that X ∈ P \ D ⊆ Emn
1
k
Y1 , . . . , Yk ∈ EJ (D) ∪ Σ. Now, A = X = Y 1 · · · Y k . If any of the Yi belongs to EJ (D), then Y i = Ir , so
the factor Y i is not needed in the product A = Y 1 · · · Y k . After cancelling all such factors, we see that A
is a product of elements from Σ. Since A ∈ Mr \ Gr was arbitrary, we conclude that Mr \ Gr = hΣi. In
particular, |Σ| ≥ |Σ| ≥ rank(Mr \ Gr ) = (q r − 1)/(q − 1), giving (ii).
2
Remark 7.6. As in Remarks 5.3 and 6.11, we deduce from the results of this section that for |F| = q < ∞,
P
• Ck (l) (and Rk (l)) has ls=0 q s(k−s) [ sl ]q idempotents,
• the semigroup generated by E(Ck (l)) (and the semigroup generated by E(Rk (l))) has rank and idempotent rank equal to q l(k−l) + (q l − 1)/(q − 1).
8 Ideals
In this final section, we consider the ideals of P = Reg(MJmn ). In particular, we show that each of the
proper ideals is idempotent generated, and we calculate the rank and idempotent rank, showing that these
are equal. Although the next result is trivial if r = 0 and well-known if r = m = n (see Theorem 3.6), the
statement is valid for those parameters.
Theorem 8.1. The ideals of P = Reg(M^J_mn) are precisely the sets
I^J_s = D^J_0 ∪ · · · ∪ D^J_s = {X ∈ P : rank(X) ≤ s} for 0 ≤ s ≤ r,
and they form a chain: I^J_0 ⊆ · · · ⊆ I^J_r. If 0 ≤ s < r, then I^J_s = hEJ(D^J_s)iJ is generated by the idempotents in its top D^J-class, and if |F| = q < ∞, then
rank(I^J_s) = idrank(I^J_s) = q^{s(L−r)} [r s]_q, where L = max(m, n).
Proof. For convenience, we will assume that m ≤ n throughout the proof, so that L = n. (The other case
will follow by duality.)
More generally, it may easily be checked that if the J -classes of a semigroup S form a chain, J0 < · · · < Jk ,
then the ideals of S are precisely the sets Ih = J0 ∪ · · · ∪ Jh for 0 ≤ h ≤ k (and these obviously form a chain).
Now suppose 0 ≤ s < r, let Γ ⊆ E(Ds (Mr )) be any idempotent generating set of Is (Mr ) (see Theorem 3.6),
and put ΩΓ = {[M, A, N ] : M ∈ Mm−r,r , A ∈ Γ, N ∈ Mr,n−r }. If X = [M, A, N ] ∈ IsJ is arbitrary, then
A = B1 · · · Bk for some B1 , . . . , Bk ∈ Γ, and it follows that X = [M, B1 , N ] ? · · · ? [M, Bk , N ] ∈ hΩΓ iJ . Since
ΩΓ ⊆ EJ (DsJ ), it follows that IsJ = hEJ (DsJ )iJ .
We now prove the statement about rank and idempotent rank. Suppose Ω is an arbitrary generating set
for IsJ where 0 ≤ s < r. Let X ∈ DsJ and consider an expression X = Y1 ? · · · ? Yk with Y1 , . . . , Yk ∈ Ω. Since
J
X = X ? Z ? X for some Z ∈ DsJ , we may assume that k ≥ 2. Since Is−1
is an ideal of IsJ (we interpret
J
J
J
Is−1 = ∅ if s = 0), each of Y1 , . . . , Yk must belong to Ds = DX . In particular, Yk D J X = (Y1 ? · · · ? Yk−1 ) ? Yk .
By stability, it then follows that Yk L J (Y1 ? · · · ? Yk−1 ) ? Yk = X. Since X ∈ DsJ was arbitrary, it follows
that Ω contains at least one element from each L J -class contained in DsJ , and there are q s(n−r) [ rs ]q such
L J -classes, by Theorem 6.4(vi). It follows that rank(IsJ ) ≥ q s(n−r) [ rs ]q = q s(L−r) [ rs ]q .
Since idrank(S) ≥ rank(S) for any idempotent generated semigroup S, the proof will be complete if we
can find an idempotent generating set of IsJ of the specified size. First, let Γ ⊆ E(Ds (Mr )) be such
that hΓi = Is (Mr ) and |Γ| = [ rs ]q . Fix some A ∈ Γ, and let ∼A and ≈A be the equivalence relations on
Mm−r,r and Mr,n−r defined before Theorem 6.4, and let MA and NA be cross-sections of the equivalence
classes of ∼A and ≈A . Let MA = {M1 , . . . , Mqs(m−r) } and NA = {N1 , . . . , Nqs(n−r) }. (We know MA and
NA have the specified sizes by Theorem 6.4.) Put Q = q s(n−r) = q s(L−r) . (Recall that we are assuming
m ≤ n.) Extend MA arbitrarily to MA0 = {M1 , . . . , MQ }. Now put ΩA = {[Mi , A, Ni ] : 1 ≤ i ≤ Q}. If
M ∈ Mm−r,r and N ∈ Mr,n−r are arbitrary, then M ∼ Mi and N ∼ Nj S
for some i, j, and we have
[M, A, N ] = [Mi , A, Nj ] = [Mi , A, Ni ] ? [Mj , A, Nj ] ∈ hΩA iJ . Now put Ω = A∈Γ ΩA . By the previous
discussion, we see that hΩiJ contains ΩΓ , which is a generating set for IsJ (by the first paragraph of this
proof), so IsJ = hΩiJ . Since |Ω| = Q|Γ| = q s(L−r) [ rs ]q , the proof is complete.
2
Remark 8.2. Again, we may deduce a corresponding statement for the ideals of the matrix semigroups
Reg(Ck (l)) and Reg(Rk (l)); the reader may supply the details if they wish.
Acknowledgements
The first named author gratefully acknowledges the support of Grant No. 174019 of the Ministry of Education, Science, and Technological Development of the Republic of Serbia, and Grant No. 1136/2014 of
the Secretariat of Science and Technological Development of the Autonomous Province of Vojvodina. The
authors wish to thank Dr Attila Egri-Nagy for constructing the GAP [67] code that enabled us to produce
the eggbox diagrams from Figures 4, 5, 7 and 8.
References
[1] Jorge Almeida, Stuart Margolis, Benjamin Steinberg, and Mikhail Volkov. Representation theory of finite semigroups,
semigroup radicals and formal language theory. Trans. Amer. Math. Soc., 361(3):1429–1461, 2009.
[2] J. Araújo and J. D. Mitchell. An elementary proof that every singular matrix is a product of idempotent matrices. Amer.
Math. Monthly, 112(7):641–645, 2005.
[3] Richard Brauer. On algebras which are connected with the semisimple continuous groups. Ann. of Math. (2), 38(4):857–872,
1937.
[4] Thomas Breuer, Robert M. Guralnick, and William M. Kantor. Probabilistic generation of finite simple groups. II. J.
Algebra, 320(2):443–494, 2008.
[5] W. P. Brown. Generalized matrix algebras. Canad. J. Math., 7:188–190, 1955.
[6] William P. Brown. The semisimplicity of ωfn . Ann. of Math. (2), 63:324–335, 1956.
[7] Phatsarapa Chanmuang and Ronnason Chinram. Some remarks on regularity of generalized transformation semigroups.
Int. J. Algebra, 2(9-12):581–584, 2008.
[8] Karen Chase. Sandwich semigroups of binary relations. Discrete Math., 28(3):231–236, 1979.
[9] R. Chinram. Green’s relations and regularity of generalized semigroups of linear transformations. Lobachevskii J. Math.,
30(4):253–256, 2009.
[10] Ronnason Chinram. Regularity and Green’s relations of generalized one-to-one partial transformation semigroups. Far
East J. Math. Sci., 30(3):513–521, 2008.
[11] Ronnason Chinram. Regularity and Green’s relations of generalized partial transformation semigroups. Asian-Eur. J.
Math., 1(3):295–302, 2008.
[12] A. H. Clifford. Matrix representations of completely simple semigroups. Amer. J. Math., 64:327–342, 1942.
[13] A. H. Clifford. Basic representations of completely simple semigroups. Amer. J. Math., 82:430–434, 1960.
[14] A. H. Clifford and G. B. Preston. The algebraic theory of semigroups. Vol. I. Mathematical Surveys, No. 7. American
Mathematical Society, Providence, R.I., 1961.
[15] R. J. H. Dawlings. Products of idempotents in the semigroup of singular endomorphisms of a finite-dimensional vector
space. Proc. Roy. Soc. Edinburgh Sect. A, 91(1-2):123–133, 1981/82.
[16] R. J. H. Dawlings. Sets of idempotents that generate the semigroup of singular endomorphisms of a finite-dimensional
vector space. Proc. Edinburgh Math. Soc. (2), 25(2):133–139, 1982.
[17] Jean A. Dieudonné. La géométrie des groupes classiques (in French). Springer-Verlag, Berlin-New York, 1971. Troisième
édition, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 5.
[18] D. Ž. Djoković. Note on a theorem on singular matrices. Canad. Math. Bull., 11:283–284, 1968.
[19] Igor Dolinka and James East. Variants of finite full transformation semigroups. Internat. J. Algebra Comput., 25(8):1187–
1222, 2015.
[20] Igor Dolinka and Robert D. Gray. Maximal subgroups of free idempotent generated semigroups over the full linear monoid.
Trans. Amer. Math. Soc., 366(1):419–455, 2014.
[21] Jie Du and Zongzhu Lin. Stratifying algebras with near-matrix algebras. J. Pure Appl. Algebra, 188(1-3):59–72, 2004.
[22] D. Easdown and T. G. Lavers. The inverse braid monoid. Adv. Math., 186(2):438–455, 2004.
[23] J. East, J. D. Mitchell, and Y. Péresse. Maximal subsemigroups of the semigroup of all mappings on an infinite set. Trans.
Amer. Math. Soc., 367(3):1911–1944, 2015.
[24] Charles Ehresmann. Catégories et structures (in French). Dunod, Paris, 1965.
[25] J. A. Erdos. On products of idempotent matrices. Glasgow Math. J., 8:118–122, 1967.
[26] D. G. FitzGerald and Jonathan Leech. Dual symmetric inverse monoids and representation theory. J. Austral. Math. Soc.
Ser. A, 64(3):345–367, 1998.
[27] John Fountain and Andrew Lewin. Products of idempotent endomorphisms of an independence algebra of finite rank.
Proc. Edinburgh Math. Soc. (2), 35(3):493–500, 1992.
[28] Olexandr Ganyushkin and Volodymyr Mazorchuk. Classical finite transformation semigroups, an introduction, volume 9
of Algebra and Applications. Springer-Verlag London, Ltd., London, 2009.
[29] Olexandr Ganyushkin, Volodymyr Mazorchuk, and Benjamin Steinberg. On the irreducible representations of a finite
semigroup. Proc. Amer. Math. Soc., 137(11):3585–3592, 2009.
[30] Fabio Gavarini. On the radical of Brauer algebras. Math. Z., 260(3):673–697, 2008.
[31] Nick Gill. On a conjecture of Degos. Preprint, 2015, arXiv:1502.03341.
[32] Gracinda Gomes and John M. Howie. On the ranks of certain finite semigroups of transformations. Math. Proc. Cambridge
Philos. Soc., 101(3):395–403, 1987.
[33] R. Gray. Hall’s condition and idempotent rank of ideals of endomorphism monoids. Proc. Edinb. Math. Soc. (2), 51(1):57–
72, 2008.
[34] R. Gray and N. Ruškuc. Maximal subgroups of free idempotent-generated semigroups over the full transformation monoid.
Proc. Lond. Math. Soc. (3), 104(5):997–1018, 2012.
[35] Nicolas Guay and Stewart Wilcox. Almost cellular algebras. J. Pure Appl. Algebra, 219(9):4105–4116, 2015.
[36] Robert M. Guralnick and William M. Kantor. Probabilistic generation of finite simple groups. J. Algebra, 234(2):743–792,
2000. Special issue in honor of Helmut Wielandt.
[37] T. E. Hall. The radical of the algebra of any finite semigroup over any field. J. Austral. Math. Soc., 11:350–352, 1970.
[38] J. B. Hickey. Semigroups under a sandwich operation. Proc. Edinburgh Math. Soc. (2), 26(3):371–382, 1983.
[39] J. B. Hickey. On variants of a semigroup. Bull. Austral. Math. Soc., 34(3):447–459, 1986.
[40] Peter M. Higgins. Techniques of semigroup theory. Oxford Science Publications. The Clarendon Press, Oxford University
Press, New York, 1992.
[41] Christopher Hollings. The Ehresmann-Schein-Nambooripad theorem and its successors. Eur. J. Pure Appl. Math., 5(4):414–
450, 2012.
[42] J. M. Howie. The subsemigroup generated by the idempotents of a full transformation semigroup. J. London Math. Soc.,
41:707–716, 1966.
[43] J. M. Howie. Idempotent generators in finite full transformation semigroups. Proc. Roy. Soc. Edinburgh Sect. A, 81(3-4):317–323, 1978.
[44] John M. Howie. Fundamentals of semigroup theory, volume 12 of London Mathematical Society Monographs. New Series.
The Clarendon Press, Oxford University Press, New York, 1995. Oxford Science Publications.
[45] John M. Howie and Robert B. McFadden. Idempotent rank in finite full transformation semigroups. Proc. Roy. Soc.
Edinburgh Sect. A, 114(3-4):161–167, 1990.
[46] John M. Howie, N. Ruškuc, and P. M. Higgins. On relative ranks of full transformation semigroups. Comm. Algebra,
26(3):733–748, 1998.
[47] Zur Izhakian, John Rhodes, and Benjamin Steinberg. Representation theory of finite semigroups over semirings. J. Algebra,
336:139–157, 2011.
[48] R. Jongchotinon, S. Chaopraknoi, and Y. Kemprasit. Isomorphism theorems for variants of semigroups of linear transformations. Int. J. Algebra, 4(25-28):1407–1412, 2010.
[49] Yupaporn Kemprasit. Regularity and unit-regularity of generalized semigroups of linear transformations. Southeast Asian
Bull. Math., 25(4):617–622, 2002.
[50] T. A. Khan and M. V. Lawson. Variants of regular semigroups. Semigroup Forum, 62(3):358–374, 2001.
[51] Steffen König and Changchang Xi. On the structure of cellular algebras. In Algebras and modules, II (Geiranger, 1996),
volume 24 of CMS Conf. Proc., pages 365–386. Amer. Math. Soc., Providence, RI, 1998.
[52] Steffen König and Changchang Xi. A characteristic free approach to Brauer algebras. Trans. Amer. Math. Soc., 353(4):1489–
1505, 2001.
[53] Thomas J. Laffey. Products of idempotent matrices. Linear and Multilinear Algebra, 14(4):309–314, 1983.
[54] Gérard Lallement and Mario Petrich. Irreducible matrix representations of finite semigroups. Trans. Amer. Math. Soc.,
139:393–412, 1969.
[55] Yanbo Li and Feng Wei. Semi-centralizing maps of generalized matrix algebras. Linear Algebra Appl., 436(5):1122–1153,
2012.
[56] Markus Linckelmann and Michal Stolorz. On simple modules over twisted finite category algebras. Proc. Amer. Math.
Soc., 140(11):3725–3737, 2012.
[57] E. S. Lyapin. Semigroups (in Russian). Gosudarstv. Izdat. Fiz.-Mat. Lit., Moscow, 1960.
[58] K. D. Magill, Jr. and S. Subbiah. Green’s relations for regular elements of sandwich semigroups. I. General results. Proc.
London Math. Soc. (3), 31(2):194–210, 1975.
[59] K. D. Magill, Jr. and S. Subbiah. Green’s relations for regular elements of sandwich semigroups. II. Semigroups of continuous
functions. J. Austral. Math. Soc. Ser. A, 25(1):45–65, 1978.
[60] Kenneth D. Magill, Jr. Semigroup structures for families of functions. I. Some homomorphism theorems. J. Austral. Math.
Soc., 7:81–94, 1967.
[61] Paul Martin. Temperley-Lieb algebras for nonplanar statistical mechanics—the partition algebra construction. J. Knot
Theory Ramifications, 3(1):51–82, 1994.
[62] D. B. McAlister. The category of representations of a completely 0-simple semigroup. J. Austral. Math. Soc., 12:193–210,
1971.
[63] D. B. McAlister. Rings related to completely 0-simple semigroups. J. Austral. Math. Soc., 12:257–274, 1971.
[64] Donald B. McAlister. Representations of semigroups by linear transformations. I, II. Semigroup Forum 2 (1971), no. 3,
189–263; ibid., 2(4):283–320, 1971.
[65] Suzana Mendes-Gonçalves and R. P. Sullivan. Regular elements and Green’s relations in generalized transformation
semigroups. Asian-Eur. J. Math., 6(1):1350006, 11, 2013.
[66] Suzana Mendes-Gonçalves and R. P. Sullivan. Regular elements and Green’s relations in generalised linear transformation
semigroups. Southeast Asian Bull. Math., 38(1):73–82, 2014.
[67] J. D. Mitchell. The Semigroups package for GAP, Version 2.1. http://tinyurl.com/semigroups, 2014.
[68] W. D. Munn. On semigroup algebras. Proc. Cambridge Philos. Soc., 51:1–15, 1955.
[69] W. D. Munn. Matrix representations of semigroups. Proc. Cambridge Philos. Soc., 53:5–12, 1957.
[70] W. D. Munn. Irreducible matrix representations of semigroups. Quart. J. Math. Oxford Ser. (2), 11:295–309, 1960.
[71] Attila Nagy. Special classes of semigroups, volume 1 of Advances in Mathematics (Dordrecht). Kluwer Academic Publishers,
Dordrecht, 2001.
[72] S. Nenthein and Y. Kemprasit. Regular elements of some semigroups of linear transformations and matrices. Int. Math.
Forum, 2(1-4):155–166, 2007.
[73] Jan Okniński. Semigroup algebras, volume 138 of Monographs and Textbooks in Pure and Applied Mathematics. Marcel
Dekker, Inc., New York, 1991.
[74] Jan Okniński. Semigroups of matrices, volume 6 of Series in Algebra. World Scientific Publishing Co., Inc., River Edge,
NJ, 1998.
[75] Jan Okniński and Mohan S. Putcha. Complex representations of matrix semigroups. Trans. Amer. Math. Soc., 323(2):563–
581, 1991.
[76] I. S. Ponizovskiĭ. On matrix representations of associative systems. Mat. Sb. N.S., 38(80):241–260, 1956.
[77] Mohan S. Putcha. Linear algebraic monoids, volume 133 of London Mathematical Society Lecture Note Series. Cambridge
University Press, Cambridge, 1988.
[78] Mohan S. Putcha. Complex representations of finite monoids. Proc. London Math. Soc. (3), 73(3):623–641, 1996.
[79] Mohan S. Putcha. Complex representations of finite monoids. II. Highest weight categories and quivers. J. Algebra,
205(1):53–76, 1998.
[80] Mohan S. Putcha. Products of idempotents in algebraic monoids. J. Aust. Math. Soc., 80(2):193–203, 2006.
[81] D. Rees. On semi-groups. Proc. Cambridge Philos. Soc., 36:387–400, 1940.
[82] Lex E. Renner. Linear algebraic monoids, volume 134 of Encyclopaedia of Mathematical Sciences. Springer-Verlag, Berlin,
2005. Invariant Theory and Algebraic Transformation Groups, V.
[83] John Rhodes and Benjamin Steinberg. The q-theory of finite semigroups. Springer Monographs in Mathematics. Springer,
New York, 2009.
[84] N. Ruškuc. On the rank of completely 0-simple semigroups. Math. Proc. Cambridge Philos. Soc., 116(2):325–338, 1994.
[85] Benjamin Steinberg. Möbius functions and semigroup representation theory. J. Combin. Theory Ser. A, 113(5):866–881,
2006.
[86] Benjamin Steinberg. Möbius functions and semigroup representation theory. II. Character formulas and multiplicities. Adv.
Math., 217(4):1521–1557, 2008.
[87] Benjamin Steinberg. The representation theory of finite monoids. Universitext. Springer, In press.
[88] R. P. Sullivan. Generalised partial transformation semigroups. J. Austral. Math. Soc., 19(part 4):470–473, 1975.
[89] R. P. Sullivan. Semigroups of linear transformations with restricted range. Bull. Aust. Math. Soc., 77(3):441–453, 2008.
[90] R. P. Sullivan. Generalised transformation semigroups. Preprint, 2013.
[91] Melvin C. Thornton. Regular elements in sandwich semigroups of binary relations. Discrete Math., 41(3):303–307, 1982.
[92] R. M. Thrall. A class of algebras without unity element. Canad. J. Math., 7:382–390, 1955.
[93] Amorn Wasanawichit and Yupaporn Kemprasit. Dense subsemigroups of generalised transformation semigroups. J. Aust.
Math. Soc., 73(3):433–445, 2002.
[94] William C. Waterhouse. Two generators for the general linear groups over finite fields. Linear and Multilinear Algebra,
24(4):227–230, 1989.
[95] Hermann Weyl. The Classical Groups. Their Invariants and Representations. Princeton University Press, Princeton, N.J.,
1939.
[96] Zhankui Xiao and Feng Wei. Commuting mappings of generalized matrix algebras. Linear Algebra Appl., 433(11-12):2178–
2197, 2010.
[97] Zhankui Xiao and Feng Wei. Commuting traces and Lie isomorphisms on generalized matrix algebras. Oper. Matrices,
8(3):821–847, 2014.
[98] Miyuki Yamada. A note on middle unitary semigroups. Kōdai Math. Sem. Rep., 7:49–52, 1955.
| 4 |
Bidirectional Recurrent Neural Networks as
Generative Models
arXiv:1504.01575v3 [cs.LG] 2 Nov 2015
Mathias Berglund
Aalto University, Finland
Leo Kärkkäinen
Nokia Labs, Finland
Tapani Raiko
Aalto University, Finland
Akos Vetek
Nokia Labs, Finland
Mikko Honkala
Nokia Labs, Finland
Juha Karhunen
Aalto University, Finland
Abstract
Bidirectional recurrent neural networks (RNN) are trained to predict both in the
positive and negative time directions simultaneously. They have not been used
commonly in unsupervised tasks, because a probabilistic interpretation of the
model has been difficult. Recently, two different frameworks, GSN and NADE,
provide a connection between reconstruction and probabilistic modeling, which
makes the interpretation possible. As far as we know, neither GSN nor NADE
has been studied in the context of time series before. As an example of an unsupervised task, we study the problem of filling in gaps in high-dimensional time
series with complex dynamics. Although unidirectional RNNs have recently been
trained successfully to model such time series, inference in the negative time direction is non-trivial. We propose two probabilistic interpretations of bidirectional
RNNs that can be used to reconstruct missing gaps efficiently. Our experiments on
text data show that both proposed methods are much more accurate than unidirectional reconstructions, although a bit less accurate than a computationally complex
bidirectional Bayesian inference on the unidirectional RNN. We also provide results on music data for which the Bayesian inference is computationally infeasible,
demonstrating the scalability of the proposed methods.
1 Introduction
Recurrent neural networks (RNN) have recently been trained successfully for time series modeling, and have been used to achieve state-of-the-art results in supervised tasks including handwriting
recognition [12] and speech recognition [13]. RNNs have also been used successfully in unsupervised learning of time series [26, 8].
Recently, RNNs have also been used to generate sequential data [1] in a machine translation context,
which further emphasizes the unsupervised setting. Bahdanau et al. [1] used a bidirectional RNN
to encode a phrase into a vector, but settled for a unidirectional RNN to decode it into a translated
phrase, perhaps because bidirectional RNNs have not been studied much as generative models. Even
more recently, Maas et al. [18] used a deep bidirectional RNN in speech recognition, generating text
as output.
Missing value reconstruction is interesting in at least three different senses. Firstly, it can be used to
cope with data that really has missing values. Secondly, reconstruction performance of artificially
missing values can be used as a measure of performance in unsupervised learning [21]. Thirdly,
reconstruction of artificially missing values can be used as a training criterion [9, 11, 27]. While
traditional RNN training criteria correspond to one-step prediction, training to reconstruct longer
gaps can push the model towards concentrating on longer-term predictions. Note that the one-step
Figure 1: Structure of the simple RNN (left) and the bidirectional RNN (right).
prediction criterion is typically used even in approaches that otherwise concentrate on modelling
long-term dependencies [see e.g. 19, 17].
When using unidirectional RNNs as generative models, it is straightforward to draw samples from
the model in sequential order. However, inference is not trivial in smoothing tasks, where we want to
evaluate probabilities for missing values in the middle of a time series. For discrete data, inference
with gap sizes of one is feasible - however, inference with larger gap sizes becomes exponentially
more expensive. Even sampling can be exponentially expensive with respect to the gap size.
One strategy used for training models that are used for filling in gaps is to explicitly train the model
with missing data [see e.g. 9]. However, to our knowledge, such a criterion has not yet been used and
thoroughly evaluated against other inference strategies for RNNs.
In this paper, we compare different methods of using RNNs to infer missing values for binary
time series data. We evaluate the performance of two generative models that rely on bidirectional
RNNs, and compare them to inference using a unidirectional RNN. The proposed methods are very
favourable in terms of scalability.
2 Recurrent Neural Networks
Recurrent neural networks [24, 14] can be seen as extensions of the standard feedforward multilayer
perceptron networks, where the inputs and outputs are sequences instead of individual observations.
Let us denote the input to a recurrent neural network by X = {xt } where xt ∈ RN is an input
vector for each time step t. Let us further denote the output as Y = {yt } where yt ∈ RM is an
output vector for each time step t. Our goal is to model the distribution P (Y|X). Although RNNs
map input sequences to output sequences, we can use them in an unsupervised manner by letting the
RNN predict the next input. We can do so by setting Y = {yt = xt+1 }.
2.1 Unidirectional Recurrent Neural Networks
The structure of a basic RNN with one hidden layer is illustrated in Figure 1, where the output yt is
determined by
P(y_t | {x_d}_{d=1}^{t}) = φ(W_y h_t + b_y)    (1)
where
h_t = tanh(W_h h_{t−1} + W_x x_t + b_h)    (2)
and W_y, W_h, and W_x are the weight matrices connecting the hidden to output layer, hidden to
hidden layer, and input to hidden layer, respectively. by and bh are the output and hidden layer
bias vectors, respectively. Typical options for the final nonlinearity φ are the softmax function
for classification or categorical prediction tasks, or independent Bernoulli variables with sigmoid
functions for other binary prediction tasks. In this form, the RNN therefore evaluates the output yt
based on information propagated through the hidden layer that directly or indirectly depends on the
observations {x_d}_{d=1}^{t} = {x_1, . . . , x_t}.
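As a concrete illustration of Equations (1)–(2), the following NumPy sketch runs the forward pass of a single-hidden-layer RNN with a softmax output, as used for the categorical task. It is our own minimal sketch (the paper's experiments use Theano); all shapes and names are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def rnn_forward(X, Wx, Wh, Wy, bh, by):
    """Forward pass of Eqs. (1)-(2): X has shape (T, N), one input vector per time step."""
    T = X.shape[0]
    h = np.zeros(Wh.shape[0])                 # recurrent activation for the first step is zero
    Y = []
    for t in range(T):
        h = np.tanh(Wh @ h + Wx @ X[t] + bh)  # Eq. (2)
        Y.append(softmax(Wy @ h + by))        # Eq. (1): P(y_t | x_1..x_t)
    return np.array(Y)

# tiny usage example with random weights (dimensions are illustrative only)
rng = np.random.default_rng(0)
N = M = 5; c = 8; T = 10
Y = rnn_forward(rng.integers(0, 2, size=(T, N)).astype(float),
                Wx=rng.normal(size=(c, N)), Wh=rng.normal(size=(c, c)),
                Wy=rng.normal(size=(M, c)), bh=np.zeros(c), by=np.zeros(M))
print(Y.shape)                                # (T, M); each row sums to one
```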
2.2 Bidirectional Recurrent Neural Networks
Bidirectional RNNs (BRNN) [25, 2] extend the unidirectional RNN by introducing a second hidden layer, where the hidden to hidden connections flow in opposite temporal order. The model is
therefore able to exploit information both from the past and the future.
The output yt is traditionally determined by
P(y_t | {x_d}_{d≠t}) = φ(W_y^f h_t^f + W_y^b h_t^b + b_y),    (3)
but we propose the use of
P(y_t | {x_d}_{d≠t}) = φ(W_y^f h_{t−1}^f + W_y^b h_{t+1}^b + b_y)    (4)
where
h_t^f = tanh(W_h^f h_{t−1}^f + W_x^f x_t + b_h^f)
h_t^b = tanh(W_h^b h_{t+1}^b + W_x^b x_t + b_h^b).    (5)
The structure of the BRNN is illustrated in Figure 1 (right). Compared with the regular RNN,
the forward and backward directions have separate non-tied weights and hidden activations, and are
denoted by the superscript f and b for forward and backward, respectively. Note that the connections
are acyclic. Note also that in the proposed formulation, yt does not get information from xt . We
can therefore use the model in an unsupervised manner to predict one time step given all other time
steps in the input sequence simply by setting Y = X.
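A minimal NumPy sketch of the proposed output rule, Equations (4)–(5), in the unsupervised setting Y = X; here we use sigmoid outputs (the binary structured-prediction case), and the names and shapes are our own, not the authors' code. Note that the output at step t only sees h^f_{t−1} and h^b_{t+1}, so x_t itself never reaches y_t.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def brnn_forward(X, Wxf, Whf, bhf, Wxb, Whb, bhb, Wyf, Wyb, by):
    """Bidirectional forward pass of Eqs. (4)-(5); X is a (T, d) float array."""
    T = X.shape[0]
    c = Whf.shape[0]
    hf = np.zeros((T + 1, c))                  # hf[t] stores h^f_{t-1}; hf[0] is the zero initial state
    hb = np.zeros((T + 1, c))                  # hb[t] stores h^b_t;     hb[T] is the zero initial state
    for t in range(T):                         # forward hidden chain, Eq. (5)
        hf[t + 1] = np.tanh(Whf @ hf[t] + Wxf @ X[t] + bhf)
    for t in reversed(range(T)):               # backward hidden chain, Eq. (5)
        hb[t] = np.tanh(Whb @ hb[t + 1] + Wxb @ X[t] + bhb)
    Y = np.empty_like(X)
    for t in range(T):
        # Eq. (4): uses h^f_{t-1} and h^b_{t+1}, never x_t itself
        Y[t] = sigmoid(Wyf @ hf[t] + Wyb @ hb[t + 1] + by)
    return Y
```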
3 Probabilistic Interpretation for Unsupervised Modelling
Probabilistic unsupervised modeling for sequences using a unidirectional RNN is straightforward,
as the joint distribution for the whole sequence is simply the product of the individual predictions:
P_unidirectional(X) = ∏_{t=1}^{T} P(x_t | {x_d}_{d=1}^{t−1}).    (6)
For the BRNN, the situation is more complicated. The network gives predictions for individual
outputs given all the others, and the joint distribution cannot be written as their product. We propose
two solutions for this, denoted by GSN and NADE.
GSN Generative Stochastic Networks (GSN) [6] use a denoising auto-encoder to estimate the data
distribution as the asymptotic distribution of the Markov chain that alternates between corruption
and denoising. The resulting distribution is thus defined only implicitly, and cannot be written
analytically. We can define a corruption function that masks xt as missing, and a denoising function
that reconstructs it from the others. It turns out that one feedforward pass of the BRNN does exactly
that.
Our first probabilistic interpretation is thus that the joint distribution defined by a BRNN is the
asymptotic distribution of a process that replaces one observation vector x_t at a time in X by sampling it from P_BRNN(x_t | {x_d}_{d≠t}). In practice, we will start from a random initialization and use
Gibbs sampling.
NADE The Neural Autoregressive Distribution Estimator (NADE) [27] defines a probabilistic model
by reconstructing missing components of a vector one at a time in a random order, starting from a
fully unobserved vector. Each reconstruction is given by an auto-encoder network that takes as input
the observations so far and an auxiliary mask vector that indicates which values are missing.
We extend the same idea for time series. Firstly, we concatenate an auxiliary binary element to
input vectors to indicate a missing input. The joint distribution of the time series is defined by first
drawing a random permutation od of time indices 1 . . . T and then setting data points observed one
by one in that order, starting from a fully missing sequence:
P_NADE(X | o_d) = ∏_{d=1}^{T} P(x_{o_d} | {x_{o_e}}_{e=1}^{d−1}).    (7)
In practice, the BRNN will be trained with some inputs marked as missing, while all the outputs are
observed. See Section 5.1 for more training details.
4 Filling in gaps with Recurrent Neural Networks
The task we aim to solve is to fill in gaps of multiple consecutive data points in high-dimensional
binary time series data. The inference is not trivial for two reasons: firstly, we reconstruct multiple
consecutive data points, which are likely to depend on each other, and secondly, we fill in data in the
middle of a time series and hence need to consider the data both before and after the gap.
For filling in gaps with the GSN approach, we first train a bidirectional RNN to estimate P_BRNN(x_t | {x_d}_{d≠t}). In order to achieve that, we use the structure presented in Section 2.2. At test time,
the gap is first initialized to random values, after which the missing values are sampled from the
distribution P_BRNN(x_t | {x_d}_{d≠t}) one by one in a random order repeatedly to approximate the
stationary distribution. For the RNN structures used in this paper, the computational complexity of
this approach at test time is O((dc + c²)(T + gM)) where d is the dimensionality of a data point, c
is the number of hidden units in the RNN, T is the number of time steps in the data, g is the length
of the gap and M is the number of Markov chain Monte Carlo (MCMC) steps used for inference.
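A sketch of this GSN-style inference for categorical data, assuming a trained model wrapped in a hypothetical helper p_brnn(X, t) that returns the vector P_BRNN(x_t | {x_d}_{d≠t}) over the d categories; the helper and the in-place convention are our own assumptions, not the authors' code.

```python
import numpy as np

def gsn_fill_gap(X, gap, p_brnn, n_mcmc=100, rng=None):
    """Fill the time steps listed in `gap` by Gibbs sampling from the BRNN
    conditionals, as in the GSN approach. X is a (T, d) one-hot array for
    categorical data and is modified in place."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    for t in gap:                                   # random initialization of the gap
        X[t] = np.eye(d)[rng.integers(d)]
    for _ in range(n_mcmc):                         # resample one gap position per MCMC step
        t = rng.choice(gap)
        probs = p_brnn(X, t)                        # P_BRNN(x_t | {x_d}, d != t)
        X[t] = np.eye(d)[rng.choice(d, p=probs)]
    return X
```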
For filling in gaps with the NADE approach, we first train a bidirectional RNN where some of the
inputs are set to a separate missing value token. At test time, all data points in the gap are first
initialized with this token, after which each missing data point is reconstructed once until the whole
gap is filled. Computationally, the main difference to GSN is that we do not have to sample each
reconstructed data point multiple times, but the reconstruction is done in as many steps as there
are missing data points in the gap. For the RNN structures used in this paper, the computational
complexity of this approach at test time is O((dc + c²)(T + g)) where d is the dimensionality of a
data point, c is the number of hidden units in the RNN, g is the length of the gap and T is the number
of time steps in the data.
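The NADE-style counterpart needs only a single sweep over the gap. The sketch below assumes a hypothetical helper p_brnn_masked(X, missing, t) that returns the model's output distribution at position t when the positions in `missing` carry the missing-value token; this helper, like the one above, is our own illustration.

```python
import numpy as np

def nade_fill_gap(X, gap, p_brnn_masked, rng=None):
    """Reconstruct the gap in one pass: mark all gap positions as missing,
    then sample them one by one in a random order, unmasking as we go."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    missing = set(gap)                              # these positions carry the missing-value token
    for idx in rng.permutation(len(gap)):
        t = gap[idx]
        probs = p_brnn_masked(X, missing, t)        # conditioned on observed and masked inputs
        X[t] = np.eye(d)[rng.choice(d, p=probs)]
        missing.discard(t)                          # this position is now observed
    return X
```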
In addition to the two proposed methods, one can use a unidirectional RNN to solve the same task.
We call this method Bayesian MCMC. Using a unidirectional RNN for the task of filling in gaps is
not trivial, as we need to take into account the probabilities of the values after the gap, which the
model does not explicitly do. We therefore resort to a similar approach as the GSN approach, where
we replace P_BRNN(x_t | {x_d}_{d≠t}) with a unidirectional equivalent for the Gibbs sampling. As the unidirectional RNN models conditional probabilities of the form P_RNN(x_t | {x_d}_{d=1}^{t−1}), we can
use Bayes’ theorem to derive:
P_RNN(x_t = a | {x_d}_{d≠t})    (8)
    ∝ P_RNN(x_t = a | {x_d}_{d=1}^{t−1}) P_RNN({x_e}_{e=t+1}^{T} | x_t = a, {x_d}_{d=1}^{t−1})    (9)
    = ∏_{τ=t}^{T} P_RNN(x_τ | {x_d}_{d=1}^{τ−1}) |_{x_t=a}    (10)
where P_RNN(x_τ | {x_d}_{d=1}^{τ−1}) is directly the output of the unidirectional RNN given an input sequence X, where one time step t, i.e. the one we Gibbs sample, is replaced by a proposal a. The
problem is that we have to go through all possible proposals a separately to evaluate the probability
P(x_t = a | {x_d}_{d≠t}). We therefore have to evaluate the product of the outputs of the unidirectional
RNN for time steps t . . . T for each possible a.
In some cases this is feasible to evaluate. For categorical data, e.g. text, there are as many possible
values for a as there are dimensions1 . However, for other binary data the number of possibilities
grows exponentially, and is clearly not feasible to evaluate. For the RNN structures used in this
paper, the computational complexity of this approach at test time is O((dc + c²)(T + aT M)) where
a is the number of different values a data point can have, d is the dimensionality of a data point,
c is the number of hidden units in the RNN, T is the number of time steps in the data, and M is
the number of MCMC steps used for inference. The critical difference in complexity to the GSN
approach is the coefficient a, which for categorical data takes the value d, for binary vectors 2^d, and for
continuous data is infinite.
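For the categorical case, the Bayes-rule computation of Equations (8)–(10) can be sketched by brute force: substitute each proposal a at position t and accumulate the unidirectional RNN's log-probabilities from t to T. We assume a hypothetical helper rnn_stepwise_logprobs(X) returning, for every τ, log P_RNN(x_τ | x_1..x_{τ−1}) of the symbol actually present in X; the sketch is ours, not the authors' implementation.

```python
import numpy as np

def conditional_from_unidirectional(X, t, rnn_stepwise_logprobs):
    """Return P_RNN(x_t = a | all other time steps) for every category a,
    via Bayes' theorem. The loop over the d proposals is what makes this
    infeasible for general binary vectors (2^d proposals)."""
    T, d = X.shape
    log_scores = np.empty(d)
    for a in range(d):
        Xa = X.copy()
        Xa[t] = np.eye(d)[a]                    # substitute proposal a at position t
        logp = rnn_stepwise_logprobs(Xa)        # log P(x_tau | x_1..x_{tau-1}) for all tau
        log_scores[a] = logp[t:].sum()          # product over tau = t..T, in log space
    p = np.exp(log_scores - log_scores.max())   # normalize stably
    return p / p.sum()
```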
As a simple baseline model, we also evaluate the one-gram log-likelihood of the gaps. The one-gram
model assumes a constant context-independent categorical distribution for the categorical task, or a
¹ For character-based text, the number of dimensions is the number of characters in the model alphabet.
vector of factorial binomial probabilities for the structured prediction task:
Pone−gram (yt ) = f (by ) .
This can be done in O(dg).
We also compare to one-way inference, where the data points in the gap are reconstructed in order
without taking the future context into account, using Equations (1) and (2) directly. The computational complexity is O((dc + c²)T).
5 Experiments
We run two sets of experiments: one for a categorical prediction task, and one for a binary structured
prediction task. In the categorical prediction task we fill in gaps of five characters in Wikipedia text,
while in the structural prediction task we fill in gaps of five time steps in different polyphonic music
data sets.
5.1 Training details for categorical prediction task
For the categorical prediction task, we test the performance of the two proposed methods, GSN and
NADE. In addition, we compare the performance to MCMC using Bayesian inference and one-way
inference with a unidirectional RNN. We therefore have to train three different RNNs, one for each
method.
Each RNN is trained as a predictor network, where the character at each step is predicted based
on all the previous characters (in the case of the RNN) or all the previous and following characters
(in the case of the BRNNs). We use the same data set as Sutskever et al. [26], which consists of
2GB of English text from Wikipedia. For training, we follow a similar strategy as Hermans and
Schrauwen [15]. The characters are encoded as one-hot binary vectors with a dimensionality of
d = 96 characters and the output is modelled with a softmax distribution. We train the unidirectional
RNN with string lengths of T = 250 characters, where the error is propagated only from the last 200
outputs. In the BRNN we use a string length of T = 300 characters, where the error is propagated
from the middle 200 outputs. We therefore avoid propagating the gradient from predictions that lack
long temporal context.
For the BRNN used in the NADE method, we add one dimension to the one-hot input which corresponds to a missing value token. During training, in each minibatch we mark g = 5 consecutive
characters every 25 time steps as a gap. During training, the error is propagated only from these
gaps. For each gap, we uniformly draw a value from 1 to 5, and set that many characters in the gap
to the missing value token. The model is therefore trained to predict the output in different stages of
inference, where a number of the inputs are still marked as missing. For comparison, we also train
a similar network, but without masking. In that variant, the error is therefore propagated from all
time steps. We refer to these two training variants as “NADE masked” and “NADE no mask”, respectively.
For all the models, the weight elements are drawn from the uniform distribution: w_{i,j} ∼ U[−s, s], where s = 1 for the input-to-hidden layer and, following Glorot and Bengio [10], s = √(6/(d_in + d_out)) for the hidden-to-hidden and hidden-to-output layers. The biases are initialized to zero.
We use c = 1000 hidden units in the unidirectional RNN and c = 684 hidden units in the two hidden
layers in the BRNNs. The number of parameters in the two model types is therefore roughly the
same. In the recurrent layers, we set the recurrent activation connected to the first time step to zero.
The networks are trained using stochastic gradient descent and the gradient is calculated using backpropagation through time. We use a minibatch size of 40, i.e. each minibatch consists of 40 randomly sampled sequences of length 250. As the gradients tend to occasionally “blow up” when
training RNNs [5, 20], we normalize the gradients at each update to have length one. The step size
is set to 0.25 for all layers in the beginning of training, and it is linearly decayed to zero during
training. As training the model is very time-consuming2 , we do not optimize the hyperparameters,
or repeat runs to get confidence intervals around the evaluated performances.
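A rough NumPy paraphrase of the initialization and update rule described above (not the authors' Theano code; names are ours): Glorot-scaled uniform initialization, gradients renormalized to unit length at every update, and a step size decayed linearly from 0.25 to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_init(shape, s):
    """Draw weights from U[-s, s]; use s = 1 for input-to-hidden and the
    Glorot scale s = sqrt(6 / (d_in + d_out)) for the other matrices."""
    return rng.uniform(-s, s, size=shape)

def train_sgd(params, grad_fn, n_updates, lr0=0.25):
    """SGD where the concatenated gradient is rescaled to length one and the
    learning rate decays linearly to zero over training."""
    for k in range(n_updates):
        grads = grad_fn(params)                               # same structure as params
        norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
        lr = lr0 * (1.0 - k / n_updates)
        for p, g in zip(params, grads):
            p -= lr * g / (norm + 1e-12)                      # in-place update
    return params
```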
² We used about 8 weeks of GPU time for the reported results.
5.2 Training Details for the Binary Structured Prediction Task
In the other set of experiments, we use four polyphonic music data sets [8]. The data sets consist of at
least 7 hours of polyphonic music each, where each data point is a binary d = 88-dimensional vector
that represents one time step of MIDI-encoded music, indicating which of the 88 keys of a piano are
pressed. We test the performance of the two proposed methods, but omit training the unidirectional
RNNs as the computational complexity of the Bayesian MCMC is prohibitive (a = 2^88).
We train all models for 50 000 updates in minibatches of ≈ 3 000 individual data points3 . As the
data sets are small, we select the initial learning rate on a grid of {0.0001, 0.0003, . . . , 0.3, 1} based
on the lowest validation set cost. We use no “burn-in” as several of the scores are fairly short, and
therefore do not specifically mask out values in the beginning or end of the data set as we did for the
text data.
For the NADE method, we use an additional dimension as a missing value token in the data. For the
missing values, we set the missing value token to one and the other dimensions to zero.
Other training details are similar to the categorical prediction task.
5.3 Evaluation of Models
At test time, we evaluate the models by calculating the mean log-likelihood of the correct value of
gaps of five consecutive missing values in test data.
In the GSN and Bayesian MCMC approaches, we first set the five values in the gap to a random value
for the categorical prediction task, or to zero for the structured prediction task. We then sample all
five values in the gap in random order, and repeat the procedure for M = 100 MCMC steps4 . For
evaluating the log-likelihood of the correct value for the string, we force the last five steps to sample
the correct value, and store the probability of the model sampling those values. We also evaluate
the probability of reconstructing correctly the individual data points by not forcing the last five time
steps to sample the correct value, but by storing the probability of reconstructing the correct value
for each data point separately. We run the MCMC chain 100 times and use the log of the mean of
the likelihoods of predicting the correct value over these 100 runs.
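Since the 100 runs are averaged in likelihood space before taking the logarithm, the computation is best done with a log-sum-exp; a small helper (ours), assuming an array of per-run log-likelihoods:

```python
import numpy as np
from scipy.special import logsumexp

def log_mean_likelihood(loglik_runs):
    """log( mean_i exp(loglik_runs[i]) ), computed stably in log space."""
    loglik_runs = np.asarray(loglik_runs, dtype=float)
    return logsumexp(loglik_runs) - np.log(len(loglik_runs))

print(log_mean_likelihood([-12.3, -11.8, -13.1]))   # log of the averaged likelihood
```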
When evaluating the performance of one-directional inference, we use a similar approach to MCMC.
However, when evaluating the log-likelihood of the entire gap, we only construct it once in sequential order, and record the probabilities of reconstructing the correct value. When evaluating the probability of reconstructing the correct value for each data point separately, we use the same approach
as for MCMC and sample the gap 100 times, recording for each step the probability of sampling the
correct value. The result for each data point is the log of the mean of the likelihoods over these 100
runs.
On the Wikipedia data, we evaluate the GSN and NADE methods on 50 000 gaps on the test data.
On the music data, all models are evaluated on all possible gaps of g = 5 on the test data, excluding
gaps that intersect with the first and last 10 time steps of a score. When evaluating the Bayesian
MCMC with the unidirectional RNN, we have to significantly limit the size of the data set, as the
method is highly computationally complex. We therefore run it on 1 000 gaps on the test data.
For NADE, we set the five time steps in the gap to the missing value token. We then reconstruct
them one by one to the correct value, and record the probability of the correct reconstruction. We
repeat this process for all possible permutations of the order in which to do the reconstruction, and
therefore acquire the exact probability of the correct reconstruction given the model and the data.
We also evaluate the individual character reconstruction probabilities by recording the probability
of sampling the correct value given all other values in the gap are set to missing.
5.4 Results
From Table 1 we can see that the Bayesian MCMC method seems to yield the best results, while
GSN or NADE outperform one-way inference. It is worth noting that in the most difficult data sets,
³ A minibatch can therefore consist of e.g. 100 musical scores, each of length T = 30.
⁴ M = 100 MCMC steps means that each value in the gap of g = 5 will be resampled M/g = 20 times.
Table 1: Negative Log Likelihood (NLL) for gaps of five time steps using different models (lower
is better). In the experiments, GSN and NADE perform well, although they are outperformed by
Bayesian MCMC.
Inference strategy    Wikipedia   Nottingham   Piano   Muse   JSB
GSN                   4.60        19.1         38.8    37.3   43.8
NADE masked           4.86        19.0         40.4    36.5   44.3
NADE                  4.88        18.5         39.4    34.7   44.6
Bayesian MCMC         4.37        NA           NA      NA     NA
One-way inference     5.79        19.2         38.9    37.6   43.9
One-gram              23.3        145          138     147    118
[Figure 2 plots: average data point NLL versus position in the gap (1–5) for the Wikipedia data set (left; curves for GSN, NADE, Bayesian MCMC, and one-way inference) and the Piano data set (right; curves for GSN, NADE, and one-way inference).]
Figure 2: Average NLL per data point using different methods with the Wikipedia data set (left)
and the Piano data set (right) for different positions in a gap of 5 consecutive missing values. The
middle data point is the most difficult to estimate for the most methods, while the one-way inference
cannot take future context into account making prediction of later positions difficult. For the leftmost position in the gap, the one-way inference performs the best since it does not require any
approximations such as MCMC.
Piano and JSB, one-way inference performs very well. Qualitative examples of the reconstructions
obtained with the GSN and NADE on the Wikipedia data are shown in Table 3 (supplementary
material).
In order to get an indication of how the number of MCMC steps in the GSN approach affects
performance, we plotted the difference in NLL of GSN and NADE of the test set as a function
of the number of MCMC steps in Figure 3 (supplementary material). The figure indicates that the
music data sets mix fairly well, as the performance of GSN quickly saturates. However, for the
Wikipedia data, the performance could probably be even further improved by letting the MCMC
chain run for more than M = 100 steps.
In Figure 2 we have evaluated the NLL for the individual characters in the gaps of length five. As
expected, all methods except for one-way inference are better at predicting characters close to both
edges of the gap.
As a sanity check, we make sure our models have been successfully trained by evaluating the mean
test log-likelihood of the BRNNs for gap sizes of one. In Table 2 (supplementary material) we can
see that the BRNNs expectedly outperform previously published results with unidirectional RNNs,
which indicates that the models have been trained successfully.
6 Conclusion and Discussion
Although recurrent neural networks have been used as generative models for time series data, it has
not been trivial how to use them for inference in cases such as missing gaps in the sequential data.
In this paper, we proposed to use bidirectional RNNs as generative models for time series, with
two probabilistic interpretations called GSN and NADE. Both provide efficient inference in both
positive and negative directions in time, and both can be used in tasks where Bayesian inference of
a unidirectional RNN is computationally infeasible.
The model we trained for NADE differed from the basic BRNN in several ways: Firstly, we artificially marked gaps of 5 consecutive points as missing, which should help in specializing the model
for such reconstruction tasks. It would be interesting to study the effect of the missingness pattern
used in training, on the learned representations and predictions. Secondly, in addition to using all
outputs as the training signal, we tested using only the reconstructions of those missing values as the
training signal. This reduces the effective amount of training that the model went through. Thirdly,
the model had one more input (the missingness indicator) that makes the learning task more difficult.
We can see from Table 2 that the model we trained for NADE where we only used the reconstructions as the training signal has a worse performance than the BRNN for reconstructing single values.
This indicates that these differences in training have a significant impact on the quality of the final
trained probabilistic model.
We used the same number of parameters when training an RNN and a BRNN. The RNN can concentrate all the learning effort on forward prediction, and re-use the learned dependencies in backward
inference by the computationally heavy Bayesian inference. It remains an open question which
approach would work best given an optimal size of the hidden layers.
As future work, other model structures could be explored in this context, for instance the Long Short-Term Memory [16]. Specifically for our NADE approach, it might make sense to replace the regular
additive connection from the missingness indicator input to the hidden activations in Eq. (4,5), by
a multiplicative connection that somehow gates the dynamics mappings Whf and Whb . Another
direction to extend is to use a deep architecture with more hidden layers.
The midi music data is an example of a structured prediction task: Components of the output vector
depend strongly on each other. However, our model assumes independent Bernoulli distributions
for them. One way to take those dependencies into account is to use stochastic hidden units hft and
hbt , which has been shown to improve performance on structured prediction tasks [22]. Bayer and
Osendorfer [4] explored that approach, and reconstructed missing values in the middle of motion
capture data. In their reconstruction method, the hidden stochastic variables are selected based on
an auxiliary inference model, after which the missing values are reconstructed conditioned on the
hidden stochastic variable values. Both steps are done with maximum a posteriori point selection
instead of sampling. Further quantitative evaluation of the method would be an interesting point of
comparison.
The proposed methods could be easily extended to continuous-valued data. As an example application, time-series reconstruction with a recurrent model has been shown to be effective in speech
recognition especially under impulsive noise [23].
Acknowledgements
We thank KyungHyun Cho and Yoshua Bengio for useful discussions. The software for the simulations for this paper was based on Theano [3, 7]. Nokia has supported Mathias Berglund and the
Academy of Finland has supported Tapani Raiko.
References
[1] Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural machine translation by jointly learning to align and
translate. In Proceedings of the International Conference on Learning Representations (ICLR 2015).
[2] Baldi, P., Brunak, S., Frasconi, P., Soda, G., and Pollastri, G. (1999). Exploiting the past and the future in
protein secondary structure prediction. Bioinformatics, 15(11), 937–946.
[3] Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., and
Bengio, Y. (2012). Theano: new features and speed improvements. Deep Learning and Unsupervised
Feature Learning NIPS 2012 Workshop.
[4] Bayer, J. and Osendorfer, C. (2014). Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610.
[5] Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is
difficult. IEEE Transactions on Neural Networks, 5(2), 157–166.
[6] Bengio, Y., Yao, L., Alain, G., and Vincent, P. (2013). Generalized denoising auto-encoders as generative
models. In Advances in Neural Information Processing Systems, pages 899–907.
[7] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley,
D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the
Python for Scientific Computing Conference (SciPy 2010). Oral Presentation.
[8] Boulanger-Lewandowski, N., Bengio, Y., and Vincent, P. (2012). Modeling temporal dependencies in
high-dimensional sequences: Application to polyphonic music generation and transcription. In Proceedings
of the 29th International Conference on Machine Learning (ICML 2012), pages 1159–1166.
[9] Brakel, P., Stroobandt, D., and Schrauwen, B. (2013). Training energy-based models for time-series imputation. The Journal of Machine Learning Research, 14(1), 2771–2797.
[10] Glorot, X. and Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics, pages 249–256.
[11] Goodfellow, I., Mirza, M., Courville, A., and Bengio, Y. (2013). Multi-prediction deep boltzmann machines. In Advances in Neural Information Processing Systems, pages 548–556.
[12] Graves, A., Liwicki, M., Fernández, S., Bertolami, R., Bunke, H., and Schmidhuber, J. (2009). A novel
connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 31(5), 855–868.
[13] Graves, A., Mohamed, A.-r., and Hinton, G. (2013). Speech recognition with deep recurrent neural
networks. arXiv preprint arXiv:1303.5778.
[14] Haykin, S. (2009). Neural networks and learning machines, volume 3. Pearson Education.
[15] Hermans, M. and Schrauwen, B. (2013). Training and analysing deep recurrent neural networks. In
C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 190–198. Curran Associates, Inc.
[16] Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735–
1780.
[17] Koutník, J., Greff, K., Gomez, F., and Schmidhuber, J. (2014). A clockwork RNN. In Proceedings of the
31 st International Conference on Machine Learning.
[18] Maas, A. L., Hannun, A. Y., Jurafsky, D., and Ng, A. Y. (2014). First-pass large vocabulary continuous
speech recognition using bi-directional recurrent dnns. arXiv preprint arXiv:1408.2873.
[19] Mikolov, T., Joulin, A., Chopra, S., Mathieu, M., and Ranzato, M. (2014). Learning longer memory in
recurrent neural networks. arXiv preprint arXiv:1412.7753.
[20] Pascanu, R., Mikolov, T., and Bengio, Y. (2013). On the difficulty of training recurrent neural networks.
In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), pages 1310–1318.
[21] Raiko, T. and Valpola, H. (2001). Missing values in nonlinear factor analysis. In Proc. of the 8th Int.
Conf. on Neural Information Processing (ICONIP01),(Shanghai), pages 822–827.
[22] Raiko, T., Berglund, M., Alain, G., and Dinh, L. (2015). Techniques for learning binary stochastic
feedforward neural networks. In International Conference on Learning Representations (ICLR 2015), San
Diego.
[23] Remes, U., Palomäki, K., Raiko, T., Honkela, A., and Kurimo, M. (2011). Missing-feature reconstruction
with a bounded nonlinear state-space model. IEEE Signal Processing Letters, 18(10), 563–566.
[24] Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating
errors. Nature, 323, 533–536.
[25] Schuster, M. and Paliwal, K. K. (1997). Bidirectional recurrent neural networks. IEEE Transactions on
Signal Processing, 45(11), 2673–2681.
[26] Sutskever, I., Martens, J., and Hinton, G. E. (2011). Generating text with recurrent neural networks. In
Proceedings of the 28th International Conference on Machine Learning (ICML 2011), pages 1017–1024.
[27] Uria, B., Murray, I., and Larochelle, H. (2014). A deep and tractable density estimator. In Proceedings of
The 31st International Conference on Machine Learning, pages 467–475.
[Figure 3 plot: NADE NLL minus GSN NLL as a function of the number of MCMC steps (10–100), with one curve per data set (Wikipedia, Nottingham, Piano, Muse, JSB).]
Figure 3: Difference in gap negative test log-likelihood between GSN and NADE for different data
sets. We can see that GSN outperforms NADE after a certain threshold of MCMC steps. Note
that the rising curves with music data sets might be explained by the model being overfitted to the
training set.
Table 2: Negative Log Likelihood (NLL) over the test data for the trained models. As a comparison,
similar numbers are presented for the unidirectional RNNs trained by Boulanger-Lewandowski et al.
[8] and by Hermans and Schrauwen [15]. The results serve as a sanity check that the trained models
have been trained successfully. Note that the BRNN has more context than the unidirectional RNNs,
and is hence expected to perform better measured by NLL. Also note that the training of the model
for NADE without masking is very similar to the training of the BRNN.
Inference strategy                  Wikipedia   Nottingham   Piano   Muse   JSB
NLL of BRNN                         0.37        3.23         6.82    5.68   7.80
NLL of NADE masked                  0.55        3.32         7.42    6.48   8.51
NLL of NADE                         0.41        2.89         7.05    5.54   7.59
NLL of unidirectional RNN           1.21        3.87         7.88    7.43   8.76
Boulanger-Lewandowski et al. [8]    –           4.46         8.37    8.13   8.71
Hermans and Schrauwen [15]          1.12        –            –       –      –
Table 3: Random samples from reconstructed gaps (underlined) using either NADE (left) or GSN
(right). Note that double spaces are difficult to distinguish from single spaces.
NADE
GSN
s practice their short for as long as possibl
s practice their show,d ar as long as possibl
nd nephews through copiene. It was reported o
nd nephews through clubs.e. It was reported o
but originally a nuclear bunker to protect th
but originally a nuclear bunker to protect th
the Opera” and ”The Heas”, have been fully r
the Opera” and ”Thereove”, have been fully r
e III. Dunmore died in March 1809 and was suc
e III. Dunmore died in March 1809 and was suc
ch fades from golden aillow at the center thr
ch fades from goldenly show at the center thr
Colorado state champion who is credited with
Colorado state champions ho is credited with
ing Bushroot, Liquida. HL and Megavolt). His
ing Bushroot, Liquidlands and Megavolt). His
acial lake bed known as the Black Dirt Region
acial lake bed known of the Black Dirt Region
e all ancient leadersidred it to laud their a
e all ancient leader cannd it to laud their a
ted November 2005. They series also featured
ted November 2005. The series also featured
TR amyloid is extractedliar. Treatment of TT
TR amyloid is extract s war. Treatment of TT
hile the gigantic ”Ston saurus sikanniensis”,
hile the gigantic ”S”So Saurus sikanniensis”,
area. Initially one other compartment was an
area. Initially one other compartment was an
| 9 |
arXiv:1701.07402v2 [math-ph] 29 Jul 2017
Smallest eigenvalue density for regular or
fixed-trace complex Wishart-Laguerre ensemble and
entanglement in coupled kicked tops
Santosh Kumar & Bharath Sambasivam & Shashank Anand
Department of Physics, Shiv Nadar University, Gautam Buddha Nagar, Uttar
Pradesh – 201314, India
E-mail: [email protected]
Abstract. The statistical behaviour of the smallest eigenvalue has important
implications for systems which can be modeled using a Wishart-Laguerre ensemble, the
regular one or the fixed trace one. For example, the density of the smallest eigenvalue of
the Wishart-Laguerre ensemble plays a crucial role in characterizing multiple channel
telecommunication systems. Similarly, in the quantum entanglement problem, the
smallest eigenvalue of the fixed trace ensemble carries information regarding the nature
of entanglement.
For real Wishart-Laguerre matrices, there exists an elegant recurrence scheme
suggested by Edelman to directly obtain the exact expression for the smallest
eigenvalue density. In the case of complex Wishart-Laguerre matrices, for finding exact
and explicit expressions for the smallest eigenvalue density, existing results based on
determinants become impractical when the determinants involve large-size matrices.
In this work, we derive a recurrence scheme for the complex case which is analogous to
that of Edelman’s for the real case. This is used to obtain exact results for the smallest
eigenvalue density for both the regular, and the fixed trace complex Wishart-Laguerre
ensembles. We validate our analytical results using Monte Carlo simulations. We also
study scaled Wishart-Laguerre ensemble and investigate its efficacy in approximating
the fixed-trace ensemble. Eventually, we apply our result for the fixed-trace ensemble
to investigate the behaviour of the smallest eigenvalue in the paradigmatic system of
coupled kicked tops.
1. Introduction
Wishart-Laguerre ensembles constitute an important class of random matrix
ensembles [1, 2] and have found diverse applications in the field of multivariate
statistics [3–5], problems related to time series [6–8], low energy quantum
chromodynamics [9, 10], multiple-channel telecommunication [11–13], quantum
entanglement [14–20], etc. Often the smallest eigenvalue distribution plays a crucial role
in investigating the behaviour of the system under study. For instance, in the context
of multiple input multiple output (MIMO) communication, the smallest eigenvalue of
Wishart-Laguerre ensemble determines the minimum distance between the received
vectors [21], and also the lower bound on the channel capacity that eventually is used
in antenna selection techniques [22]. Similarly, the smallest eigenvalue density of fixed
trace Wishart-Laguerre ensemble serves as an important metric for characterizing the
entanglement in bipartite systems [23, 24].
Matrices governed by Wishart distribution are parametrized by their size (n) and
the degree of freedom (m) [3–5]; see section 2. In this paper we use the term regular
to mean unrestricted trace Wishart matrices with m ≥ n. The smallest eigenvalue
of Wishart matrices have been studied extensively, both for regular, and fixed trace
scenarios. Moreover, finite-dimension, as well as large-dimension asymptotic cases have
been explored. Our focus here is on the finite-dimensional scenario with the primary
objective to obtain explicit expressions for the smallest eigenvalue density.
In the case of regular Wishart-Laguerre ensemble, for real matrices at finite
n, m, Edelman, among other things, has provided a recursion-based scheme to obtain
the smallest eigenvalue density [25, 26]. For complex matrices, the result for the
cumulative distribution of the smallest eigenvalue goes back to Khatri, who worked
out a determinant-based expression [27]. Forrester and Hughes have given closed
expressions for the density of the smallest and second-smallest eigenvalues in terms
of Wronskian and Toeplitz determinants [28]. Further generalizations applicable to a
wider symmetry class of Wishart matrices have been considered in [29–31]. Damgaard
and Nishigaki have derived the probability distribution of the kth smallest eigenvalue of
Dirac operator in the microscopic scaling limit for real, complex as well as quaternion
cases and demonstrated the universality of the results [32]. These eigenvalues have
direct relationship with those of the Wishart-Laguerre ensemble. In [33] Akemann et
al. have further explored the smallest eigenvalue distribution of real Wishart-Laguerre
matrices and validated the universality in the microscopic limit for the correlated case
also. Moreover, in a very recent work by Edelman, Guionnet, and Péché [34], the
behaviour of the smallest eigenvalue coming from finite random matrices (including
Wishart) based on non-Gaussian entries has been investigated.
For the fixed trace case, Chen, Liu and Zhou [35] have derived the smallest
eigenvalue density in terms of sum of Jack polynomials. Moreover, for the complex case
this expression has been simplified to inverse Laplace transform of a certain determinant.
In [36], for the real Wishart matrices, Edelman’s recursive approach for the regular
Wishart-Laguerre ensemble has been used by Akemann and Vivo to obtain the smallest
eigenvalue density for the fixed trace Wishart-Laguerre ensemble.
For the complex case, the exact result for the smallest eigenvalue density is
available in terms of determinants involving n-dimensional [21, 27, 37] or α-dimensional
matrices [28, 35], where α = m − n is the rectangularity. These results have been used
for asymptotic analysis when n → ∞ for α fixed and these correspond to eigenvalue
distributions comprising Bessel kernel [28,32,38]. On the other hand, if both n, α → ∞,
an analysis involving Fredholm determinant with Airy kernel is possible and that leads
to the celebrated Tracy-Widom (TW) distribution [39–41]. In [42] the transition regime
between the Bessel and Airy densities has also been investigated. While these asymptotic
results give a wealth of information regarding the universal aspects, if one desires
explicit expressions for the smallest eigenvalue density for large but finite n, α then
these determinant based results turn out to be impractical, even with the availability
of modern computational packages. We should remark that the smallest eigenvalue
density has also been worked out for correlated Wishart-Laguerre ensembles, both for
real and complex cases [37, 43, 44]. However, the exact results are, again, in terms of
determinants or Pfaffians.
The iterative scheme provided by Edelman [25, 26] is quite an effective and
convenient way to calculate the density for the case of real matrices, and one can handle
large values of dimensionality n and rectangularity α. For the complex Wishart-Laguerre
ensemble, Forrester and Hughes have worked out an iterative scheme for the cumulative
distribution of the smallest eigenvalue. However, to the best of our knowledge, an
iterative scheme analogous to that of Edelman’s, for direct evaluation of the probability
density of the smallest eigenvalue has hitherto remained unavailable. In the present
work, we derive the recurrence scheme that applies to the complex Wishart-Laguerre
ensemble. These results involve an ‘exponential times polynomial’ structure. Since the
fixed trace ensemble is related to the regular Wishart-Laguerre ensemble via a Laplace
transform, the structure of the smallest eigenvalue density in the latter leads to a very
convenient calculation of density in the former case as well [36]. Moreover, arbitrary
moments of the smallest eigenvalue are also readily obtained. In addition, for the regular
Wishart-Laguerre ensemble we also indicate a relation between the recurrence relation
and the determinantal results of Forrester and Hughes [28], and explicitly demonstrate
the equivalence between the two results for rectangularity α = 0, 1. Similarly, for
the fixed-trace scenario we prove the equivalence of the recursion-based expression and
the result of Chen, Liu and Zhou [35] based on the inverse Laplace transform of a
determinant, again for α = 0, 1.
Finally, we use the smallest eigenvalue density of the fixed trace ensemble to study
entanglement formation in the paradigmatic system of coupled kicked tops. We should
note that although the Floquet operator involved in this system belongs to the circular
orthogonal ensemble (COE) [1,2], the dynamically generated states are not random real
vectors in the Schmidt basis [45]. Rather, they are complex, and hence, the results for
the complex fixed trace Wishart-Laguerre ensemble are applicable.
2. Wishart-Laguerre ensemble
Consider complex matrices A of dimensions n × m taken from the density P_A(A) ∝ \exp(-\mathrm{tr}\, AA^{\dagger}). We assume without loss of generality that n ≤ m. Then, the
non-negative definite matrices W = AA† constitute the (regular) Wishart-Laguerre
ensemble with the probability density
P_W(W) \propto (\det W)^{m-n} \exp(-\mathrm{tr}\, W).   (1)
The joint probability density of unordered eigenvalues (λj ≥ 0, j = 1, ..., n) of W is
given by [1, 2] ‡
P(\lambda_1, \ldots, \lambda_n) = C_{n,\alpha}\, \Delta_n^2(\{\lambda\}) \prod_{j=1}^{n} \lambda_j^{\alpha}\, e^{-\lambda_j},   (2)
with α = m − n, and
C_{n,\alpha}^{-1} = \prod_{j=1}^{n} \Gamma(j+1)\,\Gamma(j+\alpha).   (3)
Also, \Delta_n(\{\lambda\}) = \prod_{1 \le k < j \le n} (\lambda_j - \lambda_k) is the Vandermonde determinant. For this classical
ensemble, exact expressions for correlation functions of all orders are known [1, 2]. For
example, the first order marginal density (one-level density),
p(\lambda) = \int_0^{\infty} d\lambda_2 \cdots \int_0^{\infty} d\lambda_n\, P(\lambda, \lambda_2, \ldots, \lambda_n),   (4)
is given by
p(\lambda) = \frac{1}{n}\, e^{-\lambda} \lambda^{\alpha} \sum_{j=0}^{n-1} \frac{\Gamma(j+1)}{\Gamma(j+\alpha+1)} \left[ L_j^{(\alpha)}(\lambda) \right]^2
= \frac{\Gamma(n)}{\Gamma(m)}\, e^{-\lambda} \lambda^{\alpha} \left[ L_{n-1}^{(\alpha)}(\lambda)\, L_n^{(\alpha+1)}(\lambda) - L_n^{(\alpha)}(\lambda)\, L_{n-1}^{(\alpha+1)}(\lambda) \right].   (5)
Here L_i^{(\gamma)}(\lambda) represents the associated Laguerre polynomials [46].
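For a quick numerical check of (5), the one-level density can be evaluated directly from the Laguerre polynomials; the following is a minimal sketch in Python/SciPy (not part of the paper's Mathematica codes; the function name is ours):

import numpy as np
from scipy.special import eval_genlaguerre, gammaln

def marginal_density(lam, n, m):
    """One-level density p(lambda) of eq. (5) for the complex Wishart-Laguerre ensemble."""
    a = m - n
    # prefactor Gamma(n)/Gamma(m) * exp(-lambda) * lambda^a, with the Gamma ratio in log space
    pref = np.exp(gammaln(n) - gammaln(m) - lam) * lam**a
    return pref * (eval_genlaguerre(n - 1, a, lam) * eval_genlaguerre(n, a + 1, lam)
                   - eval_genlaguerre(n, a, lam) * eval_genlaguerre(n - 1, a + 1, lam))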
We now focus on the statistics of the smallest eigenvalue of W. The probability
density for the smallest eigenvalue can be calculated using the joint probability
density (2) as [25, 26, 28]
f(x) = n \int_x^{\infty} d\lambda_2 \cdots \int_x^{\infty} d\lambda_n\, P(x, \lambda_2, \ldots, \lambda_n).   (6)
As shown in Appendix A, this can be brought down to the form
f(x) = c_{n,m}\, e^{-nx} x^{\alpha}\, g_{n,m}(x).   (7)
Here the normalization factor c_{n,m} is given by
c_{n,m} = \frac{1}{\Gamma(n)\Gamma(m)} \prod_{i=1}^{n-1} \frac{\Gamma(i+2)}{\Gamma(i+\alpha)},   (8)
and g_{n,m}(x) is obtained using the following recurrence scheme:
I. Set S_0 = g_{n,m-1}(x), S_{-1} = 0.
II. Iterate the following for i = 1 to n − 1:
   S_i = (x + m - i + 1)\, S_{i-1} - \frac{x}{n-i}\, \frac{dS_{i-1}}{dx} + x\,(i-1)\, \frac{m-i}{n-i}\, S_{i-2}.
III. Obtain g_{n,m}(x) = S_{n-1}.
‡ We should note that m × m–dimensional matrices A† A share the eigenvalues λ1 , ..., λn of AA† . The
other m − n eigenvalues are all zeros. The joint density in this case can be written by introducing delta functions for these zero eigenvalues and implementing proper symmetrization among all eigenvalues.
The initial case (m = n) is given by gn,n (x) = 1. Thus, starting from the square case,
for which the result is simple (f (x) = ne−nx ), we can go up to any desired rectangularity
by repeated application of the above recurrence scheme. We note that (7) is of the form
f(x) = \sum_{j=\alpha+1}^{\alpha n + 1} h_j\, x^{j-1} e^{-nx},   (9)
where hj are n, m dependent rational coefficients. The lower and upper limits of the
summation in (9) are α + 1 and αn + 1, respectively. This is because the recurrence
scheme applied for rectangularity α gives gn,m (x) as a polynomial of degree α(n − 1),
and there is the factor xα in (7). The coefficients hj can be extracted once the above
recursion has been applied.
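To illustrate the recurrence, a small symbolic implementation in Python/SymPy (distinct from the Mathematica codes of Appendix B; the function name is ours) builds g_{n,m}(x) by raising the rectangularity one step at a time:

import sympy as sp

def g_poly(n, m):
    """g_{n,m}(x) from the recurrence of section 2, starting from g_{n,n}(x) = 1."""
    x = sp.symbols('x')
    g = sp.Integer(1)                                   # g_{n,n}(x) = 1
    for mm in range(n + 1, m + 1):                      # raise the rectangularity step by step
        S_prev2, S_prev1 = sp.Integer(0), g             # S_{-1} = 0, S_0 = g_{n,mm-1}(x)
        for i in range(1, n):
            S_i = sp.expand((x + mm - i + 1) * S_prev1
                            - x * sp.diff(S_prev1, x) / (n - i)
                            + x * (i - 1) * sp.Rational(mm - i, n - i) * S_prev2)
            S_prev2, S_prev1 = S_prev1, S_i
        g = S_prev1                                     # g_{n,mm}(x) = S_{n-1}
    return sp.expand(g)

For example, g_poly(2, 3) returns x + 3, so that f(x) = e^{-2x} x (x + 3), in agreement with Table D1.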
The above simple structure for the probability density gives easy access to the η–th
moment of the smallest eigenvalue of the Wishart-Laguerre ensemble. We obtain
\langle x^{\eta} \rangle = \int_0^{\infty} x^{\eta} f(x)\, dx = \sum_{j=\alpha+1}^{\alpha n + 1} \frac{h_j}{n^{j+\eta}}\, \Gamma(j+\eta).   (10)
We would like to remark that this relationship holds not only for non-negative integer
values of η, but also for any complex η such that Re(η) > −α − 1.
In Appendix B we provide simple Mathematica [47] codes that produce exact results
for the density as well as the η–th moment for any desired value of n, m by implementing
the above results.
In figure 1 we consider the smallest eigenvalue density and compare the analytical
results with Monte Carlo simulations using 105 matrices for n = 8, 15, and several α
values. We find excellent agreement in all cases.
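The Monte Carlo data can be generated directly from the matrix definition of section 2; a minimal Python/NumPy sketch (the function and variable names are illustrative) is:

import numpy as np

def smallest_eigenvalues(n, m, samples=100_000, seed=None):
    """Sample the smallest eigenvalue of W = A A^dagger for complex Gaussian A (n x m)."""
    rng = np.random.default_rng(seed)
    out = np.empty(samples)
    for k in range(samples):
        # P_A(A) ~ exp(-tr A A^dagger): real and imaginary parts are N(0, 1/2)
        A = (rng.normal(scale=np.sqrt(0.5), size=(n, m))
             + 1j * rng.normal(scale=np.sqrt(0.5), size=(n, m)))
        out[k] = np.linalg.eigvalsh(A @ A.conj().T)[0]   # eigvalsh returns eigenvalues in ascending order
    return out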
Figure 1. Probability density of the smallest eigenvalue for the Wishart-Laguerre ensemble with (a) n = 8, (b) n = 15, and α values as indicated. The solid lines are analytical predictions based on (7), while the symbols (filled circles, squares, triangles) represent results of Monte Carlo simulations.
Forrester and Hughes’ result for the smallest eigenvalue density reads [28]
f(x) = (-1)^{\alpha(\alpha-1)/2}\, \frac{\Gamma(n+1)}{\Gamma(m)}\, e^{-nx} x^{\alpha}\, \det\!\left[ D_t^{\alpha+j-k-1} L_{m-2}^{(3-\alpha)}(t)\big|_{t=-x} \right]_{j,k=1,\ldots,\alpha},   (11)
where D_t ≡ d/dt. Comparing this result with (7), we find that
g_{n,m}(x) = (-1)^{\alpha(\alpha-1)/2}\, \frac{\Gamma(n+1)}{\Gamma(m)\, c_{n,m}}\, \det\!\left[ D_t^{\alpha+j-k-1} L_{m-2}^{(3-\alpha)}(t)\big|_{t=-x} \right]_{j,k=1,\ldots,\alpha}.   (12)
Therefore, the recurrence scheme essentially leads to the evaluation of the above
determinant, which otherwise becomes difficult to evaluate directly for large α values.
Demonstrating the equality of the two sides in (12) directly seems challenging for
arbitrary n, m, if at all feasible. However, as shown below, for α = 1, we find that gn,m (x)
does lead to the associated Laguerre polynomial as evaluated by the determinantal
expression. Before analyzing the results of α = 1, we also consider α = 0, which
corresponds to the square case.
2.1. Results for α = 0
In this case g_{n,m}(x) = 1 and in the expression (9) there is just one term in the sum, viz. j = 1. The corresponding value of the coefficient h_j is n. Thus, the smallest eigenvalue density reads
f(x) = n\, e^{-nx}.   (13)
Also, the moment expression is given by
\langle x^{\eta} \rangle = \frac{\Gamma(\eta+1)}{n^{\eta}}.   (14)
These expressions agree with those derived in [25, 28], as they should. We note that (11) leads to the correct density, as the determinant part has to be taken as 1 for α = 0.
2.2. Results for α = 1
This is a nontrivial case. As shown in Appendix C, in this case S_i in the recurrence relation can be identified with \Gamma(i+1)\, L_i^{(n-i+1)}(-x). Consequently, g_{n,n+1}(x) = \Gamma(n)\, L_{n-1}^{(2)}(-x). Also c_{n,n+1} = 1/\Gamma(n), which leads to the smallest eigenvalue expression
f(x) = e^{-nx}\, x\, L_{n-1}^{(2)}(-x).   (15)
This agrees with (11) when evaluated for α = 1. The use of the expansion of the Laguerre polynomial [46] leads to the coefficient h_j in (9) as
h_j = \frac{\Gamma(n+2)}{\Gamma(n-j+2)\,\Gamma(j+1)\,\Gamma(j-1)}, \qquad j = 2, 3, \ldots, n+1.   (16)
The η–th moment follows as
\langle x^{\eta} \rangle = \Gamma(n+2) \sum_{j=2}^{n+1} \frac{\Gamma(j+\eta)}{n^{j+\eta}\, \Gamma(n-j+2)\,\Gamma(j+1)\,\Gamma(j-1)}.   (17)
(17)
3. Fixed trace Wishart-Laguerre ensemble
Fixed trace ensembles constitute a special class of random matrices and can take care
of system dependent constraints. For the Wishart-Laguerre case, the corresponding
fixed trace ensemble arises naturally in the quantum entanglement problem in bipartite
systems [15, 17, 18]. With the trace value fixed at unity, it models the reduced density
matrix; see section 5. Using the Wishart matrices W from the preceding section, the
fixed trace ensemble can be realized by considering the matrices F = W/ tr W [18, 20].
The corresponding probability density is
PF (F) ∝ (det F)α δ(tr F − 1).
(18)
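This construction translates directly into a sampling routine; a minimal Python/NumPy sketch (the function name is ours) producing one realisation of the eigenvalues of F is:

import numpy as np

def fixed_trace_eigenvalues(n, m, seed=None):
    """Eigenvalues of F = W / tr W for one realisation of the fixed trace ensemble."""
    rng = np.random.default_rng(seed)
    A = (rng.normal(scale=np.sqrt(0.5), size=(n, m))
         + 1j * rng.normal(scale=np.sqrt(0.5), size=(n, m)))
    w = np.linalg.eigvalsh(A @ A.conj().T)   # eigenvalues of W, ascending
    return w / w.sum()                       # eigenvalues of F; they sum to 1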
The joint density of unordered eigenvalues (0 ≤ µ_j ≤ 1; j = 1, ..., n) of F is obtained as [15, 17, 18]
P^F(\mu_1, \ldots, \mu_n) = C_{n,\alpha}^{F}\, \delta\!\left( \sum_{i=1}^{n} \mu_i - 1 \right) \Delta_n^2(\{\mu\}) \prod_{j=1}^{n} \mu_j^{\alpha},   (19)
where C_{n,\alpha}^{F} = \Gamma(nm)\, C_{n,\alpha} [48]. The corresponding marginal density has been derived in [49] as a single sum over hypergeometric {}_5F_4, and as a double sum over polynomials in [50]. In [48] it has been given as a single sum over the Gauss hypergeometric function ({}_2F_1):
p_F(\mu) = \sum_{i=0}^{n-1} K_i\, \mu^{i+\alpha} (1-\mu)^{nm-\alpha-i-2} \left[ n\, F_{\alpha+1}^{1-n,\, i-nm+\alpha+1} - (n-i-1)\, F_{\alpha+1}^{-n,\, i-nm+\alpha+1} \right].   (20)
Here we used the notation F_c^{a,b} := {}_2F_1\!\left(a, b; c; \frac{\mu}{1-\mu}\right)/\Gamma(c). Also, the coefficient K_i is given by
K_i = \frac{(-1)^i\, \Gamma(m+1)\,\Gamma(nm)}{n\, \Gamma(i+1)\,\Gamma(n-i)\,\Gamma(i+\alpha+2)\,\Gamma(nm-\alpha-i-1)}.   (21)
Figure 2 shows the comparison between analytical and Monte Carlo results for the
marginal density of the fixed trace ensemble. We find excellent agreement.
Using Selberg’s integral and its properties [1, 2], it can be shown that both the
average and the variance of the trace for the regular Wishart-Laguerre ensemble are equal to mn.
Therefore, if we consider the ensemble of matrices W/mn, the corresponding eigenvalues
are 1/mn times the eigenvalues of W. Moreover, while individually these scaled matrices
may not have trace one, on average, it is one. In addition, the variance of trace for this
scaled ensemble is 1/mn, which becomes negligible for large n, m. Therefore, it is
expected that this scaled ensemble will approximate the behaviour of the fixed trace
ensemble. For instance, the marginal density for this scaled ensemble,
\tilde{p}(\mu) = mn\, p(mn\mu),   (22)
should serve as an approximation to p_F(\mu). We can also use the Marčenko-Pastur density [51] to write down an expression for \tilde{p}(\mu) valid for large n, m:
\tilde{p}_{MP}(\mu) = \frac{m}{2\pi}\, \frac{\sqrt{(\mu_+ - \mu)(\mu - \mu_-)}}{\mu}; \qquad \mu_{\pm} = \frac{\left(1 \pm \sqrt{n/m}\right)^2}{n}.   (23)
Figure 2. Marginal density for the fixed trace Wishart-Laguerre ensemble using (20) with n = 8, and α values as indicated. The solid lines are analytical predictions based on (20), and the symbols correspond to Monte Carlo results.
This relation of the fixed trace ensemble with the scaled ensemble has been used
in [19, 52–54]. In figure 3 we plot the exact one-eigenvalue density (20) for the fixed
trace ensemble, as well as the densities (22), (23) based on the scaled ensemble. We
find that while the density obtained using the scaled ensemble is not able to capture the
oscillations, it does capture the overall shape of the density quite well.
Figure 3. Marginal density of eigenvalues for the fixed trace Wishart-Laguerre ensemble: comparison between exact (solid black), scaled (dashed red), and scaled Marčenko-Pastur (dotted blue) results, as given by (20), (22) and (23), respectively. (a) n = m = 15, (b) n = 20, m = 30, (c) n = 25, m = 75.
The exact result for the density of the smallest eigenvalue for the fixed trace ensemble can be obtained using (7), (9), and the Laplace-inversion result
\mathcal{L}^{-1}\{ s^{-a} e^{-nsx} \}(t=1) = (1-nx)^{a-1}\, \Theta(1-nx)/\Gamma(a),   (24)
with Θ(z) being the Heaviside theta function. We have
f_F(x) = \Gamma(nm)\, \mathcal{L}^{-1}\{ s^{1-nm} f(sx) \}(t=1) = \Gamma(nm) \sum_{j=\alpha+1}^{\alpha n+1} \frac{h_j}{\Gamma(nm-j)}\, (1-nx)^{nm-j-1} x^{j-1}\, \Theta(1-nx).   (25)
Figure 4. Probability density of the smallest eigenvalue for the fixed trace Wishart-Laguerre ensemble with (a) n = 8, (b) n = 15, and α values as indicated. The solid lines are analytical predictions based on (25), while the symbols (filled circles, squares, triangles) represent results of Monte Carlo simulations.
The prefactor Γ(nm) comes from the ratio of normalizations, viz. C_{n,\alpha}^{F}/C_{n,\alpha}. In [36] a similar strategy has been used for the real case. In figure 4 we show the comparison between the analytical prediction and the numerical simulation for the smallest eigenvalue density. They are in excellent agreement.
The idea of using the scaled Wishart-Laguerre ensemble, as discussed above, can be applied here as well. Therefore, an approximate density for the smallest eigenvalue can be written using (7) as
\tilde{f}(x) = mn\, f(mnx).   (26)
In figure 5 we compare this approximation with the exact result given by (25). The
approximation works pretty well. This approximate relation between the two densities
is also the reason behind the very similar shapes of the curves in figures 1 and 4,
respectively.
We also find, using the first equality in (25), that the η–th moment of the smallest eigenvalue for the fixed trace ensemble is related to that of the regular Wishart-Laguerre ensemble as
\langle x^{\eta} \rangle_F = \frac{\Gamma(nm)}{\Gamma(nm+\eta)}\, \langle x^{\eta} \rangle.   (27)
This, similar to (10), holds for Re(η) > −α − 1.
Mathematica [47] codes to obtain explicit results for the above smallest eigenvalue
density of the fixed trace ensemble, as well as the moments are given in Appendix B.
Figure 5. Comparison between the exact (f_F(x): solid black) and approximate (f̃(x): dashed red) probability densities for the smallest eigenvalue of the fixed trace Wishart-Laguerre ensemble, as given by (25) and (26), respectively. The parameter values used are (a) n = m = 5, (b) n = 8, m = 13, and (c) n = 20, m = 30.
Similar to the unrestricted trace case, we discuss below the cases α = 0, 1 for the
fixed trace scenario.
3.1. Results for α = 0
For α = 0 we just have one term in the series (25), and h_1 = n. Therefore, we arrive at
f_F(x) = n(n^2-1)\, (1-nx)^{n^2-2}\, \Theta(1-nx).   (28)
Also, the expression for the η-th moment is given by
\langle x^{\eta} \rangle_F = \frac{\Gamma(\eta+1)\,\Gamma(n^2)}{n^{\eta}\, \Gamma(n^2+\eta)}.   (29)
These expressions are in agreement with those obtained in [23, 24].
3.2. Results for α = 1
In this case use of the result (16) for h_j in (25) leads to the smallest eigenvalue density expression
f_F(x) = \Gamma(n^2+n)\,\Gamma(n+2) \sum_{j=2}^{n+1} \frac{(1-nx)^{n^2+n-j-1}\, x^{j-1}}{\Gamma(n-j+2)\,\Gamma(j+1)\,\Gamma(j-1)\,\Gamma(n^2+n-j)}\, \Theta(1-nx).   (30)
Also, the η–th moment follows as
\langle x^{\eta} \rangle_F = \frac{\Gamma(n^2+n)\,\Gamma(n+2)}{\Gamma(n^2+n+\eta)} \sum_{j=2}^{n+1} \frac{\Gamma(j+\eta)}{n^{j+\eta}\, \Gamma(n-j+2)\,\Gamma(j+1)\,\Gamma(j-1)}.   (31)
Chen, Liu and Zhou have provided the result for the cumulative distribution § of
the smallest eigenvalue for the complex case in terms of an inverse-Laplace transform
§ More appropriately, the survival function or the reliability function.
involving a determinant [35]:
Q(x) = \Gamma(mn)\, x^{mn-1}\, \mathcal{L}^{-1}\!\left\{ s^{-mn} \det\!\left[ L_{n+j-k}^{(k)}(-s) \right]_{j,k=0,\ldots,\alpha-1} \right\}\!\left( \frac{1-nx}{x} \right); \qquad 0 < x \le \frac{1}{n}.   (32)
We set α = 1 in this expression and use the expansion for the associated Laguerre polynomial [46] later on. The inverse Laplace transform can then be explicitly performed, leading us to
Q(x) = \Gamma(n+1)\,\Gamma(n^2+n) \sum_{j=0}^{n} \frac{x^{j}\, (1-nx)^{n^2+n-j-1}}{\Gamma^2(j+1)\,\Gamma(n-j+1)\,\Gamma(n^2+n-j)}.   (33)
The probability density of the smallest eigenvalue follows upon using f_F(x) = −dQ(x)/dx as
f_F(x) = \Gamma(n+1)\,\Gamma(n^2+n) \sum_{j=0}^{n} \frac{n\, x^{j}\, (1-nx)^{n^2+n-j-2}}{\Gamma^2(j+1)\,\Gamma(n-j+1)\,\Gamma(n^2+n-j-1)}
 - \Gamma(n+1)\,\Gamma(n^2+n) \sum_{j=1}^{n+1} \frac{x^{j-1}\, (1-nx)^{n^2+n-j-1}}{\Gamma(j)\,\Gamma(j+1)\,\Gamma(n-j+1)\,\Gamma(n^2+n-j)}.   (34)
In the second term we start the sum from j = 1, as the j = 0 term is zero because of the diverging gamma function Γ(j) in the denominator. Moreover, we have added a term j = n + 1 which, again, is zero because of the diverging Γ(n − j + 1) in the denominator. Next, we consider j → j − 1 in the summand of the first term, and hence the sum runs from j = 1 to n + 1. The two terms can then be combined to yield (30) by noticing that the j = 1 term is zero. We note that (32) also produces the correct result for α = 0 if the determinant value in this case is interpreted as 1.
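The combination of the two sums is easily verified symbolically for small n; the following is a minimal Python/SymPy check (the function name is ours) that −dQ/dx from (33) reproduces (30):

import sympy as sp

def check_alpha1_fixed_trace(n):
    """Verify that -dQ(x)/dx from (33) equals f_F(x) of (30) for alpha = 1."""
    x = sp.symbols('x')
    N = n**2 + n   # nm with m = n + 1
    Q = sp.gamma(n + 1) * sp.gamma(N) * sum(
        x**j * (1 - n*x)**(N - j - 1)
        / (sp.gamma(j + 1)**2 * sp.gamma(n - j + 1) * sp.gamma(N - j))
        for j in range(0, n + 1))
    fF = sp.gamma(N) * sp.gamma(n + 2) * sum(
        (1 - n*x)**(N - j - 1) * x**(j - 1)
        / (sp.gamma(n - j + 2) * sp.gamma(j + 1) * sp.gamma(j - 1) * sp.gamma(N - j))
        for j in range(2, n + 2))
    return sp.simplify(-sp.diff(Q, x) - fF) == 0

For instance, check_alpha1_fixed_trace(3) returns True.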
4. Large n, α evaluations and comparison with Tracy-Widom density
The universality aspects of the regular Wishart-Laguerre ensemble have been explored in
several notable works [29–34,38–41,55–60]. For the fixed trace case, the local statistical
properties of the eigenvalues have been studied in [35, 61]. In particular, it has been
shown that the fixed trace and the regular Wishart-Laguerre ensembles share identical
universal behaviour for large n at the hard edge, in the bulk and at the soft edge for α
fixed [61].
For the complex Wishart-Laguerre ensemble, in the square case (α = 0), the
smallest eigenvalue scaled by n gives rise to an exponential density [23, 25, 26].
Interestingly, this is true for all n in this case, as evident from (13). For large n it
has been shown that this result holds even if the matrices W are constructed from non-Gaussian A [55] (cf. section 2), under certain conditions. For the fixed trace case,
in view of its connection to the scaled Wishart-Laguerre ensemble (25), as discussed
in section 3, the smallest eigenvalue has to be scaled by n^3 to obtain the exponential
density [23, 24]. This can be easily verified from (28). Furthermore, very
recently, 1/n corrections to the scaled smallest eigenvalue have been worked out for
close-to-square cases [34, 59, 60].
Figure 6. Comparison of the Tracy-Widom density (solid black) with the densities −σf(σx + η) (dashed red) and −(σ/mn)f_F((σx + η)/mn) (dotted blue) for (a) n = 25, m = 125, (b) n = 25, m = 225, (c) n = 25, m = 425, (d) n = 50, m = 150, (e) n = 50, m = 250, and (f) n = 50, m = 450. It should be noted that the rectangularity α varies as 100 for (a), (d); 200 for (b), (e); and 400 for (c), (f). Also, the aspect ratio n/m is 1/5 for (a), (e), and 1/9 for (b), (f), respectively.
For the rectangular case, Feldheim and Sodin [56] have shown that, in the limit m →
∞, n → ∞ with n/m bounded away from 1, the shifted and scaled smallest eigenvalue,
(λ_min − η)/σ, leads to the Tracy-Widom density [39, 40]. Here η = (n^{1/2} − m^{1/2})^2 and
σ = (n^{1/2} − m^{1/2})(n^{−1/2} − m^{−1/2})^{1/3} < 0. The convergence, however, is slower when
α = m − n is close to 0. This can be attributed to the fact that the hard-edge behaviour
is prevalent unless α is large [58]. We should also mention that the Tracy-Widom
density captures the largest typical fluctuations of the smallest eigenvalue, while the
larger atypical fluctuations are described by large deviation results, as derived in [57]
by Katzav and Castillo.
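In practice the comparison amounts to the change of variables above; a small Python/NumPy helper (the function name is ours) that maps sampled or tabulated smallest eigenvalues to the Tracy-Widom variable is:

import numpy as np

def tracy_widom_rescale(lmin, n, m):
    """Map smallest eigenvalues to the shifted and scaled variable (lmin - eta)/sigma."""
    eta = (np.sqrt(n) - np.sqrt(m))**2
    sigma = (np.sqrt(n) - np.sqrt(m)) * (1/np.sqrt(n) - 1/np.sqrt(m))**(1/3)   # negative for n < m
    return (np.asarray(lmin) - eta) / sigma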
As a consequence of identical universal behaviour of spectra of the regular and
fixed-trace ensembles [61], the Tracy-Widom density is also expected in the case of fixed
trace ensemble. The proper scaling in this case can be inferred from the connection
with the scaled Wishart-Laguerre ensemble, as discussed in Sec. 3. This implies that
the density of (mnµmin − η)/σ will converge to the Tracy-Widom result.
The recursion scheme given in section 2 enables us to work out the exact results
for the smallest eigenvalue density for large values of n and α and hence to explore
the above limit. In view of the scaling and shift indicated above, −σf (σx + η)
and −(σ/mn)fF ((σx + η)/mn) should coincide with the Tracy-Widom density of
the unitarily-invariant class. We examine this in figure 6. We can see that as the
rectangularity α increases the agreement improves for both n = 25 and n = 50. This
is because the hard-edge effect is diminished with increasing α. We also find that for a
given aspect ratio n/m < 1, as expected, the agreement is better for larger n.
5. Entanglement in bipartite systems
Consider a bipartite partition of an N_1N_2–dimensional Hilbert space H^{(N_1N_2)} consisting of subsystems A and B, which belong to Hilbert spaces H_A^{(N_1)} and H_B^{(N_2)}, respectively, such that H^{(N_1N_2)} = H_A^{(N_1)} \otimes H_B^{(N_2)}. A general state |ψ⟩ of H^{(N_1N_2)} is given in terms of the orthonormal states |i^A⟩ of H_A^{(N_1)}, and |α^B⟩ of H_B^{(N_2)}, as
|\psi\rangle = \sum_{i=1}^{N_1} \sum_{\alpha=1}^{N_2} x_{i,\alpha}\, |i^A\rangle \otimes |\alpha^B\rangle,   (35)
where x_{i,\alpha} are complex coefficients such that \langle\psi|\psi\rangle = \sum_{i=1}^{N_1}\sum_{\alpha=1}^{N_2} |x_{i,\alpha}|^2 = 1. The density matrix for the composite system, considering a pure state scenario, is given by
\rho = |\psi\rangle\langle\psi| = \sum_{i,j=1}^{N_1} \sum_{\alpha,\beta=1}^{N_2} x_{i,\alpha}\, x_{j,\beta}^{*}\, |i^A\rangle\langle j^A| \otimes |\alpha^B\rangle\langle\beta^B|,   (36)
with tr[ρ] = 1. The reduced density matrix for a subsystem, say A, can be obtained by tracing out the subsystem B as
\rho_A = \sum_{\alpha'=1}^{N_2} \langle \alpha'^{B} | \rho | \alpha'^{B} \rangle = \sum_{i,j=1}^{N_1} F_{i,j}\, |i^A\rangle\langle j^A|,   (37)
where F_{i,j} = \sum_{\alpha=1}^{N_2} x_{i,\alpha}\, x_{j,\alpha}^{*} can be viewed as the matrix elements of some N_1 × N_1–dimensional matrix F = XX†. Here X is a rectangular matrix of dimension N_1 × N_2 that has x_{i,\alpha} as its elements. In the eigenbasis of F, (37) can be written as
\rho_A = \sum_{i=1}^{N_1} \mu_i\, |\mu_i^A\rangle\langle \mu_i^A|.   (38)
The eigenvalues µ_i of F are referred to as the Schmidt eigenvalues or Schmidt numbers. Due to the trace condition, they satisfy
\sum_{i=1}^{N_1} \mu_i = \mathrm{tr}\, XX^{\dagger} = \mathrm{tr}\, F = 1.   (39)
Suppose N1 ≤ N2 . Now, if we sample all normalized density matrices with equal
probabilities, i.e., if we choose the coefficients xi,α randomly using the Hilbert-Schmidt
density PX (X) ∝ δ(tr XX† − 1), then F defined here is statistically equivalent to the
F defined in (18), and the statistics of the Schmidt eigenvalues are described exactly
by the joint eigenvalue density of the fixed trace Wishart-Laguerre ensemble (19), with
N1 = n, N2 = m [19, 20]. It should be noted that the reduced density matrix for
the subsystem B will correspond to the matrix X† X, which will share the eigenvalues
µ1 , ..., µn , and will have the rest of its m − n eigenvalues as zero. As such, it carries the
same amount of information as F.
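Such Schmidt spectra are straightforward to sample; a minimal Python/NumPy sketch (the function name is ours) drawing a random pure state with the Hilbert-Schmidt measure is:

import numpy as np

def schmidt_eigenvalues(N1, N2, seed=None):
    """Schmidt eigenvalues of a random pure state on an N1 x N2 bipartite system."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(N1, N2)) + 1j * rng.normal(size=(N1, N2))
    X /= np.linalg.norm(X)                      # enforce <psi|psi> = tr X X^dagger = 1
    return np.linalg.eigvalsh(X @ X.conj().T)   # eigenvalues of the reduced density matrix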
The Schmidt eigenvalues can be used to study various entanglement measures such as the von Neumann entropy, Rényi entropies, concurrence, and purity. As a consequence, the fixed trace Wishart-Laguerre ensemble has been extensively used to model
the reduced density matrices arising in the study of entanglement formation in bipartite
systems [14–16, 20, 23, 24, 35, 36, 48–50, 52–54, 61–67]. These works have explored several
aspects such as moments and distributions of Schmidt eigenvalues and entanglement
measures.
The density of the minimum eigenvalue in the present context not only sheds light
on the nature of the entanglement, but also provides important information about the
degree to which the effective dimension of the Hilbert space of the smaller subsystem
can be reduced [23, 24]. The smallest eigenvalue assumes values from 0 to 1/n. In
the extreme case of 1/n, it follows from the trace constraint \sum_{i=1}^{n} \mu_i = 1 that all the eigenvalues must have the same value 1/n. Consequently, the von Neumann entropy, -\sum_{i=1}^{n} \mu_i \ln \mu_i, assumes its maximum value \ln n, thereby making the corresponding
state maximally entangled. In the other extreme of the smallest eigenvalue being 0 (or
very close to 0), while it does not provide information regarding entanglement, from
the Schmidt decomposition it follows that the effective Hilbert space dimension of the
subsystem gets reduced by one.
In the next section we consider a system of coupled kicked tops and explore to what
extent the behaviour of the smallest Schmidt eigenvalue is described by the fixed trace
Wishart-Laguerre ensemble.
6. Coupled kicked tops
The kicked top system has been a paradigm for studying chaos, both classically and
quantum mechanically [68, 69]. Remarkably, it has also been realized experimentally
using an ensemble of Caesium atoms [70]. In the study of entanglement formation in
bipartite systems, a coupled system of two kicked tops has turned out to be of great
importance and has been investigated by a number of researchers [45, 53, 54, 65, 71, 72].
The full Hamiltonian of the coupled kicked top system is
H = H1 ⊗ 1N2 + 1N1 ⊗ H2 + H12 .
(40)
Here,
H_r = \frac{\pi}{2}\, J_{y_r} + \frac{k_r}{2 j_r}\, J_{z_r}^2 \sum_{\nu=-\infty}^{\infty} \delta(t - \nu), \qquad r = 1, 2,   (41)
represent the Hamiltonians for the individual tops, and
H_{12} = \frac{\epsilon}{\sqrt{j_1 j_2}}\, (J_{z_1} \otimes J_{z_2}) \sum_{\nu=-\infty}^{\infty} \delta(t - \nu)   (42)
is the interaction term. Also, 1_{N_r} represents the identity operator that acts on the N_r-dimensional Hilbert space H^{(N_r)}. The Hamiltonians H_1 and H_2 correspond to the N_1(= 2j_1 + 1)-dimensional and N_2(= 2j_2 + 1)-dimensional Hilbert spaces H^{(N_1)} and H^{(N_2)}, respectively. The Hamiltonian for the coupled kicked tops corresponds to an N_1N_2-dimensional Hilbert space H^{(N_1N_2)} = H^{(N_1)} ⊗ H^{(N_2)}. Also, (J_{x_r}, J_{y_r}, J_{z_r}) are angular momentum operators for the r-th top and satisfy the usual commutation relations. The parameter k_r controls the chaotic behaviour of the individual tops. The parameter ε controls the coupling between the two tops.
The unitary time evolution operator (Floquet operator) corresponding to the Hamiltonian (40) is
U = (U_1 \otimes U_2)\, U_{12},   (43)
with
U_r = \exp\!\left( -\frac{\iota\pi}{2}\, J_{y_r} - \frac{\iota k_r}{2 j_r}\, J_{z_r}^2 \right), \qquad r = 1, 2;   (44)
U_{12} = \exp\!\left( -\frac{\iota\epsilon}{\sqrt{j_1 j_2}}\, J_{z_1} \otimes J_{z_2} \right).   (45)
Here ι = \sqrt{-1} represents the imaginary unit. The initial state for the individual tops is chosen as a generalized SU(2) coherent state or the directed angular momentum state [68, 69], which is given in the |j_r, m_r⟩ basis as
\langle j_r, m_r | \theta_0^{(r)}, \phi_0^{(r)} \rangle = (1 + |\gamma_r|^2)^{-j_r} \sqrt{\binom{2j_r}{j_r + m_r}}\, \gamma_r^{\,j_r - m_r},
with \gamma_r \equiv \exp(\iota\phi_0^{(r)}) \tan(\theta_0^{(r)}/2). For later use, we define N_r-dimensional vectors given by
\chi_r = \left[ \langle j_r, m_r | \theta_0^{(r)}, \phi_0^{(r)} \rangle \right]_{m_r = -j_r, \ldots, +j_r}.   (46)
For the coupled top, the initial state is taken as the tensor product of the states of the individual tops: |\psi(0)\rangle = |\theta_0^{(1)}, \phi_0^{(1)}\rangle \otimes |\theta_0^{(2)}, \phi_0^{(2)}\rangle. We can implement the time evolution to obtain the state |ψ(ν)⟩ starting from |ψ(0)⟩ using the iteration scheme |ψ(ν)⟩ = U|ψ(ν−1)⟩ = (U_1 ⊗ U_2)U_{12}|ψ(ν−1)⟩, which, when written in the ⟨j_1, s_1; j_2, s_2| basis, is [72]
\langle j_1, s_1; j_2, s_2 | \psi(\nu) \rangle = \exp\!\left( -\iota\epsilon\, \frac{s_1 s_2}{\sqrt{j_1 j_2}} \right) \sum_{m_1=-j_1}^{+j_1} \sum_{m_2=-j_2}^{+j_2} \langle j_1, s_1 | U_1 | j_1, m_1 \rangle \langle j_2, s_2 | U_2 | j_2, m_2 \rangle \langle j_1, m_1; j_2, m_2 | \psi(\nu-1) \rangle.
A convenient approach for implementing this iteration scheme and eventually calculating the reduced density matrix involves writing the states as N_1 × N_2 matrices:
\Psi(\nu) = V \circ (U_1\, \Psi(\nu-1)\, U_2^{T}).   (47)
Here ‘∘’ represents the Hadamard product and ‘T’ the transpose. V is an N_1 × N_2 matrix given by
V = \left[ \exp\!\left( -\iota\epsilon\, \frac{ab}{\sqrt{j_1 j_2}} \right) \right]_{a = -j_1, \ldots, +j_1;\; b = -j_2, \ldots, +j_2}.   (48)
Figure 7. Comparison of the marginal density (top row) and the smallest eigenvalue density (bottom row) of the coupled kicked top system with the fixed trace ensemble results for N_1 = 11, N_2 = 21 and ε = 1. For plots (a)-(d) as well as (i)-(iv), the parameters (k_1, k_2) vary as (0.5, 1), (0.5, 8), (2.5, 3), (7, 8). In each case the solid (black) curves correspond to the analytical results, while the dotted (blue) and dashed (red) curves correspond to different initial conditions. The θ_0 and φ_0 values used for the different initial conditions are indicated in (a)-(d), and hold, respectively, for (i)-(iv) also.
Also, U_r is an N_r × N_r–dimensional matrix,
U_r = \left[ \exp\!\left( -\iota\, \frac{k_r}{2 j_r}\, a^2 \right) d_{a,b}^{(j_r)}\!\left( \frac{\pi}{2} \right) \right]_{a,b = -j_r, \ldots, +j_r}.   (49)
Here d_{a,b}^{(j_r)} represents the Wigner (small) d-matrix elements. We use the built-in function in Mathematica [47] for the Wigner (big) D matrix to evaluate it. The initial N_1 × N_2-dimensional state matrix is given by
\Psi(0) = \chi_1 \otimes \chi_2^{T}.   (50)
Eventually, the reduced density matrix of dimension N_1 × N_1 can be constructed as
\rho_d = \Psi(\nu)\, \Psi(\nu)^{\dagger}.   (51)
The eigenvalues of this matrix are the sought-after Schmidt eigenvalues, whose statistics is of interest to us. To obtain an ensemble of states we proceed as follows. We begin with an initial state and apply (47) iteratively. After ignoring the initial 500 states to safely avoid the transient regime [54], we consider states separated by 20 time steps to suppress any unwanted correlations. In all, we consider 50000 states for the statistical analysis.
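The core of this procedure is the Hadamard-product iteration (47); a compact Python/NumPy sketch (with U1, U2, V and the initial state Ψ(0) assumed to be precomputed as in (48)-(50); the function name is ours) reads:

import numpy as np

def evolve_and_reduce(Psi0, U1, U2, V, steps):
    """Iterate Psi(nu) = V o (U1 Psi(nu-1) U2^T) and return the reduced density matrix."""
    Psi = Psi0
    for _ in range(steps):
        Psi = V * (U1 @ Psi @ U2.T)    # '*' is the elementwise (Hadamard) product
    return Psi @ Psi.conj().T          # rho_d of dimension N1 x N1, as in (51)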
Figure 8. Effect of varying ε on the marginal density ((a)-(f)) and the smallest eigenvalue density ((i)-(vi)). The solid lines (black) correspond to analytical results, while the dotted (red), dot-dashed (blue) and dashed (green) curves result from the coupled top simulation for ε = 0.05, 0.1 and 0.5, respectively. The parameters k_1, k_2 are fixed at 7, 8, while the dimension parameters (N_1, N_2) vary for the figures (a)-(f) as well as (i)-(vi) as (11, 11), (11, 15), (11, 25), (21, 21), (21, 25), (21, 35).
In figure 7 we set N_1 = 11, N_2 = 21, ε = 1, and examine the effect of different choices of k_1, k_2 on the one-level density and the smallest eigenvalue density for the coupled kicked tops. For (a), (i) we have k_1 = 0.5, k_2 = 1, for which the classical phase spaces of the individual
tops consist mostly of regular orbits [54]. In this case, we can see deviations from the
fixed trace ensemble results, with strong sensitivity to the initial conditions, i.e. the θ_0^{(r)} and φ_0^{(r)}
values. In (b), (ii) we set k_1 = 0.5, k_2 = 8. In this case the highly chaotic phase space [54]
of the second top leads to an agreement with the results of the fixed trace ensemble,
even though the phase space of the first top is mostly regular. Moreover, there is a
weak sensitivity to initial conditions. In (c), (iii) we consider k1 = 2.5, k2 = 3, both of
which correspond to mixed type phase space [54]. Here we observe deviations, along
with some sensitivity to initial conditions. Finally, in (d), (iv) we take k1 = 7, k2 = 8,
for which phase spaces of both the tops are highly chaotic. In this case, we have very
good agreement with the random matrix results and very weak sensitivity to the initial
conditions.
In figure 8 we consider a chaotic regime (k_1 = 7, k_2 = 8) and examine the effect of varying ε for various combinations of n and α. We observe that for a given ε, an increase in n or α leads to a better agreement with the fixed trace ensemble results. Recent studies
in a similar direction have investigated the universal aspects of spectral fluctuations and
entanglement transitions in strongly chaotic subsystems [73, 74].
A quantifier of the fraction of close-to-maximally-entangled states is the cumulative probability R(\delta) = \int_{1/n-\delta}^{1/n} f_F(x)\, dx [24], which turns out to be vanishingly small for δ ≪ 1/n and thus implies that the actual fraction of such states is extremely small. For example, using (25), we obtain R(δ = 0.1/n) to be roughly (i) 8 × 10^{-6} for n = 3, m = 11, (ii) 1 × 10^{-35} for n = 7, m = 19, and (iii) 5 × 10^{-91} for n = 11, m = 25.
7. Summary and conclusion
We considered the complex Wishart-Laguerre ensemble, both with and without the fixed
trace condition, and provided an easily implementable recurrence scheme to obtain the
exact result for the density of the smallest eigenvalue. This method also gives access to
arbitrary moments of the smallest eigenvalue. The recursion-based approach for exact
and explicit expressions for the density is preferable to the results based on determinants
which are difficult to handle with increasing dimensionality n or rectangularity α.
We also demonstrated the equivalence of the recurrence scheme and the determinant-based results for α = 0 and 1. We validated our analytical results using Monte Carlo
simulations and also used large n and α evaluations to compare with the Tracy-Widom
density. As an application to the quantum entanglement problem we explored the behaviour of the Schmidt eigenvalues of the coupled kicked top system. Among other things, we found
that in the chaotic regime, the fixed trace ensemble describes the behaviour of the
Schmidt eigenvalues very well if sufficient coupling is provided between the constituent
tops.
Acknowledgments
This work initiated from a project that was carried out at Shiv Nadar University under
the Opportunities for Undergraduate Research (OUR) scheme. The authors are grateful
to the anonymous reviewers for fruitful suggestions.
Appendix A. Recurrence scheme
We begin with (6) and apply the shift λi → λi + x. This results in
f(x) = n\, C_{n,\alpha}\, x^{\alpha} e^{-nx} \int_0^{\infty} d\lambda_2 \cdots \int_0^{\infty} d\lambda_n \prod_{2 \le k < j \le n} (\lambda_j - \lambda_k)^2 \prod_{i=2}^{n} \lambda_i^2\, (\lambda_i + x)^{\alpha}\, e^{-\lambda_i}.   (A.1)
To derive the recurrence relation, we will proceed parallel to the steps in chapter 4
of [25], or as in [26]. To this end, we shift the indices of the integration variables as
λ_i → λ_{i−1}, and also introduce the measure dΩ_i = \lambda_i^2 e^{-\lambda_i} d\lambda_i. Consequently, we arrive at the following expression:
f(x) = n\, C_{n,\alpha}\, x^{\alpha} e^{-nx} \int_0^{\infty} d\Omega_1 \cdots \int_0^{\infty} d\Omega_{n-1}\, \Delta_{n-1}^2(\{\lambda\}) \prod_{i=1}^{n-1} (\lambda_i + x)^{\alpha}.   (A.2)
Next, we define
I_{i,j}^{\alpha} = \int_0^{\infty} d\Omega_1 \cdots \int_0^{\infty} d\Omega_{n-1}\, \Delta_{n-1}^2(\{\lambda\})\, u(x),   (A.3)
where the integrand u(x) is
u(x) = \underbrace{(\lambda_1 + x)^{\alpha} \cdots (\lambda_i + x)^{\alpha}}_{i\ \mathrm{terms}}\; \underbrace{(\lambda_{i+1} + x)^{\alpha-1} \cdots (\lambda_{i+j} + x)^{\alpha-1}}_{j\ \mathrm{terms}}\; \underbrace{(\lambda_{i+j+1} + x)^{\alpha-2} \cdots (\lambda_{n-1} + x)^{\alpha-2}}_{n-i-j-1\ \mathrm{terms}}.   (A.4)
We also define the operator
I_{i,j}^{\alpha}[v] = \int_0^{\infty} d\Omega_1 \cdots \int_0^{\infty} d\Omega_{n-1}\, \Delta_{n-1}^2(\{\lambda\})\, u(x)\, v.   (A.5)
Using the above notation the smallest eigenvalue density can be written as
f(x) = n\, C_{n,\alpha}\, x^{\alpha} e^{-nx}\, I_{n-1,0}^{\alpha}.   (A.6)
With these, Lemma 4.2 of [25] (or, equivalently, Lemma 4.1 of [26]) holds in the complex
case also:
I_{i,j}^{\alpha}[\lambda_k] = \begin{cases} I_{i+1,j-1}^{\alpha} - x\, I_{i,j}^{\alpha} & \mathrm{if}\ i < k \le i+j, \\ I_{i,j+1}^{\alpha} - x\, I_{i,j}^{\alpha} & \mathrm{if}\ i+j < k < n. \end{cases}   (A.7)
The above result follows by writing λk as (λk + x) − x and then using the operator
defined in (A.5). Next, if the terms (λk + x) and (λl + x) share the same exponent in
the integrals (i.e., both k and l fall within one of the closed intervals [1, i], [i + 1, i + j],
or [i + j + 1, n − 1]), then
I_{i,j}^{\alpha}\!\left[ \frac{\lambda_k \lambda_l}{\lambda_k - \lambda_l} \right] = 0,   (A.8)
I_{i,j}^{\alpha}\!\left[ \frac{\lambda_k}{\lambda_k - \lambda_l} \right] = \frac{1}{2}\, I_{i,j}^{\alpha},   (A.9)
I_{i,j}^{\alpha}\!\left[ \frac{\lambda_k^2}{\lambda_k - \lambda_l} \right] = I_{i,j}^{\alpha}[\lambda_k].   (A.10)
Equation (A.8) follows because of the asymmetry in λ_k and λ_l, while (A.9) is obtained using the identity λ_k/(λ_k − λ_l) + λ_l/(λ_l − λ_k) = 1 and using symmetry. The result (A.10) is obtained using the identity λ_k^2/(λ_k − λ_l) = λ_k + λ_k λ_l/(λ_k − λ_l) and (A.8).
The crucial difference occurs in the first equation of Lemma 4.3 [25] (or Lemma
4.2 [26]), which reads for the present case as
I_{i,j}^{\alpha} = (x + \alpha + j + 2k + 2)\, I_{i-1,j+1}^{\alpha} - x\,[k + (\alpha - 1)]\, I_{i-1,j}^{\alpha} + (i-1)\,x\, I_{i-2,j+2}^{\alpha},   (A.11)
I_{0,j}^{\alpha} = I_{j,n-j-1}^{\alpha-1},   (A.12)
with k = n − i − j − 1. The second equation of this set, (A.12), follows readily from the definition (A.3). The first equation of this set, (A.11), is derived using
I_{i,j}^{\alpha} = x\, I_{i-1,j+1}^{\alpha} + I_{i-1,j+1}^{\alpha}[\lambda_i],   (A.13)
which is a consequence of (A.7). The difference in the result compared to the real case
occurs due to the term I_{i-1,j+1}^{\alpha}[\lambda_i]. For the complex case, this involves observing the following:
\int_0^{\infty} (\lambda_i + x)^{\alpha-1} \prod_{i<l} (\lambda_l - \lambda_i)^2\, \lambda_i^3\, e^{-\lambda_i}\, d\lambda_i = \int_0^{\infty} \frac{d}{d\lambda_i}\!\left[ (\lambda_i + x)^{\alpha-1} \prod_{i<l} (\lambda_l - \lambda_i)^2\, \lambda_i^3 \right] e^{-\lambda_i}\, d\lambda_i.   (A.14)
Also, using the result
\frac{d I_{i-1,j+1}^{\alpha}}{dx} = (i-1)\,\alpha\, I_{i-2,j+2}^{\alpha} + (j+1)(\alpha-1)\, I_{i-1,j}^{\alpha}   (A.15)
for i + j = n − 1, as given in the proof of Lemma 4.4 of [25] (or Lemma 4.3 of [26]), yields in the present case
I_{i,j}^{\alpha} = (x + \alpha + j + 2)\, I_{i-1,j+1}^{\alpha} - \frac{x}{j+1}\, \frac{d}{dx} I_{i-1,j+1}^{\alpha} + x\,(i-1)\left( 1 + \frac{\alpha}{j+1} \right) I_{i-2,j+2}^{\alpha}.   (A.16)
Now, we can begin with I_{n-1,0}^{\alpha-1}, which is the same as I_{0,n-1}^{\alpha} in view of (A.12). Equation (A.16) can be used with j = n − i − 1 repeatedly for i = 1 to n − 1 to arrive at I_{n-1,0}^{\alpha}, starting from I_{0,n-1}^{\alpha}. We note that I_{n-1,0}^{\alpha} is the term needed to obtain the smallest eigenvalue density expression (A.6) explicitly. This is essentially what has been employed in the recurrence involving S_i := I_{i,n-i-1}^{\alpha}/I_{n-1,0}^{0} for g_{n,m}(x) in (7). We also note that I_{n-1,0}^{0} = 1/C_{n-1,2}. The constant c_{n,m} of (7) is therefore n\, C_{n,\alpha}/C_{n-1,2}.
Appendix B. Mathematica codes
The following code can be implemented in Mathematica [47] to obtain exact expressions
for the smallest eigenvalue density for the Wishart-Laguerre ensemble:
n = 10; m = 14;   (* set the desired dimensions; these values are only an example *)
c[n1_, m1_] := 1/(Gamma[n1] Gamma[m1]) Product[Gamma[i + 2]/Gamma[i + m1 - n1], {i, 1, n1 - 1}];
α = m - n; S[-1] = 0; S[0] = 1; S[n - 1] = 1;
For[k = 1, k < α + 1, k++,
 q = n + k;
 For[i = 1, i < n, i++,
  S[i] = Factor[(x + q - i + 1) S[i - 1] - x/(n - i) D[S[i - 1], x] + x (i - 1) (q - i)/(n - i) S[i - 2]]];
 S[-1] = 0; S[0] = S[n - 1]];
f[x_] = c[n, m] E^(-n x) x^α Factor[S[n - 1]]
For generating the smallest eigenvalue density for the fixed trace Wishart-Laguerre
ensemble, the following code can be used along with the above.
h = CoefficientList[E^(n x) f[x], x];
fF[x_] = Gamma[n m] Sum[h[[j]] (1 - n x)^(n m - j - 1) x^(j - 1)/Gamma[n m - j], {j, α + 1, α n + 1}] HeavisideTheta[1 - n x]
We can also directly implement the inverse Laplace transform function built into Mathematica:
fF[x_] = Gamma[n m] InverseLaplaceTransform[s^(1 - n m) f[s x], s, 1]
The ‘Factor’ option in the above codes is for printing compact expressions on-screen.
For computation involving large n or α values, it may be removed, since factoring very
large expressions may result in a large computation time.
The moments of the smallest eigenvalue of the regular or the fixed-trace Wishart-Laguerre ensemble can be obtained using the following functions:
mom[η_] := Sum[h[[j]]/n^(j + η) Gamma[j + η], {j, α + 1, α n + 1}]
momF[η_] := Gamma[n m]/Gamma[n m + η] mom[η]
Appendix C. Relation with associated Laguerre polynomial
The associated Laguerre polynomials satisfy the following relations [46]:
i\, L_i^{(k)}(-x) = (x + k + 1)\, L_{i-1}^{(k+1)}(-x) + x\, L_{i-2}^{(k+2)}(-x),   (C.1)
\frac{d}{dx} L_i^{(k)}(-x) = L_{i-1}^{(k+1)}(-x).   (C.2)
These two can be combined to obtain the following relation:
i\, L_i^{(k)}(-x) = (x + k + 1)\, L_{i-1}^{(k+1)}(-x) - \frac{x}{k-1}\, \frac{d}{dx} L_{i-1}^{(k+1)}(-x) + \frac{x\,k}{k-1}\, L_{i-2}^{(k+2)}(-x).   (C.3)
Considering k = n − i + 1 gives
i\, L_i^{(n-i+1)}(-x) = (x + n - i + 2)\, L_{i-1}^{(n-i+2)}(-x) - \frac{x}{n-i}\, \frac{d}{dx} L_{i-1}^{(n-i+2)}(-x) + \frac{x(n-i+1)}{n-i}\, L_{i-2}^{(n-i+3)}(-x).   (C.4)
Multiplying this equation by Γ(i) and then calling S_i = \Gamma(i+1)\, L_i^{(n-i+1)}(-x), we get
S_i = (x + n - i + 2)\, S_{i-1} - \frac{x}{n-i}\, \frac{dS_{i-1}}{dx} + x\,(i-1)\, \frac{n-i+1}{n-i}\, S_{i-2}.   (C.5)
This recurrence relation is the same as that given in section 2 when used for m = n + 1. Hence, g_{n,n+1}(x) = S_{n-1} = \Gamma(n)\, L_{n-1}^{(2)}(-x).
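The identification can also be confirmed symbolically; a small Python/SymPy check (the function name is ours) comparing the recurrence output for m = n + 1 with Γ(n)L^{(2)}_{n-1}(−x) is:

import sympy as sp

def check_alpha1_laguerre(n):
    """Verify g_{n,n+1}(x) = Gamma(n) * L^{(2)}_{n-1}(-x) via the recurrence of section 2."""
    x = sp.symbols('x')
    m = n + 1
    S_prev2, S_prev1 = sp.Integer(0), sp.Integer(1)     # S_{-1} = 0, S_0 = g_{n,n}(x) = 1
    for i in range(1, n):
        S_i = sp.expand((x + m - i + 1) * S_prev1
                        - x * sp.diff(S_prev1, x) / (n - i)
                        + x * (i - 1) * sp.Rational(m - i, n - i) * S_prev2)
        S_prev2, S_prev1 = S_prev1, S_i
    rhs = sp.expand(sp.gamma(n) * sp.assoc_laguerre(n - 1, 2, -x))
    return sp.simplify(S_prev1 - rhs) == 0

For instance, check_alpha1_laguerre(4) returns True.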
Appendix D. Some explicit results
For α = m − n = 0, the smallest eigenvalue density expressions valid for all n are quite
compact and are already provided in (13) and (28), respectively, for the regular Wishart-Laguerre ensemble and for the fixed trace Wishart-Laguerre ensemble. For a few other
cases we tabulate the exact results in Tables D1 and D2 using the above Mathematica
codes. This includes the α = 1 case for which closed-form results for any n are given in
(15) and (30). In the case of the fixed trace ensemble there is a Θ(1 − nx) term in each of
the probability density expressions that we have not shown in the table for the sake of
compactness.
n   m   f(x)
2   3   e^{-2x} x (x + 3)
2   4   e^{-2x} x^2 (x^2 + 6x + 12)/6
2   5   e^{-2x} x^3 (x^3 + 9x^2 + 36x + 60)/72
2   6   e^{-2x} x^4 (x^4 + 12x^3 + 72x^2 + 240x + 360)/1440
3   4   e^{-3x} x (x^2 + 8x + 12)/2
3   5   e^{-3x} x^2 (x^4 + 16x^3 + 96x^2 + 240x + 240)/48
3   6   e^{-3x} x^3 (x^6 + 24x^5 + 252x^4 + 1440x^3 + 4680x^2 + 8640x + 7200)/2880
3   7   e^{-3x} x^4 (x^8 + 32x^7 + 480x^6 + 4320x^5 + 25200x^4 + 97920x^3 + 253440x^2 + 403200x + 302400)/345600
4   5   e^{-4x} x (x^3 + 15x^2 + 60x + 60)/6
4   6   e^{-4x} x^2 (x^6 + 30x^5 + 360x^4 + 2160x^3 + 6840x^2 + 10800x + 7200)/720
4   7   e^{-4x} x^3 (x^9 + 45x^8 + 900x^7 + 10380x^6 + 75600x^5 + 360720x^4 + 1130400x^3 + 2268000x^2 + 2721600x + 1512000)/259200
4   8   e^{-4x} x^4 (x^{12} + 60x^{11} + 1680x^{10} + 28800x^9 + 334800x^8 + 2773440x^7 + 16790400x^6 + 74995200x^5 + 246456000x^4 + 586656000x^3 + 972518400x^2 + 1016064000x + 508032000)/217728000
5   6   e^{-5x} x (x^4 + 24x^3 + 180x^2 + 480x + 360)/24
5   7   e^{-5x} x^2 (x^8 + 48x^7 + 960x^6 + 10320x^5 + 64800x^4 + 241920x^3 + 524160x^2 + 604800x + 302400)/17280
5   8   e^{-5x} x^3 (x^{12} + 72x^{11} + 2340x^{10} + 45120x^9 + 572400x^8 + 5019840x^7 + 31157280x^6 + 137894400x^5 + 432734400x^4 + 943488000x^3 + 1371686400x^2 + 1219276800x + 508032000)/43545600
5   9   e^{-5x} x^4 (x^{16} + 96x^{15} + 4320x^{14} + 120480x^{13} + 2323440x^{12} + 32780160x^{11} + 349493760x^{10} + 2870380800x^9 + 18353563200x^8 + 91755417600x^7 + 358177075200x^6 + 1083937075200x^5 + 2506629888000x^4 + 4316239872000x^3 + 5267275776000x^2 + 4096770048000x + 1536288768000)/292626432000
Table D1. Results for the Wishart-Laguerre ensemble
n   m   f_F(x)
2   3   60x (1 - x)(1 - 2x)^2
2   4   420x^2 (1 - x)^2 (1 - 2x)^2
2   5   2520x^3 (1 - x)^3 (1 - 2x)^2
2   6   13860x^4 (1 - x)^4 (1 - 2x)^2
3   4   660x (1 - 3x^2)(1 - 3x)^7
3   5   10920x^2 (1 - x - x^2 - 9x^3 + 15x^4)(1 - 3x)^7
3   6   28560x^3 (5 - 12x + 12x^2 - 48x^3 - 48x^4 + 432x^5 - 411x^6)(1 - 3x)^7
3   7   1627920x^4 (1 - 4x + 8x^2 - 16x^3 + 320x^6 - 756x^7 + 489x^8)(1 - 3x)^7
4   5   3420x (1 + 5x - 20x^2 + 4x^3)(1 - 4x)^{14}
4   6   106260x^2 (1 + 6x + x^2 - 204x^3 + 486x^4 - 424x^5 + 356x^6)(1 - 4x)^{14}
4   7   491400x^3 (5 + 27x + 51x^2 - 683x^3 - 5286x^4 + 35910x^5 - 85295x^6 + 116895x^7 - 79980x^8 - 9196x^9)(1 - 4x)^{14}
4   8   6796440x^4 (7 + 28x + 86x^2 - 540x^3 - 6775x^4 - 18416x^5 + 440876x^6 - 2012008x^7 + 4901710x^8 - 7145600x^9 + 5855692x^{10} - 3288592x^{11} + 2386196x^{12})(1 - 4x)^{14}
5   6   12180x (1 + 16x - 39x^2 - 140x^3 + 220x^4)(1 - 5x)^{23}
5   7   628320x^2 (1 + 22x + 142x^2 - 1234x^3 - 580x^4 + 4676x^5 + 29788x^6 - 92420x^7 + 75355x^8)(1 - 5x)^{23}
5   8   23030280x^3 (1 + 24x + 243x^2 + 280x^3 - 19962x^4 + 50208x^5 - 31022x^6 + 649056x^7 - 1420095x^8 - 7867032x^9 + 35763831x^{10} - 53675640x^{11} + 27627140x^{12})(1 - 5x)^{23}
5   9   97740720x^4 (7 + 168x + 1968x^2 + 9642x^3 - 75517x^4 - 1457898x^5 + 10143328x^6 - 31939648x^7 + 134132583x^8 - 323536148x^9 - 511260568x^{10} + 786421818x^{11} + 22191959881x^{12} - 105911938466x^{13} + 211492028376x^{14} - 203837200540x^{15} + 80216630930x^{16})(1 - 5x)^{23}
Table D2. Results for the fixed trace Wishart-Laguerre ensemble
References
[1] Mehta M L 2004 Random Matrices (New York: Academic Press)
[2] Forrester P J 2010 Log-Gases and Random Matrices (LMS-34) (Princeton University Press,
Princeton, NJ)
[3] Anderson T W 2003 An Introduction to Multivariate Statistical Analysis (John Wiley & Sons)
[4] Muirhead R J 2005 Aspects of Multivariate Statistical Theory (Wiley Interscience)
[5] James A T 1964 Ann. Math. Statist. 35 475
[6] Gnanadesikan R 1997 Methods for Statistical Data Analysis of Multivariate Observations (John
Wiley & Sons)
[7] Plerou V, Gopikrishnan P, Rosenow B, Amaral L A N, Guhr T and Stanley H E 2002 Phys. Rev.
E 65 066126
[8] Vinayak and Pandey A 2010 Phys. Rev. E 81 036202
[9] Verbaarschot J 1994 Phys. Rev. Lett. 72 2531
[10] Verbaarschot J J M and Wettig T 2000 Ann. Rev. Nucl. Part. Sci. 50 343
[11] Tulino A M and Verdú S 2004 Random Matrix Theory and Wireless Communications (Now
Publishers Inc)
[12] Foschini G J and Gans M J 1998 Wireless Pers. Commun. 6 311
[13] Telatar I E 1999 Europ. Trans. Telecommun. 10 585
[14] Lubkin E 1978 J. Math. Phys. (N.Y.) 19 1028
[15] Lloyd S and Pagels H 1988 Ann. Phys. (NY) 188 186
[16] Page D N 1993 Phys. Rev. Lett. 71 1291
[17] Hall M J W 1998 Phys. Lett. A 242 123
[18] Życzkowski K and Sommers H-J 2001 J. Phys. A: Math. Gen. 34 7111
[19] Sommers H-J and Życzkowski K 2004 J. Phys. A: Math. Gen 37 8457
[20] Osipov V A, Sommers H-J and Życzkowski K 2010 J. Phys. A: Math. Theor. 43 055302
[21] Burel G 2002 Proc. of the WSEAS Int. Conf. on Signal, Speech and Image Processing (ICOSSIP
2002)
[22] Park C S and Lee K B 2008 IEEE Trans. Wireless Commun. 7 4432
[23] Majumdar S N, Bohigas O and Lakshminarayan A 2008 J. Stat. Phys. 131 33
[24] Majumdar S N 2011 Handbook of Random Matrix Theory, eds. Akemann G, Baik J and Di
Francesco P (Oxford Press, New York)
[25] Edelman A 1989 Eigenvalues and Condition Numbers of Random Matrices Ph.D. thesis, MIT
[26] Edelman A 1991 Lin. Alg. Appl. 159 55
[27] Khatri C G 1964 Ann. Math. Statist. 35 1807
[28] Forrester P J and Hughes T D 1994 J. Math. Phys. 35 6736
[29] Forrester P J 1993 Nucl. Phys. B 402 709
[30] Forrester P J 1994 J. Math. Phys. 35 2539
[31] Nagao T and Forrester P J 1998 Nucl. Phys. B 509 561
[32] Damgaard P H and Nishigaki S M 2001 Phys. Rev. D 63 045012
[33] Akemann G, Guhr T, Kieburg M, Wegner R and Wirtz T 2014 Phys. Rev. Lett. 113 250201
[34] Edelman A, Guionnet A and Péché S 2016 Ann. App. Prob. 26 1659
[35] Chen Y, Liu D-Z and Zhou D-S 2010 J. Phys. A: Math. Theor. 43 315303
[36] Akemann G and Vivo P 2011 J. Stat. Mech. 2011 P05020
[37] Zanella A, Chiani M and Win M Z 2009 IEEE Trans. Commun. 57 1050
[38] Nishigaki S M, Damgaard P H and Wettig T 1998 Phys. Rev. D 58 087704
[39] Tracy C A and Widom H 1993 Phys. Lett. B 305 115
[40] Tracy C A and Widom H 1994 Commun. Math. Phys. 159 151
[41] Tracy C A and Widom H 1994 Commun. Math. Phys. 161 289
[42] Borodin A and Forrester P J 2003 J. Phys. A: Math. Gen. 36 2963
[43] Forrester P J 2007 J. Phys. A: Math. Theor. 40 11093
[44] Wirtz T and Guhr T 2013 Phys. Rev. Lett. 111 094101
[45] Trail C M, Madhok V and Deutsch I H 2008 Phys. Rev. E 78 046211
[46] Szego G 1975 Orthogonal Polynomials (American Mathematical Society, Providence)
[47] Wolfram Research Inc. Mathematica Version 10.0 (Wolfram Research Inc.: Champaign, Illinois)
[48] Kumar S and Pandey A 2011 J. Phys. A: Math. Theor. 44 445301
[49] Adachi S, Toda M and Kubotani H 2009 Ann. Phys. 324 2278
[50] Vivo P 2010 J. Phys. A: Math. Theor. 43 405206
[51] Marčenko V A and Pastur L A 1967 Math. USSR-Sb. 1 457
[52] Žnidarič M 2007 J. Phys. A: Math. Theor. 40 F105
[53] Bandyopadhyay J N and Lakshminarayan A 2002 Phys. Rev. Lett. 89 060402
[54] Bandyopadhyay J N and Lakshminarayan A 2004 Phys. Rev. E 69 016201
[55] Tao T and Vu V 2010 Geom. Funct. Anal. 20 260
[56] Feldheim O N and Sodin S 2010 Geom. Funct. Anal. 20 88
[57] Katzav E and Castillo I P 2010 Phys. Rev. E 82 040104(R)
[58] Wirtz T, Kieburg M and Guhr T 2015 EPL 109 20005
[59] Perret A and Schehr G 2016 Random Matrices: Theory Appl. 05 1650001
[60] Bornemann F 2016 Ann. Appl. Probab. 26 1942
[61] Liu D-Z and Zhou D-S 2011 Int. Math. Res. Notices 2011 725
[62] Giraud O 2007 J. Phys. A: Math. Theor. 40 F1053
[63] Facchi P, Marzolino U, Parisi G, Pascazio S and Scardicchio A 2008 Phys. Rev. Lett. 101 050502
[64] Vivo P, Pato M P and Oshanin G 2016 Phys. Rev. E 93 052106
[65] Kubotani H, Adachi S and Toda M 2013 Phys. Rev. E 87 062921
[66] Nadal C, Majumdar S N and Vergassola M 2010 Phys. Rev. Lett. 104 110501
[67] Nadal C, Majumdar S N and Vergassola M 2011 J. Stat. Phys. 142 403
[68] Haake F, Kuś M and Scharf R 1987 Z. Phys. B - Condensed Matter 65 381
[69] Haake F 2010 Quantum Signatures of Chaos 3rd ed. (Springer-Verlag, Berlin)
[70] Chaudhury S, Smith A, Anderson B E, Ghose S and Jessen P S 2009 Nature 461 768
[71] Fujisaki H, Miyadera T and Tanaka A 2003 Phys. Rev. E 67 066201
[72] Miller P A and Sarkar S 1999 Phys. Rev. E 60 1542
[73] Srivastava S C L, Tomsovic S, Lakshminarayan A, Ketzmerick R and Bäcker A 2016 Phys. Rev. Lett. 116 054101
[74] Lakshminarayan A, Srivastava S C L, Ketzmerick R, Bäcker A and Tomsovic S 2016 Phys. Rev. E 94 010205(R)
Perceptual Context in Cognitive Hierarchies
Bernhard Hengst and Maurice Pagnucco and David Rajaratnam
Claude Sammut and Michael Thielscher
arXiv:1801.02270v1 [] 7 Jan 2018
School of Computer Science and Engineering
The University of New South Wales, Australia
{bernhardh,morri,daver,claude,mit}@cse.unsw.edu.au
Abstract
Cognition does not only depend on bottom-up sensor feature abstraction, but also relies on contextual information being passed top-down. Context is higher level information that
helps to predict belief states at lower levels. The main contribution of this paper is to provide a formalisation of perceptual
context and its integration into a new process model for cognitive hierarchies. Several simple instantiations of a cognitive
hierarchy are used to illustrate the role of context. Notably,
we demonstrate the use of context in a novel approach to visually track the pose of rigid objects with just a 2D camera.
Introduction
There is strong evidence that intelligence necessarily involves hierarchical structures (Ashby 1952; Brooks 1986;
Dietterich 2000; Albus and Meystel 2001; Beer 1966;
Turchin 1977; Hubel and Wiesel 1979; Minsky 1986;
Drescher 1991; Dayan and Hinton 1992; Kaelbling 1993;
Nilsson 2001; Konidaris et al. 2011; Jong 2010; Marthi,
Russell, and Andre 2006; Bakker and Schmidhuber 2004).
Clark et al. (2016) have recently addressed the formalisation of cognitive hierarchies that allow for the integration of disparate representations, including symbolic and
sub-symbolic representations, in a framework for cognitive
robotics. Sensory information processing is upward-feeding,
progressively abstracting more complex state features, while
behaviours are downward-feeding, progressively becoming
more concrete, ultimately controlling robot actuators.
However, neuroscience suggests that the brain is subject
to top-down cognitive influences for attention, expectation
and perception (Gilbert and Li 2013). Higher level signals
carry important information to facilitate scene interpretation.
For example, the recognition of the Dalmatian, and the disambiguation of the symbol /−\ in Figure 1 intuitively show
that higher level context is necessary to correctly interpret
these images1 . Furthermore, the human brain is able to make
sense of dynamic 3D scenes from light falling on our 2D
retina in varying lighting conditions. Replicating this ability
is still a challenge in artificial intelligence and computer vision, particularly when objects move relative to each other,
can occlude each other, and are without texture. Prior, more
1
Both of these examples appear in (Johnson 2010) but are also
well-known in the cognitive psychology literature.
Figure 1: The image on the left would probably be indiscernible without prior knowledge of Dalmatians. The ambiguous symbol /−\ on the right can be interpreted as either
an “H” or an “A” depending on the word context.
abstract contextual knowledge is important to help segment
images into objects or to confirm the presence of an object
from faint or partial edges in an image.
In this paper we extend the existing cognitive hierarchy
formalisation (Clark et al. 2016) by introducing the notion
of perceptual context, which modifies the beliefs of a child
node given the beliefs of its parent nodes. It is worth emphasising that defining the role of context as a top-down predictive influence on a node’s belief state and the corresponding process model that defines how the cognitive hierarchy
evolves over time is non-trivial. Our formalisation captures
the dual influences of context and behaviour as a predictive
update of a node’s belief state. Consequently, the main contribution of this paper is the inclusion and formalisation of
contextual influences as a predictive update within a cognitive hierarchy.
As a meta-framework, the cognitive hierarchy requires instantiation. We provide two simple instantiation examples
to help illustrate the formalisation of context. The first is a
running example using a small belief network. The second
example involves visual servoing to track a moving object.
This second example quantifies the benefit of context and
demonstrates the role of context in a complete cognitive hierarchy including behaviour generation.
As a third, realistic and challenging example that highlights the importance of context we consider the tracking of
the 6 DoF pose of multiple, possibly occluded, marker-less
objects with a 2D camera. We provide a novel instantiation
of a cognitive hierarchy for a real robot using the context of
a spatial cognitive node modelled using a 3D physics simulator. Note, this formalisation is provided in outline only due
to space restrictions.
Finally, for completeness of our belief network running
example, we prove that general belief propagation in causal
trees (Pearl 1988) can be embedded into our framework, illustrating the versatility of including context in the cognitive
hierarchy. We include this proof as an appendix.
The Architectural Framework
For the sake of brevity the following presentation both summarises and extends the formalisation of cognitive hierarchies as introduced in (Clark et al. 2016). We shall, however, highlight how our contribution differs from their work.
The essence of this framework is to adopt a meta-theoretic
approach, formalising the interaction between abstract cognitive nodes, while making no commitments about the representation and reasoning mechanism within individual nodes.
Motivating Example
As an explanatory aid to formalising the use of context in a
hierarchy we will use the disambiguation of the symbol /−\
in Figure 1 as a simple running example. This system can
be modelled as a two layer causal tree updated according
Pearl’s Bayesian belief propagation rules (Pearl 1988). The
lower-level layer disambiguates individual letters while the
higher-level layer disambiguates complete words (Figure 2).
We assume that there are only two words that are expected
to be seen, with equal probability: “THE” and “CAT”.
Figure 2: Disambiguating the symbol /−\ requires context
from the word recognition layer.
There are three independent letter sensors with the middle
sensor being unable to disambiguate the observed symbol
/−\ represented by the conditional probabilities p(/−\|H) =
0.5 and p(/−\|A) = 0.5. These sensors feed into the lowerlevel nodes (or processors in Pearl’s terminology), which we
label as N 1 , N 2 , N 3 . The results of the lower level nodes are
combined at N 4 to disambiguate the observed word.
Each node maintains two state variables; the diagnostic
and causal supports (displayed as the pairs of values in
Figure 2). Intuitively, the diagnostic support represents the
knowledge gathered through sensing while the causal support represents the contextual bias. A node’s overall belief is
calculated by the combination of these two state variables.
While sensing data propagates up the causal tree, the example highlights how node N 2 is only able to resolve the
symbol /−\ in the presence of contextual feedback from N 4 .
Nodes
A cognitive hierarchy consists of a set of nodes. Nodes are
tasked to achieve a goal or maximise future value. They have
two primary functions: world-modelling and behaviourgeneration. World-modelling involves maintaining a belief
state, while behaviour-generation is achieved through policies, where a policy maps states to sets of actions. A node’s
belief state is modified by sensing or by the combination
of actions and higher-level context. We refer to this latter
as prediction update to highlight how it sets an expectation
about what the node is expecting to observe in the future.
Definition 1. A cognitive language is a tuple L =
(S, A, T , O, C), where S is a set of belief states, A is a set
of actions, T is a set of task parameters, O is a set of observations, and C is a set of contextual elements. A cognitive
node is a tuple N = (L, Π, λ, τ , γ, s0 , π 0 ) s.t:
• L is the cognitive language for N , with initial belief state
s0 ∈ S.
• Π is a set of policies such that for all π ∈ Π, π : S → 2^A, with initial policy π^0 ∈ Π.
• A policy selection function λ : 2^T → Π, s.t. λ({}) = π^0.
• An observation update operator τ : 2^O × S → S.
• A prediction update operator γ : 2^C × 2^A × S → S.
Definition 1 differs from (Clark et al. 2016) in two ways:
the introduction of a set of context elements in the cognitive language, and the modification of the prediction update
operator, previously called the action update operator, to include context elements when updating the belief state.
This definition can now be applied to the motivating example to instantiate the nodes in the Bayesian causal tree.
We highlight only the salient features for this instantiation.
Example. Let E = {⟨x, y⟩ | 0 ≤ x, y ≤ 1.0} be the set of probability pairs, representing the recognition of two distinct features. For node N^2, say (cf. Figure 2), these features are the letters "H" and "A", and for N^4 they are the words "THE" and "CAT". The set of belief states for N^2 is S^2 = {⟨⟨d⟩, c⟩ | d, c ∈ E}, where d is the diagnostic support and c is the causal support. Note that the vector-in-vector format allows for structural uniformity across nodes. Assuming equal probability over letters, the initial belief state is ⟨⟨⟨0.5, 0.5⟩⟩, ⟨0.5, 0.5⟩⟩. For N^4 the set of belief states is S^4 = {⟨⟨d_1, d_2, d_3⟩, c⟩ | d_1, d_2, d_3, c ∈ E}, where d_i is the contribution of node N^i to the diagnostic support of N^4.
For N^2 the context is the causal support from above, C^2 = E, while the observations capture the influence of the "H"-"A" sensor, O^2 = {⟨d⟩ | d ∈ E}. In contrast, the observations for N^4 need to capture the influence of the different child diagnostic supports, so O^4 = {⟨d_1, d_2, d_3⟩ | d_1, d_2, d_3 ∈ E}.
The observation update operators need to replace the diagnostic supports of the current belief with the observation, which is more complicated for N^4 due to its multiple children: τ^4({d⃗_1, d⃗_2, d⃗_3}, ⟨d⃗, c⟩) = ⟨Σ_{i=1}^{3} d⃗_i, c⟩. Ignoring the influence of actions, the prediction update operator simply replaces the causal support of the current belief with the context from above, so γ^2({c′}, ∅, ⟨⟨d⃗⟩, c⟩) = ⟨⟨d⃗⟩, c′⟩.
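As an illustration, the following Python sketch (ours, not part of the formalism) shows how N^2's τ, γ, and overall belief could be realised for the running example; the class name, data layout, and the use of explicit normalisation in place of η are assumptions.

# Minimal sketch of node N^2's state and update operators for the letter/word example.
import numpy as np

def normalise(v):
    v = np.asarray(v, dtype=float)
    return v / v.sum()

class LetterNode:
    """Node N^2: belief state is (diagnostic supports, causal support)."""
    def __init__(self):
        self.diagnostic = [np.array([0.5, 0.5])]   # initial diagnostic support
        self.causal = np.array([0.5, 0.5])         # initial causal support

    def observation_update(self, observations):
        # tau^2: replace the diagnostic supports with the observation.
        self.diagnostic = [np.asarray(o, dtype=float) for o in observations]

    def prediction_update(self, context, actions=()):
        # gamma^2: replace the causal support with the context from above (actions ignored).
        (self.causal,) = context

    def belief(self):
        # Overall belief: normalised product of diagnostic and causal supports.
        b = self.causal.copy()
        for d in self.diagnostic:
            b = b * d
        return normalise(b)

n2 = LetterNode()
n2.observation_update([[0.5, 0.5]])             # the ambiguous symbol /−\ is sensed
n2.prediction_update([np.array([0.0, 1.0])])    # context from N^4
print(n2.belief())                               # -> [0. 1.], i.e. the letter "A"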
Cognitive Hierarchy
Nodes are interlinked in a hierarchy, where sensing data is
passed up the abstraction hierarchy, while actions and context are sent down the hierarchy (Figure 3).
Figure 3: A cognitive hierarchy, highlighting internal interactions as well as the sensing, action, and context graphs. (Legend: 1 sensing function, 2 context enrichment function, 3 task parameter function, external world node.)
Definition 2. A cognitive hierarchy is a tuple H = (N, N^0, F) s.t.:
• N is a set of cognitive nodes and N 0 ∈ N is a distinguished node corresponding to the external world.
• F is a set of function triples ⟨φ^{i,j}, ψ^{j,i}, %^{j,i}⟩ ∈ F that connect nodes N^i, N^j ∈ N, where:
– φ^{i,j} : S^i → 2^{O^j} is a sensing function,
– ψ^{j,i} : 2^{A^j} → 2^{T^i} is a task parameter function, and
– %^{j,i} : S^j → 2^{C^i} is a context enrichment function.
• Sensing graph: each φi,j represents an edge from node
N i to N j and forms a directed acyclic graph (DAG) with
N 0 as the unique source node of the graph.
• Prediction graph: the set of task parameter functions
(equivalently, the context enrichment functions) forms a
converse to the sensing graph such that N 0 is the unique
sink node of the graph.
Definition 2 differs from the original with the introduction of the context enrichment functions and the naming of
the prediction graph (originally the action graph). The connection between nodes consists of triples of sensing, task parameter and context functions. The sensing function extracts
observations from a lower-level node in order to update a
higher level node, while the context enrichment function
performs the converse. The task parameter function translates a higher-level node’s actions into a set of task parameters, which is then used to select the active policy for a node.
Finally, the external world is modelled as a distinguished
node, N 0 . Sensing functions allow other nodes to observe
properties of the external world, and task parameter functions allow actuator values to be modified, but N 0 doesn’t
“sense” properties of other nodes, nor does it generate task
parameters for those nodes. Similarly, context enrichment
functions connected to N 0 would simply return the empty
set, unless one wanted to model unusual properties akin to
the quantum effects of observations on the external world.
Beyond this, the internal behaviour of N 0 is considered to
be opaque.
The running example can now be encoded formally as a
cognitive hierarchy, again with the following showing only
the salient features of the encoding.
Example. We construct a hierarchy H = (N , N 0 , F ), with
N = {N 0 , N 1 , . . . , N 4 }. The function triples in F will include φ0,2 for the visual sensing of the middle letter, and φ2,4
and %4,2 for the sensing and context between N 2 and N 4 .
The function φ0,2 returns the probability of the input being
the characters “H” and “A”. Here φ0,2 (/−\) = {h0.5, 0.5i}.
Defining φ^{2,4} and %^{4,2} requires a conditional probability matrix M = [1 0; 0 1] to capture how the letters "H" and "A" contribute to the recognition of "THE" and "CAT".
For sensing from N^2 we use zeroed vectors to prevent N^2's observation from influencing the diagnostic support components contributed by N^1 and N^3. Hence φ^{2,4}(⟨⟨d⟩, c⟩) = {⟨⟨0, 0⟩, η · M · d^T, ⟨0, 0⟩⟩}, where d^T is the transpose of vector d and η is a normalisation constant.
For context we capture how N^4's causal support and its diagnostic support components from N^1 and N^3 influence the causal support of N^2. Note that this also prevents any feedback from N^2's own diagnostic support to its causal support. So, %^{4,2}(⟨⟨d_1, d_2, d_3⟩, c⟩) = {η · (d_1 · d_3 · c) · M}.
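A small Python sketch of these functions (ours, not code from the paper; the belief-state encoding as nested tuples of NumPy arrays and the symbol encoding are assumptions, and η is realised as an explicit normalisation):

# Sketch of the example's function triple components; M is the identity matrix from the text.
import numpy as np

M = np.eye(2)

def normalise(v):
    return v / v.sum()

def phi_0_2(symbol):
    # Visual sensing of the middle letter: the ambiguous symbol gives 0.5/0.5.
    return [np.array([0.5, 0.5])] if symbol == "/-\\" else []

def phi_2_4(belief_n2):
    # Sensing from N^2 to N^4: zeroed vectors for the N^1 and N^3 components.
    (d,), _c = belief_n2
    return [(np.zeros(2), normalise(M @ d), np.zeros(2))]

def rho_4_2(belief_n4):
    # Context from N^4 to N^2: exclude N^2's own diagnostic component d2.
    (d1, _d2, d3), c = belief_n4
    return [normalise((d1 * d3 * c) @ M)]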
Active Cognitive Hierarchy
The above definitions capture the static aspects of a system
but require additional details to model its operational behaviour. Note, the following definitions are unmodified from
the original formalism and are presented here because they
are necessary to the developments of later sections.
Definition 3. An active cognitive node is a tuple Q =
(N , s, π, a) where: 1) N is a cognitive node with S, Π, and
A being its set of belief states, set of policies, and set of actions respectively, 2) s ∈ S is the current belief state, π ∈ Π
is the current policy, and a ∈ 2^A is the current set of actions.
Essentially an active cognitive node couples a (static) cognitive node with some dynamic information; in particular the
current belief state, policy and set of actions.
Definition 4. An active cognitive hierarchy is a tuple X =
(H, Q) where H is a cognitive hierarchy with set of cognitive nodes N such that for each N ∈ N there is a corresponding active cognitive node Q = (N , s, π, a) ∈ Q and
vice-versa.
The active cognitive hierarchy captures the dynamic state
of the system at a particular instance in time. Finally, an initial active cognitive hierarchy is an active hierarchy where
each node is initialised with the initial belief state and policy
of the corresponding cognitive node, as well as an empty set
of actions.
Cognitive Process Model
The process model defines how an active cognitive hierarchy
evolves over time and consists of two steps. Firstly, sensing
observations are passed up the hierarchy, progressively updating the belief state of each node. Next, task parameters
and context are passed down the hierarchy updating the active policy, the actions, and the belief state of the nodes.
We do not present all definitions here, in particular we
omit the definition of the sensing update operator as this remains unchanged in our extension. Instead we define a prediction update operator, replacing the original action update,
with the new operator incorporating both context and task
parameters in its update. First, we characterise the updating
of the beliefs and actions for a single active cognitive node.
Definition 5. Let X = (H, Q) be an active cognitive hierarchy with H = (N, N^0, F). The prediction update of X with respect to an active cognitive node Q_i = (N^i, s_i, π_i, a_i) ∈ Q, written as PredUpdate′(X, Q_i), is an active cognitive hierarchy X′ = (H, Q′) where Q′ = Q \ {Q_i} ∪ {Q′_i} and Q′_i = (N^i, γ^i(C, a′_i, s_i), π′_i, a′_i) s.t.:
• if there is no node N^x where ⟨φ^{i,x}, ψ^{x,i}, %^{x,i}⟩ ∈ F then: π′_i = π_i, a′_i = π_i(s_i) and C = ∅,
• else: π′_i = λ^i(T) and a′_i = π′_i(s_i), where
T = ∪ {ψ^{x,i}(a_x) | ⟨φ^{i,x}, ψ^{x,i}, %^{x,i}⟩ ∈ F where Q_x = (N^x, s_x, π_x, a_x) ∈ Q},
C = ∪ {%^{x,i}(s_x) | ⟨φ^{i,x}, ψ^{x,i}, %^{x,i}⟩ ∈ F where Q_x = (N^x, s_x, π_x, a_x) ∈ Q}.
The intuition for Definition 5 is straightforward. Given
a cognitive hierarchy and a node to be updated, the update
process returns an identical hierarchy except for the updated
node. This node is updated by first selecting a new active
policy based on the task parameters of all the connected
higher-level nodes. The new active policy is applied to the
existing belief state to generate a new set of actions. Both
these actions and the context from the connected higherlevel nodes are then used to update the node’s belief state.
Using the single node update, updating the entire hierarchy simply involves successively updating all its nodes.
Definition 6. Let X = (H, Q) be an active cognitive hierarchy with H = (N , N 0 , F ) and Ψ be the prediction graph
induced by the task parameter functions in F. The prediction process update of X, written PredUpdate(X), is an active cognitive hierarchy:
X′ = PredUpdate′(. . . PredUpdate′(X, Q_n), . . . , Q_0)
where the sequence [Qn , . . . , Q0 ] consists of all active cognitive nodes of the set Q such that the sequence satisfies the
partial ordering induced by the prediction graph Ψ.
Importantly, the update ordering in Definition 6 satisfies
the partial ordering induced by the prediction graph, thus
guaranteeing that the prediction update is well-defined.
Lemma 1. For any active cognitive hierarchy X the prediction process update of X is well-defined.
Proof. Follows from the prediction graph being a DAG: every DAG admits a topological ordering of its nodes, so a sequence [Q_n, . . . , Q_0] satisfying the induced partial ordering always exists.
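For illustration, a prediction-update pass over the hierarchy could be sketched as follows (our own code, not the authors'; the node methods select_policy and gamma and the dictionary layout are assumptions):

# Sketch of PredUpdate: update nodes in an order consistent with the prediction graph,
# computing task parameters T and context C from already-updated higher-level nodes.
def pred_update(active_nodes, parents_of, topological_order):
    """active_nodes: dict name -> {'node', 's', 'pi', 'a'};
    parents_of: dict name -> list of (parent_name, psi, rho) triples;
    topological_order: higher-level nodes appear before lower-level ones."""
    for name in topological_order:
        q = active_nodes[name]
        node = q["node"]
        links = parents_of.get(name, [])
        if not links:                                  # no connected higher-level node
            new_pi, context = q["pi"], []
        else:
            tasks, context = [], []
            for parent, psi, rho in links:
                tasks.extend(psi(active_nodes[parent]["a"]))   # task parameters T
                context.extend(rho(active_nodes[parent]["s"])) # context C
            new_pi = node.select_policy(tasks)                 # lambda
        new_a = new_pi(q["s"])                                 # apply policy to belief
        q["s"] = node.gamma(context, new_a, q["s"])            # prediction update gamma
        q["pi"], q["a"] = new_pi, new_a
    return active_nodes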
The final part of the process model, which we omit here,
is the combined operator, Update, that first performs a sensing update followed by a prediction update. This operation
follows exactly the original and similarly the theorem that
the process model is well-defined also follows.
We can now apply the update process (sensing then prediction) to show how it operates on the running example.
Example. When N^2 senses the symbol /−\, φ^{0,2} returns that "A" and "H" are equally likely, so τ^2 updates the diagnostic support of N^2 to ⟨⟨0.5, 0.5⟩⟩. On the other hand, N^1 and N^3 unambiguously sense "C" and "T" respectively, so N^4's observation update operator, τ^4, will update its diagnostic support components to ⟨⟨0, 1⟩, ⟨0.5, 0.5⟩, ⟨0, 1⟩⟩. The node's overall belief, ⟨0, 1⟩, is the normalised product of the diagnostic support components and the causal support, indicating here the unambiguous recognition of "CAT".
Next, during prediction update, context from N^4 is passed back down to N^2, through %^{4,2} and γ^2, updating the causal support of N^2 to ⟨0, 1⟩. Hence, N^2 is left with the belief state ⟨⟨⟨0.5, 0.5⟩⟩, ⟨0, 1⟩⟩, which, when combined, indicates that the symbol /−\ should be interpreted as an "A".
We next appeal to another simple example to illustrate the
use of context to improve world modelling and in turn behaviour generation in a cognitive hierarchy.
A Simple Visual Servoing Example
Consider a mobile camera tasked to track an object sliding
down a frictionless inclined plane. The controller is constructed as a three-node cognitive hierarchy. Figure 4 depicts
the cognitive hierarchy and the scene.
Figure 4: A three-node cognitive hierarchy controller tasked
to visually follow an object. Context flow is shown in red.
The performance of the controller will be determined by
how well the camera keeps the object in the centre of its
field-of-view, specifically the average error in the tracking
distance over a time period of 3 seconds.
The details of the instantiation of the cognitive hierarchy controller follow. The cognitive hierarchy is H =
(N , N 0 , F ) with N = {N 0 , N 1 , N 2 }. N 0 is the unique
opaque node representing the environment. The cognitive
language for N 1 is a tuple L1 = (S 1 , A1 , T 1 , O1 , C 1 ),
and for N 2 it is L2 =(S 2 , A2 , T 2 , O2 , C 2 ). The cognitive
nodes are N 1 = (L1 , Π1 , λ1 , τ 1 , γ 1 , s01 , π 01 ) and N 2 =
(L2 , Π2 , λ2 , τ 2 , γ 2 , s02 , π 02 ). For brevity we only describe the
material functions.
The belief state of N 1 is the position of the object: S 1 =
{x | x ∈ R}. The belief state of N 2 is both the position and
velocity of the object: S 2 = {hx, vi | x, v ∈ R}. The object
starts at rest on the inclined plane at the origin: s01 = 0.0 and
s02 = h0.0, 0.0i.
N 1 receives object position observations from the environment: O1 = {x | x ∈ R}. These measurements are simulated from the physical properties of the scene and include a
noise component to represent errors in the sensor measurements: φ0,1 (·) = {0.5kt2 + ν}, with constant acceleration
k = 8.49 m/s2 , t the elapsed time and ν zero mean Gaussian random noise with a standard deviation of 0.1. The acceleration assumes an inclined plane of 60 degrees in a 9.8
m/s2 gravitational field. The N 1 observation update operator implements a Kalman filter with a fixed gain of 0.25:
τ 1 (h{x}, yi) = (1.0 − 0.25)y + 0.25x.
N 2 receives observations O2 = {x | x ∈ R} from N 1 :
φ1,2 (x) = {x}. In turn it updates its position estimate accepting the value from N 1 : τ 2 (h{x}, hy, vii) = hx, vi. The
prediction update operator uses a physics model to estimate
the new position and velocity of the object after time-step
δt = 0.05 seconds: γ^2({}, {}, ⟨x, v⟩) = ⟨x + v·δt + 0.5k·δt², v + k·δt⟩ with known acceleration k = 8.49.
Both N 1 and N 2 have one policy function each. The N 2
policy selects the N 1 policy. The effect of the N 1 policy:
π1 (x) = {x}, is to move the camera to the estimated position of the object via the task parameter function connecting
the environment: ψ 1,0 ({x}) = {x}.
We consider two versions of the N 1 prediction update operator. Without context the next state is the commanded policy action: γ 1 (h{x}, {y}, zi) = y. With context the context enrichment function passes the N 2 estimate
of the position of the object to N 1 : %2,1 (hx, vi) = {x},
where C 1 = {x | x ∈ R}. The update operator becomes:
γ 1 (h{x}, {y}, zi) = x.
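The following is a rough, simplified reconstruction of this comparison in Python (ours, not the authors' controller; the update ordering and error measurement are simplifications, so the resulting numbers will not match the reported results exactly):

# Sketch of the servoing example: Kalman-filtered position in N^1, physics-model
# prediction in N^2, with and without context passed from N^2 down to N^1.
import numpy as np

K, DT, GAIN, SIGMA = 8.49, 0.05, 0.25, 0.1
rng = np.random.default_rng(0)

def simulate(use_context, seconds=3.0):
    n1 = 0.0                      # N^1 belief: estimated position
    n2 = np.array([0.0, 0.0])     # N^2 belief: position and velocity
    errors, t = [], 0.0
    for _ in range(int(seconds / DT)):
        true_pos = 0.5 * K * t * t
        obs = true_pos + rng.normal(0.0, SIGMA)        # phi_{0,1}: noisy position
        n1 = (1.0 - GAIN) * n1 + GAIN * obs            # tau^1: fixed-gain Kalman update
        n2 = np.array([n1, n2[1]])                     # tau^2: accept N^1 position
        n2 = np.array([n2[0] + n2[1] * DT + 0.5 * K * DT * DT,
                       n2[1] + K * DT])                # gamma^2: physics model
        n1 = n2[0] if use_context else n1              # gamma^1: with/without context
        camera = n1                                    # pi^1 / psi^{1,0}: move camera
        t += DT
        errors.append(abs(camera - 0.5 * K * t * t))   # error at the next frame
    return float(np.mean(errors))

print(simulate(False), simulate(True))   # with context the camera should lag less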
When we simulate the dynamics and the repeated update of the cognitive hierarchy at 1/δt Hertz for 3 seconds,
we find that without context the average tracking error is
2.004 ± 0.009. Using context the average tracking error reduces to 0.125 ± 0.015, a 94% error reduction [2].
[2] It is of course intuitive in this simple example that, as N^2 has the benefit of knowledge of the transition dynamics of the object, it can better estimate the object's position and provide this context to direct the camera.
Using Context to Track Objects Visually
Object tracking has applications in augmented reality, visual servoing, and man-machine interfaces. We consider the problem of on-line monocular model-based tracking of multiple objects without markers or texture, using the 2D RGB camera built into the hand of a Baxter robot. The use of natural object features makes this a challenging problem.
Current practice for tackling this problem is to use 3D knowledge in the form of a CAD model, from which a set of edge points (control points) is generated for the object (Lepetit and Fua 2005). The idea is to track the corresponding 2D camera image points of the visible 3D control points as the object moves relative to the camera. The new pose of the object relative to the camera is found by minimising the perspective re-projection error between the control points and their corresponding 2D image points.
However, when multiple objects are tracked, independent CAD models fail to handle object occlusion. In place of the CAD models we use the machinery provided by a 3D physics simulator. The object-scene and virtual cameras from a simulator are ideal for modelling the higher-level context for vision. We now describe how this approach is instantiated as a cognitive hierarchy with contextual feedback. It is important to note that the physics simulator is not used to replace the real world; rather, it serves as mental imagery that efficiently represents the spatial belief state of the robot.
Cognitive Hierarchy for Visual Tracking
We focus on world-modelling in a two-node cognitive hierarchy (Figure 5). The external world node, which includes the Baxter robot, streams the camera pose and RGB images as sensory input to the arm node. The arm node belief state is s = {p_a} ∪ {⟨p_a^i, c_i⟩ | object i}, where p_a is the arm pose and, for each recognised object i in the field of view of the arm camera, p_a^i is the object pose relative to the arm camera and c_i is the set of object edge lines and their depths. The objects in this case are scattered cubes on a table. Information from the arm node is sent to the spatial node, which employs a Gazebo physics simulator as mental imagery to model the objects.
Figure 5: Cognitive hierarchy comprising an arm node and a
spatial node. Context from the spatial node is in the form of
an object segmented depth image from a simulated special
camera that shadows the real camera.
A novel feature of the spatial node is that it simulates the
robot's arm camera as an object-aware depth camera. No
such camera exists in reality, but the Gazebo spatial belief state
of the robot is able to not only provide a depth image, but
one that segments the depth image by object. This object
aware depth image provides the context to the arm node to
generate the required control points.
Update Functions and Process Update
We now describe the update functions and a single cycle of
the process update for this cognitive hierarchy.
The real monocular RGB arm camera is simulated in Gazebo with an object-aware depth camera with identical characteristics (i.e., the same intrinsic camera matrix). The simulated camera then produces a depth image and an object segmentation image of the simulated objects that correspond to the actual camera image. This vital contextual information is then used for correcting the pose of the visible objects.
The process update starts with the sensing function
φN0 ,Arm that takes the raw camera image and observes all
edges in the image, represented as a set of line segments, l.
φN0 ,Arm ({rawImage}) = {l}
The observation update operator τ Arm takes the expected
edge lines ci for each object i and transforms the lines to
best match the image edge lines l (Lepetit and Fua 2005).
The update function uses the OpenCV function solvePnP to find a corrected pose p_a^i for each object i relative to the arm camera a [3].
τ^Arm({l, c_i | object i}) = {p_a^i | object i}
The sensing function from the arm to spatial node takes
the corrected pose pia for each object i, relative to the camera
frame a, and transforms it into the Gazebo reference frame
via the Baxter’s reference frame given the camera pose pa .
φArm,Spatial ({pa , hpia , ci i|object i}) = {gai |object i}
The spatial node observation update τ Spatial , updates the
pose of all viewed objects gai in the Gazebo physics simulator. Note {gai |object i} ⊂ gazebo state.
τ Spatial ({gai |object i}) = gazebo.move(i, gai ) ∀i
The update cycle now proceeds down the hierarchy with
prediction updates. The prediction update for the spatial
node γ Spatial consists of predicting the interaction of objects in the simulator under gravity. Noise introduced during
the observation update may result in objects separating due
to detected collisions or settling under gravity.
γ^Spatial(gazebo state) = gazebo.simulate(gazebo state)
We now turn to the context enrichment function
%Spatial,Arm that extracts predicted camera image edge lines
and depth data for each object in view of the simulator.
%Spatial,Arm (gazebo state) = {ci |object i}
The stages of the context enrichment function %Spatial,Arm
are shown in Figure 6. The simulated depth camera extracts
an object image that identifies the object seen at every pixel
location. It also extracts a depth image that gives the depth
from the camera of every pixel. The object image is used to
mask out each object in turn. Applying a Laplacian function
to the part of the depth image masked out by the object yields
all visible edges of the object. A Hough line transform identifies line end points in the Laplacian image and finds the
depth of their endpoints from the depth image, producing ci .
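A rough sketch of this stage (our own illustration, not the authors' implementation; the edge threshold and Hough parameters are arbitrary assumptions) could look as follows:

# Sketch of extracting per-object edge lines with depths from a simulated
# object-segmentation image and depth image.
import cv2
import numpy as np

def object_edge_lines(object_image, depth_image, object_id):
    """object_image: per-pixel object ids; depth_image: per-pixel depth (float)."""
    mask = (object_image == object_id)
    masked_depth = np.where(mask, depth_image, 0.0).astype(np.float32)
    # Laplacian of the masked depth image highlights the object's visible edges.
    edges = cv2.Laplacian(masked_depth, cv2.CV_32F)
    edge_binary = (np.abs(edges) > 1e-3).astype(np.uint8) * 255
    # Hough line transform finds line segment end points on those edges.
    segments = cv2.HoughLinesP(edge_binary, 1, np.pi / 180, threshold=30,
                               minLineLength=10, maxLineGap=3)
    lines = []
    for x1, y1, x2, y2 in (segments.reshape(-1, 4) if segments is not None else []):
        lines.append(((x1, y1, depth_image[y1, x1]),
                      (x2, y2, depth_image[y2, x2])))   # end points with depths (c_i)
    return lines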
Figure 7 shows the cognitive hierarchy tracking several
different cube configurations. This is only possible given the
context from the spatial belief state. Keeping track of the
pose of objects allows behaviours to be generated that, for example, pick up a cube with appropriately oriented grippers.
[3] The pose of a rigid object in 3D space has 6 degrees of freedom: three describing its translated position and three the rotation or orientation, relative to a reference pose.
Figure 6: The Process update showing stages of the context
enrichment function and the matching of contextual information to the real camera to correct the arm and spatial node
belief state.
Figure 7: Tracking several cube configurations. Top row:
Gazebo GUI showing spatial node state. 2nd row: matching
real image edges in green to simulated image edges in red.
Bottom row: camera image overlaid with edges in green.
Related Support and Conclusion
There is considerable evidence supporting the existence and
usefulness of top-down contextual information. Reliability
(Biederman, Kubovy, and Pomerantz 1981) and speed (Cavanagh 1991) of scene analysis provide early evidence.
These observations are further supported by neuroscience,
suggesting that feedback pathways from higher more abstract processing areas of the brain down to areas closer to
the sensors are greater than those transmitting information
upwards (Hawkins and Blakeslee 2004). The authors summarise the process: "what is actually happening flows up,
and what you expect to happen flows down”. Gilbert and
Li (2013) argue that the traditional idea that the processing
of visual information consists of a sequence of feedforward
operations needs to be supplemented by top-down contextual influences.
In the field of robotics, recent work in online interactive perception shows the benefit of predicted measurements
from one level being passed to the next-lower level as state
predictions (Martin and Brock 2014).
This paper has included and formalised the essential element of context in the meta framework of cognitive hierarchies. The process model of an active cognitive hierarchy
has been revised to include context updates satisfying the
partial order induced by the prediction graph. We have illustrated the role of context with two simple examples and a
novel way to track the pose of texture-less objects with a single 2D camera. As a by-product contribution we prove that
general belief propagation in causal trees can be embedded
into our framework, testifying to its representational versatility.
Appendices
Causal Networks as Cognitive Hierarchies
The motivating example highlighted the use of context in a
cognitive hierarchy inspired by belief propagation in causal
trees. In this appendix we extend the example to the general
result that any Bayesian causal tree can be encoded as a cognitive hierarchy. We do this by constructively showing how
to encode a causal tree as a cognitive hierarchy and proving
the correctness of this method with respect to propagating
changes through the tree.
Pearl describes a causal tree as a set of processors where
the connection between processors is explicitly represented
within the processors themselves. Each processor maintains
diagnostic and causal supports, as well as maintaining a
conditional probability matrix for translating to the representation of higher-level processors. The description of the
operational behaviour of causal trees is presented throughout
Chapter 4 (and summarised in Figure 4.15) of Pearl (1988).
The cognitive hierarchies introduced here are concerned
with robotic systems and consequently maintain an explicit
notion of sensing over time. In contrast causal networks are
less precise about external inputs and changes over time. As
a bridge, we model that each processor has a diagnostic support component that can be set externally. Finally, note that
we adopt the convenience notation f∅ to represent a function
of arbitrary arity that always returns the empty set.
Definition 7. Let {P^1, . . . , P^n} be a causal tree. We construct a corresponding cognitive hierarchy H = ({N^0, N^1, . . . , N^n}, N^0, F) as follows:
• For processor P^i with m children, and diagnostic and causal supports d, c ∈ R^n, define S^i = {⟨⟨d_E, d_1, . . . , d_m⟩, c′⟩ | d_E, d_1, . . . , d_m, c′ ∈ R^n}, with initial belief state s_0^i = ⟨⟨d, . . . , d⟩, c⟩. Define O^i = {⟨d_E, d_1, . . . , d_m⟩ | d_E, d_1, . . . , d_m ∈ R^n} and C^i = R^n.
• For processor P^i with corresponding cognitive node N^i, define τ^i(o, ⟨d⃗, c⟩) = ⟨Σ_{d⃗′∈o} d⃗′, c⟩, and γ^i({c′}, ∅, ⟨d⃗, c⟩) = ⟨d⃗, c′⟩.
• For each pair of processors P^i and P^j, where P^j is the k-th of P^i's m children (from processor subscript numbering), and M_j is the conditional probability matrix of P^j, define a triple ⟨φ^{j,i}, f_∅, %^{i,j}⟩ ∈ F s.t.:
– φ^{j,i}(⟨d⃗, c⟩) = {⟨d_E, d_1, . . . , d_m⟩}, where the d_{h≠k} are zeroed vectors and d_k = η · M_j · (∏_{d′∈d⃗} d′)^T,
– %^{i,j}(⟨⟨d_E, d_1, . . . , d_m⟩, c⟩) = {c′}, such that c′ = η · (∏_{h≠k} d_h · c) · M_j,
– where η is a normalisation constant for the respective vectors, and x^T is the transpose of vector x.
• For each processor P^i, with diagnostic support d ∈ R^n, define a triple ⟨φ^{0,i}, f_∅, f_∅⟩ ∈ F where φ^{0,i}(s_0 ∈ S^0) = {⟨d_E, d_Z, . . . , d_Z⟩}, where d_Z is a zeroed vector and d_E ∈ R^n is the external input of P^i.
While notationally dense, Definition 7 is simply a generalisation of the construction used in the running example
and is a direct encoding of causal trees. Note that this construction could be further extended to poly-trees, which Pearl also
considers, but would require a more complex encoding.
To establish the correctness of this transformation we can
compare how the structures evolve with sensing. The belief
measure of a processor is captured as the normalised product
of the diagnostic and causal supports, BEL(P^i) = η · d_i · c_i. However, for a cognitive node the diagnostic support needs to be computed from its components. Hence, given the belief state ⟨⟨d_E, d_1, . . . , d_m⟩, c⟩ of an active node Q^i with m children, we can compute the belief as BEL(Q^i) = η · ∏_{j=1}^{m} d_j · c.
Lemma 2. Given a causal tree {P 1 , . . . P n } and a corresponding cognitive hierarchy H constructed via Definition 7, then the causal tree and the initial active cognitive
hierarchy corresponding to H share the same belief.
Proof. By inspection, BEL(P^i) = BEL(Q^i) for each i.
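For reference, the two belief measures could be computed as follows (a sketch of ours, not from the paper; normalisation plays the role of η):

# Sketch of the belief computations compared in Lemma 2.
import numpy as np

def normalise(v):
    v = np.asarray(v, dtype=float)
    return v / v.sum()

def bel_processor(d, c):
    # BEL(P^i) = eta * d_i * c_i (elementwise product, then normalise).
    return normalise(np.asarray(d) * np.asarray(c))

def bel_node(diagnostic_components, c):
    # BEL(Q^i) = eta * prod_j d_j * c, the product ranging over the child components.
    b = np.asarray(c, dtype=float)
    for d in diagnostic_components:
        b = b * np.asarray(d)
    return normalise(b)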
Now, we establish that propagating changes through an
active cognitive hierarchy is consistent with propagating beliefs through a causal tree. We abuse notation here to express
the overall belief of a causal tree (resp. active cognitive hierarchy) as simply the beliefs of its processors (resp. nodes).
Theorem 3. Let T be a causal tree and X be the corresponding active cognitive hierarchy constructed via Definition 7, such that BEL(T ) = BEL(X ). Then for any changes
to the external diagnostic supports of the processors and
corresponding changes to the sensing inputs for the active
cognitive hierarchy, BEL(Prop(T )) = BEL(Update(X )).
Proof. Pearl establishes that changes propagated through a
causal tree converge with a single pass up and down the tree.
Any such pass satisfies the partial ordering for the cognitive
hierarchy process model. Hence the proof involves the iterative application of the process model, confirming at each
step that the beliefs of the processors and nodes align.
Theorem 3 establishes that Bayesian causal trees can be
captured as cognitive hierarchies. This highlights the significance of extending cognitive hierarchies to include context,
allowing for a richer set of potential applications.
Acknowledgments
This material is based upon work supported by the
Asian Office of Aerospace Research and Development
(AOARD) under Award No: FA2386-15-1-0005. This research was also supported under Australian Research Council’s (ARC) Discovery Projects funding scheme (project
number DP 150103035). Michael Thielscher is also affiliated with the University of Western Sydney.
Disclaimer
Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors
and do not necessarily reflect the views of the AOARD.
References
[Albus and Meystel 2001] Albus, J. S., and Meystel, A. M.
2001. Engineering of Mind: An Introduction to the Science
of Intelligent Systems. Wiley-Interscience.
[Ashby 1952] Ashby, W. R. 1952. Design for a Brain. Chapman and Hall.
[Bakker and Schmidhuber 2004] Bakker, B., and Schmidhuber, J. 2004. Hierarchical reinforcement learning based on
subgoal discovery and subpolicy specialization. In Proceedings of the 8-th Conference on Intelligent Autonomous Systems, IAS-8, 438–445.
[Beer 1966] Beer, S. 1966. Decision and Control. London:
John Wiley and Sons.
[Biederman, Kubovy, and Pomerantz 1981] Biederman, I.;
Kubovy, M.; and Pomerantz, J. 1981. On the semantics of a
glance at a scene. In Perceptual Organization. New Jersey:
Lawrence Erlbaum. 213–263.
[Brooks 1986] Brooks, R. A. 1986. A robust layered control
system for a mobile robot. Robotics and Automation, IEEE
Journal of 2(1):14–23.
[Cavanagh 1991] Cavanagh, P. 1991. What’s up in top-down
processing? In Gorea, A., ed., Representations of Vision:
Trends and tacit assumptions in vision research, 295–304.
[Clark et al. 2016] Clark, K.; Hengst, B.; Pagnucco, M.; Rajaratnam, D.; Robinson, P.; Sammut, C.; and Thielscher,
M. 2016. A framework for integrating symbolic and subsymbolic representations. In 25th Joint Conference on Artificial Intelligence (IJCAI -16).
[Dayan and Hinton 1992] Dayan, P., and Hinton, G. E. 1992.
Feudal reinforcement learning. Advances in Neural Information Processing Systems 5 (NIPS).
[Dietterich 2000] Dietterich, T. G. 2000. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research (JAIR)
13:227–303.
[Drescher 1991] Drescher, G. L. 1991. Made-up Minds:
A constructionist Approach to Artificial Intelligence. Cambridge, Massachusetts: MIT Press.
[Gilbert and Li 2013] Gilbert, C. D., and Li, W. 2013. Topdown influences on visual processing. Nature reviews. Neuroscience 14(5):10.1038/nrn3476.
[Hawkins and Blakeslee 2004] Hawkins, J., and Blakeslee,
S. 2004. On Intelligence. Times Books, Henry Holt and
Company.
[Hubel and Wiesel 1979] Hubel, D. H., and Wiesel, T. N.
1979. Brain mechanisms of vision. A Scientific American
Book: the Brain 84–96.
[Johnson 2010] Johnson, J. 2010. Designing with the Mind
in Mind: Simple Guide to Understanding User Interface Design Rules. San Francisco, CA, USA: Morgan Kaufmann
Publishers Inc.
[Jong 2010] Jong, N. K. 2010. Structured Exploration for
Reinforcement Learning. Ph.D. Dissertation, University of
Texas at Austin.
[Kaelbling 1993] Kaelbling, L. P. 1993. Hierarchical learning in stochastic domains: Preliminary results. In Machine
Learning Proceedings of the Tenth International Conference, 167–173. San Mateo, CA: Morgan Kaufmann.
[Konidaris et al. 2011] Konidaris, G.; Kuindersma, S.; Grupen, R.; and Barto, A. 2011. Robot learning from demonstration by constructing skill trees. The International Journal of Robotics Research.
[Lepetit and Fua 2005] Lepetit, V., and Fua, P. 2005. Monocular model-based 3d tracking of rigid objects. Found.
Trends. Comput. Graph. Vis. 1(1):1–89.
[Marthi, Russell, and Andre 2006] Marthi, B.; Russell, S.;
and Andre, D. 2006. A compact, hierarchical q-function
decomposition. In Proceedings of the Twenty-Second Annual Conference on Uncertainty in Artificial Intelligence (UAI-06), 332–340. Arlington, Virginia: AUAI Press.
[Martin and Brock 2014] Martin, R. M., and Brock, O. 2014.
Online interactive perception of articulated objects with
multi-level recursive estimation based on task-specific priors. In IROS, 2494–2501. IEEE.
[Minsky 1986] Minsky, M. 1986. The Society of Mind. New
York, NY, USA: Simon & Schuster, Inc.
[Nilsson 2001] Nilsson, N. J. 2001. Teleo-Reactive programs and the triple-tower architecture. Electronic Transactions on Artificial Intelligence 5:99–110.
[Pearl 1988] Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San
Francesco: Morgan Kaufmann, revised second printing edition.
[Turchin 1977] Turchin, V. F. 1977. The Phenomenon of
Science. Columbia University Press.
Data-Efficient Policy Evaluation Through Behavior Policy Search
Josiah P. Hanna 1 Philip S. Thomas 2 3 Peter Stone 1 Scott Niekum 1
arXiv:1706.03469v1 [] 12 Jun 2017
Abstract
We consider the task of evaluating a policy for
a Markov decision process (MDP). The standard
unbiased technique for evaluating a policy is to
deploy the policy and observe its performance.
We show that the data collected from deploying
a different policy, commonly called the behavior
policy, can be used to produce unbiased estimates
with lower mean squared error than this standard
technique. We derive an analytic expression for
the optimal behavior policy—the behavior policy that minimizes the mean squared error of the
resulting estimates. Because this expression depends on terms that are unknown in practice, we
propose a novel policy evaluation sub-problem,
behavior policy search: searching for a behavior policy that reduces mean squared error. We
present a behavior policy search algorithm and
empirically demonstrate its effectiveness in lowering the mean squared error of policy performance estimates.
1. Introduction
Many sequential decision problems, including diabetes
treatment (Bastani, 2014), digital marketing (Theocharous
et al., 2015), and robot control (Lillicrap et al., 2015), are
modeled as Markov decision processes (MDPs) and solved
using reinforcement learning (RL) algorithms. One important problem when applying RL to real problems is policy
evaluation. The goal in policy evaluation is to estimate the
expected return (sum of rewards) produced by a policy. We
refer to this policy as the evaluation policy, πe . The standard policy evaluation approach is to repeatedly deploy πe
and average the resulting returns. While this naïve Monte
Carlo estimator is unbiased, it may have high variance.
1 The University of Texas at Austin, Austin, Texas, USA
2 The University of Massachusetts, Amherst, Massachusetts, USA
3 Carnegie Mellon University, Pittsburgh, Pennsylvania, USA.
Correspondence to: Josiah P. Hanna <[email protected]>.
Methods that evaluate πe while selecting actions according
to πe are termed on-policy. Previous work has addressed
variance reduction for on-policy returns (Zinkevich et al.,
2006; White & Bowling, 2009; Veness et al., 2011). An
alternative approach is to estimate the performance of πe
while following a different, behavior policy, πb . Methods
that evaluate πe with data generated from πb are termed off-policy. Importance sampling (IS) is one standard approach
for using off-policy data in RL. IS reweights returns observed while executing πb such that they are unbiased estimates of the performance of πe .
Presently, IS is usually used when off-policy data is already
available or when executing πe is impractical. If πb is not
chosen carefully, IS often has high variance (Thomas et al.,
2015). For this reason, an implicit assumption in the RL
community has generally been that on-policy evaluation is
more accurate when it is feasible. However, IS can also be
used for variance reduction when done with an appropriately selected distribution of returns (Hammersley & Handscomb, 1964). While IS-based variance reduction has been
explored in RL, this prior work has required knowledge of
the environment's transition probabilities and remains on-policy (Desai & Glynn, 2001; Frank et al., 2008; Ciosek
& Whiteson, 2017). In contrast to this earlier work, we
show how careful selection of the behavior policy can lead
to lower variance policy evaluation than using the evaluation policy and do not require knowledge of the environment’s transition probabilities.
In this paper, we formalize the selection of πb as the behavior policy search problem. We introduce a method for this
problem that adapts the policy parameters of πb with gradient descent on the variance of importance-sampling. Empirically we demonstrate behavior policy search with our
method lowers the mean squared error of estimates compared to on-policy estimates. To the best of our knowledge,
this work is the first to propose adapting the behavior policy to obtain better policy evaluation in RL. Furthermore
we present the first method to address this problem.
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
2. Preliminaries
This section details the policy evaluation problem setting,
the Monte Carlo and Advantage Sum on-policy methods,
and importance-sampling for off-policy evaluation.
2.1. Background
We use notational standard MDPNv1 (Thomas, 2015), and for simplicity, we assume that S, A, and R are finite [1]. Let H := (S_0, A_0, R_0, S_1, . . . , S_L, A_L, R_L) be a trajectory and g(H) := Σ_{t=0}^{L} γ^t R_t be the discounted return of trajectory H. Let ρ(π) := E[g(H) | H ∼ π] be the expected discounted return when the stochastic policy π is used from S_0 sampled from the initial state distribution. In this work, we consider parameterized policies, π_θ, where the distribution over actions is determined by the vector θ. We assume that the transitions and reward function are unknown and that L is finite.
[1] The methods and theoretical results discussed in this paper are applicable to both finite and infinite S, A, and R, as well as partially observable Markov decision processes.
We are given an evaluation policy, π_e, for which we would like to estimate ρ(π_e). We assume there exists a policy parameter vector θ_e such that π_e = π_{θ_e} and that this vector is known. We consider an incremental setting where, at iteration i, we sample a single trajectory H_i with a policy π_{θ_i} and add {H_i, θ_i} to a set D. We use D_i to denote the set at iteration i. Methods that always (i.e., ∀i) choose θ_i = θ_e are on-policy; otherwise, the method is off-policy. A policy evaluation method, PE, uses all trajectories in D_i to estimate ρ(π_e). Our goal is to design a policy evaluation algorithm that produces estimates of ρ(π_e) that have low mean squared error (MSE). Formally, the goal of policy evaluation with PE is to minimize (PE(D_i) − ρ(π_e))². While other measures of policy evaluation accuracy could be considered, we follow earlier work in using MSE (e.g., (Thomas & Brunskill, 2016; Precup et al., 2000)).
We focus on unbiased estimators of ρ(π_e). While biased estimators (e.g., bootstrapping methods (Sutton & Barto, 1998), approximate models (Kearns & Singh, 2002), etc.) can sometimes produce lower MSE estimates, they are problematic for high-risk applications requiring confidence intervals. For unbiased estimators, minimizing variance is equivalent to minimizing MSE.
2.2. Monte-Carlo Estimates
Perhaps the most commonly used policy evaluation method is the on-policy Monte-Carlo (MC) estimator. The estimate of ρ(π_e) at iteration i is the average return:
MC(D_i) := 1/(i+1) Σ_{j=0}^{i} Σ_{t=0}^{L} γ^t R_t = 1/(i+1) Σ_{j=0}^{i} g(H_j).
This estimator is unbiased and strongly consistent given mild assumptions [2]. However, this method can have high variance.
[2] Being a strongly consistent estimator of ρ(π_e) means that Pr(lim_{i→∞} MC(D_i) = ρ(π_e)) = 1. If ρ(π_e) exists, MC is strongly consistent by the Khintchine strong law of large numbers (Sen & Singer, 1993).
2.3. Advantage Sum Estimates
Like the Monte-Carlo estimator, the advantage sum (ASE) estimator selects θ_i = θ_e for all i. However, it introduces a control variate to reduce the variance without introducing bias. This control variate requires an approximate model of the MDP to be provided. Let the reward function of this model be given as r̂(s, a). Let q̂^{π_e}(s_t, a_t) = E[Σ_{t′=t}^{L} γ^{t′} r̂(s_{t′}, a_{t′})] and v̂^{π_e}(s_t) = E[q̂^{π_e}(s_t, a_t) | a_t ∼ π_e], i.e., the action-value function and state-value function of π_e in this approximate model. Then, the advantage sum estimator is given by:
AS(D_i) := 1/(i+1) Σ_{j=0}^{i} Σ_{t=0}^{L} γ^t (R_t − q̂^{π_e}(S_t, A_t) + v̂^{π_e}(S_t)).
Intuitively, ASE is replacing part of the randomness of the Monte Carlo return with the known expected return under the approximate model. Provided q̂^{π_e}(S_t, A_t) − v̂^{π_e}(S_t) is sufficiently correlated with R_t, the variance of ASE is less than that of MC.
Notice that, like the MC estimator, the ASE estimator is on-policy, in that the behavior policy is always the policy that we wish to evaluate. Intuitively it may seem like this choice should be optimal. However, we will show that it is not: selecting behavior policies that are different from the evaluation policy can result in estimates of ρ(π_e) that have lower variance.
2.4. Importance Sampling
Importance sampling is a method for reweighting returns from a behavior policy, π_θ, such that they are unbiased returns from the evaluation policy. In RL, the re-weighted IS return of a trajectory, H, sampled from π_θ is:
IS(H, θ) := g(H) ∏_{t=0}^{L} π_e(A_t | S_t) / π_θ(A_t | S_t).
The IS off-policy estimator is then a Monte Carlo estimate of E[IS(H, θ) | H ∼ π_θ]:
IS(D_i) := 1/(i+1) Σ_{j=0}^{i} IS(H_j, θ_j).
In RL, importance sampling allows off-policy data to be used as if it were on-policy. In this case the variance of the IS estimate is often much worse than the variance of on-policy MC estimates because the behavior policy is not chosen to minimize variance, but is a policy that is dictated by circumstance.
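For illustration, the estimators above can be realised in a few lines of Python; this sketch is ours (not from the paper), and the trajectory format, a list of (state, action, reward) tuples plus per-step policy probability functions, is an assumption.

# Sketch of the MC and IS estimators.
import numpy as np

def mc_estimate(returns):
    # MC(D_i): average of observed returns g(H_j).
    return float(np.mean(returns))

def is_return(trajectory, pi_e, pi_b, gamma=1.0):
    """IS(H, theta): importance-weighted return of one trajectory.

    trajectory: list of (state, action, reward) tuples sampled from pi_b.
    pi_e, pi_b: functions mapping (action, state) to a probability.
    """
    g, weight = 0.0, 1.0
    for t, (s, a, r) in enumerate(trajectory):
        g += (gamma ** t) * r
        weight *= pi_e(a, s) / pi_b(a, s)
    return weight * g

def is_estimate(trajectories, pi_e, behavior_policies, gamma=1.0):
    # IS(D_i): average of the importance-weighted returns, one per trajectory.
    return float(np.mean([is_return(h, pi_e, pi_b, gamma)
                          for h, pi_b in zip(trajectories, behavior_policies)]))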
3. Behavior Policy Search
Importance sampling was originally intended as a variance
reduction technique for Monte Carlo evaluation (Hammersley & Handscomb, 1964). When an evaluation policy rarely
samples trajectories with high magnitude returns a Monte
Carlo evaluation will have high variance. If a behavior policy can increase the probability of observing such trajectories then the off-policy IS estimate will have lower variance
than an on-policy Monte Carlo estimate. In this section we
first describe the theoretical potential for variance reduction
with an appropriately selected behavior policy. In general
this policy will be unknown. Thus, we propose a policy
evaluation subproblem — the behavior policy search problem — solutions to which will adapt the behavior policy to
provide lower mean squared error policy performance estimates. To the best of our knowledge, we are the first to
propose behavior policy adaptation for policy evaluation.
3.1. The Optimal Behavior Policy
An appropriately selected behavior policy can lower variance to zero. While this fact is generally known for
importance-sampling, we show here that this policy exists
for any MDP and evaluation policy under two restrictive
assumptions: all returns are positive and the domain is deterministic. In the following section we describe how an
initial policy can be adapted towards the optimal behavior
policy even when these conditions fail to hold.
Let w_π(H) := ∏_{t=0}^{L} π(A_t | S_t). Consider a behavior policy π_b* such that for any trajectory, H:
ρ(π_e) = IS(H, π_b*) = g(H) · w_{π_e}(H) / w_{π_b*}(H).
Rearranging the terms of this expression yields:
w_{π_b*}(H) = g(H) · w_{π_e}(H) / ρ(π_e).
Thus, if we can select π_b* such that the probability of observing any H ∼ π_b* is g(H)/ρ(π_e) times the likelihood of observing H ∼ π_e, then the IS estimate has zero MSE with only a single sampled trajectory. Regardless of g(H), the importance-sampled return will equal ρ(π_e).
Furthermore, the policy πb? exists within the space of all
feasible stochastic policies. Consider that a stochastic policy can be viewed as a mixture policy over time-dependent
(i.e., action selection depends on the current time-step) deterministic policies. For example, in an MDP with one
state, two actions and a horizon of L there are 2^L possible time-dependent deterministic policies, each of which
defines a unique sequence of actions. We can express any
evaluation policy as a mixture of these deterministic policies. The optimal behavior policy πb? can be expressed similarly and thus the optimal behavior policy exists.
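A quick numeric check of the zero-variance property, using toy numbers of our own rather than anything from the paper:

# Toy check: one state, one step, two actions; pi_e picks each with prob 0.5,
# returns are g(a1)=1 and g(a2)=3, so rho(pi_e)=2.
pi_e = {"a1": 0.5, "a2": 0.5}
g = {"a1": 1.0, "a2": 3.0}
rho = sum(pi_e[a] * g[a] for a in pi_e)              # 2.0
pi_b_star = {a: g[a] * pi_e[a] / rho for a in pi_e}  # {'a1': 0.25, 'a2': 0.75}
is_returns = [g[a] * pi_e[a] / pi_b_star[a] for a in pi_e]
print(pi_b_star, is_returns)   # every IS return equals rho = 2.0, i.e. zero variance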
Unfortunately, the optimal behavior policy depends on the
unknown value ρ(πe ) as well as the unknown reward function R (via g(H)). Thus, while there exists an optimal behavior policy for IS – which is not πe – in practice we cannot analytically determine πb? . Additionally, πb? may not be
representable by any θ in our policy class.
3.2. The Behavior Policy Search Problem
Since the optimal behavior policy cannot be analytically
determined, we instead propose the behavior policy search
(BPS) problem for finding πb that lowers the MSE of estimates of ρ(πe ). A BPS problem is defined by the inputs:
1. An evaluation policy πe with policy parameters θ e .
2. An off-policy policy evaluation algorithm,
OPE(H, θ), that takes a trajectory, H ∼ πθ , or,
alternatively, a set of trajectories, and returns an
estimate of ρ(πe ).
A BPS solution is a policy, πθb such that off-policy estimates with OPE have lower MSE than on-policy estimates.
Methods for this problem are BPS algorithms.
Recall we have formalized policy evaluation within an incremental setting where one trajectory for policy evaluation
is generated each iteration. At the ith iteration, a BPS algorithm selects a behavior policy that will be used to generate
a trajectory, Hi . The policy evaluation algorithm, OPE,
then estimates ρ(πe ) using trajectories in Di . Naturally,
the selection of the behavior policy depends on how OPE
estimates ρ(πe ).
In a BPS problem, the ith iteration proceeds as follows.
First, given all of the past behavior policies, {θ_j}_{j=0}^{i−1}, and the resulting trajectories, {H_j}_{j=0}^{i−1}, the BPS algorithm must select θ_i. The policy π_{θ_i} is then run for one episode to create the trajectory H_i. Then the BPS algorithm uses OPE to estimate ρ(π_e) given the available data, D_i := {(θ_j, H_j)}_{j=0}^{i}. In this paper, we consider the one-step
problem of selecting θ i and estimating ρ(πe ) at iteration
i in a way that minimizes MSE. That is, we do not consider
how our selection of θ i will impact our future ability to select an appropriate θ j for j > i and thus to produce more
accurate estimates in the future.
One natural question is: if we are given a limit on the
number of trajectories that can be sampled, is it better to
“spend” some of our limited trajectories on BPS instead
of using on-policy estimates? Since each OPE(H_i, θ_i) is an unbiased estimator of ρ(π_e), we can use all sampled trajectories to compute OPE(D_i). Provided that, for all iterations, Var[OPE(H, θ_i)] ≤ Var[MC], then, in expectation,
a BPS algorithm will always achieve lower MSE than MC,
showing that it is, in fact, worthwhile to do so. This claim
is supported by our empirical study.
4. Behavior Policy Gradient Theorem
We now introduce our primary contributions: an analytic
expression for the gradient of the mean squared error of
the IS estimator and a stochastic gradient descent algorithm
that adapts θ to minimize the MSE between the IS estimate
and ρ(πe ). Our algorithm — Behavior Policy Gradient
(BPG) — begins with on-policy estimates and adapts the
behavior policy with gradient descent on the MSE with respect to θ. The gradient of the MSE with respect to the
policy parameters is given by the following theorem:
Theorem 1.
∂/∂θ MSE[IS(H, θ)] = E[ −IS(H, θ)² Σ_{t=0}^{L} ∂/∂θ log π_θ(A_t | S_t) ],
where the expectation is taken over H ∼ π_θ.
Proof. Proofs for all theoretical results are included in Appendix A.
BPG uses stochastic gradient descent in place of exact gradient descent: replacing the intractable expectation in Theorem 1 with an unbiased estimate of the true gradient. In
our experiments, we sample a batch, Bi , of k trajectories
with πθi to lower the variance of the gradient estimate at
iteration i. In the BPS setting, sampling a batch of trajectories is equivalent to holding θ fixed for k iterations and
then updating θ with the k most recent trajectories used to
compute the gradient estimate.
Full details of BPG are given in Algorithm 1. At iteration i, BPG samples a batch, Bi , of k trajectories and adds
{(θ_i, H) | H ∈ B_i} to a data set D (Lines 4–5). Then BPG updates θ with an empirical estimate of Theorem 1 (Line 6).
After n iterations, the BPG estimate of ρ(πe ) is IS(Dn ) as
defined in Section 2.4.
Given that the step-size, αi , is consistent with standard gradient descent convergence conditions, BPG will converge
to a behavior policy that locally minimizes the variance
(Bertsekas & Tsitsiklis, 2000). At best, BPG converges to
the globally optimal behavior policy within the parameterization of πe . Since the parameterization of πe determines
the class of representable distributions it is possible that
the theoretically optimal behavior policy is unrepresentable
under this parameterization. Nevertheless, a suboptimal behavior policy still yields better estimates of ρ(πe ), provided
it decreases variance compared to on-policy returns.
Algorithm 1 Behavior Policy Gradient
Input: Evaluation policy parameters, θ e , batch size k, a
step-size for each iteration, αi , and number of iterations n.
Output: Final behavior policy parameters θ n and the IS
estimate of ρ(πe ) using all sampled trajectories.
1: θ_0 ← θ_e
2: D_0 = {}
3: for all i ∈ 0 . . . n do
4:   B_i = Sample k trajectories H ∼ π_{θ_i}
5:   D_{i+1} = D_i ∪ B_i
6:   θ_{i+1} = θ_i + (α_i / k) Σ_{H ∈ B_i} IS(H, θ_i)² Σ_{t=0}^{L} ∂/∂θ log π_{θ_i}(A_t | S_t)
7: end for
8: Return θ_n, IS(D_n)
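To make Algorithm 1 concrete, here is a minimal Python sketch of our own (not the authors' released code); sample_trajectory, policy_prob, and grad_log_policy are assumed helpers supplied by the environment and the policy parameterisation, and step_sizes is a sequence of α_i values.

# Sketch of Behavior Policy Gradient (BPG).
import numpy as np

def bpg(theta_e, n_iterations, batch_size, step_sizes,
        sample_trajectory, policy_prob, grad_log_policy, gamma=1.0):
    """Adapt behavior policy parameters to lower the variance of IS estimates.

    sample_trajectory(theta) -> list of (s, a, r) tuples drawn from pi_theta.
    policy_prob(theta, s, a) -> probability of action a in state s under pi_theta.
    grad_log_policy(theta, s, a) -> gradient of log pi_theta(a | s) w.r.t. theta.
    """
    theta = np.array(theta_e, dtype=float)
    is_returns = []                                    # pooled IS(H_j, theta_j)
    for i in range(n_iterations):
        grad = np.zeros_like(theta)
        for _ in range(batch_size):
            traj = sample_trajectory(theta)
            g, weight = 0.0, 1.0
            score = np.zeros_like(theta)               # sum_t d/dtheta log pi_theta
            for t, (s, a, r) in enumerate(traj):
                g += (gamma ** t) * r
                weight *= policy_prob(theta_e, s, a) / policy_prob(theta, s, a)
                score += grad_log_policy(theta, s, a)
            is_ret = weight * g
            is_returns.append(is_ret)
            grad += (is_ret ** 2) * score
        theta = theta + (step_sizes[i] / batch_size) * grad   # line 6 of Algorithm 1
    return theta, float(np.mean(is_returns))                  # theta_n, IS(D_n)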
4.1. Control Variate Extension
In cases where an approximate model is available, we can
further lower variance by adapting the behavior policy of the
doubly robust estimator (Jiang & Li, 2016; Thomas &
Brunskill, 2016). Based on a similar intuition as the Advantage Sum estimator (Section 2.3), the Doubly Robust (DR)
estimator uses the value functions of an approximate model
as a control variate to lower the variance of importance sampling [3]. We show here that we can adapt the behavior
policy to lower the mean squared error of DR estimates.
We denote this new method DR-BPG for Doubly Robust
Behavior Policy Gradient.
Let w_{π,t}(H) = ∏_{i=0}^{t} π(A_i | S_i) and recall that v̂^{π_e} and q̂^{π_e} are the state and action value functions of π_e in the approximate model. The DR estimator is:
DR(H, θ) := v̂(S_0) + Σ_{t=0}^{L} (w_{π_e,t} / w_{π_θ,t}) (R_t − q̂^{π_e}(S_t, A_t) + v̂^{π_e}(S_{t+1})).
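A corresponding per-trajectory sketch (ours, not from the paper; q_hat and v_hat are assumed to be the approximate model's value functions, and the undiscounted form above is followed):

# Sketch of the per-trajectory DR estimate.
def dr_return(trajectory, pi_e, pi_b, q_hat, v_hat):
    """trajectory: list of (s, a, r, s_next) tuples sampled from pi_b."""
    s0 = trajectory[0][0]
    total, weight = v_hat(s0), 1.0
    for s, a, r, s_next in trajectory:
        weight *= pi_e(a, s) / pi_b(a, s)            # w_{pi_e,t} / w_{pi_theta,t}
        total += weight * (r - q_hat(s, a) + v_hat(s_next))
    return total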
We can reduce the mean squared error of DR with gradient
descent using unbiased estimates of the following corollary
to Theorem 1:
Corollary 1.
∂/∂θ MSE[DR(H, θ)] = E[ DR(H, θ)² Σ_{t=0}^{L} ∂/∂θ log π_θ(A_t | S_t) − 2 DR(H, θ) ( Σ_{t=0}^{L} γ^t δ_t (w_{π_e,t} / w_{θ,t}) Σ_{i=0}^{t} ∂/∂θ log π_θ(A_i | S_i) ) ],
where δ_t = R_t − q̂(S_t, A_t) + v̂(S_{t+1}) and the expectation is taken over H ∼ π_θ.
The first term of ∂/∂θ MSE is analogous to the gradient of the importance-sampling estimate with IS(H, θ) replaced by DR(H, θ). The second term accounts for the covariance of the DR terms.
[3] DR lowers the variance of per-decision importance-sampling, which importance samples the per time-step reward.
AS and DR both assume access to a model, however, they
make no assumption about where the model comes from
except that it must be independent of the trajectories used to
compute the final estimate. In practice, AS and DR perform
best when all trajectories are used to estimate the model and
then used to estimate ρ(πe ) (Thomas & Brunskill, 2016).
However, for DR-BPG, changes to the model change the
surface of the MSE objective we seek to minimize and thus
DR-BPG will only converge once the model stops changing. In our experiments, we consider both a changing and
a fixed model.
4.2. Connection to REINFORCE
BPG is closely related to existing work in policy gradient RL (c.f., (Sutton et al., 2000)) and we draw connections between one such method and BPG to illustrate how
BPG changes the distribution of trajectories. REINFORCE
(Williams, 1992) attempts to maximize ρ(πθ ) through gradient ascent on ρ(πθ ) using the following unbiased gradient
of ρ(πθ ):
∂/∂θ ρ(π_θ) = E[ g(H) Σ_{t=0}^{L} ∂/∂θ log π_θ(A_t | S_t) | H ∼ π_θ ].
Intuitively, REINFORCE increases the probability of all
actions taken during H as a function of g(H). This update increases the probability of actions that lead to high
return trajectories. BPG can be interpreted as REINFORCE where the return of a trajectory is the square of
its importance-sampled return. Thus BPG increases the
probability of all actions taken along H as a function of
IS(H, θ)2 . The magnitude of IS(H, θ)2 depends on two
qualities of H:
1. g(H)2 is large (i.e., a high magnitude event).
2. H is rare relative to its probability under the evaluation policy (i.e., ∏_{t=0}^{L} π_e(A_t | S_t) / π_θ(A_t | S_t) is large).
These two qualities demonstrate a balance in how BPG
changes trajectory probabilities. Increasing the probability of a trajectory under πθ will decrease IS(H, θ)2 and so
BPG increases the probability of a trajectory when g(H)2
is large enough to offset the decrease in IS(H, θ)2 caused
by decreasing the importance weight.
5. Empirical Study
This section presents an empirical study of variance reduction through behavior policy search. We design our experiments to answer the following questions:
• Can behavior policy search with BPG reduce policy
evaluation MSE compared to on-policy estimates in
both tabular and continuous domains?
• Does adapting the behavior policy of the Doubly Robust estimator with DR-BPG lower the MSE of the
Advantage Sum estimator?
• Does the rarity of actions that cause high magnitude
rewards affect the performance gap between BPG and
Monte Carlo estimates?
5.1. Experimental Set-up
We address our first experimental question by evaluating
BPG in three domains. We briefly describe each domain
here; full details are available in appendix C.
The first domain is a 4x4 Gridworld. We obtain two evaluation policies by applying REINFORCE to this task, starting from a policy that selects actions uniformly at random.
We then select one evaluation policy, π1 , from the early
stages of learning – an improved policy but still far from
converged – and one after learning has converged, π2 . We
run all experiments once with πe := π1 and a second time
with πe := π2 .
Our second and third tasks are the continuous control Cartpole Swing Up and Acrobot tasks implemented within RLLAB (Duan et al., 2016). The evaluation policy in each domain is a neural network that maps the state to the mean of a
Gaussian distribution. Policies are partially optimized with
trust-region policy optimization (Schulman et al., 2015) applied to a randomly initialized policy.
5.2. Main Results
Gridworld Experiments Figure 1 compares BPG to
Monte Carlo for both Gridworld policies, π1 and π2 . Our
main point of comparison is the mean squared error (MSE)
of both estimates at iteration i over 100 trials. For π1 , BPG
significantly reduces the MSE of on-policy estimates (Figure 1a). For π2 , BPG also reduces MSE, however, it is only
a marginal improvement.
At the end of each trial we used the final behavior policy to collect 100 more trajectories and estimate ρ(πe ). In
comparison to a Monte Carlo estimate with 100 trajectories
from π1 , MSE is 85.48 % lower with this improved behavior policy. For π2 , the MSE is 31.02 % lower. This result
demonstrates that BPG can find behavior policies that substantially lower MSE.
To understand the disparity in performance between these
two instances of policy evaluation, we plot the distribution
of g(H) under πe (Figures 1c and 1d). These plots show
the variance of π1 to be much higher; it sometimes samples
returns with twice the magnitude of any sampled by π2 . To
quantify this difference,
we also measure the variance of IS(H, θ_i) as E[IS(H)² | H ∼ π_{θ_i}] − E[IS(H) | H ∼ π_{θ_i}]², where the expectations are estimated with 10,000 trajectories. This evaluation is repeated 5 times per iteration and the reported variance is the mean over these evaluations. The decrease in variance for each policy is shown in Figure 1e. The high initial variance means there is much more room for BPG to improve the behavior policy when θ_e is the partially optimized policy.
We also test the sensitivity of BPG to the learning rate parameter. A critical issue in the use of BPG is selecting the step size parameter α. If α is set too high we risk making too large an update to θ, potentially stepping to a worse behavior policy. If we are too conservative then it will take many iterations for a noticeable improvement over Monte Carlo estimation. Figure 1f shows variance reduction for a number of different α values in the Gridworld domain. We found BPG in this domain was robust to a variety of step size values. We do not claim this result is representative for all problem domains; step size selection in the behavior policy search problem is an important area for future work.
Figure 1: Gridworld experiments when π_e is a partially optimized policy, π_1 (1a), and a converged policy, π_2 (1b). The first and second rows give results for π_1 on the left and π_2 on the right. Results are averaged over 100 trials of 1000 iterations with error bars given for 95% confidence intervals. In both instances, BPG lowers MSE more than on-policy Monte Carlo returns (statistically significant, p < 0.05). The second row shows the distribution of returns under the two different π_e. Figure 1e shows a substantial decrease in variance when evaluating π_1 with BPG and a lesser decrease when evaluating π_2 with BPG. Figure 1f shows the effect of varying the step-size parameter for representative α (BPG diverged for high settings of α). We ran BPG for 250 iterations per value of α and averaged over 5 trials. Axes in 1a, 1b, and 1e are log-scaled. Panels: (a) Mean Squared Error, (b) Mean Squared Error, (c) Histogram of π_1 Returns, (d) Histogram of π_2 Returns, (e) Variance Reduction, (f) Learning Rate Sensitivity.
Figure 2: Mean squared error reduction on the Cart-pole Swing Up and Acrobot domains. We adapt the behavior policy for 200 iterations and average results over 100 trials. Error bars are for 95% confidence intervals. Panels: (a) Cart-pole Swing Up MSE, (b) Acrobot MSE.
Continuous Control Figure 2 shows reduction of MSE
on the Cartpole Swing-up and Acrobot domains. Again we
see that BPG reduces MSE faster than Monte Carlo evaluation. In contrast to the discrete Gridworld experiment,
this experiment demonstrates the applicability of BPG to
the continuous control setting. While BPG significantly
outperforms Monte Carlo evaluation in Cart-pole Swingup, the gap is much smaller in Acrobot. This result also
demonstrates BPG (and behavior policy search) when the
policy must generalize across different states.
5.3. Control Variate Extensions
In this section, we evaluate the combination of model-based control variates with behavior policy search. Specifically, we compare the AS estimator with Doubly Robust
BPG (DR-BPG). In these experiments we use a 10x10
stochastic gridworld. The added stochasticity increases the
difficulty of building an accurate model from trajectories.
Since these methods require a model we construct this
model in one of two ways. The first method uses all trajectories in D to build the model and then uses the same set to
estimate ρ(πe ) with ASE or DR. The second method uses
trajectories from the first 10 iterations to build the model
and then fixes the model for the remaining iterations. For
DR-BPG, behavior policy search starts at iteration 10 under this second condition. We call the first method “update” and the second method “fixed.” The update method invalidates the theoretical guarantees of these methods but learns a more accurate model. In both instances, we build maximum-likelihood tabular models.
(a) Control Variate MSE (b) Rare Event Improvement
Figure 3: 3a: Comparison of DR and ASE on a larger stochastic Gridworld. For the fixed-model methods, the significant drop in MSE at iteration 10 is due to the introduction of the model control variate. For clarity we do not show error bars. The difference between the final estimate of DR-BPG and ASE with the fixed model is statistically significant (p < 0.05); the difference between the same methods with a constantly improving model is not. 3b: Varying the probability of a high-rewarding terminal action in the Gridworld domain. Each point on the horizontal axis is the probability of taking this action. The vertical axis gives the relative decrease in variance after adapting θ for 500 iterations. Denoting the initial variance as vi and the final variance as vf, the relative decrease is computed as (vi − vf)/vi. Error bars for 95% confidence intervals are given but are small.
Figure 3 demonstrates that combining BPG with a model-based control variate (DR-BPG) can lead to further reduction of MSE compared to the control variate alone (ASE).
Specifically, with the fixed model, DR-BPG outperformed
all other methods. DR-BPG using the update method for
building the model performed competitively with ASE although not statistically significantly better. We also evaluate the final learned behavior policy of the fixed model
variant of DR-BPG. For a batch size of 100 trajectories,
the DR estimator with this behavior policy improves upon
the ASE estimator with the same model by 56.9 %.
For DR-BPG, estimating the model with all data still allowed steady progress towards lower variance. This result
is interesting since a changing model changes the surface
of our variance objective and thus gradient descent on the
variance has no theoretical guarantees of convergence. Empirically, we observe that setting the learning rate for DR-BPG was more challenging with either model type. Thus, while we have shown that BPG can be combined with control variates, more work is needed to produce a robust method.
5.4. Rareness of Event
Our final experiment aims to understand how the gap between on- and off-policy variance is affected by the probability of rare events. The intuition for why behavior policy search can lower the variance of on-policy estimates is
that a well selected behavior policy can cause rare and high
magnitude events to occur. We test this intuition by varying
the probability of a rare, high magnitude event and observing how this change affects the performance gap between
on- and off-policy evaluation. For this experiment, we use
a variant of the deterministic Gridworld where taking the
UP action in the initial state (the upper left corner) causes
a transition to the terminal state with a reward of +50. We
use π1 from our earlier Gridworld experiments but we vary
the probability of choosing UP when in the initial state. So
with probability p the agent will receive a large reward and
end the trajectory. We use a constant learning rate of 10−5
for all values of p and run BPG for 500 iterations. We plot
the relative decrease of the variance as a function of p over
100 trials for each value of p. We use relative variance to
normalize across problem instances. Note that under this
measure, even when p is close to 1, the relative variance
is not equal to zero because as p approaches 1 the initial
variance also goes to zero.
This experiment illustrates that as the initial variance increases, the amount of improvement BPG can achieve increases. As p becomes closer to 1, the initial variance becomes closer to zero and BPG barely improves over the
variance of Monte Carlo (in terms of absolute variance
there is no improvement). When πe rarely takes the high-rewarding UP action (p close to 0), BPG improves policy evaluation by increasing the probability of this action.
This experiment supports our intuition for why off-policy
evaluation can outperform on-policy evaluation.
6. Related Work
Behavior policy search and BPG are closely related to
existing work on adaptive importance-sampling. While
adaptive importance-sampling has been studied in the estimation literature, we focus here on adaptive importance-sampling for MDPs and Markov Reward Processes (i.e., an
MDP with a fixed policy). Existing work on adaptive IS
in RL has considered changing the transition probabilities
to lower the variance of policy evaluation (Desai & Glynn,
2001; Frank et al., 2008) or lower the variance of policy
gradient estimates (Ciosek & Whiteson, 2017). Since the
transition probabilities are typically unknown in RL, adapting the behavior policy is a more general approach to adaptive IS. Ciosek and Whiteson also adapt the distribution of
trajectories with gradient descent on the variance (Ciosek
& Whiteson, 2017) with respect to parameters of the transition probabilities. The main focus of this work is increasing
the probability of simulated rare events so that policy improvement can learn an appropriate response. In contrast,
we address the problem of policy evaluation and differentiate with respect to the (known) policy parameters.
The cross-entropy method (CEM) is a general method for
adaptive importance-sampling. CEM attempts to minimize
the Kullback-Leibler divergence between the current sampling distribution and the optimal sampling distribution. As
discussed in Section 3.1, this optimal behavior policy only
exists under a set of restrictive conditions. In contrast we
adapt the behavior policy by minimizing variance.
Other methods exist for lowering the variance of on-policy
estimates. In addition to the control variate technique used
by the Advantage Sum estimator (Zinkevich et al., 2006;
White & Bowling, 2009), Veness et al. consider using common random numbers and antithetic variates to reduce the
variance of roll-outs in Monte Carlo Tree Search (MCTS)
(2011). These techniques require a model of the environment (as is typical for MCTS) and do not appear to be applicable to the general RL policy evaluation problem. BPG
could potentially be applied to find a lower variance rollout policy for MCTS.
In this work we have focused on unbiased policy evaluation. When the goal is to minimize MSE it is often permissible to use biased methods such as temporal difference
learning (van Seijen & Sutton, 2014), model-based policy
evaluation (Kearns & Singh, 2002; Strehl et al., 2009), or
variants of weighted importance sampling (Precup et al.,
2000). It may be possible to use similar ideas to BPG to
reduce bias and variance although this appears to be difficult since the bias contribution to the mean squared error is squared and thus any gradient involving bias requires
knowledge of the estimator’s bias. We leave behavior policy search with biased off-policy methods to future work.
7. Discussion and Future Work
Our experiments demonstrate that behavior policy search
with BPG can lower the variance of policy evaluation. One
open question is characterizing the settings where adapting
the behavior policy substantially improves over on-policy
estimates. Towards answering this question, our Gridworld
experiment showed that when πe has little variance, BPG
can only offer marginal improvement. BPG increases the
probability of observing rare events with a high magnitude.
If the evaluation policy never sees such events then there
is little benefit to using BPG. However, in expectation and
with an appropriately selected step-size, BPG will never
lower the data-efficiency of policy evaluation.
It is also necessary that the evaluation policy contributes to
the variance of the returns. If all variance is due to the environment then it seems unlikely that BPG will offer much
improvement. For example, Ciosek and Whiteson (2017)
consider a variant of the Mountain Car task where the dynamics can trigger a rare event — independent of the action
— in which rewards are multiplied by 1000. No behavior
policy adaptation can lower the variance due to this event.
One limitation of gradient-based BPS methods is the necessity of good step-size selection. In theory, BPG can never
lead to worse policy evaluation compared to on-policy estimates. In practice, a poorly selected step-size may cause a
step to a worse behavior policy at step i which may increase
the variance of the gradient estimate at step i + 1. Future
work could consider methods for adaptive step-sizes, second order methods, or natural behavior policy gradients.
One interesting direction for future work is incorporating
behavior policy search into policy improvement. A similar
idea was explored by Ciosek and Whiteson (2017), who used off-environment learning to improve the performance of policy gradient methods. The method presented in that work is limited to simulated environments with differentiable dynamics. Adapting the behavior policy is a potentially much more general approach.
8. Conclusion
We have introduced the behavior policy search problem
in order to improve estimation of ρ(πe ) for an evaluation
policy πe . We present a solution — Behavior Policy Gradient — for this problem which adapts the behavior policy with stochastic gradient descent on the variance of the
importance-sampling estimator. Experiments demonstrate
BPG lowers the mean squared error of estimates of ρ(πe )
compared to on-policy estimates. We also demonstrate
BPG can further decrease the MSE of estimates in conjunction with a model-based control variate method.
9. Acknowledgements
We thank Daniel Brown and the anonymous reviewers for useful comments on the work and its presentation. This work has
taken place in the Personal Autonomous Robotics Lab (PeARL)
and Learning Agents Research Group (LARG) at the Artificial Intelligence Laboratory, The University of Texas at Austin. PeARL
research is supported in part by NSF (IIS-1638107, IIS-1617639).
LARG research is supported in part by NSF (CNS-1330072,
CNS-1305287, IIS-1637736, IIS-1651089), ONR (21C184-01),
AFOSR (FA9550-14-1-0087), Raytheon, Toyota, AT&T, and
Lockheed Martin. Josiah Hanna is supported by an NSF Graduate Research Fellowship. Peter Stone serves on the Board of Directors of Cogitai, Inc. The terms of this arrangement have been
reviewed and approved by the University of Texas at Austin in
accordance with its policy on objectivity in research.
References
Bastani, Meysam. Model-free intelligent diabetes management using machine learning. Master's thesis, Department of Computing Science, University of Alberta, 2014.
Bertsekas, Dimitri P. and Tsitsiklis, John N. Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3):627–642, 2000.
Ciosek, Kamil and Whiteson, Shimon. OFFER: Off-environment reinforcement learning. In Proceedings
of the 31st AAAI Conference on Artificial Intelligence
(AAAI), 2017.
Desai, Paritosh Y and Glynn, Peter W. Simulation in optimization and optimization in simulation: A Markov
chain perspective on adaptive Monte Carlo algorithms.
In Proceedings of the 33rd conference on Winter simulation, pp. 379–384. IEEE Computer Society, 2001.
Duan, Yan, Chen, Xi, Houthooft, Rein, Schulman, John,
and Abbeel, Pieter. Benchmarking deep reinforcement
learning for continuous control. In Proceedings of
the 33rd International Conference on Machine Learning,
2016.
Frank, Jordan, Mannor, Shie, and Precup, Doina. Reinforcement learning in the presence of rare events. In Proceedings of the 25th International Conference on Machine learning, pp. 336–343. ACM, 2008.
Hammersley, J. M. and Handscomb, D. C. Monte Carlo Methods. Methuen & Co. Ltd., London, 1964.
Jiang, Nan and Li, Lihong. Doubly robust off-policy evaluation for reinforcement learning. In Proceedings of
the 33rd International Conference on Machine Learning
(ICML), 2016.
Kearns, Michael and Singh, Satinder. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.
Lillicrap, Timothy P., Hunt, Jonathan J., Pritzel, Alexander,
Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David,
and Wierstra, Daan. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.
Precup, D., Sutton, R. S., and Singh, S. Eligibility traces
for off-policy policy evaluation. In Proceedings of the
17th International Conference on Machine Learning, pp.
759–766, 2000.
Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan,
Michael, and Abbeel, Pieter. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
Sen, P.K. and Singer, J.M. Large Sample Methods in Statistics: An Introduction with Applications. Chapman &
Hall, 1993.
Strehl, Alexander L, Li, Lihong, and Littman, Michael L.
Reinforcement learning in finite MDPs: PAC analysis.
Journal of Machine Learning Research, 10:2413–2444,
2009.
Sutton, Richard S. and Barto, Andrew G. Reinforcement
Learning: An Introduction. MIT Press, 1998.
Sutton, Richard S., McAllester, David, Singh, Satinder, and
Mansour, Yishay. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 13th Conference on Neural Information Processing Systems (NIPS), 2000.
Theocharous, Georgios, Thomas, Philip S., and
Ghavamzadeh, Mohammad.
Personalized ad recommendation systems for life-time value optimization
with guarantees. In Proceedings of the 27th International Joint Conference on Artificial Intelligence
(IJCAI), pp. 1806–1812, 2015.
Thomas, Philip S. A notation for Markov decision processes. ArXiv, arXiv:1512.09075v1, 2015.
Thomas, Philip S. and Brunskill, Emma. Data-efficient
off-policy policy evaluation for reinforcement learning.
In Proceedings of the 33rd International Conference on
Machine Learning (ICML), 2016.
Thomas, Philip S., Theocharous, Georgios, and
Ghavamzadeh, Mohammad. High confidence off-policy
evaluation. In Proceedings of the AAAI Conference on
Artificial Intelligence (AAAI), 2015.
van Seijen, Harm and Sutton, Richard S. True online TD
(λ). In Proceedings of the 31st International Conference
on Machine Learning (ICML), volume 14, pp. 692–700,
2014.
Veness, J., Lanctot, M., and Bowling, M. Variance reduction in Monte-Carlo tree search. In Proceedings of the
24th Conference on Neural Information Processing Systems, pp. 1836–1844, 2011.
White, M. and Bowling, M. Learning a value analysis tool
for agent evaluation. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), pp. 1976–1981, 2009.
Williams, Ronald J. Simple statistical gradient-following
algorithms for connectionist reinforcement learning.
Machine learning, 8(3-4):229–256, 1992.
Zinkevich, M., Bowling, M., Bard, N., Kan, M., and
Billings, D. Optimal unbiased estimators for evaluating
agent performance. In Proceedings of the 21st National
Conference on Artificial Intelligence (AAAI), pp. 573–
578, 2006.
A. Proof of Theorem 1
In Appendix A, we give the full derivation of our primary theoretical contribution — the importance-sampling (IS) variance
gradient. We also present the variance gradient for the doubly-robust (DR) estimator.
We first derive an analytic expression for the gradient of the variance of an arbitrary, unbiased off-policy policy evaluation estimator, OPE(H, θ). Importance-sampling is one such off-policy policy evaluation estimator. From our general
derivation we derive the gradient of the variance of the IS estimator and then extend to the DR estimator.
A.1. Variance Gradient of an Unbiased Off-Policy Policy Evaluation Method
We first present a lemma from which \(\frac{\partial}{\partial\theta}\operatorname{MSE}[\operatorname{IS}(H,\theta)]\) and \(\frac{\partial}{\partial\theta}\operatorname{MSE}[\operatorname{DR}(H,\theta)]\) can both be derived.
Lemma 1 gives the gradient of the mean squared error (MSE) of an unbiased off-policy policy evaluation method.
Lemma 1.
\[
\frac{\partial}{\partial\theta}\operatorname{MSE}[\operatorname{OPE}(H,\theta)]
= \mathbf{E}\!\left[\operatorname{OPE}(H,\theta)^2\left(\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_t\mid S_t)\right)
+ \frac{\partial}{\partial\theta}\operatorname{OPE}(H,\theta)^2 \,\middle|\, H\sim\pi_{\theta}\right]
\]
Proof. We begin by decomposing Pr(H|π) into two components: one that depends on π and the other that does not. Let
\[
w_{\pi}(H) := \prod_{t=0}^{L}\pi(A_t\mid S_t), \qquad p(H) := \Pr(H\mid\pi)/w_{\pi}(H),
\]
for any π such that H ∈ supp(π) (any such π will result in the same value of p(H)). These two definitions mean that Pr(H|π) = p(H) w_π(H).
The MSE of the OPE estimator is given by:
\[
\operatorname{MSE}[\operatorname{OPE}(H,\theta)] = \operatorname{Var}[\operatorname{OPE}(H,\theta)]
+ \underbrace{\bigl(\mathbf{E}[\operatorname{OPE}(H,\theta)] - \rho(\pi_e)\bigr)^2}_{\text{bias}^2}.
\]
Since the OPE estimator is unbiased, i.e., E[OPE(H, θ)] = ρ(πe), the second term is zero and so:
\[
\operatorname{MSE}[\operatorname{OPE}(H,\theta)] = \operatorname{Var}[\operatorname{OPE}(H,\theta)]
= \mathbf{E}\bigl[\operatorname{OPE}(H,\theta)^2 \mid H\sim\pi_{\theta}\bigr] - \mathbf{E}\bigl[\operatorname{OPE}(H,\theta)\mid H\sim\pi_{\theta}\bigr]^2
= \mathbf{E}\bigl[\operatorname{OPE}(H,\theta)^2 \mid H\sim\pi_{\theta}\bigr] - \rho(\pi_e)^2.
\]
To obtain the MSE gradient, we differentiate MSE[OPE(H, θ)] with respect to θ:
\begin{align}
\frac{\partial}{\partial\theta}\operatorname{MSE}[\operatorname{OPE}(H,\theta)]
&= \frac{\partial}{\partial\theta}\Bigl(\mathbf{E}\bigl[\operatorname{OPE}(H,\theta)^2 \mid H\sim\pi_{\theta}\bigr] - \rho(\pi_e)^2\Bigr) \nonumber\\
&= \frac{\partial}{\partial\theta}\,\mathbf{E}_{H\sim\pi_{\theta}}\bigl[\operatorname{OPE}(H,\theta)^2\bigr] \nonumber\\
&= \frac{\partial}{\partial\theta}\sum_{H}\Pr(H\mid\theta)\operatorname{OPE}(H,\theta)^2 \nonumber\\
&= \sum_{H}\Pr(H\mid\theta)\frac{\partial}{\partial\theta}\operatorname{OPE}(H,\theta)^2
+ \operatorname{OPE}(H,\theta)^2\frac{\partial}{\partial\theta}\Pr(H\mid\theta) \nonumber\\
&= \sum_{H}\Pr(H\mid\theta)\frac{\partial}{\partial\theta}\operatorname{OPE}(H,\theta)^2
+ \operatorname{OPE}(H,\theta)^2\, p(H)\frac{\partial}{\partial\theta}w_{\pi_{\theta}}(H) \tag{1}
\end{align}
Consider the last factor of the last term in more detail:
\begin{align*}
\frac{\partial}{\partial\theta}w_{\pi_{\theta}}(H)
&= \frac{\partial}{\partial\theta}\prod_{t=0}^{L}\pi_{\theta}(A_t\mid S_t)\\
&\stackrel{(a)}{=} \left(\prod_{t=0}^{L}\pi_{\theta}(A_t\mid S_t)\right)\left(\sum_{t=0}^{L}\frac{\frac{\partial}{\partial\theta}\pi_{\theta}(A_t\mid S_t)}{\pi_{\theta}(A_t\mid S_t)}\right)\\
&= w_{\pi_{\theta}}(H)\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\bigl(\pi_{\theta}(A_t\mid S_t)\bigr),
\end{align*}
where (a) comes from the multi-factor product rule. Continuing from (1) we have that:
\[
\frac{\partial}{\partial\theta}\operatorname{MSE}[\operatorname{OPE}(H,\theta)]
= \mathbf{E}\!\left[\operatorname{OPE}(H,\theta)^2\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\bigl(\pi_{\theta}(A_t\mid S_t)\bigr)
+ \frac{\partial}{\partial\theta}\operatorname{OPE}(H,\theta)^2 \,\middle|\, H\sim\pi_{\theta}\right].
\]
A.2. Behavior Policy Gradient Theorem
We now use Lemma 1 to prove the Behavior Policy Gradient Theorem, which is our main theoretical contribution.
Theorem 2.
\[
\frac{\partial}{\partial\theta}\operatorname{MSE}[\operatorname{IS}(H,\theta)]
= \mathbf{E}\!\left[-\operatorname{IS}(H,\theta)^2\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_t\mid S_t)\,\middle|\, H\sim\pi_{\theta}\right]
\]
where the expectation is taken over H ∼ πθ.
Proof. We first derive ∂/∂θ IS(H, θ)^2; Theorem 1 then follows directly from using ∂/∂θ IS(H, θ)^2 as ∂/∂θ OPE(H, θ)^2 in Lemma 1.
\begin{align*}
\operatorname{IS}(H,\theta)^2 &= \left(\frac{w_{\pi_e}(H)}{w_{\theta}(H)}\,g(H)\right)^2\\
\frac{\partial}{\partial\theta}\operatorname{IS}(H,\theta)^2
&= \frac{\partial}{\partial\theta}\left(\frac{w_{\pi_e}(H)}{w_{\theta}(H)}\,g(H)\right)^2\\
&= 2\,g(H)\frac{w_{\pi_e}(H)}{w_{\theta}(H)}\cdot\frac{\partial}{\partial\theta}\left(g(H)\frac{w_{\pi_e}(H)}{w_{\theta}(H)}\right)\\
&\stackrel{(a)}{=} -2\,g(H)\frac{w_{\pi_e}(H)}{w_{\theta}(H)}\cdot g(H)\frac{w_{\pi_e}(H)}{w_{\theta}(H)}\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_t\mid S_t)\\
&= -2\operatorname{IS}(H,\theta)^2\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_t\mid S_t)
\end{align*}
where (a) comes from the multi-factor product rule and the likelihood-ratio trick (i.e., \(\tfrac{\partial}{\partial\theta}\pi_{\theta}(A\mid S)/\pi_{\theta}(A\mid S) = \tfrac{\partial}{\partial\theta}\log\pi_{\theta}(A\mid S)\)). Substituting this expression into Lemma 1 completes the proof:
\[
\frac{\partial}{\partial\theta}\operatorname{MSE}[\operatorname{IS}(H,\theta)]
= \mathbf{E}\!\left[-\operatorname{IS}(H,\theta)^2\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_t\mid S_t)\,\middle|\, H\sim\pi_{\theta}\right].
\]
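To make the result concrete, here is a minimal sketch (ours, not the authors' released code) of how the gradient in the theorem can be estimated from a batch of trajectories sampled from the current behavior policy. The callables logp_eval, logp_behavior, and grad_logp_behavior are assumed to be supplied by the user and are not part of the paper.

import numpy as np

def bpg_gradient_estimate(trajectories, logp_eval, logp_behavior, grad_logp_behavior):
    """Monte Carlo estimate of the MSE gradient from the theorem above.

    trajectories: list of trajectories, each a list of (s, a, r) tuples,
        sampled from the current behavior policy pi_theta.
    logp_eval(s, a): log pi_e(a|s) for the fixed evaluation policy.
    logp_behavior(s, a): log pi_theta(a|s) for the current behavior policy.
    grad_logp_behavior(s, a): gradient of log pi_theta(a|s) w.r.t. theta
        (a NumPy array with the same shape as theta).
    """
    grads = []
    for traj in trajectories:
        # Importance-sampled return IS(H, theta) = (w_pi_e / w_theta) * g(H).
        log_ratio = sum(logp_eval(s, a) - logp_behavior(s, a) for (s, a, _) in traj)
        g = sum(r for (_, _, r) in traj)  # return g(H), taking gamma = 1 for simplicity
        is_estimate = np.exp(log_ratio) * g
        # Score function of the trajectory under pi_theta.
        score = sum(grad_logp_behavior(s, a) for (s, a, _) in traj)
        # Per-trajectory sample of -IS(H, theta)^2 * sum_t d/dtheta log pi_theta(A_t|S_t).
        grads.append(-(is_estimate ** 2) * score)
    return np.mean(grads, axis=0)

A stochastic gradient descent step on the variance is then θ ← θ − α·ĝ, where ĝ is the returned estimate and α is the step size.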
A.3. Doubly Robust Estimator
Our final theoretical result is a corollary to the Behavior Policy Gradient Theorem: an extension of the IS variance gradient to the Doubly Robust (DR) estimator. Recall that for a single trajectory DR is given as:
\[
\operatorname{DR}(H,\theta) := \hat v^{\pi_e}(S_0) + \sum_{t=0}^{L}\gamma^{t}\frac{w_{\pi_e,t}}{w_{\theta,t}}\bigl(R_t - \hat q^{\pi_e}(S_t,A_t) + \hat v^{\pi_e}(S_{t+1})\bigr)
\]
where \(\hat v^{\pi_e}\) is the state-value function of πe under an approximate model, \(\hat q^{\pi_e}\) is the action-value function of πe under the model, and \(w_{\pi,t} := \prod_{j=0}^{t}\pi(A_j\mid S_j)\).
The gradient of the mean squared error of the DR estimator is given by the following corollary to the Behavior Policy Gradient Theorem:
Corollary 2.
\[
\frac{\partial}{\partial\theta}\operatorname{MSE}[\operatorname{DR}(H,\theta)]
= \mathbf{E}\!\left[\operatorname{DR}(H,\theta)^2\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_t\mid S_t)
- 2\operatorname{DR}(H,\theta)\sum_{t=0}^{L}\gamma^{t}\delta_t\frac{w_{\pi_e,t}}{w_{\theta,t}}\left(\sum_{i=0}^{t}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_i\mid S_i)\right)\right]
\]
where δt = Rt − q̂(St, At) + v̂(St+1) and the expectation is taken over H ∼ πθ.
Proof. As with Theorem 1, we first derive ∂/∂θ DR(H, θ)^2; Corollary 1 then follows directly from using ∂/∂θ DR(H, θ)^2 as ∂/∂θ OPE(H, θ)^2 in Lemma 1.
\begin{align*}
\operatorname{DR}(H,\theta)^2 &= \left(\hat v^{\pi_e}(S_0) + \sum_{t=0}^{L}\gamma^{t}\frac{w_{\pi_e,t}}{w_{\theta,t}}\bigl(R_t - \hat q^{\pi_e}(S_t,A_t) + \hat v^{\pi_e}(S_{t+1})\bigr)\right)^2\\
\frac{\partial}{\partial\theta}\operatorname{DR}(H,\theta)^2
&= \frac{\partial}{\partial\theta}\left(\hat v^{\pi_e}(S_0) + \sum_{t=0}^{L}\gamma^{t}\frac{w_{\pi_e,t}}{w_{\theta,t}}\bigl(R_t - \hat q^{\pi_e}(S_t,A_t) + \hat v^{\pi_e}(S_{t+1})\bigr)\right)^2\\
&= 2\operatorname{DR}(H,\theta)\,\frac{\partial}{\partial\theta}\left(\hat v^{\pi_e}(S_0) + \sum_{t=0}^{L}\gamma^{t}\frac{w_{\pi_e,t}}{w_{\theta,t}}\bigl(R_t - \hat q^{\pi_e}(S_t,A_t) + \hat v^{\pi_e}(S_{t+1})\bigr)\right)\\
&= -2\operatorname{DR}(H,\theta)\sum_{t=0}^{L}\gamma^{t}\frac{w_{\pi_e,t}}{w_{\theta,t}}\bigl(R_t - \hat q^{\pi_e}(S_t,A_t) + \hat v^{\pi_e}(S_{t+1})\bigr)\sum_{i=0}^{t}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_i\mid S_i).
\end{align*}
Thus the DR(H, θ) gradient is:
\[
\frac{\partial}{\partial\theta}\operatorname{MSE}[\operatorname{DR}(H,\theta)]
= \mathbf{E}\!\left[\operatorname{DR}(H,\theta)^2\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_t\mid S_t)
- 2\operatorname{DR}(H,\theta)\sum_{t=0}^{L}\gamma^{t}\frac{w_{\pi_e,t}}{w_{\theta,t}}\bigl(R_t - \hat q^{\pi_e}(S_t,A_t) + \hat v^{\pi_e}(S_{t+1})\bigr)\sum_{i=0}^{t}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_i\mid S_i)\,\middle|\, H\sim\pi_{\theta}\right].
\]
The expression for the DR behavior policy gradient is more complex than the expression for the IS behavior policy gradient. Lowering the variance of DR involves accounting for the covariance of the sum of terms. Intuitively, accounting for the covariance increases the complexity of the expression for the gradient.
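For illustration only (this is our sketch under the notation above, not code from the paper), the single-trajectory DR estimate can be computed as follows; v_hat and q_hat are assumed to come from the approximate model and to return 0 for terminal states.

import numpy as np

def doubly_robust_estimate(traj, logp_eval, logp_behavior, v_hat, q_hat, gamma=1.0):
    """Single-trajectory Doubly Robust estimate DR(H, theta), following the formula above.

    traj: list of (s, a, r, s_next) tuples drawn from the behavior policy.
    logp_eval / logp_behavior: log pi_e(a|s) and log pi_theta(a|s).
    v_hat(s), q_hat(s, a): state- and action-value functions of pi_e under the
        approximate model (assumed to return 0 for terminal states).
    """
    s0 = traj[0][0]
    estimate = v_hat(s0)
    log_w = 0.0  # running log of w_{pi_e,t} / w_{theta,t}
    for t, (s, a, r, s_next) in enumerate(traj):
        log_w += logp_eval(s, a) - logp_behavior(s, a)
        correction = r - q_hat(s, a) + v_hat(s_next)
        estimate += (gamma ** t) * np.exp(log_w) * correction
    return estimate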
B. BPG’s Off-Policy Estimates are Unbiased
This appendix proves that BPG’s estimate is an unbiased estimate of ρ(πe ). If only trajectories from a single θ i were used
then clearly IS(·, θ i ) is an unbiased estimate of ρ(πe ). The difficulty is that the BPG’s estimate at iteration n depends on all
θ i for i = 1 . . . n and each θ i is not independent of the others. Nevertheless, we prove here that BPG produces an unbiased
estimate of ρ(πe) at each iteration. Specifically, we will show that E[IS(Hn, θn) | θ0 = θe] is an unbiased estimate of ρ(πe), where the IS estimate is conditioned on θ0 = θe. To make the dependence of θi on θi−1 explicit, we will write f(Hi−1) := θi where Hi−1 ∼ πθi−1. We use Pr(h|θ) as shorthand for Pr(H = h|θ).
\begin{align*}
\mathbf{E}\bigl[\operatorname{IS}(H_n,\theta_n)\mid\theta_0=\theta_e\bigr]
&= \sum_{h_0}\Pr(h_0\mid\theta_0)\sum_{h_1}\Pr(h_1\mid f(h_0))\cdots\underbrace{\sum_{h_n}\Pr(h_n\mid f(h_{n-1}))\operatorname{IS}(h_n)}_{\rho(\pi_e)}\\
&= \rho(\pi_e)\sum_{h_0}\Pr(h_0\mid\theta_0)\sum_{h_1}\Pr(h_1\mid f(h_0))\cdots\\
&= \rho(\pi_e)
\end{align*}
Notice that, even though BPG’s off-policy estimates at each iteration are unbiased, they are not statistically independent.
This means that concentration inequalities, like Hoeffding’s inequality, cannot be applied directly. We conjecture that
the conditional independence properties of BPG (specifically that Hi is independent of Hi−1 given θi ), are sufficient for
Hoeffding’s inequality to be applicable.
C. Supplemental Experiment Description
This appendix contains experimental details in addition to the details contained in Section 5 of the paper.
Gridworld: This domain is a 4x4 Gridworld with a terminal state with reward 10 at (3, 3), a state with reward −10 at
(1, 1), a state with reward 1 at (1, 3), and all other states having reward −1. The action set contains the four cardinal directions and actions move the agent in its intended direction (except when moving into a wall which produces no movement).
The agent begins in (0,0), γ = 1, and L = 100. All policies use softmax action selection with temperature 1 where the
probability of taking an action a in a state s is given by:
\[
\pi(a\mid s) = \frac{e^{\theta_{sa}}}{\sum_{a'} e^{\theta_{sa'}}}
\]
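As a small illustration (our sketch, not the authors' implementation), tabular softmax action selection with temperature 1 can be written as follows, with theta stored as a |S| x |A| array of preferences; the array layout is an assumption.

import numpy as np

def softmax_policy(theta, s):
    """Action probabilities pi(.|s) = exp(theta[s,a]) / sum_a' exp(theta[s,a'])."""
    prefs = theta[s] - np.max(theta[s])  # subtract the max for numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def sample_action(theta, s, rng=np.random.default_rng()):
    """Draw an action from the softmax policy in state s."""
    return rng.choice(len(theta[s]), p=softmax_policy(theta, s))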
We obtain two evaluation policies by applying REINFORCE to this task, starting from a policy that selects actions uniformly at random. We then select one evaluation policy from the early stages of learning – an improved policy but still far
from converged –, π1 , and one after learning has converged, π2 . We run our set of experiments once with πe := π1 and a
second time with πe := π2 . The ground truth value of ρ(πe ) is computed with value iteration for both πe .
Stochastic Gridworld: The layout of this Gridworld is identical to the deterministic Gridworld except the terminal state
is at (9, 9) and the +1 reward state is at (1, 9). When the agent moves, it moves in its intended direction with probability
0.9, otherwise it goes left or right with equal probability. Noise in the environment increases the difficulty of building an
accurate model from trajectories.
Continuous Control: We evaluate BPG on two continuous control tasks: Cart-pole Swing Up and Acrobot. Both tasks
are implemented within RLLAB (Duan et al., 2016) (full details of the tasks are given in Appendix 1.1). The single task
modification we make is that in Cart-pole Swing Up, when a trajectory terminates due to moving out of bounds we give
a penalty of −1000. This modification increases the variance of πe . We use γ = 1 and L = 50. Policies are represented
as conditional Gaussians with mean determined by a neural network with two hidden layers of 32 tanh units each and
a state-independent diagonal covariance matrix. In Cart-pole Swing Up, πe was learned with 10 iterations of the TRPO
algorithm (Schulman et al., 2015) applied to a randomly initialized policy. In Acrobot, πe was learned with 60 iterations.
The ground truth value of ρ(πe ) in both domains is computed with 1,000,000 Monte Carlo roll-outs.
Domain Independent Details In all experiments we subtract a constant control variate (or baseline) in the gradient estimate from Theorem 1. The baseline is \(b_i = \mathbf{E}\bigl[-\operatorname{IS}(H,\theta_{i-1})^2 \mid H\sim\pi_{\theta_{i-1}}\bigr]\) and our new gradient estimate is:
\[
\mathbf{E}\!\left[\bigl(-\operatorname{IS}(H,\theta)^2 - b_i\bigr)\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_t\mid S_t)\,\middle|\, H\sim\pi_{\theta}\right]
\]
Adding or subtracting a constant does not change the gradient in expectation since \(b_i\cdot\mathbf{E}\bigl[\sum_{t=0}^{L}\frac{\partial}{\partial\theta}\log\pi_{\theta}(A_t\mid S_t)\bigr] = 0\).
BPG with a baseline has lower variance so that the estimated gradient is closer in direction to the true gradient.
We use batch sizes of 100 trajectories per iteration for Gridworld experiments and size 500 for the continuous control tasks.
The step-size parameter was determined by a sweep over the range [10^{-2}, 10^{-6}].
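A minimal sketch of the baseline-corrected gradient estimate described above (illustrative only; the per-trajectory IS values and score functions are assumed to be precomputed, and the baseline is taken from the previous iteration's batch):

import numpy as np

def bpg_gradient_with_baseline(is_estimates, scores, baseline):
    """Baseline-corrected BPG gradient estimate.

    is_estimates: array of IS(H, theta) values, one per trajectory in the batch.
    scores: per-trajectory score functions sum_t d/dtheta log pi_theta(A_t|S_t),
        shape (batch, dim(theta)).
    baseline: scalar b_i, e.g. the mean of -IS^2 over the previous batch.
    """
    weights = -(np.asarray(is_estimates) ** 2) - baseline  # (-IS^2 - b_i)
    return np.mean(weights[:, None] * np.asarray(scores), axis=0)

# Example baseline update between iterations:
# baseline = np.mean(-(np.asarray(is_estimates) ** 2))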
Early Stopping Criterion In all experiments we run BPG for a fixed number of iterations. In general, BPS can
continue for a fixed number of iterations or until the variance of the IS estimator stops decreasing. The true variance
is unknown but can be estimated by sampling a set of k trajectories with θi and computing the uncentered variance \(\frac{1}{k}\sum_{j=1}^{k}\operatorname{OPE}(H_j,\theta_i)^2\). This measure can be used to empirically evaluate the quality of each θ or to determine when a BPS algorithm should terminate behavior policy improvement.
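A sketch of this stopping rule (ours; the tolerance and the choice of k are placeholders, not values from the paper):

import numpy as np

def uncentered_variance(ope_values):
    """(1/k) * sum_j OPE(H_j, theta_i)^2 over k sampled trajectories."""
    ope_values = np.asarray(ope_values)
    return np.mean(ope_values ** 2)

def should_stop(history, tol=1e-3):
    """Stop behavior policy improvement once the estimated (uncentered)
    variance no longer decreases by more than tol between iterations."""
    if len(history) < 2:
        return False
    return history[-2] - history[-1] < tol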
Sampling Constrained Asynchronous
Communication: How to Sleep Efficiently
arXiv:1501.05930v5 [] 24 Oct 2017
Venkat Chandar and Aslan Tchamkerten
Abstract—The minimum energy, and, more generally,
the minimum cost, to transmit one bit of information
has been recently derived for bursty communication when
information is available infrequently at random times at
the transmitter. Furthermore, it has been shown that even
if the receiver is constrained to sample only a fraction
ρ ∈ (0, 1] of the channel outputs, there is no capacity
penalty. That is, for any strictly positive sampling rate ρ,
the asynchronous capacity per unit cost is the same as
under full sampling, i.e., when ρ = 1. Moreover, there is
no penalty in terms of decoding delay.
The above results are asymptotic in nature, considering
the limit as the number B of bits to be transmitted
tends to infinity, while the sampling rate ρ remains fixed.
A natural question is then whether the sampling rate
ρ(B) can drop to zero without introducing a capacity (or
delay) penalty compared to full sampling. We answer this
question affirmatively. The main result of this paper is an
essentially tight characterization of the minimum sampling
rate. We show that any sampling rate that grows at least
as fast as ω(1/B) is achievable, while any sampling rate
that is o(1/B) yields unreliable communication. The
key ingredient in our improved achievability result is a
new, multi-phase adaptive sampling scheme for locating
transient changes, which we believe may be of independent
interest for certain change-point detection problems.
Index Terms—Asynchronous communication; bursty
communication; capacity per unit cost; energy; change detection; hypothesis testing; sequential analysis; sparse communication; sampling; synchronization; transient change
This work was supported in part by a grant and in part by a Chair of Excellence, both from the French National Research Agency (ANR-BSC and ANR-ACE, respectively). This work was presented in part at the 2015 International Symposium on Information Theory and at the 2016 Asilomar Conference on Signals, Systems, and Computers.
V. Chandar is with D. E. Shaw and Co., New York, NY 10036, USA. Email: [email protected].
A. Tchamkerten is with the Department of Communications and Electronics, Telecom ParisTech, 75634 Paris Cedex 13, France. Email: [email protected].
I. INTRODUCTION
In many emerging technologies, communication is sparse and asynchronous, but it is essential that when data is available, it is delivered to the destination as promptly and reliably as possible.
In [3] the authors characterized capacity per unit
cost as a function of the level of asynchronism for
the following model. There are B bits of information that are made available to the transmitter at
some random time ν, and need to be communicated to the receiver. The B bits are encoded into
a codeword of length n, and transmitted over a
memoryless channel using a sequence of symbols
that have costs associated with them. The rate R per
unit cost is B divided by the cost of the transmitted
sequence. Asynchronism is captured here by the fact
that the random time ν is not known a priori to
the receiver. However, both transmitter and receiver
know that ν is distributed uniformly over a time
horizon {1, 2, . . . , A}. At all times before and after
the actual transmission, the receiver observes pure
noise.
The goal of the receiver is to reliably decode the
information bits by sequentially observing the outputs of the channel. A main result in [3] is a single-letter characterization of the asynchronous capacity
per unit cost C(β), where β = (log A)/B denotes
the timing uncertainty per information bit. While
this result holds for arbitrary discrete memoryless
channels and arbitrary input costs, the underlying
model assumes that the receiver is always in the
listening mode: every channel output is observed
until the decoding instant.
In [8] it is shown that even if the receiver is
constrained to observe at most a fraction ρ ∈ (0, 1]
of the channel outputs the asynchronous capacity
per unit cost C(β, ρ) is not impacted by a sparse
output sampling, that is
C(β, ρ) = C(β)
for any asynchronism level β > 0 and sampling
frequency ρ ∈ (0, 1]. Moreover, the decoding delay
is minimal: the elapsed time between when information is available and when it is decoded is asymptotically the same as under full sampling.
This result uses the possibility for the receiver to
sample adaptively: the next sample can be chosen as
a function of past observed samples. In fact, under
non-adaptive sampling, it is still possible to achieve
the full sampling asynchronous capacity per unit
cost, but the decoding delay gets multiplied by a
factor 1/ρ. Therefore, adaptive sampling strategies
are of particular interest in the very sparse regime.
The results of [8] provide an achievability scheme
when the sampling frequency ρ is a strictly positive
constant. This raises the question of whether ρ =
ρ(B) can tend to zero as B tends to infinity while
still incurring no capacity or delay penalty. The
main result of this paper resolves this question. We
introduce a novel, multi-phase adaptive sampling
algorithm for message detection, and use it to prove
an essentially tight asymptotic characterization of
the minimum sampling rate needed in order to communicate as efficiently as under full sampling. Informally, we exhibit a communication scheme utilizing
this multi-phase sampling method at the receiver
that asymptotically achieves vanishing probability
of error and possesses the following properties:
1. The scheme achieves the capacity per unit cost
under full sampling, that is, there is no rate
penalty even though the sampling rate tends to
zero;
2. The receiver detects the codeword with minimal delay;
3. The receiver detects changes with minimal
sampling rate, in the sense that any scheme that
achieves the same order of delay but operates
at a lower sampling rate will completely miss
the codeword transmission period, regardless
of false-alarm probability. The sampling rate
converges to 0 in the limit of large B, and our
main result characterizes the best possible rate
of convergence.
In other words, our communication scheme achieves
essentially the minimal sampling rate possible, and
incurs no delay or capacity penalty relative to full
sampling. A formal statement of the main result is
given in Section II.
Related works
The above sparse communication model was first introduced in [2], [10]. These works characterize the synchronization threshold, i.e., the largest level of asynchronism under which it is still possible
to communicate reliably. In [9], [10] capacity is
defined as the message length divided by the mean
elapsed time between when information is available
and when it is decoded. For this definition, capacity
upper and lower bounds are established and shown
to be tight for certain channels. In [9] it is also
shown that so called training-based schemes, where
synchronization and information transmission are
performed separately, need not be optimal in particular in the high rate regime. In [3] capacity is
defined with respect to codeword length and is characterized as a function of the level of asynchronism.
For the same setup Polyanskiy in [5] investigated
the finite length regime and showed that in certain
cases dispersion is unaffected by asynchronism even
when β > 0.
In [11], [12] the authors investigated the slotted
version of the problem (i.e., the decoder is revealed
ν mod n) and established error exponent tradeoffs
between decoding error, false-alarm, and
miss-detection.
In [3], [6] the above bursty communication setup
is investigated in a random access configuration and
tradeoffs between communication rate and number
of users are derived as a function of the level of
asynchronism. Finally, in [7] a diamond network is
considered and the authors provided bounds on the
minimum energy needed to convey one bit across
the network.
Paper organization
This paper is organized as follows. In Section II,
we recall the asynchronous communication model
and related prior results. Then, we state our main
result, Theorem 3, which is a stronger version
of the results in [8]. Section III states auxiliary
results, Theorems 4 and 5, characterizing the performance of our multi-phase sampling algorithm.
In Section IV we first prove Theorems 4 and 5,
then prove Theorem 3. The achievability part of
Theorem 3 uses the multi-phase sampling algorithm
for message detection at the receiver, and the converse is essentially an immediate consequence of
the converse of Theorem 5.
II. MAIN RESULT: THE SAMPLING RATE REQUIRED IN ASYNCHRONOUS COMMUNICATION
Our main result, Theorem 3 below, is a strengthening of the results of [8]. We recall the model and
results (Theorems 1 and 2) of that paper below to receiver observes “pure noise.” Specifically, conditioned on ν and on the message to be conveyed m,
keep the paper self-contained.
Communication is discrete-time and carried over the receiver observes independent channel outputs
a discrete memoryless channel characterized by its
Y1 , Y2 , . . . , YA+n−1
finite input and output alphabets
X and
Y,
distributed as follows. For
1≤t≤σ−1
respectively, and transition probability matrix
Q(y|x),
or
σ +n ≤ t ≤ A+n −1,
for all y ∈ Y and x ∈ X. Without loss of generality,
we assume that for all y ∈ Y there is some x ∈ X the Yt ’s are “pure noise” symbols, i.e.,
for which Q(y|x) > 0.
Yt ∼ Q(·|⋆)
Given B ≥ 1 information bits to be transmitted,
a codebook C consists of
where ⋆ represents the “idle” symbol. For σ ≤ t ≤
M = 2B
σ+n−1
Yt ∼ Q(·|ct−σ+1 (m))
codewords of length n ≥ 1 composed of symbols
from X.
where ci (m) denotes the ith symbol of the codeword
A randomly and uniformly chosen message m cn (m).
is available at the transmitter at a random time ν,
Decoding involves three components:
independent of m, and uniformly distributed over
• a sampling strategy,
{1, . . . , AB }, where the integer
• a stopping (decoding) time defined on the sampled process,
A = 2βB
• a decoding function defined on the stopped
characterizes the asynchronism level between the
sampled process.
transmitter and the receiver, and where the constant A sampling strategy consists of “sampling times”
which are defined as an ordered collection of ranβ≥0
dom time indices
denotes the timing uncertainty per information bit.
S = {(S1 , . . . , Sℓ ) ⊆ {1, . . . , A+n−1} : Si < Sj , i < j}
While ν is unknown to the receiver, A is known by
both the transmitter and the receiver.
where Sj is interpreted as the jth sampling time.
We consider one-shot communication, i.e.,only The sampling strategy is either non-adaptive or
one message arrives over the period {1, 2, . . . , A} . adaptive. It is non-adaptive when the sampling times
If A = 1, the channel is said to be synchronous.
in S are independent of Y1A+n−1 . The strategy is
Given ν and m, the transmitter chooses a time adaptive when the sampling times are functions of
σ(ν, m) to start sending codeword cn (m) ∈ C past observations. This means that S1 is an arbitrary
assigned to message m. Transmission cannot start value in {1, . . . , A + n − 1}, possibly random but
before the message arrives or after the end of the independent of Y A+n−1 , and for j ≥ 2
1
uncertainty window, hence σ(ν, m) must satisfy
Sj = gj ({YSi }i<j )
ν ≤ σ(ν, m) ≤ A almost surely.
for some (possibly randomized) function
In the rest of the paper, we suppress the arguments
ν and m of σ when these arguments are clear from
gj : Yj−1 → {Sj−1 + 1, . . . , A + n − 1} .
context.
Given a sampling strategy, the receiver decodes
Before and after the codeword transmission,
i.e.,before time σ and after time σ + n − 1, the by means of a sequential test (τ, φτ ) where τ
denotes a stopping (decision) time with respect to Definition 2 (Cost of a code). The (maximum) cost
the sampled output process1
of a code C with respect to a cost function k : X →
[0, ∞] is defined as
Y S1 , Y S2 , . . .
n
X
def
and where φτ denotes a decoding function based on
k(ci (m)).
K(C) = max
m
the stopped sampled output process. Let
i=1
def
St = {Si ∈ S : Si ≤ t}.
(1)
denote the set of sampling times taken up to time t
and let
def
Ot = {YSi : Si ∈ St }
(2)
Definition 3 (Sampling frequency of a code). Given
ε > 0, the sampling frequency of a code C, denoted
by ρ(C, ε), is the relative number of channel outputs that are observed until a message is declared.
Specifically, it is defined as the minimum r ≥ 0
such that
denote the corresponding set of channel outputs.
The decoding function φτ is a map
min Pm (|Sτ |/τ ≤ r) ≥ 1 − ε .
φτ : Y|Oτ | → {1, 2, . . . , M}
Oτ 7→ φτ (Oτ ).
Definition 4 (Delay of a code). Given ε > 0, the
(maximum) delay of a code C, denoted by d(C, ε),
is defined as the minimum integer l such that
A code (C, (S, τ, φτ )) is defined as a codebook
and a decoder composed of a sampling strategy, a
decision time, and a decoding function. Throughout
the paper, whenever clear from context, we often
refer to a code using the codebook symbol C only,
leaving out an explicit reference to the decoder.
Note that a pair (S, τ ) allows only to do message
detection but does not provide a message estimate.
Such a restricted decoder will later (Section III) be
referred simply as a “detector.”
m
min Pm (τ − ν ≤ l − 1) ≥ 1 − ε .
m
We now define capacity per unit cost under the
constraint that the receiver has access to a limited
number of channel outputs:
Definition 5 (Asynchronous capacity per unit cost
under sampling constraint). Given β ≥ 0 and a
non-increasing sequence of numbers {ρB }, with
0 ≤ ρB ≤ 1, rate per unit cost R is said to
be achievable if there exists a sequence of codes
Definition 1 (Error probability). The maximum
{CB } and a sequence of positive numbers εB with
(over messages) decoding error probability of a code
B→∞
εB −→ 0 such that for all B large enough
C is defined as
1) CB operates at timing uncertainty per informax Pm (Em |C),
(3)
mation bit β;
m
2) the maximum error probability P(E|CB ) is at
where
most εB ;
A
X
3)
the rate per unit cost
def 1
Pm (Em |C) =
Pm,t (Em |C),
A t=1
B
K(CB )
where the subscripts “m, t” denote conditioning on
the event that message m arrives at time ν = t, and
where Em denotes the error event that the decoded
message does not correspond to m, i.e.,
def
Em = {φτ (Oτ ) 6= m} .
(4)
1
Recall that a (deterministic or randomized) stopping time τ
with respect to a sequence of random variables Y1 , Y2 , . . . is a
positive, integer-valued, random variable such that the event {τ = t},
conditioned on the realization of Y1 , Y2 , . . . , Yt , is independent of the
realization of Yt+1 , Yt+2 , . . . for all t ≥ 1.
is at least R − εB ;
4) the sampling frequency satisfies
ρ(CB , εB ) ≤ ρB ;
5) the delay satisfies2
1
log(d(CB , εB )) ≤ εB .
B
2
Throughout the paper logarithms are always intended to be to the
base 2.
Given β and {ρB }, the asynchronous capacity per where maxX denotes maximization with respect to
unit cost, denoted by C(β, {ρB }), is the supremum the channel input distribution PX , where (X, Y ) ∼
PX (·)Q(·|·), where Y⋆ denotes the random output
of achievable rates per unit cost.
of the channel when the idle symbol ⋆ is transTwo comments are in order. First note that sammitted (i.e.,Y⋆ ∼ Q(·|⋆)), where I(X; Y ) denotes
ples occurring after time τ play no role in our perthe mutual information between X and Y , and
formance metrics since error probability, delay, and
where
D(Y ||Y⋆) denotes the divergence between the
sampling rate are are all functions of Oτ (defined
distributions of Y and Y⋆ .
in (2)). Hence, without loss of generality, for the
rest of the paper we assume that the last sample
Theorem 1 characterizes capacity per unit cost
is taken at time τ , i.e.,that the sampled process is under full output sampling, and over codes whose
truncated at time τ . The truncated sampled process delay grow sub-exponentially with B. As it turns
is thus given by the collection of sampling times out, the full sampling capacity per unit cost can
Sτ (defined in (1)). In particular, we have (almost also be achieved with linear delay and sparse output
surely)
sampling.
Define3
S1 ⊆ S2 ⊆ · · · ⊆ Sτ = Sτ +1 = · · · = SAB +n−1 . (5)
B
The second comment concerns the delay con- n∗B (β, R) def
=
= Θ(B)
R max{E[k(X)] : X ∈ P(R)}
straint 4). The delay constraint is meant to capture
(7)
the fact that the receiver is able to locate νB with
high accuracy. More precisely, with high probabil- where P(R) is defined as the set
ity, τB should be at most sub-exponentially larger
I(X; Y ) I(X; Y ) + D(Y ||Y⋆ )
than νB . This already represents a decent level of
≥R .
,
X : min
accuracy, given that νB itself is uniform over an
E[k(X)]
E[k(X)](1 + β)
(8)
exponentially large interval. However, allowing a
sub-exponential delay still seems like a very loose
∗
constraint. As Theorem 3 claims, however, we can The quantity nB (β, R) quantifies the minimum deachieve much greater accuracy. Specifically, if a tection delay as a function of the asynchronism level
sampling rate is achievable, it can be achieved with and rate per unit cost, under full sampling:
delay linear in B, and if a sampling rate cannot be Theorem 2 (Minimum delay, constant sampling
achieved with linear delay, it cannot be achieved rate, Theorem 3 [8]). Fix β ≥ 0, R ∈ (0, C(β)],
even if we allow a sub-exponential delay.
and ρ ∈ (0, 1]. For any codes {CB } that achieve
Notational conventions: We shall use dB and ρB rate per unit cost R at timing uncertainty β, and
instead of d(CB , εB ) and ρ(CB , εB ), respectively, operating at constant sampling rate 0 < ρB = ρ,
leaving out any explicit reference to CB and the we have
dB
sequence of non-negative numbers {εB }, which we
≥ 1.
lim inf ∗
B→∞ n (β, R)
assume satisfies εB → 0. Under full sampling,
B
i.e.,when ρB = 1 for all B, capacity is simply
denoted by C(β), and when the sampling rate is Furthermore, there exist codes {CB } that achieve
constant, i.e.,when ρB = ρ ≤ 1 for all B, capacity rate R with (a) timing uncertainty β, (b) sampling
rate ρB = ρ, and (c) delay
is denoted by C(β, ρ).
The main, previously known, results regarding cadB
lim sup ∗
≤ 1.
pacity for this asynchronous communication model
B→∞ nB (β, R)
are the following. First, capacity per unit cost under
Theorem 2 says that the minimum delay achieved
full sampling is given by the following theorem:
by rate R ∈ (0, C(β)] codes is n∗B (β, R) for any
Theorem 1 (Full sampling, Theorem 1 [1] ). For constant sampling rate ρ ∈ (0, 1]. This naturally
any β ≥ 0
3
Throughout the paper we use the standard “big-O” Landau
I(X; Y ) I(X; Y ) + D(Y ||Y⋆)
C(β) = max min
,
notation to characterize growth rates (see, e.g., [4, Chapter 3]). These
X
E[k(X)]
E[k(X)](1 + β)
growth rates, e.g., Θ(B) or o(B), are intended in the limit B → ∞,
(6) unless stated otherwise.
suggests the question “What is the minimum sampling rate of codes that achieve rate R and minimum
delay n∗B (β, R)?” Our main result is the following
theorem, which states that the minimum sampling
rate essentially decreases as 1/B:
Theorem 3 (Minimum delay, minimum sampling
rate). Consider a sequence of codes {CB } that
operate under timing uncertainty per information
bit β > 0. If
ρB dB = o(1),
(9)
the receiver does not even sample a single component of the sent codeword with probability tending to one. Hence, the average error probability
tends to one whenever R > 0, dB = O(B), and
ρB = o(1/B).
Moreover, for any R ∈ (0, C(β)] and any sequence of sampling rates satisfying ρB = ω(1/B),
there exist codes {CB } that achieve rate R at (a)
timing uncertainty β, (b) sampling rate ρB , and (c)
delay
dB
lim sup ∗
≤ 1.
B→∞ nB (β, R)
If R > 0, the minimum delay n∗B (β, R) is O(B)
by Theorem 2 and (7), so Theorem 3 gives an
essentially tight characterization of the minimum
sampling rate; a necessary condition for achieving
the minimum delay is that ρB be at least Ω(1/B),
and any ρB = ω(1/B) is sufficient.
That sampling rates of order o(1/dB ) are not
achievable is certainly intuitively plausible and even
essentially trivial to prove when restricted to non-adaptive sampling. To see this, note that by the
definition of delay, with high probability decoding
happens no later than instant ν + dB . Therefore,
without essential loss of generality, we may assume
that information is being transmitted only within
period {ν, ν + 1, . . . , ν + dB }. Hence, if sampling is
non-adaptive and its rate is of order o(1/dB ) then
with high probability (over ν) information transmission will occur during one unsampled period
of duration dB . This in turn implies a high error
probability. The main contribution in the converse
argument is that it also handles adaptive sampling.
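To spell out the non-adaptive case referred to above, here is a short back-of-the-envelope bound (ours, with constants suppressed): since the sampling times are then independent of ν, and ν is roughly uniform over an exponentially large window,
\[
\mathbf{E}\bigl[\,|S \cap \{\nu,\dots,\nu+d_B-1\}|\,\bigr] \;\lesssim\; \rho_B\, d_B \;=\; o(1),
\qquad\text{so}\qquad
\Pr\bigl(S \cap \{\nu,\dots,\nu+d_B-1\} \neq \emptyset\bigr) \;\le\; \rho_B d_B \;\to\; 0
\]
by Markov's inequality, i.e., with probability tending to one no sample falls in the transmission window.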
Achievability rests on a new multi-phase procedure to efficiently detect the sent message. This
detector, whose performance is the focus of Section III, is a much more fine-grained procedure than
the one used to establish Theorem 2. To establish
achievability of Theorem 2, a two-mode detector is
considered, consisting of a baseline mode operating
at low sampling rate, and a high rate mode. The
detector starts in the baseline mode and, if past
observed samples suggest the presence of a change
in distribution, the detector changes to the high rate
mode which acts as a confirmation phase. At the end
of the confirmation phase the detector either decides
to stop, or decides to reverse to the baseline mode
in case the change is unconfirmed.
The detector proposed in this paper (see Section III for the setup and Section IV-C for the
description of the procedure) has multiple confirmation phases, each operating at a higher sampling
rate than the previous phase. Whenever a confirmation phase is passed, the detector switches to
the next confirmation phase. As soon as a change
is unconfirmed, the procedure is aborted and the
detector returns to the low rate baseline mode. The
detector only stops if the change is confirmed by all
confirmation phases. Having multiple confirmation
phases instead of just one, as for Theorem 2, is key
to reducing the rate from a constant to essentially
1/B, as it allows us to aggressively reject falsealarms whithout impacting the ability to detect the
message.
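The following sketch (ours, not the authors' construction; the thresholds, phase lengths, and per-phase sampling periods are placeholder parameters) illustrates the control flow of such a multi-phase detector: a sparse baseline scan followed by increasingly dense confirmation phases, aborting back to the baseline as soon as a confirmation fails.

def multiphase_detector(sample, horizon, phase_rates, phase_lengths, thresholds, llr):
    """Illustrative multi-phase change detector.

    sample(t): returns the channel output at time t (each call counts as one sample).
    horizon: last time index that may be sampled.
    phase_rates: sampling period of the baseline mode followed by the periods of
        the confirmation phases (decreasing, i.e. denser and denser sampling).
    phase_lengths: number of samples taken in each confirmation phase.
    thresholds: baseline threshold followed by per-phase thresholds on the summed
        log-likelihood ratio.
    llr(y): log P1(y)/P0(y) for a single observation.
    Returns the stopping time, or None if the horizon is reached.
    """
    t = 0
    baseline_period = phase_rates[0]
    while t < horizon:
        t += baseline_period                      # sparse baseline mode
        if t >= horizon:
            return None
        if llr(sample(t)) <= thresholds[0]:
            continue                              # nothing suspicious, stay in baseline
        # Candidate change: run the confirmation phases at increasing sampling rates.
        t_conf, confirmed = t, True
        for period, length, thr in zip(phase_rates[1:], phase_lengths, thresholds[1:]):
            score = 0.0
            for _ in range(length):
                t_conf += period
                if t_conf >= horizon:
                    return None
                score += llr(sample(t_conf))
            if score <= thr:                      # change not confirmed: abort
                confirmed = False
                break
        if confirmed:
            return t_conf                         # all phases confirmed: stop
        t = t_conf                                # return to the baseline mode

The analysis in Section IV chooses the phase lengths and sampling periods so that false alarms are rejected aggressively while the overall sampling rate stays as low as possible; the parameters above are placeholders only.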
III. SAMPLING CONSTRAINED TRANSIENT CHANGE-DETECTION
This section focuses on one key aspect of asynchronous communication, namely, that we need to
quickly detect the presence of a message with a
sampling constrained detector. As there is only one
possible message, the problem amounts to a pure
(transient) change-point detection problem. Related
results are stated in Theorems 4 and 5. These results
and their proofs are the key ingredients for proving
Theorem 3.
A. Model
The transient change-detection setup we consider
in this section is essentially a simpler version of
the asynchronous communication problem stated
in Section II. Specifically, rather than having a
codebook of 2B messages, we consider a binary
hypothesis testing version of the problem. There
is a single codeword, so no information is being
conveyed, and our goal is simply to detect when
the codeword was transmitted.
the tail event where τ is much larger than ν, say
because we missed the transient change completely.
We are able to characterize optimal performance
tightly with the above definition, but expected delay
would also be of interest, and an analysis of the
y
optimal performance under this metric is an open
There is no parameter B in our problem, but in problem for future research.
analogy with Section II, let n denote the length of
Definition 8 (Sampling rate). For a given detector
the transient change. Let ν be uniformly distributed
(S, τ ) and ε > 0, the sampling rate, denoted by
over
ρ((S, τ ), ε), is defined as the minimum r ≥ 0 such
{1, 2, . . . , A = 2αn }.
that
P(|Sτ |/τ ≤ r) ≥ 1 − ε.
where the integer A denotes the uncertainty level
and where α the corresponding uncertainty expoAchievable sampling rates are defined analonent, respectively.
gously to Section II, but we include a formal defiGiven P0 and P1 , process {Yt } is defined simi- nition for completeness.
larly as in Section II. Conditioned on the value of
Definition 9 (Achievable sampling rate). Fix α ≥ 0,
ν, the Yt ’s are i.i.d. according to P0 for
and fix a sequence of non-increasing values {ρn }
1≤t<ν
with 0 ≤ ρn ≤ 1. Sampling rates {ρn } are said
to be achievable at uncertainty exponent α if there
or
exists a sequence of detectors {(Sn , τn )} such that
νn + n ≤ t ≤ A + n − 1
for all n large enough
and i.i.d. according to P1 for ν ≤ t ≤ ν + n −
1) (Sn , τn ) operates under uncertainty level An =
1. Process {Yt } is thus i.i.d. P0 except for a brief
2αn ,
period of duration n where it is i.i.d. P1 .
2) the false-alarm probability P(τn < νn ) is at
Sampling strategies are defined as in Section II,
most εn ,
but since we now only have a single message, we
3) the sampling rate satisfies ρ((Sn , τn ), εn ) ≤
formally define the relevant performance metrics
ρn ,
below.
4) the delay satisfies
Definition 6 (False-alarm probability). For a given
1
log(d((Sn , τn ), εn )) ≤ εn
detector (S, τ ) the probability of false-alarm is den
fined as
for some sequence of non-negative numbers {εn }
P(τ < ν) = P0 (τ < ν)
n→∞
such that εn −→ 0.
where P0 denotes the joint distribution over τ and
Notational conventions: We shall use dn and ρn
ν when the observations are drawn from the P0 instead of d((Sn , τn ), εn ) and ρ((Sn , τn ), εn ), reproduct distribution. In other words, the false-alarm
spectively, leaving out any explicit reference to the
probability is the probability that the detector stops
detectors and the sequence of non-negative numbers
before the transient change has started.
{εn }, which we assume satisfies εn → 0.
Definition 7 (Detection delay). For a given detector
(S, τ ) and ε > 0, the delay, denoted by d((S, τ ), ε), B. Results
is defined as the minimum l ≥ 0 such that
Define
nα
P(τ − ν ≤ l − 1) ≥ 1 − ε .
def
= Θ(n).
(10)
n∗ (α) =
D(P1 ||P0 )
Remark: The reader might wonder why we chose
the above definition of delay, as opposed to, for Theorem 4 (Detection, full sampling). Under full
example, measuring delay by E[max(0, τ − ν)]. sampling (ρn = 1):
The above definition corresponds to capturing the
1) the supremum of the set of achievable uncer“typical” delay, without incurring a large penalty in
tainty exponents is D(P1 ||P0);
Proceeding more formally, let P0 and P1 be
distributions defined over some finite alphabet Y and
with finite divergence
X
def
D(P1 ||P0) =
P1 (y) log[P1 (y)/P0(y)].
IV. PROOFS
2) any detector that achieves uncertainty exponent α ∈ (0, D(P1 ||P0 )) has a delay that
Typicality convention
satisfies
A length q ≥ 1 sequence v q over Vq is said to be
dn
lim inf ∗
≥ 1;
typical with respect to some distribution P over V
n→∞ n (α)
if5
3) any uncertainty exponent α ∈ (0, D(P1 ||P0))
||P̂vq − P || ≤ q −1/3
is achievable with delay satisfying
where P̂vq denotes the empirical distribution (or
dn
type) of v q .
lim sup ∗
≤ 1.
n→∞ n (α)
Typical sets have large probability. Quantitatively,
Hence, the shortest detectable4 change is of size a simple consequence of Chebyshev’s inequality is
that
log An
(1 ± o(1))
(11) P q (||P̂V q − P || ≤ q −1/3 ) = 1 − O q −1/3 (q → ∞)
nmin (An ) =
D(P1 ||P0 )
(12)
by Claim 1) of Theorem 4, assuming An ≫ 1. In
q
this regime, change duration and minimum detec- where P denotes the q-fold product distribution of
tion delay are essentially the same by Claims 2)-3) P . Also, for any distribution P̃ over V we have
and (10), i.e.,
P q (||P̂V q − P̃ || ≤ q −1/3 ) ≤ 2−q(D(P̃ ||P )−o(1)) . (13)
n∗ (α = (log An )/nmin (An )) = nmin (An )(1 ± o(1))
whereas in general minimum detection delay could
be smaller than change duration.
IV. PROOFS

Typicality convention: A sequence v^q of length q ≥ 1 over V^q is said to be typical with respect to some distribution P over V if
||P̂_{v^q} − P|| ≤ q^{−1/3},
where P̂_{v^q} denotes the empirical distribution (or type) of v^q and || · || refers to the L1-norm.

Typical sets have large probability. Quantitatively, a simple consequence of Chebyshev's inequality is that
P^q(||P̂_{V^q} − P|| ≤ q^{−1/3}) = 1 − O(q^{−1/3})   (q → ∞)   (12)
where P^q denotes the q-fold product distribution of P. Also, for any distribution P̃ over V we have
P^q(||P̂_{V^q} − P̃|| ≤ q^{−1/3}) ≤ 2^{−q(D(P̃||P) − o(1))}.   (13)
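As an illustration of this convention, the following sketch (ours; the toy alphabet and the distributions are arbitrary) computes the empirical distribution of a sequence and applies the L1 typicality test.

# Typicality test: a length-q sequence is declared typical w.r.t. P when the L1
# distance between its type and P is at most q^(-1/3).
import numpy as np

def empirical_distribution(seq, alphabet_size):
    counts = np.bincount(seq, minlength=alphabet_size)
    return counts / len(seq)

def is_typical(seq, P):
    q = len(seq)
    type_hat = empirical_distribution(seq, len(P))
    return np.abs(type_hat - P).sum() <= q ** (-1.0 / 3.0)

rng = np.random.default_rng(1)
P0 = np.array([0.4, 0.3, 0.2, 0.1])
P1 = np.array([0.1, 0.2, 0.3, 0.4])
sample = rng.choice(len(P1), size=200, p=P1)
print(is_typical(sample, P1), is_typical(sample, P0))   # typically True, False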
About rounding: Throughout computations, we ignore issues related to the rounding of non-integer quantities, as they play no role asymptotically.

A. Proof of Theorem 4

The proof of Theorem 4 is essentially a corollary of [2, Theorem]. We sketch the main arguments.

1): To establish achievability of D(P1||P0) one uses the same sequential typicality detection procedure as in the achievability of [2, Theorem]. For the converse argument, we use similar arguments as for the converse of [2, Theorem]. For this latter setting, achieving α means that we can drive the probability of the event {τn ≠ νn + n − 1} to zero. Although this performance metric differs from ours (vanishing probability of false-alarm and subexponential delay), a closer look at the converse argument of [2, Theorem] reveals that if α > D(P1||P0) there are exponentially many sequences of length n that are "typical" with respect to the posterior distribution. This, in turn, implies that either the probability of false-alarm is bounded away from zero, or the delay is exponential.

2): Consider stopping times {τn} that achieve delay {dn} and vanishing false-alarm probability (recall the notational conventions for dn at the end of Section III-A). We define the "effective process" {Ỹi} as the process whose change has duration min{dn, n} (instead of n).

Effective output process: The effective process {Ỹi} is defined as follows. Random variable Ỹi is equal to Yi for any index i such that
1 ≤ i ≤ νn + min{dn, n} − 1,
and
{Ỹi : νn + min{dn, n} ≤ i ≤ An + n − 1}
is an i.i.d. P0 process independent of {Yi}. Hence, the effective process differs from the true process over the period {1, 2, . . . , τn} only when {τn ≥ νn + dn} with dn < n.

Genie-aided statistician: A genie-aided statistician observes the entire effective process (of duration An + n − 1) and is informed that the change occurred over one of
rn := (An + n − 1 − (νn mod dn)) / dn   (14)
consecutive (disjoint) blocks of duration dn. The genie-aided statistician produces a time interval of size dn which corresponds to an estimate of the change in distribution, and is declared to be correct only if this interval corresponds to the change in distribution.

Observe that since τn achieves false-alarm probability εn and delay dn on the true process {Yi}, the genie-aided statistician achieves error probability at most 2εn. The extra εn comes from the fact that τn stops after time νn + dn − 1 (on {Yi}) with probability at most εn. Therefore, with probability at most εn the genie-aided statistician observes a process that may differ from the true process.

By using the same arguments as for the converse of [2, Theorem], but on the process {Ỹi} parsed into consecutive slots of size dn, we can conclude that if
lim inf_{n→∞} dn / n*(α) < 1
then the error probability of the genie-aided decoder tends to one.

3): To establish achievability apply the same sequential typicality test as in the achievability part of [2, Theorem].

B. Proof of Theorem 5: Converse

As alluded to earlier (see the discussion after Theorem 3), it is essentially trivial to prove that sampling rates of order o(1/n) are not achievable when we restrict to non-adaptive sampling, that is, when all sampling times are independent of {Yt}. The main contribution of the converse, and the reason why it is somewhat convoluted, is that it handles adaptive sampling as well.

Consider a sequence of detectors {(Sn, τn)} that achieves, for some false-alarm probability εn → 0, sampling rate {ρn} and communication delay dn (recall the notational conventions for dn and ρn at the end of Section III-A). We show first that if
ρn = o(1/n)   (15)
then any detector, irrespective of delay, will take only P0-generated samples with probability asymptotically tending to one. This, in turn, will imply that the delay is exponential, since by assumption the false-alarm probability vanishes.

In the sequel, we use P(·) to denote the (unconditional) joint distribution of the output process Y1, Y2, . . . and ν, and we use P0(·) to denote the distribution of the output process Y1, Y2, . . . , Y_{An+n−1} when no change occurs, that is, a P0-product distribution. By definition of achievable sampling rates {ρn} we have
1 − o(1) ≤ P(|S_{τn}| ≤ τn ρn).   (16)

The following lemma, proved thereafter, says that if (15) holds then with probability tending to one the detector observes only P0-distributed samples:

Lemma 1. For any α > 0, if ρn = o(1/n) then
P({νn, νn + 1, . . . , νn + n − 1} ∩ S_{τn} = ∅) ≥ 1 − o(1).   (17)

This, as we now show, implies that the delay is exponential. On the one hand, since the probability of false-alarm vanishes, we have
o(1) ≥ P(τn < νn) ≥ P(τn < An/2 | νn ≥ An/2)/2 = P0(τn < An/2)/2.
This implies
P0(τn < An/2) ≤ o(1),
and, therefore,
P(τn ≥ An/2) ≥ P(τn ≥ An/2 | νn > An/2)/2 = P0(τn ≥ An/2)/2 = 1/2 − o(1).   (18)

Now, define the events
A1 := {τn ≥ An/2},
A2 := {|S_{τn}| ≤ τn ρn},
A3 := {{νn, νn + 1, . . . , νn + n − 1} ∩ S_{τn} = ∅},
and let A = A1 ∩ A2 ∩ A3. From (16), (17), and (18), we get
P(A) = 1/2 − o(1).   (19)

We now argue that when event A happens, the detector misses the change which might have occurred, say, before time An/4, thereby implying a delay Θ(An) since τn ≥ An/2 on A. When event A happens, the detector takes o(An/n) samples (this follows from event A2 since by assumption ρn = o(1/n)). Therefore, within {1, 2, . . . , An/4} there are at least An/4 − o(An) time intervals of length n that are unsampled. Each of these corresponds to a possible change. Therefore, conditioned on event A, with probability at least 1/4 − o(1) the change happens before time An/4, whereas τn ≥ An/2. Hence the delay is Θ(An), since the probability of A is asymptotically bounded away from zero by (19).

Proof of Lemma 1: We have
P({νn, νn + 1, . . . , νn + n − 1} ∩ S_{τn} = ∅)
= P({νn, νn + 1, . . . , νn + n − 1} ∩ S_{νn+n−1} = ∅)
≥ P({{νn, νn + 1, . . . , νn + n − 1} ∩ S_{νn+n−1} = ∅} ∩ {|S_{νn+n−1}| ≤ k})
= Σ_{s : |s| ≤ k} Σ_{j ∈ Js} P(S_{νn+n−1} = s, νn = j)
= Σ_{s : |s| ≤ k} Σ_{j ∈ Js} P0(S_{νn+n−1} = s) P(νn = j)
≥ ((An − k·n)/An) Σ_{s : |s| ≤ k} P0(S_{νn+n−1} = s)
= ((An − k·n)/An) P0(|S_{νn+n−1}| ≤ k)   (20)
for any k ∈ {1, 2, . . . , An}, where we defined the set of indices
Js := {j : {j, j + 1, . . . , j + n − 1} ∩ s = ∅}.
The first equality in (20) holds by the definition of St (see (1)) and by (5). The third equality holds because the event {S_{ν+n−1} = s} involves random variables whose indices are not in Js; hence the samples in s are all distributed according to the nominal distribution P0 (P0-product distribution). The last inequality holds by the property
|S_{a+b}| ≤ |S_a| + b,
which follows from the definition of St.

Since τn ≤ An + n − 1, from (16) we get
1 − o(1) ≤ P(|S_{τn}| ≤ (An + n − 1)ρn)   (21)
≤ P(|S_{νn−1}| ≤ (An + n − 1)ρn)   (22)
where the second inequality holds by (5). Now,
P(|S_{νn−1}| ≤ (An + n − 1)ρn)
= Σ_{t=1}^{An} P(|S_{t−1}| ≤ (An + n − 1)ρn, νn = t)
= Σ_{t=1}^{An} P0(|S_{t−1}| ≤ (An + n − 1)ρn) P(νn = t)
≤ Σ_{t=n}^{An+n−1} P0(|S_{t−1}| ≤ (An + n − 1)ρn) P(νn = t) + Σ_{t=1}^{n−1} P(νn = t)
≤ P0(|S_{νn+n−1}| ≤ (An + n − 1)ρn) + n/An
≤ P0(|S_{νn+n−1}| ≤ (An + n − 1)ρn) + o(1)   (23)
where the last equality holds since An = 2^{αn}. From (23) and (22) we have
1 − o(1) ≤ P0(|S_{νn+n−1}| ≤ (An + n − 1)ρn).   (24)
Letting
k := kn := (An + n − 1)ρn,
and assuming that ρn = o(1/n), we get
kn · n = o(An)   (25)
and hence, from (20) and (24),
P({νn, νn + 1, . . . , νn + n − 1} ∩ S_{τn} = ∅) ≥ 1 − o(1),
which concludes the proof.
C. Proof of Theorem 5: Achievability

We describe a detection procedure that asymptotically achieves minimum delay n*(α) and any sampling rate that is ω(1/n) whenever α ∈ (0, D(P1||P0)).

Fix α ∈ (0, D(P1||P0)) and pick ε > 0 small enough so that
n*(α)(1 + 2ε) ≤ n.   (26)
Suppose we want to achieve some sampling rate ρn = f(n)/n, where f(n) = ω(1) is some arbitrary increasing function (upper bounded by n without loss of generality). For concreteness, it might be helpful for the reader to take f(n) = log log log log(n). Define
∆̄(n) := n / f(n)^{1/3},
s-instants := {t = j∆̄(n), j ∈ N*},
and recursively define
∆0(n) := f(n)^{1/3},
∆i(n) := min{2^{c ∆_{i−1}(n)}, n*(α)(1 + ε)}
for i ∈ {1, 2, . . . , ℓ}, where ℓ denotes the smallest integer such that ∆ℓ(n) = n*(α)(1 + ε). The constant c in the definition of ∆i(n) can be any fixed value such that
0 < c < D(P1||P0).

The detector starts sampling in phases at the first s-instant (i.e., at time t = ∆̄(n)) as follows:

1. Preamble detection (phase zero): Take ∆0(n) consecutive samples and check if they are typical with respect to P1. If the test is negative, meaning that the ∆0(n) samples are not typical, skip samples until the next s-instant and repeat the procedure, i.e., sample and test ∆0(n) observations. If the test is positive, proceed to the confirmation phases.

2. Preamble confirmations (variable duration, ℓ − 1 phases at most): Take another ∆1(n) consecutive samples and check if they are typical with respect to P1. If the test is negative, skip samples until the next s-instant and repeat Phase zero (that is, test ∆0(n) samples). If the test is positive, perform a second confirmation phase with ∆1(n) replaced with ∆2(n), and so forth. Note that each confirmation phase is performed on a new set of samples. If ℓ − 1 consecutive confirmation phases (with respect to the same s-instant) are positive, the receiver moves to the full block sampling phase.

3. Full block sampling (ℓ-th phase): Take another
∆ℓ(n) = n*(α)(1 + ε)
samples and check if they are typical with respect to P1. If they are typical, stop. Otherwise, skip samples until the next s-instant and repeat Phase zero. If by time An + n − 1 no sequence is found to be typical, stop.

Note that with our f(n) = log log log log(n) example, we have two preamble confirmation phases followed by the last full block sampling phase.
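To make the structure of the procedure concrete, here is a self-contained sketch of such a multiphase detector. The specific phase lengths, the constant c, the termination guard on the recursion, and the toy distributions are illustrative assumptions of ours; the scheme only requires 0 < c < D(P1||P0) and f(n) = ω(1).

# Multiphase sparse-sampling detector (illustrative sketch).
import numpy as np

def is_typical(block, P):
    q = len(block)
    type_hat = np.bincount(block, minlength=len(P)) / q
    return np.abs(type_hat - P).sum() <= q ** (-1.0 / 3.0)

def multiphase_detector(Y, P1, n, n_star, f_n, c=0.5):
    spacing = max(1, int(n / f_n ** (1.0 / 3.0)))     # spacing between s-instants
    deltas = [max(1, int(f_n ** (1.0 / 3.0)))]        # Delta_0(n)
    full_block = int(n_star)                          # final full-block length
    while deltas[-1] < full_block:
        nxt = int(2 ** (c * deltas[-1]))              # Delta_i = min{2^{c Delta_{i-1}}, full block}
        deltas.append(min(max(deltas[-1] + 1, nxt), full_block))   # +1 guard ensures termination
    t = spacing                                       # first s-instant
    while t + sum(deltas) <= len(Y):
        pos, passed_all = t, True
        for d in deltas:                              # phase zero, confirmations, full block
            if not is_typical(Y[pos: pos + d], P1):
                passed_all = False
                break
            pos += d
        if passed_all:
            return pos                                # stopping time tau_n
        t += spacing                                  # otherwise skip to the next s-instant
    return len(Y)                                     # never declared a change

# Toy run: change of length n planted at a random time in an i.i.d.-P0 stream.
rng = np.random.default_rng(2)
P0, P1 = np.array([0.7, 0.3]), np.array([0.2, 0.8])
n, A = 400, 4000
nu = rng.integers(1, A + 1)
Y = rng.choice(2, size=A + n, p=P0)
Y[nu: nu + n] = rng.choice(2, size=n, p=P1)
print(nu, multiphase_detector(Y, P1, n=n, n_star=200, f_n=np.log(n), c=0.5))

The point of the cascade is visible in the code: the cheap phase-zero test is what runs at almost every s-instant, while the expensive full block is reached only after several positive confirmations.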
For the probability of false-alarm we have
P(τn < νn) ≤ 2^{αn} · 2^{−n*(α)(1+ε)(D(P1||P0)−o(1))} = 2^{−nαΘ(ε)} = o(1)   (27)
because whenever the detector stops, the previous n*(α)(1 + ε) samples are necessarily typical with respect to P1. Therefore, the inequality in (27) follows from (13) and a union bound over time indices. The equality in (27) follows directly from the definition of n*(α) (see (10)).

Next, we analyze the delay of the proposed scheme. We show that
P(τn ≤ νn + (1 + 2ε)n*(α)) = 1 − o(1).   (28)
To see this, note that by the definition of ∆̄(n), and because each ∆i(n) is exponentially larger than the previous ∆_{i−1}(n),
∆̄(n) + Σ_{i=0}^{ℓ} ∆i(n) ≤ (1 + 2ε)n*(α)
for n large enough. Applying (12) and taking a union bound, we see that when the samples are distributed according to P1, the series of ℓ + 1 hypothesis tests will all be positive with probability 1 − o(1). Specifically,
P(any test fails) ≤ Σ_{i=0}^{ℓ} O((∆i(n))^{−1/3}) = o(1).   (29)
Since ε can be made arbitrarily small, from (27) and (28) we deduce that the detector achieves minimum delay (see Theorem 4, Claim 2)).

Finally, to show that the above detection procedure achieves sampling rate
ρn = f(n)/n
we need to establish that
P(|S_{τn}|/τn ≥ ρn) → 0   as n → ∞.   (30)
To prove this, we first compute the sampling rate of the detector when run over an i.i.d.-P0 sequence, that is, a sequence with no transient change. As should be intuitively clear, this will essentially give us the desired result, since in the true model the duration of the transient change, n, is negligible with respect to An anyway.

To get a handle on the sampling rate of the detector over an i.i.d.-P0 sequence, we start by computing the expected number of samples N taken by the detector at any given s-instant, when the detector is started at that specific s-instant and the observations are all i.i.d. P0. Clearly, this expectation does not depend on the s-instant (boundary effects due to the fact that An need not be a multiple of ∆̄(n) play no role asymptotically and thus are ignored). We have
E0 N ≤ ∆0(n) + Σ_{i=0}^{ℓ−1} pi · ∆_{i+1}(n)   (31)
where pi denotes the probability that the i-th confirmation phase is positive given that the detector actually reaches the i-th confirmation phase, and E0 denotes expectation with respect to an i.i.d.-P0 sequence. Since each phase uses new, and therefore independent, observations, from (13) we conclude that
pi ≤ 2^{−∆i(n)(D(P1||P0)−o(1))}.
Using the definition of ∆i(n), and recalling that 0 < c < D(P1||P0), this implies that the sum in the second term of (31) is negligible, and
E0 N = ∆0(n)(1 + o(1)).   (32)
Therefore, the expected number of samples taken by the detector up to any given time t can be upper bounded as
E0|S_t| ≤ (t/∆̄(n)) ∆0(n)(1 + o(1)) = t (f(n)^{2/3}/n)(1 + o(1)).   (33)

This, as we now show, implies that the detector has the desired sampling rate. We have
P(|S_{τn}|/τn ≥ ρn)
≤ P(|S_{τn}|/τn ≥ ρn, νn ≤ τn ≤ νn + (1 + 2ε)n*(α)) + 1 − P(νn ≤ τn ≤ νn + (1 + 2ε)n*(α))
≤ P(|S_{τn}|/τn ≥ ρn, νn ≤ τn ≤ νn + n) + 1 − P(νn ≤ τn ≤ νn + (1 + 2ε)n*(α))   (34)
where the second inequality holds for ε small enough by the definition of n*(α). The fact that
1 − P(νn ≤ τn ≤ νn + (1 + 2ε)n*(α)) = o(1)   (35)
follows from (27) and (28). For the first term on the right-hand side of the second inequality in (34), we have
P(|S_{τn}|/τn ≥ ρn, νn ≤ τn ≤ νn + n)
≤ P(|S_{νn+n}| ≥ νn ρn)
≤ P(|S_{νn−1}| ≥ νn ρn − n − 1).   (36)
Since S_{νn−1} represents sampling times before the transient change, the underlying process is i.i.d. P0, so we can use our previous bound on the sampling rate to analyze S_{νn−1}. Conditioned on reasonably large values of νn, in particular all νn satisfying
νn ≥ √(An),   (37)
we have
P(|S_{νn−1}| ≥ νn ρn − n − 1 | νn) ≤ E0|S_{νn}| / (νn ρn − n − 1)
≤ f(n)^{2/3}(1 + o(1)) / (n(ρn − (n + 1)/νn))
≤ f(n)^{2/3}(1 + o(1)) / (nρn(1 − o(1)))
= (1 + o(1)) / (f(n)^{1/3}(1 − o(1)))
= o(1)   (38)
where the second inequality holds by (33), the third inequality holds by (37) and because ρn = ω(1/n), and the last two equalities hold by the definitions of ρn and f(n).

Removing the conditioning on νn,
P(|S_{νn−1}| ≥ νn ρn − n − 1) ≤ P(|S_{νn−1}| ≥ νn ρn − n − 1, νn ≥ √(An)) + P(νn < √(An)) = o(1)   (39)
by (38) and the fact that νn is uniformly distributed over {1, 2, . . . , An}. Hence, from (36), the first term on the right-hand side of the second inequality in (34) vanishes.
This yields (30).
D. Discussion
There is obviously a lot of flexibility around
the quickest detection procedure described in Section IV-C. Its main feature is the sequence of
binary hypothesis tests, which manages to reject
the hypothesis that a change occurred with as few
samples as possible when the samples are drawn
from P0 , while maintaining a high probability of
detecting the transient change.
It may be tempting to simplify the detection
procedure by considering, say, only two phases,
a preamble phase and the full block phase. Such
a scheme, which is similar in spirit to the one
proposed in [8], would not work, as it would produce either a much higher level of false-alarm, or a
much higher sampling rate. We provide an intuitive
justification for this below, thereby highlighting the
role of the multiphase procedure.
Consider a two phase procedure, a preamble
phase followed by a full block phase. Each time
we switch to the second phase, we take Θ(n)
samples. Therefore, if we want to achieve a vanishing sampling rate, then necessarily the probability
of switching from the preamble phase to the full
block phase under P0 should be o(1/n). By Sanov’s
theorem, such a probability can be achieved only
if the preamble phase makes its decision to switch
to the full block phase based on at least ω(log n)
samples, taken over time windows of size Θ(n).
This translates into a sampling rate of ω((log n)/n)
at best, and we know that this is suboptimal, since
any sampling rate ω(1/n) is achievable.
The reason a two-phase scheme does not yield a
sampling rate lower than ω((log n)/n) is that it is
too coarse. To guarantee a vanishing sampling rate,
the decision to switch to the full block phase should
be based on at least log(n) samples, which in turn
yields a suboptimal sampling rate. The important
observation is that the (average) sampling rate of
the two-phase procedure essentially corresponds to
the sampling rate of the first phase, but the first
phase also controls the decision to switch to the
full block phase and sample continuously for a long
period of order n. In the multiphase procedure,
however, we can separate these two functions. The
first phase controls the sampling rate, but passing
the first phase only leads us to a second phase, a
much less costly decision than immediately switching to full block sampling. By allowing multiple
phases, we can ensure that when the decision to
ultimately switch to full sampling occurs, it only
occurs because we have accumulated a significant
amount of evidence that we are in the middle of the
transient change. In particular, note that many other
choices would work for the length and probability
thresholds used in each phase of our sampling
scheme. The main property we rely on is that the
lengths and probability thresholds be chosen so that
the sampling rate is dominated by the first phase.
E. Proof of Theorem 3
In this section, we prove Theorem 3. A reader
familiar with the proofs presented in [8] will recognize Theorem 3 as a corollary of Theorem 5, but we
include a detailed proof below for interested readers
unfamiliar with the prior work [8].
1) Converse of Theorem 3: By using the same
arguments as for Lemma 1, and simply replacing
n with dB , one readily sees that if
ρB dB = o(1)
(40)
then
P({νB ,νB + 1, . . . , νB + dB − 1} ∩ SτB = ∅)
≥ (1 − o(1)).
(41)
Since the decoder samples no codeword symbol
with probability approaching one, the decoding error probability will tend to one whenever the rate is
positive (so that (M − 1)/M tends to one).
14
2) Achievability of Theorem 3: Fix β > 0. We show that any R ∈ (0, C(β)] is achievable with codes {CB} whose delays satisfy d(CB, εB) ≤ n*_B(β, R)(1 + o(1)) whenever the sampling rate ρB is such that
ρB = f(B)/B
for some f(B) = ω(1).

Let X ∼ P be some channel input and let Y denote the corresponding output, i.e., (X, Y) ∼ P(·)Q(·|·). For the moment we only assume that X is such that I(X; Y) > 0. Further, we suppose that the codeword length n is linearly related to B, i.e.,
B/n = q
for some fixed constant q > 0. We shall specify this linear dependency later to accommodate the desired rate R. Further, let
f̃(n) := f(q·n)/q
and
ρ̃n := f̃(n)/n.
Hence, by definition we have
ρ̃n = ρB.
Let a be some arbitrary fixed input symbol such that
Q(·|a) ≠ Q(·|⋆).
Below we introduce the quantities ∆̄(n) and ∆i(n), 1 ≤ i ≤ ℓ, which are defined as in Section IV-C but with P0 replaced with Q(·|⋆), P1 replaced with Q(·|a), f(n) replaced with f̃(n), and n*(α) replaced with n.
Codewords: preamble followed by constant-composition information symbols. Each codeword c^n(m) starts with a common preamble that consists of ∆̄(n) repetitions of symbol a. The remaining n − ∆̄(n) components
c^n_{∆̄(n)+1}(m)
of c^n(m) of each message m carry information and are generated as follows. For message 1, randomly generate length-(n − ∆̄(n)) sequences x^{n−∆̄(n)} i.i.d. according to P until x^{n−∆̄(n)} is typical with respect to P. In this case we let
c^n_{∆̄(n)+1}(1) := x^{n−∆̄(n)},
move to message 2, and repeat the procedure until a codeword has been assigned to each message. From (12), for any fixed m no repetition will be required to generate c^n_{∆̄(n)+1}(m) with probability tending to one as n → ∞. Moreover, by construction the codewords are essentially of constant composition, i.e., each symbol appears roughly the same number of times in all codewords, and all codewords have cost
nE[k(X)](1 + o(1))
as n → ∞.
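As an illustration of this construction, the following sketch (ours; the alphabet, the input distribution P, and the block sizes are made-up examples) generates one codeword consisting of the preamble followed by an information part that is redrawn until typical.

# Codeword = preamble of symbol a, then i.i.d.-P symbols redrawn until typical.
import numpy as np

def is_typical(seq, P):
    q = len(seq)
    type_hat = np.bincount(seq, minlength=len(P)) / q
    return np.abs(type_hat - P).sum() <= q ** (-1.0 / 3.0)

def make_codeword(n, preamble_len, a, P, rng):
    info_len = n - preamble_len
    while True:                        # by (12), redraws are rarely needed
        info = rng.choice(len(P), size=info_len, p=P)
        if is_typical(info, P):
            break
    return np.concatenate([np.full(preamble_len, a), info])

rng = np.random.default_rng(3)
P = np.array([0.5, 0.25, 0.25])        # input distribution (illustrative)
a = 0                                  # preamble symbol with Q(.|a) != Q(.|*)
codeword = make_codeword(n=300, preamble_len=30, a=a, P=P, rng=rng)
print(len(codeword), np.bincount(codeword, minlength=3) / len(codeword))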
Codeword transmission time. Define the set of start instants
s-instants := {t = j∆̄(n), j ∈ N*}.
The codeword transmission start time σ(m, νn) corresponds to the first s-instant ≥ νn (regardless of m).

Sampling and decoding procedures. The decoder first tries to detect the preamble by using a similar detection procedure as in the achievability of Theorem 5, then applies a standard message decoding isolation map. Starting at the first s-instant (i.e., at time t = ∆̄(n)), the decoder samples in phases as follows.

1. Preamble test (phase zero): Take ∆0(n) consecutive samples and check if they are typical with respect to Q(·|a). If the test turns negative, the decoder skips samples until the next s-instant, where it repeats the procedure. If the test turns positive, the decoder moves to the confirmation phases.

2. Preamble confirmations (variable duration, ℓ − 1 phases at most): The decoder takes another ∆1(n) consecutive samples and checks if they are typical with respect to Q(·|a). If the test turns negative, the decoder skips samples until the next s-instant, where it repeats Phase zero (and tests ∆0(n) samples). If the test turns positive, the decoder performs a second confirmation phase based on new ∆2(n) samples, and so forth. If ℓ − 1 consecutive confirmation phases (with respect to the same s-instant) turn positive, the decoder moves to the message sampling phase.

3. Message sampling and isolation (ℓ-th phase): Take another n samples and check if among these samples there are n − ∆̄(n) consecutive samples that are jointly typical with the n − ∆̄(n) information symbols of one of the codewords. If one codeword is typical, stop and declare the corresponding message. If more than one codeword is typical, declare one message at random. If no codeword is typical, the decoder stops sampling until the next s-instant and repeats Phase zero. If by time AB + n − 1 no codeword is found to be typical, the decoder declares a random message.
Error probability. Error probability and delay are evaluated in the limit B → ∞ with AB = 2^{βB} and with
q = B/n < min{ I(X; Y), (I(X; Y) + D(Y||Y⋆))/(1 + β) }.   (42)

We first compute the error probability averaged over codebooks and messages. Suppose message m is transmitted and denote by Em the error event that the decoder stops and outputs a message m′ ≠ m. Then we have
Em ⊆ E0,m ∪ (∪_{m′≠m} (E1,m′ ∪ E2,m′)),   (43)
where the events E0,m, E1,m′, and E2,m′ are defined as follows:
• E0,m: at the s-instant corresponding to σ, the preamble test phase or one of the preamble confirmation phases turns negative, or c^n_{∆̄(n)+1}(m) is not found to be typical by time σ + n − 1;
• E1,m′: the decoder stops at a time t < σ and declares m′;
• E2,m′: the decoder stops at a time t between σ and σ + n − 1 (including σ and σ + n − 1) and declares m′.

From Sanov's theorem,
Pm(E0,m) = ε1(B)   (44)
where ε1(B) = o(1) (recall that B/n is kept fixed and B → ∞). Note that this equality holds pointwise (and not only on average over codebooks) for any specific (non-random) codeword c^n(m) since, by construction, they all satisfy the constant composition property
||P̂_{c^n_{∆̄+1}(m)} − P|| ≤ (n − ∆̄)^{−1/3} = o(1)   (45)
as n → ∞.

Using analogous arguments as in the achievability of [1, Proof of Theorem 1], we obtain the upper bounds
Pm(E1,m′) ≤ 2^{βB} · 2^{−n(I(X;Y)+D(Y||Y⋆)−o(1))}
and
Pm(E2,m′) ≤ 2^{−n(I(X;Y)−o(1))},
which are both valid for any fixed ε > 0 provided that B is large enough. Hence, from the union bound,
Pm(E1,m′ ∪ E2,m′) ≤ 2^{−n(I(X;Y)−o(1))} + 2^{βB} · 2^{−n(I(X;Y)+D(Y||Y⋆)−o(1))}.
Taking a second union bound over all possible wrong messages, we get
Pm(∪_{m′≠m}(E1,m′ ∪ E2,m′)) ≤ 2^B (2^{−n(I(X;Y)−o(1))} + 2^{βB} · 2^{−n(I(X;Y)+D(Y||Y⋆)−o(1))}) := ε2(B)   (46)
where ε2(B) = o(1) because of (42). Combining (43), (44), and (46), we get from the union bound
Pm(Em) ≤ ε1(B) + ε2(B) = o(1)   (47)
for any m.

Delay. We now show that the delay of our coding scheme is at most n(1 + o(1)). Suppose codeword c^n(m) is sent. If
τB > σ + n,
then necessarily c^n_{∆̄+1}(m) is not typical with the corresponding channel outputs. Hence
Pm(τB − σ ≤ n) ≥ 1 − Pm(E0,m) = 1 − ε1(B)   (48)
by (44). Since σ ≤ νB + ∆̄(n) and ∆̄(n) = o(n), we get
Pm(τB − νB ≤ n(1 + o(1))) ≥ 1 − ε1(B).
Since this inequality holds for any codeword c^n(m) that satisfies (45), the delay is no more than n(1 + o(1)). Furthermore, from (47) there exists a specific non-random code C whose error probability, averaged over messages, is less than ε1(B) + ε2(B) = o(1) whenever condition (42) is satisfied. Removing the half of the codewords with the highest error probability, we end up with a set C′ of 2^{B−1} codewords whose maximum error probability satisfies
max_m Pm(Em) ≤ o(1)   (49)
whenever condition (42) is satisfied.

Since any codeword has cost nE[k(X)](1 + o(1)), condition (42) is equivalent to
R < min{ I(X; Y)/(E[k(X)](1 + o(1))), (I(X; Y) + D(Y||Y⋆))/(E[k(X)](1 + o(1))(1 + β)) }   (50)
where
R := B/K(C′)
denotes the rate per unit cost of C′. Thus, to achieve a given R ∈ (0, C(β)) it suffices to choose the input distribution and the codeword length as
X = arg max{E[k(X′)] : X′ ∈ P(R)}
and
n = n*_B(β, R)
(see (7) and (8)). By a previous argument the corresponding delay is no larger than n*_B(β, R)(1 + o(1)).

Sampling rate. For the sampling rate, a very similar analysis to the achievability proof of Theorem 5 (see from equation (30) onwards, with f(n), ρn, n*(α), and An replaced with f̃(n), ρ̃n, n*_B(β, R), and AB, respectively) shows that
Pm(|S_{τB}|/τB ≥ ρB) → 0   as B → ∞.   (51)
Note that the arguments that establish (51) rely only on the preamble detection procedure. In particular, they do not use (50) and hold for any codeword length nB as long as nB = Θ(B).
V. CONCLUSION
We have proved an essentially tight characterization of the sampling rate required to have no
capacity or delay penalty for the asynchronous
communication model of [8]. The key ingredient in
our results is a new, multi-phase, adaptive sampling
scheme used to detect when the received signal’s
distribution switches from the pure noise distribution to the codeword distribution. As noted above,
there is a lot of flexibility around the quickest
detection procedure described in Section IV-C, but
a simple, two level generalization of the sampling
algorithm from [8] is insufficient to achieve the
optimal sampling rate. Instead, a fine-grained, multilevel scheme is needed.
REFERENCES
[1] V. Chandar, A. Tchamkerten, and D. Tse. Asynchronous
capacity per unit cost. Information Theory, IEEE Transactions
on, 59(3):1213–1226, March 2013.
[2] V. Chandar, A. Tchamkerten, and G. Wornell. Optimal sequential frame synchronization. Information Theory, IEEE
Transactions on, 54(8):3725–3728, 2008.
[3] Venkat Chandar, Aslan Tchamkerten, and David Tse. Asynchronous capacity per unit cost. Information Theory, IEEE
Transactions on, 59(3):1213–1226, 2013.
[4] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein.
Introduction to Algorithms, 2nd edition. MIT Press / McGraw-Hill Book Company, 2000.
[5] Yury Polyanskiy. Asynchronous communication: Exact synchronization, universality, and dispersion. Information Theory,
IEEE Transactions on, 59(3):1256–1270, 2013.
[6] Sara Shahi, Daniela Tuninetti, and Natasha Devroye. On the
capacity of strong asynchronous multiple access channels with
a large number of users. In Information Theory (ISIT), 2016
IEEE International Symposium on, pages 1486–1490. IEEE,
2016.
[7] I. Shomorony, R. Etkin, F. Parvaresh, and A.S. Avestimehr.
Bounds on the minimum energy-per-bit for bursty traffic in
diamond networks. In Information Theory Proceedings (ISIT),
2012 IEEE International Symposium on, pages 801–805. IEEE,
2012.
[8] A. Tchamkerten, V. Chandar, and G. Caire. Energy and sampling constrained asynchronous communication. Information
Theory, IEEE Transactions on, 60(12):7686–7697, Dec 2014.
[9] A. Tchamkerten, V. Chandar, and G. W. Wornell. Asynchronous
communication: Capacity bounds and suboptimality of training.
Information Theory, IEEE Transactions on, 59(3):1227–1255, March 2013.
[10] A. Tchamkerten, V. Chandar, and G.W. Wornell. Communication under strong asynchronism. Information Theory, IEEE
Transactions on, 55(10):4508–4528, 2009.
[11] Da Wang. Distinguishing codes from noise: fundamental limits
and applications to sparse communication. Master’s thesis,
Massachusetts Institute of Technology, 2010.
[12] N. Weinberger and N. Merhav. Codeword or noise? exact
random coding exponents for joint detection and decoding.
Information Theory, IEEE Transactions on, 60(9):5077–5094,
Sept 2014.
Moving curve ideals of rational plane parametrizations
Carlos D’Andrea
arXiv:1308.6790v3 [] 24 Jan 2014
Facultat de Matemàtiques, Universitat de Barcelona. Gran Via 585, 08007 Barcelona, Spain
[email protected] http://atlas.mat.ub.es/personals/dandrea ?
Abstract. In the nineties, several methods for dealing in a more efficient way with
the implicitization of rational parametrizations were explored in the Computer Aided
Geometric Design Community. The analysis of the validity of these techniques has been
a fruitful ground for Commutative Algebraists and Algebraic Geometers, and several
results have been obtained so far. Yet, a lot of research is still being done currently
around this topic. In this note we present these methods, show their mathematical
formulation, and survey current results and open questions.
1
Rational Plane Curves
Rational curves are fundamental tools in Computer Aided Geometric Design. They are used
to trace the boundary of any kind of shape by transforming a parameter (a number) via
some simple algebraic operations into a point of the Cartesian plane or three-dimensional
space. Precision and esthetics in Computer Graphics demands more and more sophisticated
calculations, and hence any kind of simplification of the very large list of tasks that need to
be performed between the input and the output is highly appreciated in this world. In this
survey, we will focus on a simplification of a method for implicitizing rational curves and
surfaces defined parametrically. This method was developed in the 90’s by Thomas Sederberg
and his collaborators (see [STD94, SC95, SGD97]), and turned out to become a very rich and
fruitful area of interaction among mathematicians, engineers and computer scientists. As we
will see at the end of the survey, it is still a very active area of research these days.
Fig. 1. The shape of an “orange” plotted with Mathematica 8.0 ([Wol10]).
To ease the presentation of the topic, we will work here only with plane curves and point to
the reader to the references for the general cases (spatial curves and rational hypersurfaces).
?
Partially supported by the Research Project MTM2010–20279 from the Ministerio de Ciencia e
Innovación, Spain
Let K be a field, which we will suppose to be algebraically closed so our geometric statements are easier to describe. Here, by “geometric” we refer to Algebraic Geometry
and not Euclidean Geometry which is the natural domain in Computer Design. Our assumption on K may look somehow strange in this context, but we do this for the ease of our
presentation. We assume the reader also to be familiar with projective lines and planes over
K, which will be denoted with P1 and P2 respectively. A rational plane parametrization is a
map
φ : P1 −→
P2
(1)
(t0 : t1 ) 7−→ u0 (t0 , t1 ) : u1 (t0 , t1 ) : u2 (t0 , t1 ) ,
where u0 (t0 , t1 ), u1 (t0 , t1 ), u2 (t0 , t1 ) are polynomials in K[T0 , T1 ], homogeneous, of the same
degree d ≥ 1, and without common factors. We will denote by C the image of φ, and refer to it as
the rational plane curve parametrized by φ.
This definition may sound a bit artificial for the reader who may be used to look at maps
of the form
K 99K K2
(2)
b(t)
t 7−→ a(t)
c(t) , c(t) ,
with a(t), b(t), c(t) ∈ K[t] without common factors, but it is easy to translate this situation
to (1) by extending this “map” (which actually is not defined on all points of K) to one
from P1 → P2 , in a sort of continuous way. To speak about continuous maps, we need to
have a topology on Kn and/or in Pn , for n = 1, 2. We will endow all these sets with the
so-called Zariski topology, which is the coarsest topology that make polynomial maps as in (2)
continuous.
Now it should be clear that there is actually an advantage in working with projective spaces
instead of parametrizations as in (2): our rational map defined in (1) is actually a map, and
the translation from a(t), b(t), c(t) to u0 (t0 , t1 ), u1 (t0 , t1 ), u2 (t0 , t1 ) is very straightforward.
The fact that K is algebraically closed also comes in our favor, as it can be shown that for
parametrizations defined over algebraically closed fields (see [CLO07] for instance), the curve
C is actually an algebraic variety of P2 , i.e. it can be described as the zero set of a finite system
of homogeneous polynomial equations in K[X0 , X1 , X2 ].
More can be said in the case of C: the Implicitization Theorem in [CLO07] states essentially that there exists F (X0 , X1 , X2 ) ∈ K[X0 , X1 , X2 ], homogeneous of degree D ≥ 1,
irreducible, such that C is actually the zero set of F (X0 , X1 , X2 ) in P2 , i.e. the system of polynomials equations in this case reduces to one single equation. It can be shown that F (X0 , X1 , X2 )
is well-defined up to a nonzero constant in K, and it is called the defining polynomial of C. The
implicitization problem consists in computing F having as data the polynomials u0 , u1 , u2
which are the components of φ as in (1).
Example 1. Let C be the unit circle with center in the origin (0, 0) of K2 . A well-known
parametrization of this curve by using a pencil of lines centered in (−1, 0) is given in affine
format (2) as follows:
K ⇢ K²
t ↦ ((1 − t²)/(1 + t²), 2t/(1 + t²)).   (3)
Note that if K has square roots of −1, these values do not belong to the field of definition
of the parametrization above. Moreover, it is straightforward to check that the point (−1, 0)
is not in the image of (3). However, by converting (3) into the homogeneous version (1), we
obtain the parametrization
φ:
P1 −→
P2
(t0 : t1 ) 7−→ t20 + t21 : t20 − t21 : 2t0 t1 ,
(4)
Fig. 2. The unit circle.
which is well defined on all P1 . Moreover, every point of the circle (in projective coordinates)
is in the image of φ, for instance (1 : −1 : 0) = φ(0 : 1), which is the point in C we were
“missing” from the parametrization (3). The defining polynomial of C in this case is clearly
F (X0 , X1 , X2 ) = X12 + X22 − X02 .
In general, the solution to the implicitization problem involves tools from Elimination
Theory, as explained in [CLO07]: from the equation
(X0 : X1 : X2 ) = u0 (t0 : t1 ) : u1 (t0 : t1 ) : u2 (t0 : t1 ) ,
one “eliminates” the variables t0 and t1 to get an expression involving only the X’s variables.
The elimination process can be done with several tools. The most popular and general is
provided by Gröbner bases, as explained in [AL94] (see also [CLO07]). In the case of a rational
parametrization like the one we are handling here, we can consider a more efficient and suitable
tool: the Sylvester resultant of two homogeneous polynomials in t0 , t1 , as defined in [AJ06]
(see also [CLO05]). We will denote this resultant with Rest0 ,t1 (·, ·). The following result can
be deduced straightforwardly from the section of Elimination and Implicitization in [CLO07].
Proposition 1. There exist α, β ∈ N such that, up to a nonzero constant,
Res_{t0,t1}(X2 u0(t0, t1) − X0 u2(t0, t1), X2 u1(t0, t1) − X1 u2(t0, t1)) = X2^α F(X0, X1, X2)^β.   (5)
Note that as the polynomial F(X0, X1, X2) is well-defined up to a nonzero constant, all formulae involving it must also hold this way. For instance, an explicit computation of (5) in Example 1 shows that this resultant is equal to
−4X2²(X0² − X1² − X2²).   (6)
One may think that the number −4 which appears above is just a random constant, but indeed
it is indicating us something very important: if the characteristic of K is 2, then it is easy to
verify that (3) does not describe a circle, but the line X2 = 0. What is even worse, (4) is not
the parametrization of a curve, as its image is just the point (1 : 1 : 0).
To compute the Sylvester Resultant one can use the well-known Sylvester matrix (see [AJ06,
CLO07]), whose nonzero entries contain coefficients of the two polynomials X2 u0 (t0 , t1 ) −
X0 u2 (t0 , t1 ) and X2 u1 (t0 , t1 ) − X1 u2 (t0 , t1 ), regarded as polynomials in the variables t0 and
t1 . The resultant is then the determinant of that (square) matrix.
For instance, in Example 1, we have
X2 u0 (t0 , t1 ) − X0 u2 (t0 , t1 ) = X2 t20 − 2X0 t0 t1 + X2 t21
X2 u1 (t0 , t1 ) − X1 u2 (t0 , t1 ) = X2 t20 − 2X1 t0 t1 − X2 t21 ,
and (6) is obtained as the determinant of the Sylvester matrix
( X2   −2X0    X2     0   )
( 0     X2   −2X0    X2   )
( X2   −2X1   −X2     0   )     (7)
( 0     X2   −2X1   −X2   )
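As a quick sanity check of this computation, the following sketch recovers (6) with SymPy. The dehomogenization t0 = t, t1 = 1 used here is our own shortcut; it is harmless in this example because both polynomials keep degree 2 in t.

# Verify (6): Sylvester resultant of the two polynomials of (5) for the circle.
from sympy import symbols, resultant, factor

t, X0, X1, X2 = symbols('t X0 X1 X2')
u0, u1, u2 = t**2 + 1, t**2 - 1, 2*t          # Example 1 with t0 = t, t1 = 1
p = X2*u0 - X0*u2
q = X2*u1 - X1*u2
print(factor(resultant(p, q, t)))             # equals -4*X2^2*(X0^2 - X1^2 - X2^2), i.e. (6)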
Having X2 as a factor in (5) is explained by the fact that the polynomials whose resultant
is being computed in (5) are not completely symmetric in the X’s parameters, and indeed X2
is the only X-monomial appearing in both expansions.
The exponent β in (5) has a more subtle explanation: it is the tracing index of the map
φ, or the cardinality of its generic fiber. Geometrically, for all but a finite number of points
(p0 : p1 : p2 ) ∈ C, β is the cardinality of the set φ−1 (p0 : p1 : p2 ). Algebraically, it is defined as
the degree of the extension
[K(u0(t0, t1)/u2(t0, t1), u1(t0, t1)/u2(t0, t1)) : K(t0/t1)].
In the applications, one already starts with a map φ as in (1) which is generically injective,
i.e. with β = 1. This assumption is not a big one, due to the fact that generic parametrizations are generically injective, and moreover, thanks to Lüroth’s theorem (see [vdW66]), every
parametrization φ as in (1) can be factorized as φ = φ̃ ◦ P, with φ̃ : P1 → P2 generically
injective, and P : P1 → P1 being a map defined by a pair of coprime homogeneous polynomials,
both of them having degree β. One can then regard φ as a “reparametrization” of C, and there
are very efficient algorithms to deal with this problem, see for instance [SWP08].
In closing this section, we should mention the difference between “algebraic (plane) curves”
and the rational curves introduced above. An algebraic plane curve is a subset of P2 defined
by the zero set of a homogeneous polynomial G(X0 , X1 , X2 ). In this sense, any rational plane
curve is algebraic, as we can find its defining equation via the implicitization described above.
But not every algebraic curve is rational, and moreover, if the curve has degree 3 or more, a
generic algebraic curve will not be rational. Being rational or not is actually a geometric
property of the curve, and one should not expect to detect it from the form of the defining
polynomial, see [SWP08] for algorithms to decide whether a given polynomial G(X0 , X1 , X2 )
defines a rational curve or not. For instance, the Folium of Descartes (see Figure 3) is a rational
Fig. 3. The Folium of Descartes.
curve with parametrization
(t0 : t1 ) 7→ (t30 + t31 : 3t20 t1 : 3t0 t21 ),
Fig. 4. Fermat’s cubic.
and implicit equation given by the polynomial F (X0 , X1 , X2 ) = X13 + X23 − 3X0 X1 X2 . On the
other hand, Fermat’s cubic plotted in Figure 4 is defined by the vanishing of G(X0 , X1 , X2 ) =
X13 + X23 − X03 but it is not rational.
The reason why rational curves play a central role in Visualization and Computer Design
should be easy to get, as they are
– easy to “manipulate” and to plot,
– expressive enough to describe all possible kinds of shapes by using patches (so-called spline curves).
2
Moving lines and µ-bases
Moving lines were introduced by Thomas W. Sederberg and his collaborators in the nineties,
[STD94, SC95, SGD97, CSC98]. The idea is the following: in each row of the Sylvester
matrix appearing in (7) one can find the coefficients as a polynomial in t0 , t1 of a form
L(t0 , t1 , X0 , X1 , X2 ) ∈ K[t0 , t1 , X0 , X1 , X2 ] of degree 3 in the variables t’s, and satisfying:
L t0 , t1 , u0 (t0 , t1 ), u1 (t0 , t1 ), u2 (t0 , t1 ) = 0.
(8)
The first row of (7) for instance, contains the coefficients of
t0 (X2 u0 (t0 , t1 ) − X0 u2 (t0 , t1 )) = X2 t30 −2X0 t20 t1 + X2 t0 t21 + 0t31 ,
which clearly vanishes if we set Xi 7→ ui (t0 , t1 ). Note that all the elements in (7) are linear in
the X’s variables.
With this interpretation in mind, we can regard any such L(t0 , t1 , X0 , X1 , X2 ) as a family
of lines in P2 in such a way that for any (t0 : t1 ) ∈ P1 , this line passes through the point
φ(t0 : t1 ) ∈ C. Motivated by this idea, the following central object in this story has been
defined.
Definition 1. A moving line of degree δ which follows the parametrization φ is a polynomial
Lδ (t0 , t1 , X0 , X1 , X2 ) = v0 (t0 , t1 )X0 + v1 (t0 , t1 )X1 + v2 (t0 , t1 )X2 ∈ K[t0 , t1 , X0 , X1 , X2 ],
with each vi homogeneous of degree δ, i = 0, 1, 2, such that
Lδ (t0 , t1 , u0 (t0 , t1 ), u1 (t0 , t1 ), u2 (t0 , t1 )) = 0,
i.e.
v0 (t0 , t1 )u0 (t0 , t1 ) + v1 (t0 , t1 )u1 (t0 , t1 ) + v2 (t0 , t1 )u2 (t0 , t1 ) = 0.
(9)
Note that both X2 u0 (t0 , t1 ) − X0 u2 (t0 , t1 ) and X2 u1 (t0 , t1 ) − X1 u2 (t0 , t1 ) are always moving
lines following φ. Moreover, note that if we multiply any given moving line by a homogeneous
polynomial in K[t0 , t1 ], we obtain another moving line of higher degree. The set of moving lines
following a given parametrization has an algebraic structure of a module over the ring K[t0 , t1 ].
Indeed, another way of saying that Lδ (t0 , t1 , X0 , X1 , X2 ) is a moving line which follows φ is
that the vector (v0 (t0 , t1 ), v1 (t0 , t1 ), v2 (t0 , t1 )) is a homogeneous element of the syzygy module
of the ideal generated by the sequence {u0 (t0 , t1 ), u1 (t0 , t1 ), u2 (t0 , t1 )} (the coordinates of φ) in the ring of polynomials K[t0 , t1 ].
We will not go further in this direction yet, as the definition of moving lines does not
require understanding concepts like syzygies or modules. Note that computing moving lines
is very easy from an equality like (9). Indeed, one first fixes δ as small as possible, and
then sets v0 (t0 , t1 ), v1 (t0 , t1 ), v2 (t0 , t1 ) as homogeneous polynomials of degree δ and unknown
coefficients, which can be solved via the linear system of equations determined by (9).
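Following this recipe, here is a small SymPy sketch, written by us for illustration, that finds the moving lines of degree δ = 1 for the circle parametrization (4) by solving the linear system coming from (9). The nullspace vectors it returns span the same space as the moving lines L1 and L2 of Example 2 below, possibly in a different basis.

# Moving lines of degree delta following phi, as the nullspace of the system (9).
from sympy import symbols, Poly, Matrix

t0, t1 = symbols('t0 t1')
u = [t0**2 + t1**2, t0**2 - t1**2, 2*t0*t1]      # the parametrization (4); d = 2
d, delta = 2, 1

v_monos = [t0**j * t1**(delta - j) for j in range(delta + 1)]            # monomials of each v_i
eq_monos = [t0**j * t1**(delta + d - j) for j in range(delta + d + 1)]   # coefficients to cancel

# One column per unknown coefficient of (v0, v1, v2); one row per monomial of degree delta + d.
columns = []
for ui in u:
    for m in v_monos:
        p = Poly(ui * m, t0, t1)
        columns.append([p.coeff_monomial(mono) for mono in eq_monos])
M = Matrix(columns).T

for vec in M.nullspace():
    v = [sum(vec[i * (delta + 1) + j] * v_monos[j] for j in range(delta + 1)) for i in range(3)]
    print(v)    # (v0, v1, v2): the moving line v0*X0 + v1*X1 + v2*X2 follows phi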
With this very simple but useful object, the method of implicitization by moving lines as
stated in [STD94] says essentially the following: look for a set of moving lines of the same
degree δ, with δ as small as possible, which are “independent” in the sense that the matrix of
their coefficients (as polynomials in t0 , t1 ) has maximal rank. If you are lucky enough, you will
find δ + 1 of these forms, and hence the matrix will be square. Compute then the determinant
of this matrix, and you will get a non-trivial multiple of the implicit equation. If you are even
luckier, your determinant will be equal to F(X0, X1, X2)^β.
Example 2. Let us go back to the parametrization of the unit circle given in Example 1. We
check straightforwardly that both
L1 (t0 , t1 , X0 , X1 , X2 ) = −t1 X0 − t1 X1 + t0 X2 = X2 t0 − (X0 + X1 ) t1
L2 (t0 , t1 , X0 , X1 , X2 ) = −t0 X0 + t0 X1 + t1 X2 = (−X0 + X1 ) t0 + X2 t1 .
satisfy (8). Hence, they are moving lines of degree 1 which follow the parametrization of the
unit circle. Here, δ = 1. We compute the matrix of their coefficients as polynomials (actually,
linear forms) in t0 , t1 , and get
X2
−X0 − X1
.
(10)
−X0 + X1
X2
It is easy to check that the determinant of this matrix is equal to
F (X0 , X1 , X2 ) = X12 + X22 − X02 .
Note that the size of (10) is actually half of the size of (7), and also that the determinant of
this matrix gives the implicit equation without any extraneous factor.
Of course, in order to convince the reader that this method is actually better than just performing (5), we must shed some light on how to compute algorithmically a matrix of moving
lines. The following result was somehow discovered by Hilbert more than a hundred years ago,
and rediscovered in the CAGD community in the late nineties (see [CSC98]).
Theorem 1. For φ as in (1), there exist a unique µ ≤ d/2 and two moving lines following φ,
which we will denote as Pµ(t0, t1, X0, X1, X2), Qd−µ(t0, t1, X0, X1, X2), of degrees µ and d − µ
respectively, such that any other moving line following φ is a polynomial combination of these
two, i.e. every Lδ(t0, t1, X0, X1, X2) as in Definition 1 can be written as
Lδ(t0, t1, X0, X1, X2) = p(t0, t1)Pµ(t0, t1, X0, X1, X2) + q(t0, t1)Qd−µ(t0, t1, X0, X1, X2),
with p(t0 , t1 ), q(t0 , t1 ) ∈ K[t0 , t1 ] homogeneous of degrees δ − µ and δ − d + µ respectively.
Fig. 5. Moving lines L1 (left) and L2 (right).
This statement is a consequence of a stronger one, which essentially says that a parametrization φ as in (1) can be “factorized” as follows:
Theorem 2 (Hilbert-Burch). For φ as in (1), there exist a unique µ ≤ d/2 and two parametrizations ϕµ, ψd−µ : P1 → P2 of degrees µ and d − µ respectively such that
φ(t0 : t1) = ϕµ(t0 : t1) × ψd−µ(t0 : t1),   (11)
where × denotes the usual cross product of vectors.
Note that we made an abuse of notation in the statement of (11), as ϕµ (t0 : t1 ) and ψd−µ (t0 :
t1 ) are elements in P2 and the cross product is not defined in this space. The meaning of ×
in (11) should be understood as follows: pick representatives in K3 of both ϕµ (t0 : t1 ) and
ψd−µ (t0 : t1 ), compute the cross product of these two representatives, and then “projectivize”
the result to P2 again.
The parametrizations ϕµ and ψd−µ can be made explicit by computing a free resolution of the
ideal ⟨u0 (t0 , t1 ), u1 (t0 , t1 ), u2 (t0 , t1 )⟩ ⊂ K[t0 , t1 ], and there are algorithms to do that, see for
instance [CDNR97]. Note that even though general algorithms for computing free resolutions
are based on computations of Gröbner bases, which have in general bad complexity time, the
advantage here is that we are working with a graded resolution, and also that the resolution
of an ideal like the one we deal with here is of Hilbert-Burch type in the sense of [Eis95]. This
means that the coordinates of both ϕµ and ψd−µ appear in the columns of the 2 × 3 matrix
of the first syzygies in the resolution. We refer the reader to [CSC98] for more details on the
proofs of Theorems 1 and 2.
The connection between the moving lines Pµ (t0 , t1 , X0 , X1 , X2 ), Qd−µ (t0 , t1 , X0 , X1 , X2 )
of Theorem 1 and the parametrizations ϕµ , ψd−µ in (11) is the obvious one: the coordinates
of ϕµ (resp. ψd−µ ) are the coefficients of Pµ (t0 , t1 , X0 , X1 , X2 ) (resp. Qd−µ (t0 , t1 , X0 , X1 , X2 ))
as a polynomial in X0 , X1 , X2 .
Definition 2. A sequence {Pµ (t0 , t1 , X0 , X1 , X2 ), Qd−µ (t0 , t1 , X0 , X1 , X2 )} as in Theorem 1,
is called a µ-basis of φ.
Note that both Theorems 1 and 2 only state the uniqueness of the value of µ, and not
of Pµ (t0 , t1 , X0 , X1 , X2 ) and Qd−µ (t0 , t1 , X0 , X1 , X2 ). Indeed, if µ = d − µ (which happens
generically if d is even), then any two generic linear combinations of the elements of a µ-basis
form again another µ-basis. If µ < d − µ, then any polynomial multiple of Pµ (t0 , t1 , X0 , X1 , X2 )
of the proper degree can be added to Qd−µ (t0 , t1 , X0 , X1 , X2 ) to produce a different µ-basis
of the same parametrization.
Example 3. For the parametrization of the unit circle given in Example 1, one can easily check
that
ϕ1 (t0 : t1 ) = (−t1 : −t1 : t0 ),
ψ1 (t0 : t1 ) = (−t0 : t0 : t1 )
is a µ-basis of φ defined in (4), i.e. this parametrization has µ = d−µ = 1. Indeed, we compute
the cross product in (11) as follows: denote with e0 , e1 , e2 the vectors of the canonical basis
of K3 . Then, we get
| e0    e1    e2 |
| −t1   −t1   t0 |  =  (−t0² − t1², t1² − t0², −2t0t1),
| −t0   t0    t1 |
which shows that ϕ1(t0 : t1) × ψ1(t0 : t1) = φ(t0 : t1), according to (11).
The reason the computation of µ-bases is important, is not only because with them we
can generate all the moving lines which follow a given parametrization, but also because they
will allow us to produce small matrices of moving lines whose determinant give the implicit
equation. Indeed, the following result has been proven in [CSC98, Theorem 1].
Theorem 3. With notation as above, let β be the tracing index of φ. Then, up to a nonzero
constant in K, we have
Res_{t0,t1}(Pµ(t0, t1, X0, X1, X2), Qd−µ(t0, t1, X0, X1, X2)) = F(X0, X1, X2)^β.   (12)
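For the circle, with the µ-basis of Example 3 above (here β = 1), this can be checked directly; the following sketch is ours and uses SymPy with the same dehomogenization t0 = t, t1 = 1 as before.

# Check of (12) for the circle: Res(P, Q) = F = X1^2 + X2^2 - X0^2.
from sympy import symbols, resultant, expand

t, X0, X1, X2 = symbols('t X0 X1 X2')
P = -X0 - X1 + t*X2          # P_1 = -t1*X0 - t1*X1 + t0*X2 with t0 = t, t1 = 1
Q = -t*X0 + t*X1 + X2        # Q_1 = -t0*X0 + t0*X1 + t1*X2
print(expand(resultant(P, Q, t)))   # -> -X0**2 + X1**2 + X2**2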
As shown in [SGD97] if you use any kind of matrix formulation for computing the Sylvester
resultant, in each row of these matrices, when applied to formulas (5) and (12), you will find
the coefficients (as a polynomial in t0 , t1 ) of a moving line following the parametrization.
Note that the formula given by Theorem 3 always involves a smaller matrix than the one in
(5), as the t-degrees of the polynomials Pµ (t0 , t1 , X0 , X1 , X2 ) and Qd−µ (t0 , t1 , X0 , X1 , X2 ) are
roughly half of the degrees of those in (5).
There is, of course, a connection between these two formulas. Indeed, denote with Sylt0 ,t1 (G, H)
(resp. Bezt0 ,t1 (G, H)) the Sylvester (resp. Bézout) matrix for computing the resultant of two
homogeneous polynomials of G, H ∈ K[t0 , t1 ]. For more about definitions and properties of
these matrices, see [AJ06]. In [BD12, Proposition 6.1], we prove with Laurent Busé the following:
Theorem 4. There exists an invertible matrix M ∈ Kd×d such that
X2 · Sylv_{t0,t1}(Pµ(t0, t1, X0, X1, X2), Qd−µ(t0, t1, X0, X1, X2))
= M · Bez_{t0,t1}(X2 u0(t0, t1) − X0 u2(t0, t1), X2 u1(t0, t1) − X1 u2(t0, t1)).
From the identity above, one can easily deduce that it is possible to compute the implicit
equation (or a power of it) of a rational parametrization with a determinant of a matrix of
coefficients of d moving lines, where d is the degree of φ. Can you do it with less? Unfortunately,
the answer is no, as each row or column of a matrix of moving lines is linear in X0 , X1 , X2 ,
and the implicit equation has typically degree d. So, the method will work optimally with a
matrix of size d × d, and essentially you will be computing the Sylvester matrix of a µ-basis
of φ.
3
Moving conics, moving cubics...
One can actually take advantage of the resultant formulation given in (12) and get a determinantal formula for the implicit equation by using the square matrix
Bezt0 ,t1 Pµ (t0 , t1 , X0 , X1 , X2 ), Qd−µ (t0 , t1 , X0 , X1 , X2 ) ,
which has smaller size (it will have d − µ rows and columns) than the Sylvester matrix of these
polynomials. But this will not be a matrix of coefficients of moving lines anymore, as the input
coefficients of the Bézout matrix will be quadratic in X0 , X1 , X2 . Yet, due to the way the
Bézout matrix is being built (see for instance [SGD97], one can find in the rows of this matrix
the coefficients of a polynomial which also vanishes on the parametrization φ. This motivates
the following definition:
Definition 3. A moving curve of bidegree (ν, δ) which follows the parametrization φ is a polynomial Lν,δ (t0 , t1 , X0 , X1 , X2 ) ∈ K[t0 , t1 , X0 , X1 , X2 ] homogeneous in X0 , X1 , X2 of degree ν
and in t0 , t1 of degree δ, such that
L t0 , t1 , u0 (t0 , t1 ), u1 (t0 , t1 ), u2 (t0 , t1 ) = 0.
If ν = 1 we recover the definition of moving lines given in Definition 1. For ν = 2, the polynomial
L(t0 , t1 , X0 , X1 , X2 ) is called a moving conic which follows φ ([ZCG99]). Moving cubics will
be curves with ν = 3, and so on.
A series of experiments made by Sederberg and his collaborators showed something interesting: one can compute the defining polynomial of C as a determinant of a matrix of
coefficients of moving curves following the parametrization, but the more singular the curve is
(i.e. the more singular points it has), the smaller the determinant of moving curves gets. For
instance, the following result appears in [SC95]:
Theorem 5. The implicit equation of a quartic curve with no base points can be written as a
2×2 determinant. If the curve doesn’t have a triple point, then each element of the determinant
is a quadratic; otherwise one row is linear and one row is cubic.
To illustrate this, we consider the following examples.
Example 4. Set u0 (t0 , t1 ) = t40 − t41 , u1 (t0 , t1 ) = −t20 t21 , u2 (t0 , t1 ) = t0 t31 . These polynomials defined a parametrization φ as in (1) with implicit equation given by the polynomial
F (X0 , X1 , X2 ) = X24 − X14 − X0 X1 X22 . From the shape of this polynomial, it is easy to show
that (1 : 0 : 0) ∈ P2 is a point of multiplicity 3 of this curve, see Figure 6. In this case, we
have µ = 1, and it is also easy to verify that
L1,1 (t0 , t1 , X0 , X1 , X2 ) = t0 X2 + t1 X1
is a moving line which follows φ. The reader will now easily check that the following moving
curve of bidegree (3, 1) also follows φ:
L1,3 (t0 , t1 , X0 , X1 , X2 ) = t0 (X13 + X0 X22 ) + t1 X23 .
And the 2 × 2 matrix claimed in Theorem 5 for this case is made with the coefficients of both
L1,1 (t0 , t1 , X0 , X1 , X2 ) and L1,3 (t0 , t1 , X0 , X1 , X2 ) as polynomials in t0 , t1 :
( X2              X1  )
( X1³ + X0X2²     X2³ )
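A quick SymPy verification (ours) that both moving curves of Example 4 indeed vanish on the parametrization and that the determinant of their coefficient matrix is the implicit equation F:

# Example 4: the two moving curves follow phi, and det of their matrix equals F.
from sympy import symbols, simplify, Matrix, expand

t0, t1, X0, X1, X2 = symbols('t0 t1 X0 X1 X2')
u0, u1, u2 = t0**4 - t1**4, -t0**2*t1**2, t0*t1**3

L11 = t0*X2 + t1*X1
L13 = t0*(X1**3 + X0*X2**2) + t1*X2**3
subs = {X0: u0, X1: u1, X2: u2}
print(simplify(L11.subs(subs)), simplify(L13.subs(subs)))   # both 0

M = Matrix([[X2, X1], [X1**3 + X0*X2**2, X2**3]])
print(expand(M.det()))   # -> X2**4 - X1**4 - X0*X1*X2**2, i.e. F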
Example 5. We reproduce here Example 2.7 in [Cox08]. Consider
u0 (t0 , t1 ) = t40 , u1 (t0 , t1 ) = 6t20 t21 − 4t41 , u2 (t0 , t1 ) = 4t30 t1 − 4t0 t31 .
This input defines a quartic curve with three nodes, with implicit equation given by
F (X0 , X1 , X2 ) = X24 + 4X0 X13 + 2X0 X1 X22 − 16X02 X12 − 6X02 X22 + 16X03 X1 , see Figure 7.
The following two moving conics of degree 1 in t0 , t1 follow the parametrization:
L1,2(t0, t1, X0, X1, X2) = t0(X1X2 − X0X2) + t1(−X2² − 2X0X1 + 4X0²)
L̃1,2(t0, t1, X0, X1, X2) = t0(X1² + (1/2)X2² − 2X0X1) + t1(X0X2 − X1X2).
As in the previous example, the 2 × 2 matrix of the coefficients of these moving conics is the
matrix claimed in Theorem 5.
Fig. 6. The curve of Example 4.
Fig. 7. The curve of Example 5.
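For Example 5, a similar check (again SymPy, written by us) shows that both moving conics vanish on the parametrization and that the determinant of their coefficient matrix equals F up to the nonzero constant factor 1/2, as allowed by Theorem 5.

# Example 5: the moving conics follow phi; 2*det(matrix) equals F exactly.
from sympy import symbols, Rational, Matrix, expand, simplify

t0, t1, X0, X1, X2 = symbols('t0 t1 X0 X1 X2')
u0, u1, u2 = t0**4, 6*t0**2*t1**2 - 4*t1**4, 4*t0**3*t1 - 4*t0*t1**3

L = t0*(X1*X2 - X0*X2) + t1*(-X2**2 - 2*X0*X1 + 4*X0**2)
Lt = t0*(X1**2 + Rational(1, 2)*X2**2 - 2*X0*X1) + t1*(X0*X2 - X1*X2)
subs = {X0: u0, X1: u1, X2: u2}
print(simplify(L.subs(subs)), simplify(Lt.subs(subs)))          # both 0

M = Matrix([[X1*X2 - X0*X2, -X2**2 - 2*X0*X1 + 4*X0**2],
            [X1**2 + Rational(1, 2)*X2**2 - 2*X0*X1, X0*X2 - X1*X2]])
F = X2**4 + 4*X0*X1**3 + 2*X0*X1*X2**2 - 16*X0**2*X1**2 - 6*X0**2*X2**2 + 16*X0**3*X1
print(expand(2*M.det() - F))                                    # -> 0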
4
The moving curve ideal of φ
Now it is time to introduce some tools from Algebra which will help us understand all the geometric constructions defined above. The set of all moving curves following a given parametrization generates a bi-homogeneous ideal in K[t0 , t1 , X0 , X1 , X2 ], which we will call the moving
curve ideal of this parametrization.
As explained above, the method of moving curves for implicitization of a rational parametrization looks for small determinants made with coefficients of moving curves which follow the
parametrization of low degree in t0 , t1 . To do this, one would like to have a description as in
Theorem 1, of a set of “minimal” moving curves from which we can describe in an easy way
all the other elements of the moving curve ideal.
Fortunately, Commutative Algebra provides the adequate language and tools for dealing
with this problem. As it was shown by David Cox in [Cox08], all we have to do is look for
minimal generators of the kernel K of the following morphism of rings:
K[t0 , t1 , X0 , X1 , X2 ] −→ K[t0 , t1 , z]
ti
7−→ ti
i = 0, 1,
Xj
7−→ uj (t0 , t1 ) z j = 0, 1, 2.
(13)
Here, z is a new variable. The following result appears in [Cox08, Nice Fact 2.4] (see also
[BJ03] for the case when φ is not generically injective):
Theorem 6. K is the moving curve ideal of φ .
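In small examples, generators of K can be computed by elimination: K is the contraction to K[t0, t1, X0, X1, X2] of the ideal generated by X0 − u0z, X1 − u1z, X2 − u2z in K[t0, t1, X0, X1, X2, z]. The sketch below is our own illustration for the circle, using a SymPy Gröbner basis in lex order to eliminate z; it produces a generating set of K, not necessarily a minimal one, and a dedicated system such as Macaulay2 or Singular scales much better.

# Generators of the moving curve ideal K of the circle, by eliminating z.
from sympy import symbols, groebner

t0, t1, z, X0, X1, X2 = symbols('t0 t1 z X0 X1 X2')
u0, u1, u2 = t0**2 + t1**2, t0**2 - t1**2, 2*t0*t1

G = groebner([X0 - u0*z, X1 - u1*z, X2 - u2*z],
             z, t0, t1, X0, X1, X2, order='lex')
# Elements of G not involving z generate the elimination ideal, i.e. moving curves.
for g in G.exprs:
    if not g.has(z):
        print(g)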
Let us say some words about the map (13). Denote with I ⊂ K[t0 , t1 ] the ideal generated by
u0 (t0 , t1 ), u1 (t0 , t1 ), u2 (t0 , t1 ). The image of (13) is actually isomorphic to K[t0 , t1 ][z I], which
is called the Rees Algebra of I. By the Isomorphism Theorem, we then get that K[t0 , t1 , X0 , X1 , X2 ]/K
is isomorphic to the Rees Algebra of I. This is why the generators of K are called the defining
equations of the Rees Algebra of I. The Rees Algebra that appears in the moving lines method
corresponds to the blow-up of V (I), the variety defined by I. Geometrically, it is just the
blow-up of the empty space (the effect of this blow-up is just to introduce torsion...), but
the construction should explain somehow why moving curves are sensitive to the presence of
complicated singularities. It is somewhat strange that the description of K actually
gets much simpler if the singularities of C are more entangled.
Let us show this with an example. It has been shown in [Bus09], by unravelling some duality
theory developed by Jouanolou in [Jou97], that for any proper parametrization of a curve of
degree d having µ = 2 and only cusps as singular points, the kernel K has (d + 1)(d − 4)/2 + 5
minimal generators. On the other hand, in a joint work with Teresa Cortadellas [CD13b] (see
also [KPU13]), we have shown that if µ = 2 and there is a point of very high multiplicity (it can
be proven that if the multiplicity of a point is larger than 3 when µ = 2, then it must be equal
to d − 2), then the number of generators drops to ⌊(d + 6)/2⌋, i.e. the description of K is simpler in
this case. In both cases, these generators can be made explicit, see [Bus09, CD13b, KPU13].
Further evidence supporting this claim is what is already known for the case µ = 1, which
was one of the first ones worked out, by several authors: [HSV08, CHW08, Bus09, CD10].
It turns out (cf. [CD10, Corollary 2.2]) that µ = 1 if and only if the parametrization is proper
(i.e. generically injective) and there is a point on C which has multiplicity d − 1, which is
the maximal multiplicity a point can have on a curve of degree d. If this is the case, then a
minimal set of generators of K has exactly d + 1 elements.
In both cases (µ = 1 and µ = 2), explicit elements of a set of minimal generators of K can
be given in terms of the input parametrization. But in general, very little is known about how
many there are and what their bidegrees are. Let n0(K) be the 0-th Betti number of K (i.e.
the cardinality of any minimal set of generators of K). We propose the following problem, which
is the subject of attention of several researchers at the moment.
Problem 1. Describe all the possible values of n0 (K) and the parameters that this function
depends on, for a proper parametrization φ as in (1).
Recall that “proper” here means “generically injective”. For instance, we have just shown
above that, for µ = 1, n0(K) = d + 1. If µ = 2, the value of n0(K) depends on whether there is
a very singular point or not. Is n0 a function of only d, µ and the multiplicity structure of C?
A more ambitious problem of course is the following. Let B(K) ⊂ N2 be the (multi)-set of
bidegrees of a minimal set of generators of K.
Problem 2. Describe all the possible values of B(K).
For instance, if µ = 1, we have that (see [CD10, Theorem 2.9])
B(K) = {(0, d), (1, 1), (1, d − 1), (2, d − 2), . . . , (d − 1, 1)}.
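For instance, for d = 5 this reads B(K) = {(0, 5), (1, 1), (1, 4), (2, 3), (3, 2), (4, 1)}, which indeed consists of d + 1 = 6 elements, in agreement with n0(K) = d + 1.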
Explicit descriptions of B(K) have also been given for µ = 2 in [Bus09, CD13b, KPU13]. In
this case, the value of B(K) depends on whether the parametrization has a singular point of
multiplicity d − 2 or not.
For µ = 3 the situation gets a bit more complicated, as we have found in [CD13b]: consider
the parametrizations φ1 and φ2 whose µ-bases are, respectively:
P3,1(t0, t1, X0, X1, X2) = t0^3 X0 + (t1^3 − t0 t1^2) X1
Q7,1(t0, t1, X0, X1, X2) = (t0^6 t1 − t0^2 t1^5) X0 + (t0^4 t1^3 + t0^2 t1^5) X1 + (t0^7 + t1^7) X2,
P3,2(t0, t1, X0, X1, X2) = (t0^3 − t0^2 t1) X0 + (t1^3 + t0 t1^2 − t0^2 t1) X1
Q7,2(t0, t1, X0, X1, X2) = (t0^6 t1 − t0^2 t1^5) X0 + (t0^4 t1^3 + t0^2 t1^5) X1 + (t0^7 + t1^7) X2.
Each of them parametrizes properly a rational plane curve of degree 10 having the point
(0 : 0 : 1) with multiplicity 7. The remaining singular points are either double or triple points. Writing K1 and
K2 for the respective kernels, we then have
B(K1 ) = {(3, 1), (7, 1), (2, 3), (2, 3), (4, 2), (2, 4), (1, 6), (1, 6), (1, 6), (0, 10)},
B(K2 ) = {(3, 1), (7, 1), (2, 3), (2, 3), (4, 2), (2, 4), (1, 5), (1, 6), (1, 6), (0, 10)}.
The parameters appearing in the description of n0(K) proposed in Problem 1 may be more than just
µ and the multiplicities of the curve. For instance, in [CD13], we have shown that if there is a
minimal generator of bidegree (1, 2) in K, then the whole set B(K) is constant, and equal to
{(0, d), (1, 2), (1, d − 2), (2, d − 4), . . . , ((d − 1)/2, 1), ((d + 1)/2, 1)}   if d is odd,
{(0, d), (1, 2), (1, d − 2), (2, d − 4), . . . , (d/2, 1), (d/2, 1)}             if d is even.
To put the two problems above in a more formal context, we proceed as in [CSC98, Section
3]: for d ≥ 1, denote by Vd ⊂ (K[t0, t1]d)^3 the set of triples of homogeneous polynomials
(u0(t0, t1), u1(t0, t1), u2(t0, t1)) defining a proper parametrization φ as in (1). Note that one
can regard Vd as an open set in an algebraic variety in the space of parameters. Moreover, Vd
could actually be taken as a quotient of (K[t0, t1]d)^3 via the action of SL(2, K) acting on the
monomials t0, t1.
Problem 3. Describe the subsets of Vd where B(K) is constant.
Note that, naturally the µ-basis is contained in K, and moreover, we have (see [BJ03, Proposition 3.6]):
K = ⟨Pµ(t0, t1, X0, X1, X2), Qd−µ(t0, t1, X0, X1, X2)⟩ : ⟨t0, t1⟩^∞,
so the role of the µ-basis is crucial to understand K. Indeed, any minimal set of generators of
K contains a µ-basis, so the pairs (1, µ), (1, d − µ) are always elements of B(K). The study of
the geometry of Vd according to the stratification done by µ has been done in [CSC98, Section
3] (see also [DAn04, Iar13]). Also, in [CKPU13], a very interesting study of how the µ-basis of
a parametrization having generic µ (µ = ⌊d/2⌋) and very singular points looks like has been
made. It would be interesting to have similar results for K.
In this context, one could give a positive answer to the experimental evidence provided by
Sederberg and his collaborators about the fact that “the more singular the curve, the simpler
the description of K” as follows. For W ⊂ Vd, we denote by W̄ the closure of W with respect
to the Zariski topology.
Conjecture 1. If W1, W2 ⊂ Vd are such that n0|Wi is constant for i = 1, 2, and W1 ⊂ W2,
then
n0|W1 ≤ n0|W2.
Note that this condition is equivalent to the fact that n0 (K) is upper semi-continuous on
Vd with its Zariski topology. Very related to this conjecture is the following claim, which
essentially asserts that in the “generic” case, we obtain the largest value of n0 (K) :
Conjecture 2. Let Wd be the open set of Vd parametrizing all the curves with µ = ⌊d/2⌋, and
having all its singular points being ordinary double points. Then, n0 (K) is constant on Wd ,
and attains its maximal value on Vd in this component.
Note that a “refinement” of Conjecture 1 with B(K1) ⊂ B(K2) will not hold in general, as
the examples computed for µ = 2 in [Bus09, CD13b, KPU13] show. Indeed, we have in this
case that the Zariski closure of those parametrizations with a point of multiplicity d − 2 is
contained in the case where all the points are only cusps, but the bidegrees of the minimal
generators of K in the case of parametrizations with points of multiplicity d − 2 appear at
lower values than in the more general case (only cusps).
5  Why Rational Plane Curves only?
All along this text we were working with the parametrization of a rational plane curve, but
most of the concepts, methods and properties worked out here can be extended in two different
directions. The obvious one is to consider “surface” parametrizations, that is maps of the form
φS :  P2 99K P3,   (t0 : t1 : t2) 7−→ (u0(t0, t1, t2) : u1(t0, t1, t2) : u2(t0, t1, t2) : u3(t0, t1, t2))      (14)
where ui (t0 , t1 , t2 ) ∈ K[t0 , t1 , t2 ], i = 0, 1, 2, 3, are homogeneous of the same degree, and without common factors. Obviously, one can do this in higher dimensions also, but we will restrict
the presentation just to this case. The reason we have now a dashed arrow in (14) is because
even with the conditions imposed upon the ui ’s, the map may not be defined on all points of
P2 . For instance, if
u0 (t0 , t1 , t2 ) = t1 t2 , u1 (t0 , t1 , t2 ) = t0 t2 , u2 (t0 , t1 , t2 ) = t0 t1 , u3 (t0 , t1 , t2 ) = t0 t1 + t1 t2 ,
φS will not be defined on the set {(1 : 0 : 0), (0 : 1 : 0), (0 : 0 : 1)}.
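One can check this claim directly; the small SymPy snippet below (an illustration added here, not part of the original text) evaluates the four polynomials at the three base points and confirms that they have no common factor:

```python
# The four polynomials of the example vanish simultaneously at the three claimed
# base points, so phi_S is undefined there, even though they share no common factor.
import sympy as sp

t0, t1, t2 = sp.symbols('t0 t1 t2')
u = [t1*t2, t0*t2, t0*t1, t0*t1 + t1*t2]

for p in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    print(p, [ui.subs(dict(zip((t0, t1, t2), p))) for ui in u])   # all zeros

print(sp.gcd(sp.gcd(u[0], u[1]), sp.gcd(u[2], u[3])))             # 1: no common factor
```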
In this context, there are methods to deal with implicitization analogous to those
presented here for plane curves. For instance, one can use a multivariate resultant or a sparse
resultant (as defined in [CLO05]) to compute the implicit equation of the Zariski closure of the
image of φS . Other tools from Elimination Theory such as determinants of complexes can be
also used to produce matrices whose determinant (or quotient or gcd of some determinants)
can also be applied to compute the implicit equation, see for instance [BJ03, BCJ09].
The method of moving lines and curves presented before gets translated into a method of
moving planes and surfaces which follow φS, and its description and validity are much more
complicated, as both the Algebra and the Geometry involved have more subtleties, see
[SC95, CGZ00, Cox01, BCD03, KD06]. Even though it has been shown in [CCL05] that there
exists an equivalent of a µ-basis in this context, its computation is not as easy as in the
planar case. Part of the reason is that the syzygy module of general ui(t0, t1, t2), i = 0, 1, 2, 3,
is not free anymore (i.e. the notion of a “basis” as we defined it in the case of curves no longer
makes sense), but if one sets t0 = 1 and regards these polynomials as affine bivariate forms, a
nicer situation appears, albeit without control on the degrees of the elements of the µ-basis; see
[CCL05, Proposition 2.1] for more on this. Some explicit descriptions have been given for
low degree parametrizations, and also for surfaces having some additional geometric features
(see [CSD07, WC12, SG12, SWG12]), but the general case remains yet to be explored.
A generalization of a map like (13) to this situation is straightforward, and one can then
consider the defining ideal of the Rees Algebra associated to φS . Very little seems to be known
about the minimal generators of K in this situation. In [CD10] we studied the case of monoid
surfaces, which are rational parametrizations with a point of the highest possible multiplicity.
This situation can be regarded as a possible generalization of the case µ = 1 for plane curves,
and has been actually generalized to de Jonquières parametrizations in [HS12].
We also dealt in [CD10] (see also [HW10]) with the case where there are two linearly
independent moving planes of degree 1 following the parametrization plus some geometric
conditions; this may be regarded as a generalization of the “µ = 1” situation for plane curves.
But the general description of the defining ideal of the Rees Algebra in the surface situation
is still an open and fertile area of research.
The other direction where we can go after considering rational plane parametrizations is to
look at spatial curves, that is, maps
φC :  P1 −→ P3,   (t0 : t1) 7−→ (u0(t0, t1) : u1(t0, t1) : u2(t0, t1) : u3(t0, t1)),
where ui ∈ K[t0, t1] are homogeneous of the same degree d ≥ 1 in K[t0, t1] without any common
factor. In this case, the image of φC is a curve in P3, and one has to replace “an” implicit
equation with “the” implicit equations, as there will be more than one, in the same way that
the implicit equations of the line joining (1 : 0 : 0 : 1) and (0 : 0 : 0 : 1) in P3 are given by the
vanishing of the equations X1 = X2 = 0.
As explained in [CSC98], both Theorems 1 and 2 carry over to this situation, so there is
more ground to play on and theoretical tools to help with the computations.
instance, the singularities of the spatial curve are studied as a function of the shape of the
µ-basis. Further computations have been done in [KPU09] to explore the generalization of
the case µ = 1 and produce generators for K in this case. These generators, however, are far
from being minimal. More explorations have been done in [JG09, HWJG10, JWG10], for some
specific values of the degrees of the generators of the µ-basis.
It should also be mentioned that in the recent paper [Iar13], an attempt at the stratification proposed in Problem 2 is made for this kind of curve, but only with respect to the
value of µ and no further parameters.
As the reader can see, there is a lot of recent work in this area, and many challenges
yet to be solved. We hope that in the near future we can get more and deeper insight into all these
matters, and also be able to apply these results in the Computer Aided and Visualization
community.
Acknowledgments  I am grateful to Laurent Busé, Eduardo Casas-Alvero and Teresa
Cortadellas Benitez for their careful reading of a preliminary version of this manuscript, and
very helpful comments. Also, I thank the anonymous referee for her further comments and
suggestions for improvements, and Marta Narváez Clauss for her help with the computations
of some examples. All the plots in this text have been done with Mathematica 8.0 ([Wol10]).
References
[AL94]
Adams, William W.; Loustaunau, Philippe. An introduction to Gröbner bases. Graduate
Studies in Mathematics, 3. American Mathematical Society, Providence, RI, 1994.
[AJ06]
Apéry, F.; Jouanolou, J.-P. Élimination: le cas d’une variable. Collection Méthodes.
Hermann Paris, 2006.
[Bus09]
Busé, Laurent. On the equations of the moving curve ideal of a rational algebraic plane
curve. J. Algebra 321 (2009), no. 8, 2317–2344.
[BCJ09]
Busé, Laurent; Chardin, Marc; Jouanolou, Jean-Pierre. Torsion of the symmetric algebra
and implicitization. Proc. Amer. Math. Soc. 137 (2009), no. 6, 1855–1865.
[BCD03] Busé, Laurent; Cox, David; D’Andrea, Carlos. Implicitization of surfaces in P3 in the
presence of base points. J. Algebra Appl. 2 (2003), no. 2, 189–214.
[BD12]
Busé, Laurent; D’Andrea, Carlos. Singular factors of rational plane curves. J. Algebra 357
(2012), 322–346.
[BJ03]
Busé, Laurent; Jouanolou, Jean-Pierre. On the closed image of a rational map and the
implicitization problem. J. Algebra 265 (2003), no. 1, 312–357.
[CDNR97] Capani, A.; Dominicis, G. De; Niesi, G.; Robbiano, L. Computing minimal finite free
resolutions. Algorithms for algebra (Eindhoven, 1996). J. Pure Appl. Algebra 117/118
(1997), 105-117.
[CCL05] Chen, Falai; Cox, David; Liu, Yang. The µ-basis and implicitization of a rational parametric surface. J. Symbolic Comput. 39 (2005), no. 6, 689–706.
[CSD07] Chen, F.; Shen, L.; Deng, J. Implicitization and parametrization of quadratic and cubic
surfaces by µ-bases. Computing 79 (2007), no. 2-4, 131–142.
[CD10]
Cortadellas Benitez, Teresa; D’Andrea, Carlos. Minimal generators of the defining ideal
of the Rees Algebra associated to monoid parametrizations. Computer Aided Geometric
Design, Volume 27, Issue 6, August 2010, 461–473.
[Cox01]
Cox, David A. Equations of parametric curves and surfaces via syzygies. Symbolic computation: solving equations in algebra, geometry, and engineering (South Hadley, MA, 2000),
1–20, Contemp. Math., 286, Amer. Math. Soc., Providence, RI, 2001.
[CD13]
Cortadellas Benitez, Teresa; D’Andrea, Carlos. Rational plane curves parametrizable by
conics. J. Algebra 373 (2013) 453–480.
[CD13b] Cortadellas Benitez, Teresa; D’Andrea, Carlos. Minimal generators of the defining ideal of
the Rees Algebra associated to a rational plane parameterization with µ = 2 . To appear in
the Canadian Journal of Mathematics, http://dx.doi.org/10.4153/CJM-2013-035-1
[Cox08]
Cox, David A. The moving curve ideal and the Rees algebra. Theoret. Comput. Sci. 392
(2008), no. 1-3, 23–36.
[CGZ00] Cox, David; Goldman, Ronald; Zhang, Ming. On the validity of implicitization by moving
quadrics of rational surfaces with no base points. J. Symbolic Comput. 29 (2000), no. 3,
419–440.
[CHW08] Cox, David; Hoffman, J. William; Wang, Haohao. Syzygies and the Rees algebra. J. Pure
Appl. Algebra 212 (2008), no. 7, 1787–1796.
[CKPU13] Cox,David; Kustin, Andrew; Polini, Claudia; Ulrich, Bernd. A study of singularities on
rational curves via syzygies. Volume 222, Number 1045 (2013)
[CLO05] Cox, David A.; Little, John; O’Shea, Donal. Using algebraic geometry. Second edition.
Graduate Texts in Mathematics, 185. Springer, New York, 2005.
[CLO07] Cox, David; Little, John; O’Shea, Donal. Ideals, varieties, and algorithms. An introduction
to computational algebraic geometry and commutative algebra. Third edition. Undergraduate Texts in Mathematics. Springer, New York, 2007.
[CSC98] Cox, David A.; Sederberg, Thomas W.; Chen, Falai. The moving line ideal basis of planar
rational curves.
[DAn04] D’Andrea, Carlos On the structure of µ-classes. Comm. Algebra 32 (2004), no. 1, 159–165.
[Eis95]
Eisenbud, David. Commutative algebra. With a view toward algebraic geometry. Graduate
Texts in Mathematics, 150. Springer-Verlag, New York, 1995.
[HS12]
Hassanzadeh, Seyed Hamid; Simis, Aron. Implicitization of the Jonquières parametrizations. arXiv:1205.1083
[HW10]
Hoffman, J. William; Wang, Haohao. Defining equations of the Rees algebra of certain
parametric surfaces. Journal of Algebra and its Applications, Volume: 9, Issue: 6(2010),
1033–1049
[HWJG10] Hoffman, J. William; Wang, Haohao; Jia, Xiaohong; Goldman, Ron. Minimal generators
for the Rees algebra of rational space curves of type (1, 1, d − 2). Eur. J. Pure Appl. Math. 3
(2010), no. 4, 602–632.
[HSV08] Hong, Jooyoun; Simis, Aron; Vasconcelos, Wolmer V. On the homology of two-dimensional
elimination. J. Symbolic Comput. 43 (2008), no. 4, 275–292.
[HSV09] Hong, Jooyoun; Simis, Aron; Vasconcelos, Wolmer V. Equations of almost complete intersections. Bulletin of the Brazilian Mathematical Society, June 2012, Volume 43, Issue 2,
171–199.
[Iar13]
Iarrobino, Anthony. Strata of vector spaces of forms in k[x, y] and of rational curves in
Pk . arXiv:1306.1282
[JG09]
Jia, Xiaohong; Goldman, Ron. µ-bases and singularities of rational planar curves. Comput.
Aided Geom. Design 26 (2009), no. 9, 970–988.
[JWG10] Jia, Xiaohong; Wang, Haohao; Goldman, Ron. Set-theoretic generators of rational space
curves. J. Symbolic Comput. 45 (2010), no. 4, 414–433.
[Jou97]
Jouanolou, J. P. Formes d’inertie et résultant: un formulaire. Adv. Math. 126 (1997), no.
2, 119–250.
[KD06]
Khetan, Amit; D’Andrea, Carlos. Implicitization of rational surfaces using toric varieties.
J. Algebra 303 (2006), no. 2, 543–565.
[KPU09] Kustin, Andrew R.; Polini, Claudia; Ulrich, Bernd. Rational normal scrolls and the defining
equations of Rees Algebras. J. Reine Angew. Math. 650 (2011), 23–65.
[KPU13] Kustin, Andrew; Polini, Claudia; Ulrich, Bernd. The bi-graded structure of Symmetric
Algebras with applications to Rees rings. arXiv:1301.7106 .
[SC95]
Sederberg, Thomas; Chen, Falai. Implicitization using moving curves and surfaces. Proceedings of SIGGRAPH, 1995, 301–308.
[SGD97]  Sederberg, Tom; Goldman, Ron; Du, Hang. Implicitizing rational curves by the method
of moving algebraic curves. Parametric algebraic curves and applications (Albuquerque,
NM, 1995). J. Symbolic Comput. 23 (1997), no. 2-3, 153–175.
[STD94]  Sederberg, Thomas W.; Saito, Takafumi; Qi, Dongxu; Klimaszewski, Krzysztof S. Curve
implicitization using moving lines. Computer Aided Geometric Design 11 (1994), 687–706.
[SWP08]  Sendra, J. Rafael; Winkler, Franz; Pérez-Díaz, Sonia. Rational algebraic curves. A computer algebra approach. Algorithms and Computation in Mathematics, 22. Springer, Berlin,
2008.
[SG12]   Shi, Xiaoran; Goldman, Ron. Implicitizing rational surfaces of revolution using µ-bases.
Comput. Aided Geom. Design 29 (2012), no. 6, 348–362.
[SWG12]  Shi, Xiaoran; Wang, Xuhui; Goldman, Ron. Using µ-bases to implicitize rational surfaces
with a pair of orthogonal directrices. Comput. Aided Geom. Design 29 (2012), no. 7,
541–554.
[vdW66]  van der Waerden, B. L. Modern Algebra, Vol. 1, 2nd ed. New York: Frederick Ungar, 1966.
[WC12]   Wang, Xuhui; Chen, Falai. Implicitization, parameterization and singularity computation
of Steiner surfaces using moving surfaces. J. Symbolic Comput. 47 (2012), no. 6, 733–750.
[Wol10]  Wolfram Research, Inc. Mathematica, Version 8.0, Champaign, IL (2010).
[ZCG99]  Zhang, Ming; Chionh, Eng-Wee; Goldman, Ronald N. On a relationship between the moving line and moving conic coefficient matrices. Comput. Aided Geom. Design 16 (1999),
no. 6, 517–527.
arXiv:1711.07459v1 [] 20 Nov 2017
SquishedNets: Squishing SqueezeNet further for edge
device scenarios via deep evolutionary synthesis
Mohammad Javad Shafiee
Dept. of Systems Design Engineering
University of Waterloo, DarwinAI
[email protected]
Francis Li
Dept. of Systems Design Engineering
University of Waterloo, DarwinAI
[email protected]
Brendan Chwyl
Dept. of Systems Design Engineering
University of Waterloo, DarwinAI
[email protected]
Alexander Wong
Dept. of Systems Design Engineering
University of Waterloo, DarwinAI
[email protected]
Abstract
While deep neural networks have been shown in recent years to outperform other
machine learning methods in a wide range of applications, one of the biggest challenges with enabling deep neural networks for widespread deployment on edge devices such as mobile and other consumer devices is high computational and memory requirements. Recently, there has been greater exploration into small deep
neural network architectures that are more suitable for edge devices, with one of
the most popular architectures being SqueezeNet, with an incredibly small model
size of 4.8MB. Taking further advantage of the notion that many applications of
machine learning on edge devices are often characterized by a low number of target classes, this study explores the utility of combining architectural modifications
and an evolutionary synthesis strategy for synthesizing even smaller deep neural
architectures based on the more recent SqueezeNet v1.1 macroarchitecture (considered state-of-the-art in efficient architectures) for applications with fewer target classes. In particular, architectural modifications are first made to SqueezeNet
v1.1 to accommodate for a 10-class ImageNet-10 dataset, and then an evolutionary
synthesis strategy is leveraged to synthesize more efficient deep neural networks
based on this modified macroarchitecture. The resulting SquishedNets possess
model sizes ranging from 2.4MB to 0.95MB (∼5.17X smaller than SqueezeNet
v1.1, or 253X smaller than AlexNet). Furthermore, the SquishedNets are still able
to achieve accuracies ranging from 81.2% to 77%, and able to process at speeds of
156 images/sec to as much as 256 images/sec on a Nvidia Jetson TX1 embedded
chip. These preliminary results show that a combination of architectural modifications and an evolutionary synthesis strategy can be a useful tool for producing
very small deep neural network architectures that are well-suited for edge device
scenarios without the need for compression or quantization.
1 Introduction
One can consider deep neural networks [5] to be one of the most successful machine learning methods, outperforming many state-of-the-art machine learning methods in a wide range of applications
ranging from image categorization [4] to speech recognition. A very major factor to the great recent successes of deep neural networks has been the availability of very powerful high performance
computing systems due to the great advances in parallel computing hardware. This has enabled
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
researchers to significantly increase the depth and complexity of deep neural networks, resulting in
greatly increased modeling capabilities. As such, the majority of research in deep neural networks
have largely focused on designing deeper and more complex deep neural network architectures for
improved accuracy. However, the increasing complexity of deep neural networks has become one
of the biggest obstacles to the widespread deployment of deep neural networks on edge devices
such as mobile and other consumer devices, where computational, memory, and power capacity is
significantly lower than that in high performance computing systems.
Given the proliferation of edge devices and the increasing demand for machine learning applications in such devices, there has been an increasing amount of research exploration on the design of
smaller, more efficient deep neural network architectures that can both infer and train faster, as well
as transfer faster onto embedded chips that power such edge devices. One commonly employed approach for designing smaller neural network architectures is synaptic precision reduction, where the
number of bits used to represent synaptic strength is significantly reduced from 32-bit floating point
precision to fixed-point precision [12], 2-bit precision [6, 13, 7], or 1-bit precision [1, 8]. While this
approach leads to greatly reduced model sizes, the resulting deep neural networks often require specialized hardware support for accelerated deep inference and training on embedded devices, which
can limit their utility for wide range of applications.
Another approach to designing smaller deep neural network architectures is to take a principled
approach and employ architectural design strategies to achieve more efficient deep neural network
macroarchitectures [2, 3]. An exemplary case of what can be achieved using such an approach
is SqueezeNet [3], where three key design strategies where employed: 1) decrease the number of
3x3 filters, 2) decrease the number of input channels to 3x3 filters, and 3) downsample late in
the network. This resulted in a macroarchitecture composed of Fire modules that possessed an
incredibly small model size of 4.8MB, which is 50X smaller than AlexNet with comparable accuracy
on ImageNet for 1000 classes. The authors further introduced SqueezeNet v1.1, where the number
of filters as well as the filter sizes are further reduced, resulting in 2.4X less computation than
the original SqueezeNet without sacrificing accuracy and thus can be considered state-of-the-art in
efficient network architectures.
Inspired by the incredibly small macroarchitecture of SqueezeNet, we are motivated to take it one
step further by taking into account that the majority of applications of machine learning on edge
devices such as mobile and consumer devices are quite specialized and often require a much fewer
number of target classes (typically less than 10 target classes). As such, this study explores the utility of combining architectural modifications and an evolutionary synthesis strategy for synthesizing
even smaller deep neural architectures based on the SqueezeNet v1.1 macroarchitecture for applications with fewer target classes. We will refer to these smaller deep neural network architectures as
SquishedNets.
2 Architectural modification for fewer target classes
The first and simplest strategy taken in this study is to perform some simple architectural modifications to the macroarchitecture of SqueezeNet v1.1 to accommodate for scenarios where we are
dealing with much fewer target classes. For this study, we explored a classification scenario where
the number of target classes is reduced to 10, as fewer target classes is quite common for many machine learning applications on edge devices such as mobile and other consumer devices. Given the
reduced number of target classes, we modify the conv10 layer to a set of 10 1x1 filters. Given that
the conv10 layer contains ∼40% of all parameters in SqueezeNet [3], this architectural modification
resulted in a significant reduction in model size. While not a leap of the imagination and a rather
trivial modification, this illustrates that SqueezeNet v1.1 is a great macroarchitecture that can be
modified to be even more efficient for edge scenarios where there are fewer target classes.
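The paper does not specify a particular implementation of this modification; as a minimal sketch, assuming a recent torchvision-style SqueezeNet v1.1 (where classifier[1] is the conv10 layer), the change amounts to swapping the final 1x1 convolution for one with 10 output filters:

```python
# Minimal sketch (not the authors' code): shrink SqueezeNet v1.1's conv10 layer
# to 10 output filters for an ImageNet-10-style setting, assuming torchvision.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10
model = models.squeezenet1_1(weights=None)                 # SqueezeNet v1.1 macroarchitecture
model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)  # conv10: 512 -> 10, 1x1
model.num_classes = num_classes

x = torch.randn(1, 3, 224, 224)
print(model(x).shape)                                      # torch.Size([1, 10])
```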
3 Evolutionary synthesis of more efficient network architectures
The second strategy taken in this study is to employ an evolutionary synthesis strategy [10, 11, 9] to
synthesize even more efficient deep neural network architectures than can be achieved through principled macroarchitecture design strategies. First proposed by [10] and subsequently extended [11, 9],
the idea behind evolutionary synthesis is to automatically synthesize progressively more efficient
deep neural networks over successive generations in a stochastic synaptogenesis manner within a
probabilistic framework. More specifically, synaptic probability models are used to encode the genetic information of a deep neural network, thus mimicking the notion of heredity. Offspring networks are then synthesized in a stochastic manner given the synaptic probability models and a set of computational environmental constraints for influencing synaptogenesis, thus forming the next generation of deep neural networks, thus mimicking the notions of random mutation and natural selection. The offspring deep neural networks are then trained, and this evolutionary synthesis process is performed over generations until the desired traits are met.

Table 1: ImageNet-10 dataset

wnid         Class Name        wnid         Class Name
n02783161    pen               n03584254    cell phone
n03085013    keyboard          n04548362    wallet
n04557648    water bottle      n07930864    cup
n04037443    car               n03782006    monitor
n03793489    computer mouse    n04409515    tennis ball
While a more detailed description of the evolutionary synthesis strategy can be found in [10], a
brief mathematical description is provided as follows. Let the genetic encoding of a network be
formulated as P (Hg |Hg−1 ), where the network architecture Hg at generation g is influenced by the
network architecture Hg−1 of generation g − 1. An offspring network is synthesized in a stochastic
manner via a synthesis probability P (Hg ), which combines the genetic encoding P (Hg |Hg−1 ) with
the environmental factor model R being imposed:
P(Hg) ≈ P(Hg|Hg−1) · R.      (1)
Since the underlying goal is to influence the synaptogenesis behaviour of offspring deep neural networks to be progressively more efficient generation after generation, the environmental factor model
R is set to a value less than 1 to further enforce a resource-starved environment on the deep neural
networks from generation to generation. In this study, the aforementioned modified macroarchitecture is used as the ancestral precursor for the evolutionary synthesis process to produce deep neural
networks with even smaller network architectures. In this study, 15 generations of evolutionary
synthesis was performed to produce the final SquishedNets.
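The following NumPy snippet is a schematic sketch of one such generation step, illustrating Eq. (1) only; the synaptic probability model and the environmental factor value below are toy choices and are not the authors' implementation:

```python
# Schematic sketch of one generation of evolutionary synthesis (illustration only).
import numpy as np

rng = np.random.default_rng(0)

def synthesize_offspring(parent_weights, parent_mask, R=0.7):
    """Sample an offspring synaptic mask: P(H_g) ~ P(H_g | H_{g-1}) * R."""
    # Toy genetic encoding: synapses with larger parent weights are more likely to survive.
    magnitude = np.abs(parent_weights) * parent_mask
    p_inherit = magnitude / (magnitude.max() + 1e-12)
    # Environmental factor R < 1 starves resources, so fewer synapses survive each generation.
    return (rng.random(parent_weights.shape) < R * p_inherit).astype(float)

weights = rng.normal(size=(64, 64))      # stand-in for one layer's synaptic strengths
mask = np.ones_like(weights)             # ancestor network: fully connected layer
for generation in range(1, 6):
    mask = synthesize_offspring(weights, mask)
    print(f"generation {generation}: surviving synapses = {int(mask.sum())}")
    # (in the real procedure the offspring network would now be retrained)
```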
Table 2: Performance results of SquishedNets.

Model Name      Model    Reduction in model size   Reduction in model size   Runtime speed    Top-1 accuracy
                size     vs. SqueezeNet v1.1       vs. AlexNet               (images/sec)     (ImageNet-10)
SquishedNet-1   2.4MB    2.04X                     100X                      156.09           81.2%
SquishedNet-2   2.0MB    2.45X                     120X                      174.86           79.6%
SquishedNet-3   1.3MB    3.77X                     184X                      225.35           78.6%
SquishedNet-4   0.95MB   5.17X                     253X                      256.00           77.0%
4 Preliminary Results and Discussion
To study the utility of a combination of architectural modifications and evolutionary synthesis on
synthesizing very small deep neural network architectures based on the SqueezeNet v1.1 macroarchitecture that are well-suited for edge device scenarios, we examine the top-1 accuracies and runtime speeds (on an Nvidia Jetson TX1 embedded chip with batch size of 32) of our synthesized
SquishedNets on the 10-class ImageNet-10 dataset. The ImageNet-10 dataset used in this study is
a subset of the ImageNet dataset composed of the following ten target classes reported in Table 1.
The performance results of four different SquishedNets (produced at four different generations of
the evolutionary synthesis process) are shown in Table 2. A number of observations can be made
based on Table 2. First, it can be observed that leveraging both architectural modifications to account
for fewer target classes as well as evolutionary synthesis results in the generation of even more efficient network architectures, as evidenced by the SquishedNets having model sizes ranging from 2.4MB
to just 0.95MB. Therefore, the smallest SquishedNet is 5.17X smaller than SqueezeNet v1.1 (or
∼253X smaller than AlexNet). Second, not only were there significant model size reductions, the
SquishedNets were able to process at speeds of 156 images/sec to as much as 256 images/sec on a
Nvidia Jetson TX1 embedded chip, which is significant particularly for edge scenarios with mobile
and other consumer devices. Therefore, the ability to achieve not only very small model sizes but
also fast runtime speeds has great benefits when used in resource-starved environments with limited
computational, memory, and energy capacity. In terms of top-1 accuracy, the SquishedNets are
able to still achieve accuracies ranging from 81.2% to 77.0%, which is high enough for many edge
applications. These preliminary results show that a combination of architectural modifications and
an evolutionary synthesis strategy can be a useful tool for producing very small deep neural network architectures that are well-suited for edge device scenarios without the need for compression
or quantization.
Acknowledgment
The authors thank NSERC, the Canada Research Chairs program, Nvidia, and DarwinAI.
References
[1] M. Courbariaux, Y. Bengio, and J.-P. David. Binaryconnect: Training deep neural networks with binary
weights during propagations. NIPS, 2015.
[2] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam.
Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861,
2017.
[3] Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt
Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5mb model size.
arXiv:1602.07360, 2016.
[4] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[5] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 2015.
[6] Fengfu Li, Bo Zhang, and Bin Liu. Ternary weight networks. arXiv:1605.04711, 2016.
[7] W. Meng, Z. Gu, M. Zhang, and Z. Wu. Two-bit networks for deep learning on resource-constrained
embedded devices. arXiv:1701.00485, 2017.
[8] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary
convolutional neural networks. ECCV, 2015.
[9] M. Shafiee, F. Li, and A. Wong. Exploring the imposition of synaptic precision restrictions for evolutionary synthesis of deep neural networks. In https://arxiv.org/abs/1707.00095, 2016.
[10] M. Shafiee, A. Mishra, and A. Wong. Deep learning with darwin: Evolutionary synthesis of deep neural
networks. arXiv:1606.04393, 2016.
[11] M. Shafiee and A. Wong. Evolutionary synthesis of deep neural networks via synaptic cluster-driven
genetic encoding. In NIPS Workshop, 2016.
[12] S. Shin, Y. Boo, and W. Sung. Fixed-point optimization of deep neural networks with adaptive step size
retraining. arXiv:1702.08171, 2017.
[13] P. Yin, S. Zhang, J. Xin, and Y. Qi. Training ternary neural networks with exact proximal operator.
arXiv:1612.06052, 2017.
Performance Analysis and Coherent Guaranteed Cost Control for
Uncertain Quantum Systems Using Small Gain and Popov Methods
arXiv:1508.06377v2 [] 21 Mar 2016
Chengdi Xiang, Ian R. Petersen and Daoyi Dong
Abstract— This paper extends applications of the quantum
small gain and Popov methods from existing results on robust
stability to performance analysis results for a class of uncertain
quantum systems. This class of systems involves a nominal linear quantum system and is subject to quadratic perturbations in
the system Hamiltonian. Based on these two methods, coherent
guaranteed cost controllers are designed for a given quantum
system to achieve improved control performance. An illustrative
example also shows that the quantum Popov approach can
obtain less conservative results than the quantum small gain
approach for the same uncertain quantum system.
I. INTRODUCTION
Due to recent advances in quantum and nano-technology,
there has been considerable attention focused on research
in the area of quantum feedback control systems; e.g., [1]–[24]. In particular, robust control has been recognized as
a critical issue in quantum control systems, since many
quantum systems are unavoidably subject to disturbances and
uncertainties in practical applications; e.g., [1], [2] and [3].
A majority of papers in the area of quantum feedback control
only consider the case in which the controller is a classical
system. In this case, analog and digital electronic devices
may be involved and quantum measurements are required
in the feedback loop. Due to the limitations imposed by
quantum mechanics on the use of quantum measurement, recent research has considered the design of coherent quantum
controllers to achieve improved performance. In this case, the
controller itself is a quantum system; e.g., [1], [4] and [5].
In the linear case, the quantum system is often described
by linear quantum stochastic differential equations (QSDEs)
that require physical realizability conditions in terms of
constraints on the system matrices to represent physically
meaningful systems.
As opposed to using QSDEs, the papers [6], [7] have
introduced a set of parameterizations (S, L, H) to represent
a class of open quantum systems, where S is a scattering
matrix, L is a vector of coupling operators and H is a
Hamiltonian operator. The matrix S, together with the vector
L, describes the interface between the system and the field,
and the operator H represents the energy of the system. The
advantage of using a triple (S, L, H) is that this framework
automatically represents a physically realizable quantum
system. Therefore, in this paper, a coherent guaranteed cost
Chengdi Xiang, Ian R. Petersen and Daoyi Dong are with the School of Engineering and Information Technology, University of New South Wales at the Australian Defence Force Academy, Canberra ACT 2600, Australia. {elyssaxiang, i.r.petersen, daoyidong}@gmail.com. This work was supported by the Australian Research Council.
controller is designed based on (S, L, H) and the physical
realizability condition does not need to be considered.
The small gain theorem and the Popov approach are two
of the most important methods for the analysis of robust
stability in classical control. The paper [8] has applied the
small gain method to obtain a robust stability result for
uncertain quantum systems. This result gives a sufficient
condition for robust stability in terms of a strict bounded real
condition. The small gain method has also been extended
to the robust stability analysis of quantum systems with
different perturbations and applications (e.g., see [9], [10],
[11] and [12]). The paper [13] has introduced a quantum
version of the Popov stability criterion in terms of a frequency domain condition which is of the same form as the
classical Popov stability condition (e.g., see [25]). The Popov
approach has also been used to analyze the robust stability
of an optical cavity containing a saturated Kerr medium
[14]. Also, the paper [13] has shown that the frequency
domain condition obtained in [13] is less conservative than
the stability result using the small gain theorem [8].
In this paper, we extend the quantum small gain method
in [8] and the Popov type approach in [13] from robust
stability analysis to robust performance analysis for uncertain
quantum systems. We assume that the system Hamiltonian
H can be decomposed in terms of a known nominal Hamiltonian H1 and a perturbation Hamiltonian H2 , i.e., H =
H1 + H2 . The perturbation Hamiltonian H2 is contained in
a set of Hamiltonians W. We consider uncertain quantum
systems where the nominal system is a linear system and
the perturbation Hamiltonian is quadratic. Moreover, a coherent controller is designed using the small gain approach
and the Popov approach for the uncertain quantum system,
where a guaranteed bound on a cost function is derived in
terms of linear matrix inequality (LMI) conditions. Although
preliminary versions of the results in this paper have been
presented in the conference papers [22] and [23], this paper
presents complete proofs of the main results and modifies the
example in [22] for a consistent performance comparison of
the proposed two methods.
The remainder of the paper proceeds as follows. In Section II, we define the general class of quantum systems
under consideration and specify the nominal system as a
linear quantum system. We then present a set of quadratic
Hamiltonian perturbations in Section III. In Section IV, a
performance cost function for the given system is defined.
Moreover, a small gain approach and a Popov type method
are used to analyze the performance of the given system.
In Section V, a quantum controller is added to the original
system to stabilize the system and also to achieve improved
performance. Also, corresponding approaches are used to
construct a coherent guaranteed cost controller in terms of
LMI conditions. In Section VI, an illustrative example is
provided to demonstrate the method that is developed in
this paper. A performance comparison between these two
methods is also shown in the illustrative example. We present
some conclusions in Section VII.
II. QUANTUM SYSTEMS
In this section, we describe the general class of quantum
systems under consideration, which is defined by parameters
(S, L, H). Here H = H1 + H2 , H1 is a known self-adjoint
operator on the underlying Hilbert space referred to as the
nominal Hamiltonian and H2 is a self-adjoint operator on
the underlying Hilbert space referred to as the perturbation
Hamiltonian contained in a specified set of Hamiltonians
W; e.g., [6], [7]. The set W can correspond to a set of
exosystems (see, [6]). The corresponding generator for this
class of quantum systems is given by
G(X) = −i[X, H] + L(X)
(1)
where L(X) = (1/2) L†[X, L] + (1/2) [L†, X]L. Here, [X, H] =
XH −HX describes the commutator between two operators
and the notation † refers to the adjoint transpose of a vector
of operators. Based on a quantum stochastic differential
equation, the triple (S, L, H), together with the corresponding generators, defines the Heisenberg evolution X(t) of an
operator X [6]. The results presented in this paper will build
on the following results from [13].
Lemma 1: [13] Consider an open quantum system defined by (S, L, H) and suppose there exist non-negative selfadjoint operators V and W on the underlying Hilbert space
such that
G(V ) + W ≤ λ
(2)
where λ is a real number. Then for any plant state, we have
lim sup_{T→∞} (1/T) ∫_0^T ⟨W(t)⟩ dt ≤ λ.    (3)
Here W (t) denotes the Heisenberg evolution of the operator
W and h·i denotes quantum expectation; e.g., see [13] and
[6].
In this paper, the nominal system is considered to be
a linear quantum system. We assume that H1 is in the
following form
H1 = (1/2) [a†  a^T] M [a ; a#]    (4)
where M ∈ C^{2n×2n} is a Hermitian matrix and has the
following form, with M1 = M1† and M2 = M2^T:
M = [ M1  M2 ; M2#  M1# ].    (5)
Here a is a vector of annihilation operators on the underlying
Hilbert space and a# is the corresponding vector of creation
operators. In the case of matrices, the notation † refers to
the complex conjugate transpose of a matrix. In the case
of vectors of operators, the notation # refers to the vector
of adjoint operators and in the case of complex matrices,
this notation refers to the complex conjugate matrix. The
canonical commutation relations between annihilation and
creation operators are described in the following way
[ [a ; a#], [a ; a#]† ] = [a ; a#][a ; a#]† − ( [a ; a#]# [a ; a#]^T )^T = J    (6)
where J = [ I  0 ; 0  −I ] [2].
The coupling vector L is assumed to be of the form
L = [ N1  N2 ] [a ; a#] = Ñ [a ; a#]    (7)
where N1 ∈ C^{m×n} and N2 ∈ C^{m×n}. We also write
[L ; L#] = N [a ; a#] = [ N1  N2 ; N2#  N1# ] [a ; a#].    (8)
When the nominal Hamiltonian H is a quadratic function
of the creation and annihilation operators as shown in (4)
and the coupling operator vector is a linear function of
the creation and annihilation operators, the nominal system
corresponds to a linear quantum system (see, [2]).
We consider self-adjoint “Lyapunov” operators V of the
form
V = [a†  a^T] P [a ; a#]    (9)
where P ∈ C^{2n×2n} is a positive definite Hermitian matrix
of the form
P = [ P1  P2 ; P2#  P1# ].    (10)
We then consider a set of non-negative self-adjoint operators P defined as
P = { V of the form (9) such that P > 0 is a Hermitian matrix of the form (10) }.    (11)
III. PERTURBATIONS OF THE HAMILTONIAN
In Section II, we introduced the nominal linear quantum system. This section defines the quadratic Hamiltonian
perturbations (e.g., see [8], [18]) for the quantum system
under consideration. We first define two general sets of
Hamiltonians in terms of a commutator decomposition, and
then present two specific sets of quadratic Hamiltonian
perturbations.
A. Commutator Decomposition
For the set of non-negative self-adjoint operators P and
given real parameters γ > 0, δ ≥ 0, a particular set of
perturbation Hamiltonians W1 is defined in terms of the
commutator decomposition
[V, H2] = [V, z†]w − w†[z, V]    (12)
for V ∈ P, where w and z are given vectors of operators.
W1 is then defined in terms of the sector bound condition
w†w ≤ (1/γ²) z†z + δ.    (13)
We define
W1 = { H2 : ∃ w, z such that (13) is satisfied and (12) is satisfied ∀ V ∈ P }.    (14)
B. Alternative Commutator Decomposition
Given a set of non-negative operators P, a self-adjoint
operator H1 , a coupling operator L, real parameters β ≥ 0
γ > 0, and a set of Popov scaling parameters Θ ⊂ [0, ∞),
we define a set of perturbation Hamiltonians W2 in terms of
the commutator decompositions [13]
[V − θH1, H2] = [V − θH1, z†]w − w†[z, V − θH1],
L(H2) ≤ L(z†)w + w†L(z) + β[z, L]†[z, L]    (15)
for V ∈ P and θ ∈ Θ, where w and z are given vectors of
operators. Note that (12) and (15) correspond to a general
quadratic perturbation of the Hamiltonian. This set W2 is
then defined in terms of the sector bound condition
(w − (1/γ) z)†(w − (1/γ) z) ≤ (1/γ²) z†z.    (16)
We define
W2 = { H2 ≥ 0 : ∃ w, z such that (15) and (16) are satisfied ∀ V ∈ P, θ ∈ Θ }.    (17)
C. Quadratic Hamiltonian Perturbation
We consider a set of quadratic perturbation Hamiltonians
that is in the form
H2 = (1/2) [ζ†  ζ^T] Δ [ζ ; ζ#]    (18)
where ζ = E1 a + E2 a# and Δ ∈ C^{2m×2m} is a Hermitian
matrix of the form
Δ = [ Δ1  Δ2 ; Δ2#  Δ1# ]    (19)
with Δ1 = Δ1† and Δ2 = Δ2^T.
Since the nominal system is linear, we use the relationship
z = [ζ ; ζ#] = [ E1  E2 ; E2#  E1# ] [a ; a#] = E [a ; a#].    (20)
Then
H2 = (1/2) [a†  a^T] E†ΔE [a ; a#].    (21)
When the matrix Δ is subject to the norm bound
‖Δ‖ ≤ 2/γ,    (22)
where ‖·‖ refers to the matrix induced norm, we define
W3 = { H2 of the form (18) and (19) such that condition (22) is satisfied }.    (23)
In [8], it has been proven that for any set of self-adjoint
operators P,
W3 ⊂ W1 .
(24)
When the matrix Δ is subject to the bounds
0 ≤ Δ ≤ (4/γ) I,    (25)
we define
W4 = { H2 of the form (18) and (19) such that condition (25) is satisfied }.    (26)
In [13], it has been proven that if [z, L] is a constant vector,
then for any set of self-adjoint operators P,
W4 ⊂ W2 .
(27)
IV. PERFORMANCE ANALYSIS
In this section, we provide several results on performance
analysis for quantum systems subject to a quadratic perturbation Hamiltonian. Also, the associated cost function is
defined in the following way:
J = lim sup_{T→∞} (1/T) ∫_0^T ⟨ [a†  a^T] R [a ; a#] ⟩ dt    (28)
where R > 0. We denote
W = [a†  a^T] R [a ; a#].    (29)
In order to prove the following theorems on performance
analysis, we require some algebraic identities.
Lemma 2: (See Lemma 4 of [8]) Suppose V ∈ P, H1 is
of the form (4) and L is of the form (7). Then
†
a
a
(P JM − M JP )
,
(30)
[V, H1 ] =
a#
a#
1
L(V ) = −
2
a
a#
†
†
†
(N JN JP + P JN JN )
I 0
†
+ Tr(P JN
N J),
0 0
†
a
a
a
a
[
,
P
] = 2JP
.
a#
a#
a#
a#
Lemma 3: For V ∈ P and z defined in (20),
a
[z, V ] = 2EJP
,
a#
a
a#
†
[V, z ][z, V ] = 4
a
a#
†
†
P JE EJP
a
a#
(31)
(32)
(33)
,
(34)
†
a
a
†
E
.
z z=
E
a#
a#
Proof: The proof follows from Lemma 2.
Proof: Since V ∈ P and H2 ∈ W1 ,
†
(35)
2
Lemma 4: (See Lemma 5 of [13]) For z defined in (20)
and L being of the form (7),
a
a
] = EJΣÑ T
(36)
,
Ñ
[z, L] = [E
a#
a#
is a constant vector, where
0 I
.
(37)
I 0
Lemma 5: (See Lemma 6 of [13]) For z defined in (20),
H1 defined in (4) and L being of the form (7), we have
1
a
− i[z, H1 ] + L(z) = E(−iJM − JN † JN )
(38)
a#
2
Σ=
and
a
i[z, V ] = 2iEJP
.
(39)
a#
Now we present two theorems (Theorem 1 and Theorem 2)
which can be used to analyze the performance of the given
quantum systems using a quantum small gain method and a
Popov type approach, respectively.
A. Performance Analysis Using the Small Gain Approach
Theorem 1: Consider an uncertain quantum system
(S, L, H), where H = H1 + H2 , H1 is in the form of (4), L
is of the form (7) and H2 ∈ W3 . If F = −iJM − 12 JN † JN
is Hurwitz,
"
#
†
F † P + P F + γE2 τE2 + R 2P JE †
<0
(40)
2EJP
−I/τ 2
has a solution P > 0 in the form of (10) and τ > 0, then
Z
1 T
J = lim sup
hW (t)idt
T →∞ T 0
Z
1 T †
δ
a
= lim sup
h a aT R
idt ≤ λ̃ + 2
#
a
τ
T →∞ T 0
(41)
where
I 0
λ̃ = Tr(P JN
N J).
(42)
0 0
In order to prove this theorem, we need the following lemma.
†
Lemma 6: Consider an open quantum system (S, L, H)
where H = H1 + H2 and H2 ∈ W1 , and the set of nonnegative self-adjoint operators P. If there exists a V ∈ P
and real constants λ̃ ≥ 0, τ > 0 such that
− i[V, H1 ] + L(V ) + τ 2 [V, z † ][z, V ] +
1 †
z z + W ≤ λ̃,
γ2τ 2
(43)
then
lim sup
T →∞
1
T
Z
T
hW (t)idt ≤ λ̃ +
0
δ
, ∀t ≥ 0.
τ2
(44)
G(V ) = −i[V, H1 ] + L(V ) − i[V, z † ]w + iw† [z, V ]. (45)
Also,
0 ≤ (τ [V, z † ] −
i †
i
w )(τ [V, z † ] − w† )†
τ
τ
(46)
w† w
= τ [V, z ][z, V ] + i[V, z ]w − iw [z, V ] + 2 .
τ
Substituting (45) into (46) and using the sector bound
condition (13), the following inequality is obtained:
2
†
†
†
G(V ) ≤ −i[V, H1 ]+L(V )+τ 2 [V, z † ][z, V ]+
1 †
δ
z z+ 2 .
γ2τ 2
τ
(47)
Hence,
δ
.
(48)
τ2
Consequently, the conclusion in the lemma follows from
Lemma 1.
2
Proof of Theorem 1: Using the Schur complement [26],
the inequality (40) is equivalent to
G(V ) + W ≤ λ̃ +
F † P + P F + 4τ 2 P JE † EJP +
E†E
+ R < 0.
γ2τ 2
(49)
If the Riccati inequality (49) has a solution P > 0 of the
form (10) and τ > 0, according to Lemma 2 and Lemma 3,
we have
1
− i[V, H1 ] + L(V ) + τ 2 [V, z † ][z, V ] + 2 2 z † z + W =
γ τ
!
†
†
2
†
F
P
+
P
F
+
4τ
P
JE
EJP
a
a
†
a#
a#
+ γE2 τE2 + R
I 0
+ Tr(P JN †
N J).
0 0
(50)
Therefore, it follows from (40) that condition (43) is satisfied
with
I 0
λ̃ = Tr(P JN †
N J) ≥ 0.
(51)
0 0
Then, according to the relationship (24) and Lemma 6, we
have
Z
1 T
lim sup
hW (t)idt
T →∞ T 0
Z
δ
1 T †
a
h a aT R
idt ≤ λ̃ + 2 .
= lim sup
#
a
T
τ
T →∞
0
(52)
2
B. Performance Analysis Using the Popov Approach
Theorem 2: Consider an uncertain quantum system
(S, L, H), where H = H1 + H2 , H1 is in the form of (4), L
is of the form (7) and H2 ∈ W4 . If F = −iJM − 21 JN † JN
is Hurwitz, and
P F + F †P + R
−2iP JE † + E † + θF † E †
<0
2iEJP + E + θEF
−γI
(53)
has a solution P > 0 in the form of (10) for some θ ≥ 0,
then
Z
1 T
hW (t)idt
J = lim sup
T →∞ T 0
Z
1 T †
a
= lim sup
h a aT R
idt ≤ λ (54)
a#
T →∞ T 0
where
4θ #
Ñ ΣJE † EJΣÑ T .
γ
(55)
In order to prove this theorem, we need the following lemma.
λ = Tr(P JN †
I
0
0
0
N J) +
Lemma 7: (See Theorem 1 of [13]) Consider a set of nonnegative self-adjoint operators P, an open quantum system
(S, L, H) and an observable W , where H = H1 + H2 and
H2 ∈ W2 defined in (17). Suppose there exists a V ∈ P and
real constants θ ∈ Θ, λ ≥ 0 such that
1
(i[z, V − θH1 ] + θL(z) + z)†
γ
× (i[z, V − θH1 ] + θL(z) + z) + θβ[z, L]† [z, L] + W ≤ λ.
(56)
− i[V, H1 ] + L(V ) +
Then
lim sup
T →∞
1
T
Z
T
hW (t)idt ≤ λ.
(57)
0
Here W (t) denotes the Heisenberg evolution of the operator
W and h·i denotes quantum expectation.
Proof of Theorem 2: Using the Schur complement, (53) is
equivalent to
1
(−2iP JE † + E † + θF † E † )
γ
× (2iEJP + E + θEF ) + R < 0.
P F + F †P +
(58)
According to Lemma 2 and Lemma 5, we have
1
(i[z, V − θH1 ] + θL(z) + z)†
γ
4θ
× (i[z, V − θH1 ] + θL(z) + z) + [z, L]† [z, L] + W
γ
†
†
PF + F P
a
a
1
†
†
† †
+ γ (−2iP JE + E + θF E )
=
a#
a#
×(2iEJP + E + θEF ) + R
4θ
I 0
+ Tr(P JN †
N J) + Ñ # ΣJE † EJΣÑ T .
0 0
γ
(59)
− i[V, H1 ] + L(V ) +
From this and using the relationship (27), Lemma 4 and
Lemma 7, we obtain
Z
1 T
lim sup
hW (t)idt
T →∞ T 0
†
Z
(60)
1 ∞ a
a
h
R
idt
= lim sup
#
#
a
a
T →∞ T 0
≤λ
where
λ = Tr(P JN
†
I
0
0
0
N J) +
4θ #
Ñ ΣJE † EJΣÑ T .
γ
(61)
2
V. COHERENT GUARANTEED COST
CONTROLLER DESIGN
In this section, we design a coherent guaranteed cost
controller for the uncertain quantum system subject to a
quadratic perturbation Hamiltonian to make the control system not only stable but also to achieve an adequate level of
performance. The coherent controller is realized by adding a
controller Hamiltonian H3 . H3 is assumed to be in the form
1 †
a
a aT K
(62)
H3 =
a#
2
where K ∈ C2n×2n is a Hermitian matrix of the form
K1 K2
K=
(63)
K2# K1#
and K1 = K1† , K2 = K2T . Associated with this system is
the cost function J
†
Z
1 ∞ a
a
2
(R
+
ρK
)
idt (64)
J = lim sup
h
a#
a#
T →∞ T 0
where ρ ∈ (0, ∞) is a weighting factor. We let
†
a
a
2
.
(R
+
ρK
)
W =
a#
a#
(65)
The following theorems (Theorem 3 and Theorem 4)
present our main results on coherent guaranteed cost controller design for the given quantum system using a quantum
small gain method and a Popov type approach, respectively.
A. Coherent Controller Design Using the Small Gain Approach
Theorem 3: Consider an uncertain quantum system
(S, L, H), where H = H1 + H2 + H3 , H1 is in the form
of (4), L is of the form (7), H2 ∈ W3 and the controller
Hamiltonian H3 is in the form of (62). With Q = P −1 ,
Y = KQ and F = −iJM − 21 JN † JN , if there exist a
matrix Q = q ∗ I (q is a constant scalar and I is the identity
matrix), a Hermitian matrix Y and a constant τ > 0, such
that
1
A + 4τ 2 JE † EJ
Y
qR 2
qE †
Y
−I/ρ
0
0
< 0 (66)
1
qR 2
0
−I
0
2 2
qE
0
0
−γ τ I
where A = qF † + F q + iY J − iJY , then the associated cost
function satisfies the bound
Z
1 T
lim sup
hW (t)idt
T →∞ T 0
†
Z
1 ∞ a
a
2
= lim sup
h
(R
+
ρK
)
idt (67)
a#
a#
T →∞ T 0
δ
≤ λ̃ + 2
τ
where
I 0
†
λ̃ = Tr(P JN
N J).
(68)
0 0
Proof: Suppose the conditions of the theorem are satisfied.
Using the Schur complement, (66) is equivalent to
1
A + 4τ 2 JE † EJ + ρY Y qR 2
qE †
1
< 0.
qR 2
−I
0
2 2
qE
0
−γ τ I
(69)
Applying the Schur complement again, it follows that (69)
is equivalent to
A + 4τ 2 JE † EJ + ρY Y + q 2 R
qE †
< 0 (70)
qE
−γ 2 τ 2 I
Hence, we minimize ξ subject to (79) and (66) in Theorem
3. This is a standard LMI problem.
and (70) is equivalent to
qF † + F q + iY J − iJY + 4τ 2 JE † EJ
+ ρY Y + q 2 (
E†E
+ R) < 0.
γ2τ 2
(71)
Substituting Y = Kq = qK † into (71), we obtain
q(F − iJK)† + (F − iJK)q + 4τ 2 JE † EJ
+ q2 (
E†E
+ R + ρK 2 ) < 0.
γ2τ 2
(72)
Since P = Q−1 , premultiplying and postmultiplying this
inequality by the matrix P , we have
(F − iJK)† P + P (F − iJK) + 4τ 2 P JE † EJP
We know that P = Q−1 = q −1 I and apply the Schur
complement to inequality (77), so that we have
1
−ξ + τδ2 B 2
≤0
(78)
1
B2
−q
I 0
†
where B = Tr(JN
N J). Applying the Schur
0 0
complement again, it is clear that (78) is equivalent to
1
1
B2
−ξ δ 2
1
δ 2 −τ 2
(79)
0 ≤ 0.
1
B2
0
−q
B. Coherent Controller Design Using the Popov Approach
Theorem 4: Consider an uncertain quantum system
(S, L, H), where H = H1 + H2 + H3 , H1 is in the form
of (4), L is of the form (7), H2 ∈ W4 , the controller
Hamiltonian H3 is in the form of (62). With Q = P −1 ,
Y = KQ and F = −iJM − 21 JN † JN , if there exist a
matrix Q = q ∗ I (q is a constant scalar and I is the identity
matrix), a Hermitian matrix Y and a constant θ > 0, such
that
1
A
B†
Y
qR 2
B
−γI
0
0
<0
(80)
Y
0
−I/ρ
0
1
0
0
−I
qR 2
(73)
E†E
+ R + ρK 2 < 0.
2
2
γ τ
It follows straightforwardly from (73) that F − iJK is
Hurwitz. We also know that
where A = F q + qF † − iJY + iY J and B = 2iEJ + Eq +
1 †
2
†
− i[V, H1 + H3 ] + L(V ) + τ [V, z ][z, V ] + 2 2 z z + W θEF q − iθEJY , then the associated cost function satisfies
γ τ
the bound
†
(F − iJK)† P + P (F − iJK)
Z
a
+4τ 2 P JE † EJP + E † E/(γ 2 τ 2 ) a#
1 T
=
lim
sup
hW (t)idt
a#
a
+R + ρK 2
T →∞ T 0
†
Z
I 0
1 ∞ a
a
2
+ Tr(P JN †
N J).
(74)
(R
+
ρK
)
idt ≤ λ
=
lim
sup
h
0 0
a#
a#
T →∞ T 0
(81)
According to the relationship (24) and Lemma 6, we have
Z T
1
where
hW (t)idt
lim sup
T
T →∞
0
4θ
I 0
†
†
Z ∞
λ
=
Tr(P
JN
N J) + Ñ # ΣJE † EJΣÑ T .
1
a
a
2
0
0
(75)
γ
= lim sup
h
(R + ρK )
idt
a#
a#
(82)
T →∞ T 0
δ
Proof: Suppose the conditions of the theorem are satisfied.
≤ λ̃ + 2
Using the Schur complement, (80) is equivalent to
τ
where
1
A + γ1 B † B
Y
qR 2
I 0
†
λ̃ = Tr(P JN
N J).
(76)
0 0
(83)
Y
−I/ρ
0 < 0.
1
2
2
qR
0
−I
Remark 1: In order to design a coherent controller which
minimizes the cost bound (67) in Theorem 3, we need to We then apply the Schur complement to inequality (83) and
obtain
formulate an inequality
"
#
1
A + γ1 B † B + ρY Y qR 2
δ
I
0
< 0.
(84)
1
Tr(P JN †
N J) + 2 ≤ ξ.
(77)
qR 2
−I
0 0
τ
+
Also, (84) is equivalent to
†
F q + qF − iJY + iY J+
1
(−2iJE † + qE † + θqF † E † + iθY JE † )
γ
× (2iEJ + Eq + θEF q − iθEJY )
(85)
2
Remark 2: For each fixed value of θ, the problem is an
LMI problem. Then, we can iterate on θ ∈ [0, ∞) and choose
the value which minimizes the cost bound (82) in Theorem
4.
VI. ILLUSTRATIVE EXAMPLE
In order to illustrate our methods and compare their
performance, we use the same quantum system considered in
[23] as an example. The system corresponds to a degenerate
parametric amplifier and its (S, L, H) description has the
following form
H = (i/2)((a†)² − a²),   S = I,   L = √κ a.   (92)

We let the perturbation Hamiltonian be

H2 = (1/2) [a†, a^T] [1, 0.5i; −0.5i, 1] [a; a#]   (93)

and the nominal Hamiltonian be

H1 = (1/2) [a†, a^T] [−1, 0.5i; −0.5i, −1] [a; a#]   (94)

so that H1 + H2 = H. The corresponding parameters considered in Theorem 1, Theorem 2, Theorem 3 and Theorem 4 are as follows:

M = [−1, 0.5i; −0.5i, −1],   N = [√κ, 0; 0, √κ],
F = [−κ/2 + i, 0.5; 0.5, −κ/2 − i],   E = I,   (95)

and

Δ = [1, 0.5i; −0.5i, 1].   (96)
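As a quick sanity check of these example parameters (a minimal sketch assuming NumPy; κ = 4.5 is simply the value used later for the controller realization), one can form F = −iJM − (1/2)JN†JN directly from M and N and compare it with the expression in (95):

```python
import numpy as np

kappa = 4.5                                      # example damping value (assumed here)
J = np.diag([1.0, -1.0])
M = np.array([[-1.0, 0.5j], [-0.5j, -1.0]])      # from (95)
N = np.sqrt(kappa) * np.eye(2)                   # from (95)

F = -1j * J @ M - 0.5 * J @ N.conj().T @ J @ N
F_expected = np.array([[-kappa/2 + 1j, 0.5], [0.5, -kappa/2 - 1j]])

print(np.allclose(F, F_expected))                # True: matches (95)
print(np.linalg.eigvals(F))                      # spectrum of F for this value of kappa
```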
To illustrate Theorem 1 and Theorem 3, we consider
H2 ∈ W3 . Hence, γ = 1 is chosen to satisfy (22). The
performance using the small gain approach for the uncertain
quantum system is shown in Figure 1. In Figure 1, the dashed
line represents the cost bound for the linear quantum system
considered in Theorem 1 as a function of the parameter
κ. The solid line shows the system performance with the
coherent controller designed in Theorem 3. Compared to
the performance without a controller, the coherent controller
can guarantee that the system is stable for a larger range
of the damping parameter κ and gives the system improved
performance.
Now we illustrate one approach to realizing the desired
controller. For instance, when κ = 4.5, by using the controller design method in Theorem 3, we have the desired
controller Hamiltonian as

H3 = (1/2) [a†, a^T] [0, −0.5i; 0.5i, 0] [a; a#].   (97)
This controller Hamiltonian can be realized by connecting
the degenerate parametric amplifier with a static squeezer as
shown in Figure 2. This static squeezer is a static Bogoliubov
component which corresponds to the Bogoliubov transformation [27], [28]. Also, we have the following definition:
Definition 1: (see [27], [28]) A static Bogoliubov component is a component that implements the Bogoliubov transformation

[du(t); du#(t)] = B [dy(t); dy#(t)],   where B = [B1, B2; B2#, B1#],   B†JB = J.

To realize H3 in (97), we let the matrix B = [5/4, −3/4; −3/4, 5/4], which satisfies the Bogoliubov condition B†JB = J, and take κ̃ = 1/3. Therefore, the overall Hamiltonian of the closed loop system is H = H1 + H2 + H3, which achieves the controller design goal. A detailed procedure for obtaining the matrix B is given in the appendix.
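As a small numerical check (a sketch assuming NumPy), one can verify that this B indeed satisfies the Bogoliubov condition B†JB = J:

```python
import numpy as np

B = np.array([[1.25, -0.75], [-0.75, 1.25]])   # the matrix B used above
J = np.diag([1.0, -1.0])

print(np.allclose(B.conj().T @ J @ B, J))      # True: B is a valid Bogoliubov matrix
```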
Fig. 3. Performance versus θ.
In Figure 4, the dashed line shows the performance for
the given system considered in Theorem 2 and the solid line
describes the cost bound for the linear quantum system with
the coherent controller considered in Theorem 4. As can be
seen in Figure 4, the system with a controller has better
performance than the case without a controller.
Fig. 1. Guaranteed cost bounds for an uncertain quantum system with a controller (solid line) and without a controller (dashed line) using the small gain approach.
Fig. 2. Degenerate parametric amplifier coupled to a static squeezer.
To illustrate Theorem 2 and Theorem 4, we consider H2 ∈
W4 . Hence, γ = 2 is chosen to satisfy (25). The results
using the Popov approach are shown in Figure 3 and Figure
4. Figure 3 demonstrates how to choose the value of θ. We
consider the same example as above with κ = 3.8 and iterate
on θ ∈ [0, 1]. Figure 3 shows the cost bound for this quantum
system obtained in Theorem 4 as a function of the parameter
θ. It is clear that the minimal cost bound is achieved when
θ = 0.1. Therefore, we choose θ = 0.1 for κ = 3.8 and use
a similar method to choose θ for other values of κ.
Fig. 4. Guaranteed cost bounds for the uncertain quantum system with a controller (solid line) and without a controller (dashed line) using the Popov approach.
Also, we can observe that the method in Theorem 3 can
only make the quantum system stable for κ > 4 in the
example. Therefore, compared with the results in Figure 1,
the Popov method obtains a lower cost bound and a larger
range of robust stability as shown in Figure 4. This is as
expected, since the Popov approach allows for a more general
class of Lyapunov functions than the small gain approach.
VII. CONCLUSION
In this paper, the small gain method and the Popov
approach, respectively, are used to analyze the performance
of an uncertain linear quantum system subject to a quadratic
perturbation in the system Hamiltonian. Then, we add a
coherent controller to make the given system not only stable
but also to achieve improved performance. Through an illustrative example, we have also shown that the Popov method provides a considerable improvement over the small gain method in terms of system performance. Future work will include
the extension of these approaches to nonlinear uncertain
quantum systems [24].
A PPENDIX
The detailed procedure for realizing the desired controller is shown below. We consider a degenerate parametric amplifier (DPA) as an example. Based on the (S, L, H) description in (92), we can calculate the following quantum stochastic differential equations [1] describing the DPA:

[da(t); da#(t)] = [−κ/2, 1; 1, −κ/2] [a(t); a#(t)] dt − [√κ, 0; 0, √κ] [dA1(t); dA1#(t)];
dA1^out(t) = √κ a(t) dt + dA1(t).
Recall that when κ = 4.5, the controller design method in Theorem 3 gives the desired controller Hamiltonian in (97). Next, we show how to realize this controller Hamiltonian by connecting this degenerate parametric amplifier with a static squeezer as shown in Fig. 2. The corresponding quantum stochastic differential equations for this DPA are as follows:
[da(t); da#(t)] = [−(κ+κ̃)/2, 1; 1, −(κ+κ̃)/2] [a(t); a#(t)] dt − [√κ, 0; 0, √κ] [dA1(t); dA1#(t)] − [√κ̃, 0; 0, √κ̃] [dA2(t); dA2#(t)];
dA1^out(t) = √κ a(t) dt + dA1(t);
dA2^out(t) = √κ̃ a(t) dt + dA2(t).   (98)
This static squeezer is a static Bogoliubov component satisfying Definition 1. According to Definition 1, we have the following relation:

[dA2(t); dA2#(t)] = B [dA2^out(t); dA2^out#(t)].   (99)
According to (98), we have

[dA2^out(t); dA2^out#(t)] = [√κ̃, 0; 0, √κ̃] [a(t); a#(t)] dt + [dA2(t); dA2#(t)].   (100)
Substituting (99) into (100), we obtain that

B⁻¹ [dA2(t); dA2#(t)] = [√κ̃, 0; 0, √κ̃] [a(t); a#(t)] dt + [dA2(t); dA2#(t)].
Hence,

(B⁻¹ − I) [dA2(t); dA2#(t)] = [√κ̃, 0; 0, √κ̃] [a(t); a#(t)] dt.   (101)
We now assume that the inverse of B⁻¹ − I exists. It then follows from (101) that we can write

[dA2(t); dA2#(t)] = (B⁻¹ − I)⁻¹ [√κ̃, 0; 0, √κ̃] [a(t); a#(t)] dt.   (102)
Substituting (102) into the first equation in (98), we get

[da(t); da#(t)] = [−(κ+κ̃)/2, 1; 1, −(κ+κ̃)/2] [a(t); a#(t)] dt − [√κ, 0; 0, √κ] [dA1(t); dA1#(t)]
  − [√κ̃, 0; 0, √κ̃] (B⁻¹ − I)⁻¹ [√κ̃, 0; 0, √κ̃] [a(t); a#(t)] dt
= [−(κ+κ̃)/2, 1; 1, −(κ+κ̃)/2] [a(t); a#(t)] dt − [√κ, 0; 0, √κ] [dA1(t); dA1#(t)]
  − (B⁻¹ − I)⁻¹ [κ̃, 0; 0, κ̃] [a(t); a#(t)] dt.   (103)
According to Definition 1, the B matrix satisfies the following relation:

JB†JB = [I, 0; 0, −I][B1†, B2^T; B2†, B1^T][I, 0; 0, −I][B1, B2; B2#, B1#]
      = [B1†, −B2^T; −B2†, B1^T][B1, B2; −B2#, −B1#]
      = [B1†B1 − B2^T B2#, B1†B2 − B2^T B1#; −B2†B1 + B1^T B2#, −B2†B2 + B1^T B1#] = I.

In our case, where B is a 2 × 2 matrix, we have

B1†B2 − B2^T B1# = 0;   −B2†B1 + B1^T B2# = 0.

Moreover, we need to have the following relation:

B1†B1 − B2^T B2# = −B2†B2 + B1^T B1# = (B1x² + B1y²) − (B2x² + B2y²) = I,
where B1 = B1x + iB1y and B2 = B2x + iB2y. Thus, we can assume that

B1x² + B1y² = cosh(r)²;   B2x² + B2y² = sinh(r)².
Hence, we may write the matrix B in the following form:

B = [cosh(r)cos(α) + i·cosh(r)sin(α), sinh(r)cos(β) + i·sinh(r)sin(β); sinh(r)cos(β) − i·sinh(r)sin(β), cosh(r)cos(α) − i·cosh(r)sin(α)].

Since B and I are 2 × 2 matrices, we have

(B⁻¹ − I)⁻¹ = 1/(2 − 2B1x) [B1 − 1, B2; B2#, B1# − 1].
Therefore, the last term on the right side of equation (103) can be expressed as:

κ̃(B⁻¹ − I)⁻¹ = κ̃/(2 − 2cosh(r)cos(α)) × ( [cosh(r)cos(α) − 1, 0; 0, cosh(r)cos(α) − 1]
    + [i·cosh(r)sin(α), sinh(r)cos(β) + i·sinh(r)sin(β); sinh(r)cos(β) − i·sinh(r)sin(β), −i·cosh(r)sin(α)] )
  = [−κ̃/2, 0; 0, −κ̃/2] + κ̃/(2 − 2cosh(r)cos(α))
    × [i·cosh(r)sin(α), sinh(r)cos(β) + i·sinh(r)sin(β); sinh(r)cos(β) − i·sinh(r)sin(β), −i·cosh(r)sin(α)].   (104)
Substituting (104) into (103), we have

[da(t); da#(t)] = [−κ/2, 1; 1, −κ/2] [a(t); a#(t)] dt − [√κ, 0; 0, √κ] [dA1(t); dA1#(t)]
  − κ̃/(2 − 2cosh(r)cos(α)) × [i·cosh(r)sin(α), sinh(r)cos(β) + i·sinh(r)sin(β); sinh(r)cos(β) − i·sinh(r)sin(β), −i·cosh(r)sin(α)] [a(t); a#(t)] dt.   (105)
Therefore, the closed loop system with the DPA and a static squeezer has the dynamical equation (105). The difference between this closed loop system (105) and the original uncertain quantum system (98) is the addition of the last term on the right side of (105), which corresponds to the controller Hamiltonian H3. To realize our desired controller Hamiltonian H3 as in (97), the following relation is required:

−κ̃/(2 − 2cosh(r)cos(α)) × [i·cosh(r)sin(α), sinh(r)cos(β) + i·sinh(r)sin(β); sinh(r)cos(β) − i·sinh(r)sin(β), −i·cosh(r)sin(α)] = −iJK = [0, −0.5; −0.5, 0].   (106)
When α = 0, β = 0, sinh(r) = −3/4, cosh(r) = 5/4 and κ̃ = 1/3, the relationship (106) holds. In this case, we may take the B matrix as B = [5/4, −3/4; −3/4, 5/4], which satisfies the Bogoliubov condition B†JB = J.
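A quick numerical check of (106) for these parameter values (a minimal sketch assuming NumPy; K is the matrix read off from the controller Hamiltonian (97)):

```python
import numpy as np

alpha, beta = 0.0, 0.0
cr, sr, ktilde = 1.25, -0.75, 1.0 / 3.0        # cosh(r), sinh(r), kappa-tilde from the text

block = np.array([[1j*cr*np.sin(alpha), sr*np.cos(beta) + 1j*sr*np.sin(beta)],
                  [sr*np.cos(beta) - 1j*sr*np.sin(beta), -1j*cr*np.sin(alpha)]])
lhs = -ktilde / (2.0 - 2.0*cr*np.cos(alpha)) * block

J = np.diag([1.0, -1.0])
K = np.array([[0.0, -0.5j], [0.5j, 0.0]])      # from (97)
rhs = -1j * J @ K                              # equals [[0, -0.5], [-0.5, 0]]

print(np.allclose(lhs, rhs))                   # True: (106) holds for these values
```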
R EFERENCES
[1] M. R. James, H. I. Nurdin, and I. R. Petersen, “H ∞ control of
linear quantum stochastic systems,” IEEE Transactions on Automatic
Control, vol. 53, no. 8, pp. 1787-1803, 2008.
[2] I. R. Petersen, “Quantum linear systems theory,” in Proceedings of the
19th International Symposium on Mathematical Theory of Networks
and Systems, Budapest, Hungary, July 2010.
[3] D. Dong and I. R. Petersen, “Sliding mode control of quantum
systems,” New Journal of Physics, vol. 11, p. 105033, 2009.
[4] M. Yanagisawa and H. Kimura, “Transfer function approach to quantum control-part I: Dynamics of quantum feedback systems,” IEEE
Transactions on Automatic Control, vol. 48, no. 12, pp. 2107-2120,
2003.
[5] M. Yanagisawa and H. Kimura, “Transfer function approach to
quantum control-part II: Control concepts and applications,” IEEE
Transactions on Automatic Control, vol. 48, no. 12, pp. 2121-2132,
2003.
[6] M. James and J. Gough, “Quantum dissipative systems and feedback
control design by interconnection,” IEEE Transactions on Automatic
Control, vol. 55, no. 8, pp. 1806-1820, 2010.
[7] J. Gough and M. R. James, “The series product and its application to
quantum feedforward and feedback networks,” IEEE Transactions on
Automatic Control, vol. 54, no. 11, pp. 2530-2544, 2009.
[8] I. R. Petersen, V. Ugrinovskii, and M. R. James, “Robust stability of
uncertain quantum systems,” in Proceedings of the 2012 American
Control Conference, Montreal, Canada, June 2012.
[9] I. R. Petersen, V. Ugrinovskii, and M. R. James, “Robust stability of
quantum systems with a nonlinear coupling operator,” in Proceedings
of the 51st IEEE Conference on Decision and Control, Maui, December 2012.
[10] I. R. Petersen, “Quantum robust stability of a small Josephson junction
in a resonant cavity,” in 2012 IEEE Multi-conference on Systems and
Control, Dubrovnik, Croatia, October 2012.
[11] I. R. Petersen, “Robust stability analysis of an optical parametric
amplifier quantum system,” in Proceedings of the 2013 Asian Control
Conference, Istanbul, Turkey, July 2013.
[12] I. R. Petersen, “Robust stability of quantum systems with nonlinear
dynamic uncertainties,” in Proceedings of the 52nd IEEE Conference
on Decision and Control, Florence, Italy, December 2013.
[13] M. R. James, I. R. Petersen, and V. Ugrinovskii, “A Popov stability
condition for uncertain linear quantum systems,” in Proceedings of
the 2013 American Control Conference, Washington, DC, USA, June
2013.
[14] I. R. Petersen, “Quantum Popov robust stability analysis of an optical
cavity containing a saturated Kerr medium,” in Proceedings of the
2013 European Control Conference, Zurich, Switzerland, July 2013.
[15] J. E. Gough, M. R. James, and H. I. Nurdin, “Squeezing components
in linear quantum feedback networks,” Physical Review A, vol. 81, p.
023804, 2010.
[16] H. M. Wiseman and G. J. Milburn, Quantum Measurement and
Control, Cambridge, U.K.: Cambridge University Press, 2010.
[17] D. Dong and I. R. Petersen, “Quantum control theory and applications:
a survey,” IET Control Theory & Applications, vol. 4, pp. 2651-2671,
2010.
[18] I. R. Petersen, V. Ugrinovskii, and M. R. James, “Robust stability of
uncertain linear quantum systems,” Philosophical Transactions of the
Royal Society A, vol. 370, no. 1979, pp. 5354-5363, 2012.
[19] C. D’Helon and M. R. James. “Stability, gain, and robustness in
quantum feedback networks,” Physical Review A, vol. 73, p. 053803,
2006.
[20] H. I. Nurdin, M. R. James, and I. R. Petersen, “Coherent quantum
LQG control,” Automatica, vol. 45, no. 8, pp. 1837-1846, 2009.
[21] A. I. Maalouf and I. R. Petersen, “Coherent H ∞ control for a class
of linear complex quantum systems,” IEEE Transactions on Automatic
Control, vol. 56, no. 2, pp. 309-319, 2011.
[22] C. Xiang, I. R. Petersen and D. Dong, “Performance analysis and
coherent guaranteed cost control for uncertain quantum systems,” in
the Proceedings of the 2014 European Control Conference, Strasbourg,
France, June 2014.
[23] C. Xiang, I. R. Petersen and D. Dong, “A Popov approach to performance analysis and coherent guaranteed cost control for uncertain
quantum systems,” in the Proceedings of the 2014 Australian Control
Conference, Canberra, Australia, November 2014.
[24] I. R. Petersen, “Guaranteed non-quadratic performance for quantum
systems with nonlinear uncertainties,” in Proceedings of the 2014
American Control Conference, Portland, Oregon, USA, June 2014.
[25] H. Khalil, Nonlinear Systems, 3rd ed. Upper Saddle River, NJ, USA:
Prentice-Hall, 2002.
[26] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix
Inequalities in Systems and Control Theory. Philadelphia, PA: SIAM,
1994.
[27] J. E. Gough, M. R. James, and H. I. Nurdin, “Squeezing components
in linear quantum feedback networks,” Physical Review A, vol. 81, p.
023804, 2010.
[28] S. L. Vuglar and I. R. Petersen, “Singular perturbation approximations for general linear quantum systems,” in Proceedings of the Australian Control Conference, Sydney, Australia, Nov 2012, pp. 459-463.
LOCAL COHOMOLOGY OF DU BOIS SINGULARITIES AND
APPLICATIONS TO FAMILIES
arXiv:1605.02755v3 [math.AG] 14 Apr 2017
LINQUAN MA, KARL SCHWEDE, AND KAZUMA SHIMOMOTO
Abstract. In this paper we study the local cohomology modules of Du Bois singularities.
Let (R, m) be a local ring; we prove that if R_red is Du Bois, then H^i_m(R) → H^i_m(R_red) is
surjective for every i. We find many applications of this result. For example, we answer a
question of Kovács and the second author [KS16b] on the Cohen-Macaulay property of Du
Bois singularities. We obtain results on the injectivity of Ext that provide substantial partial answers to questions in [EMS00] in characteristic 0. These results can also be viewed as
generalizations of the Kodaira vanishing theorem for Cohen-Macaulay Du Bois varieties. We
prove results on the set-theoretic Cohen-Macaulayness of the defining ideal of Du Bois singularities, which are characteristic 0 analogs and generalizations of results of Singh-Walther
and answer some of their questions in [SW05]. We extend results of Hochster-Roberts on
the relation between Koszul cohomology and local cohomology for F -injective and Du Bois
singularities [HR76]. We also prove that singularities of dense F -injective type deform.
1. Introduction
The notion of Du Bois singularities was introduced by Steenbrink based on work of Du Bois
[DB81] which itself was an attempt to localize Deligne’s Hodge theory of singular varieties
[Del74]. Steenbrink studied them initially because families with Du Bois fibers have remarkable base-change properties [Ste81]. On the other hand, Du Bois singularities have recently
found new connections across several areas of mathematics [KK10, KS16a, Lee09, CHWW11,
HJ14]. In this paper we find numerous applications of Du Bois singularities especially to
questions of local cohomology. Our key observation is as follows.
Key Lemma (Lemma 3.3). Suppose (S, m) is a local ring essentially of finite type over C and that S_red = S/√0 has Du Bois singularities. Then

H^i_m(S) → H^i_m(S_red)

is surjective for every i.
In fact, we obtain this surjectivity for Du Bois pairs, but we leave the statement simple
in the introduction. Utilizing this lemma and related methods, we prove several results.
Theorem A (Corollary 3.9). Let X be a reduced scheme and let H ⊆ X be a Cartier divisor.
If H is Du Bois and X\H is Cohen-Macaulay, then X is Cohen-Macaulay and hence so is
H.
The first named author was supported in part by the NSF Grant DMS #1600198 and NSF CAREER
Grant DMS #1252860/1501102.
The second named author was supported in part by the NSF FRG Grant DMS #1265261/1501115 and
NSF CAREER Grant DMS #1252860/1501102.
The third named author is partially supported by Grant-in-Aid for Young Scientists (B) # 25800028.
1
This first consequence answers a question of Kovács and the second author and should be
viewed as a generalization of several results in [KK10, Section 7]. In particular, we do not
need a projective family. This result holds if one replaces Cohen-Macaulay with Sk and also
generalizes to the context of Du Bois pairs as in [KS16b]. Theorem A should also be viewed as
a characteristic 0 analog of [FW89, Proposition 2.13] and [HMS14, Corollary A.4]. Theorem
A also implies that if X \ H has rational singularities and H has Du Bois singularities, then
X has rational singularities (see Corollary 3.10), generalizing [KS16b, Theorem E].
Our next consequence of the key lemma is the following.
Theorem B (Proposition 4.1). Let (R, m) be a Gorenstein local ring essentially of finite
type over C, and let I ⊆ R be an ideal such that R/I has Du Bois singularities. Then the
natural map Ext^j_R(R/I, R) → H^j_I(R) is injective for every j.
This gives a partial characteristic 0 answer to [EMS00, Question 6.2], asking when such
maps are injective. A special case of the analogous characteristic p > 0 result was obtained
by Singh-Walther [SW07]. This result leads to an answer of [DN16, Question 7.5] on the
bounds on the projective dimension of Du Bois and log canonical singularities. In the graded
setting, we can prove a much stronger result on the injectivity of Ext:
Theorem C (Theorem 4.5). Let (R, m) be a reduced Noetherian N-graded (R0 = C)-algebra
with m the unique homogeneous maximal ideal. Suppose RP is Du Bois for all P 6= m.
Write R = A/I where A = C[x1 , . . . , xn ] is a polynomial ring with deg xi = di > 0 and I is a
homogeneous ideal. Then the natural degree-preserving map Ext^j_A(R, A) → H^j_I(A) induces an injection

[Ext^j_A(R, A)]_{≥ −d} ↪ [H^j_I(A)]_{≥ −d}

for every j, where d = Σ_i d_i.
Theorem C leads to new vanishing results for local cohomology of N-graded rings with isolated non-Du Bois singularities (Theorem 4.8), a generalization of the Kodaira vanishing theorem for Cohen-Macaulay Du Bois projective varieties.
Our next consequence of the key lemma answers a longstanding question, but first we
give some background. Du Bois singularities are closely related to the notion of F -injective
singularities (rings where Frobenius acts injectively on local cohomology of maximal ideals).
In particular, it is conjectured that X has Du Bois singularities if and only if its reduction
to characteristic p > 0 has F -injective singularities for a Zariski dense set of primes p ≫ 0
(this is called dense F -injective type). This conjecture is expected to be quite difficult as it
is closely related to asking for infinitely many ordinary characteristic p > 0 reductions for
smooth projective varieties over C [BST13]. On the other hand, several recent papers have
studied how Du Bois and F -injective singularities deform [KS16a, HMS14]. We know that if
a Cartier divisor H ⊆ X has Du Bois singularities, then X also has Du Bois singularities near
H. However, the analogous statement for F -injective singularities in characteristic p > 0 is
open and has been since [Fed83]. In fact F -injective singularities were introduced because it
was observed that Cohen-Macaulay F -injective deform in this way but F -pure singularities1
do not. We show that at least the property of having dense F -injective type satisfies such a
deformation result, in other words that F -injectivity deforms when p is large.
1We
now know that F -pure singularities are analogs of log canonical singularities [HW02].
2
Theorem D (Theorem 5.3). Let (R, m) be a local ring essentially of finite type over C and
let x be a nonzerodivisor on R. Suppose R/xR has dense F -injective type. Then for infinitely
many p > 0, the Frobenius action x^{p−1}F on H^i_{m_p}(R_p) is injective for every i, where (R_p, m_p)
is the reduction mod p of R. In particular, R has dense F -injective type.
Our final result is a characteristic 0 analog of a strengthening of the main result of [HR76],
and we can give a characteristic p > 0 generalization as well.
Theorem E (Corollary 4.13). Let R be a Noetherian N-graded k-algebra, with m the unique
homogeneous maximal ideal. Suppose R is equidimensional and Cohen-Macaulay on Spec R−
{m}. Assume one of the following:
(a) k has characteristic p > 0 and R is F -injective.
(b) k has characteristic 0 and R is Du Bois.
Then [H^r(x; R)]_0 ≅ H^r_m(R) for every r < n = dim R and every homogeneous system of parameters x = x_1, ..., x_n, where H^r(x; R) denotes the r-th Koszul cohomology of x. In
other words, it is not necessary to take a direct limit when computing the local cohomology!
In fact, we prove a more general result, Theorem 4.12, which does not require any F-injective or Du Bois conditions, from which Theorem E follows immediately thanks to our injectivity and vanishing results (Theorem 4.5, Theorem 4.8).
Acknowledgements: The authors are thankful to Sándor Kovács, Shunsuke Takagi and the
referee for valuable discussions and comments on a previous draft of the paper. We would
like to thank Pham Hung Quy for discussions which motivate the proof of Theorem 3.8. We
also thank Zsolt Patakfalvi, Anurag K. Singh and Kevin Tucker for valuable discussions.
2. Preliminaries
Throughout this paper, all rings will be Noetherian and all schemes will be Noetherian
and separated. When in equal characteristic 0, we will work with rings and schemes that are
essentially of finite type over C. Of course, nearly all of our results also hold over all other
fields of characteristic zero by base change.
2.1. Du Bois singularities. We give a brief introduction to Du Bois singularities and pairs.
For more details see for instance [KS11] and [Kol13]. Frequently we will work in the setting
of pairs in the Du Bois sense.
Definition 2.1. Suppose X and Z are schemes of essentially finite type over C. By a pair
we will mean the combined data of (X, Z). We will call the pair reduced if both X and Z
are reduced.
Suppose that X is a scheme essentially of finite type over C. Associated to X is an object Ω^0_X ∈ D^b_coh(X) with a map O_X → Ω^0_X, functorial in the following way. If f : Y → X is a map of schemes then we have a commutative square

O_X       →  Ω^0_X
  ↓             ↓
Rf_* O_Y  →  Rf_* Ω^0_Y
If X is non-reduced, then Ω^0_X = Ω^0_{X_red} by definition. To define Ω^0_X in general, let π : X_• → X be a hyperresolution and set Ω^0_X = Rπ_* O_{X_•} (alternatively see [Sch07]).
If Z ⊆ X is a closed subscheme, then we define Ω^0_{X,Z} as the object completing the following distinguished triangle

Ω^0_{X,Z} → Ω^0_X → Ω^0_Z →^{+1} .

We observe that there is an induced map I_{Z⊆X} → Ω^0_{X,Z}. We also note that by this definition and the fact Ω^0_X = Ω^0_{X_red}, we have Ω^0_{X,Z} = Ω^0_{X_red, Z_red}.
Definition 2.2 (Du Bois singularities). We say that X has Du Bois singularities if the map
OX −
→ Ω0X is a quasi-isomorphism. If Z ⊆ X is a closed subscheme we say that (X, Z) has
Du Bois singularities if the map IZ⊆X −
→ Ω0X,Z is a quasi-isomorphism.
Remark 2.3. It is clear that Du Bois singularities are reduced. In general, a pair (X, Z)
being Du Bois does not necessarily imply X or Z is reduced. However, if a pair (X, Z) is
Du Bois and X is reduced, then so is Z [KS16b, Lemma 2.9].
2.2. Cyclic covers of non-reduced schemes. In this paper we will need to take cyclic
covers of non-reduced schemes. We will work in the following setting. We assume what
follows is well known to experts but we do not know a reference in the generality we need
(see [KM98, Section 2.4] or [dFEM, Subsection 2.1.1]). Note also that the reason we are able
to work with non-reduced schemes is because our L is actually locally free and not just a
reflexive rank-1 sheaf.
Setting 2.4. Suppose X is a scheme of finite type over C (typically projective). Suppose
also that L is a line bundle (typically semi-ample). Choose a (typically general) global
section s ∈ H 0 (X, L n ) for some n > 0 (typically n ≫ 0) and form the sheaf of rings:
R(X, L , s) = OX ⊕ L −1 ⊕ · · · ⊕ L −n+1
where for i, j < n and i + j > n, the multiplication L −i ⊗ L −j −
→ L −i−j+n is performed
by the formula a ⊗ b 7→ abs. We define ν : Y = YL ,s = SpecR(X, L , s) −
→ X. Note we did
not assume that s was nonzero or even locally a nonzero divisor.
Now let us work locally on an affine chart U trivializing L , where U = Spec R ⊆ X with
corresponding ν −1 (U) = Spec S. Fix an isomorphism L |U ∼
= OU and write
S = R ⊕ Rt ⊕ Rt2 ⊕ · · · ⊕ Rtn−1
where t is a dummy variable used to help keep track of the degree. The choice of the section
s|U ∈ H 0 (X, L n ) is the same as the choice of map OX −
→ L n (send 1 to the section s). If
n
s is chosen to be general and L is very ample then this map is an inclusion, but it is not
injective in general. Working locally where we have fixed L |U = OU , we also have implicitly
chosen L n = OU and hence we have just specified that tn = s|U ∈ Γ(U, L n ) = Γ(U, OU ). In
other words
S = R[t]/htn − si.
In particular, it follows that ν is a finite map.
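For concreteness, here is a minimal worked instance of the local picture above with n = 2 (the specific R and s are placeholders; the only point is the multiplication rule and its compatibility with closed subschemes):

```latex
% Local picture of the cyclic cover for n = 2, on a chart trivializing L.
\[
  S \;=\; R \oplus Rt \;=\; R[t]/\langle t^{2} - s\rangle, \qquad t\cdot t = s ,
\]
% and for a closed subscheme Z = \operatorname{Spec} R/I,
\[
  S \otimes_{R} R/I \;\cong\; (R/I)[t]/\langle t^{2} - \bar{s}\rangle ,
\]
% which computes the scheme-theoretic preimage of Z in Y (compare Lemma 2.5 below).
```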
Lemma 2.5. The map ν is functorial with respect to taking closed subschemes of X. In
particular if Z ⊆ X is a closed subscheme and L and s are as above, then we have a
4
commutative diagram

Y_{L,s} = Spec R(X, L, s)   --ν_X-->   X
        ↑                               ↑
W_{L|_Z, s|_Z} = Spec R(Z, L ⊗ O_Z, s|_Z)   --ν_Z-->   Z.
Z.
Furthermore, WL |Z ,s|Z = π −1 (Z) is the scheme theoretic preimage of Z.
Proof. The first statement follows since R(X, L , s) = OX ⊕L −1 ⊕. . .⊕L −n+1 surjects onto
−n+1
R(Z, L |Z , s|Z ) = OZ ⊕ L |−1
in the obvious way. For the second statement
Z ⊕ . . . ⊕ L |Z
we simply notice that
R(X, L , s) ⊗OX OZ = R(Z, L |Z , s|Z ).
Lemma 2.6. Suppose we are working in Setting 2.4 and that X is projective and also reduced
(respectively normal), L n is globally generated and s is chosen to be general. Then Y = YL ,s
is also reduced (respectively normal).
Proof. To show that Y is reduced we will show it is S1 and R0 . We work locally on an open
affine chart U = Spec R ⊆ X.
To show it is R0 , it suffices to consider the case R = K is a field. Because s was picked
generally, we may assume that the image of s in K is nonzero (we identify s with its image).
Then we need to show that K[t]/htn − si is a product of fields. But this is obvious since we
are working in characteristic zero.
Next we show it is S1 . Indeed if (R, m) is a local ring of depth ≥ 1 then obviously
S = R[t]/htn − si has depth ≥ 1 as well since it is a free R-module. Finally, the depth
condition is preserved after localizing at the maximal ideals of the semi-local ring of S.
This proves the reduced case. The normal case is well known but we sketch it. The R1
condition follows from [KM98, Lemma 2.51] and the fact that s is chosen generally, utilizing
Bertini’s theorem. The S2 condition follows in the same way that S1 followed above.
2.3. F -pure and F -injective singularities. Du Bois singularities are conjectured to be
the characteristic 0 analog of F -injective singularities [Sch09], [BST13]. In this short subsection we collect some definitions about singularities in positive characteristic. Our main
focus are F -pure and F -injective singularities.
A local ring (R, m) is called F -pure if the Frobenius endomorphism F : R → R is pure.2
Under mild conditions, for example when R is F -finite, which means the Frobenius map
F
R−
→ R is a finite map, R being F -pure is equivalent to the condition that the Frobenius
F
endomorphism R −
→ R is split [HR76, Corollary 5.3]. The Frobenius endomorphism on R
induces a natural Frobenius action on each local cohomology module Hmi (R) and we say a
local ring is F -injective if this natural Frobenius action on Hmi (R) is injective for every i
[Fed83]. This holds if R is F -pure [HR76, Lemma 2.2]. For some other basic properties of
F -pure and F -injective singularities, see [HR76, Fed83, EH08].
2A
map of R-modules N → N ′ is pure if for every R-module M the map N ⊗R M → N ′ ⊗R M is injective.
This implies that N → N ′ is injective, and is weaker than the condition that 0 → N → N ′ be split.
5
3. The Cohen-Macaulay property for families of Du Bois pairs
We begin with a lemma which we assume is well known to experts, indeed it is explicitly
stated in [Kol13, Corollary 6.9]. However, because many of the standard references also
include implicit reducedness hypotheses, we include a careful proof that deduces it from the
reduced case. Of course, one could also go back to first principals but we believe the path
we take below is quicker.
Lemma 3.1. Let X be a projective scheme over C and let Z ⊆ X be a closed subscheme
(X, Z are not necessarily reduced). Then the natural map
H i (X, IZ ) −
→ Hi (Xred , Ω0Xred ,Zred ) ∼
= Hi (X, Ω0X,Z )
is surjective for every i ∈ Z.
Proof. Note that if the result holds for C, it certainly also holds for other fields of characteristic zero. The isomorphism in the statement of the lemma follows from the definition.
For the surjectivity, we consider the following commutative diagram where we let U = X\Z.
Note that the rows are not exact.
H^i_c(U_red, C)  →  H^i(X_red, I_{Z_red})  →  H^i(X_red, Ω^0_{X_red, Z_red})
      ↑ ≅                  ↑                          ↑ ≅
H^i_c(U, C)      →  H^i(X, I_Z)            →  H^i(X, Ω^0_{X,Z})
The composite map in the top horizontal line is a surjection by [KS16b, Lemma 2.17] or
[Kov11, Corollary 4.2]. The vertical isomorphism on the left holds because the constant
sheaf C does not see the non-reduced structure. Likewise for the vertical isomorphism on
the right where Ω0X,Z does not see the non-reduced structure. The diagram then shows that
H i (X, IZ ) −
→ Hi (Xred , Ω0Xred ,Zred ) is a surjection.
Next we prove the key injectivity lemma for possibly non-reduced pairs. The proof is
essentially the same as in [KS16b] or [KS16a]. We reproduce it here carefully because we
need it in the non-reduced setting.
Lemma 3.2. Let X be a scheme of essentially finite type over C and Z ⊆ X a closed subscheme (X and Z are not necessarily reduced). Then the natural map

h^j(ω^•_{X,Z}) ↪ h^j(R Hom_{O_X}(I_Z, ω^•_X))

is injective for every j ∈ Z, where ω^•_{X,Z} = R Hom_{O_X}(Ω^0_{X,Z}, ω^•_X).
Proof. The question is local and compatible with restricting to an open subset, hence we may
assume that X is projective with ample line bundle L . Let s ∈ H 0 (X, L n ) be a general
n−1
global section for some n ≫ 0 and let η: Y = Spec ⊕i=0
L −i −
→ X be the n-th cyclic cover
−1
corresponding to s. Set W = η (Z). Then for n ≫ 0 and s general, the restriction of η to
n−1
W is the cyclic cover W = Spec ⊕i=0
L |−i
→ Z by Lemma 2.5. Likewise η also induces
Z −
the corresponding cyclic covers of the closed subschemes Yred −
→ Xred and Wred −
→ Zred by
n−1
Lemma 2.5 and Lemma 2.6. We have η∗ IW = ⊕i=0
IZ ⊗ L −i and
∼
⊗ L −i ∼
η∗ Ω0 ∼
= η∗ Ω0
= ⊕n−1 Ω0
= ⊕n−1 Ω0 ⊗ L −i
Y,W
Yred ,Wred
i=0
Xred ,Zred
6
i=0
X,Z
where the second isomorphism is [KS16b, Lemma 3.1] (see also [KS16a, Lemma 3.1]). Since
Lemma 3.1 implies that H j (Y, IW ) ։ Hj (Y, Ω0Y,W ) is surjective for every j ∈ Z, we know
that H j (X, IZ ⊗ L −i ) ։ Hj (X, Ω0X,Z ⊗ L −i ) is surjective for every i ≥ 0 and j ∈ Z.
Applying Grothendieck-Serre duality we obtain an injection
Hj (X, ω X,Z ⊗ L i ) ֒→ Hj (X, R H omOX (IZ , ωX ) ⊗ L i )
q
q
for all i ≥ 0 and j ∈ Z. Since L is ample, for i ≫ 0 the spectral sequence computing the
above hypercohomology degenerates. Hence for i ≫ 0 we get
H 0 (X, hj (ω X,Z ) ⊗ L i ) ֒→ H 0 (X, hj (R H omOX (IZ , ωX )) ⊗ L i ).
q
q
But again since L is ample, the above injection for i ≫ 0 implies the injection hj (ω X,Z ) ֒→
q
hj (R H omOX (IZ , ωX )).
Next we prove the key lemma stated in the introduction (and we do the generalized pair
version). It follows immediately from the above injectivity, Lemma 3.2. For simplicity we
will use (S, S/J) to denote the pair (Spec S, Spec S/J).
Lemma 3.3. Let (S, m) be a local ring essentially of finite type over C and let J ⊆ S be an
ideal. Suppose further that S ′ = S/N where N ⊆ S is an ideal contained in the nilradical
(for instance, N could be the nilradical and then S ′ = Sred ). Suppose (S ′ , S ′ /JS ′ ) is a Du
Bois pair. Then the natural map H^i_m(J) → H^i_m(JS′) is surjective for every i. In particular, if S_red is Du Bois, then H^i_m(S) → H^i_m(S_red) is surjective for every i.

Proof. We consider the following commutative diagram:

H^i_m(J)            →        H^i_m(JS′)
      ↓                           ↓ ≅
H^i_m(Ω^0_{S,S/J})  --≅-->  H^i_m(Ω^0_{S′,S′/JS′})

Here the left vertical map is surjective by the Matlis dual of Lemma 3.2 applied to X = Spec S and Z = Spec S/J, the right vertical map is an isomorphism because (S′, S′/JS′) is a Du Bois pair, and the bottom map is an isomorphism because S_red = S′_red. Chasing the diagram shows that H^i_m(J) → H^i_m(JS′) is surjective. The last sentence is the case that J is the unit ideal (i.e., the non-pair version).
Remark 3.4. The characteristic p > 0 analog of the above lemma (in the case J = S, i.e.,
the non-pair version) holds if Sred is F -pure. We give a short argument here. Without
loss
√
of generality we may assume S and S_red are complete, so S = A/I and S_red = A/√I where A is a complete regular local ring. We may pick e ≫ 0 such that (√I)^{[p^e]} ⊆ I. We have a composite map

φ : H^i_m(A/(√I)^{[p^e]}) → H^i_m(A/I) ≅ H^i_m(S) → H^i_m(A/√I) ≅ H^i_m(S_red).
We know from [Lyu06, Lemma 2.2] that the image of the composite map is equal to
spanA hf e (Hmi (Sred ))i, where f e denotes the natural e-th Frobenius action on Hmi (Sred ). In
particular, the Frobenius action on Hmi (Sred )/ im φ is nilpotent. However, since im φ is an
F -stable submodule of Hmi (Sred ) and Sred is F -pure, [Ma14, Theorem 3.7] shows that Frobenius acts injectively on Hmi (Sred )/ im φ. Hence we must have Hmi (Sred ) = im φ, that is,
Hmi (S) −
→ Hmi (Sred ) is surjective.
7
We next give an example showing that the analog of Lemma 3.3 for F -injectivity fails in
general. The example is a modification of [EH08, Example 2.16].
Example 3.5. Let K = k(u, v) where k is an algebraically closed field of characteristic p > 0
and let L = K[z]/(z 2p + uz p + v) as in [EH08, Example 2.16]. Now let R=K + (x, y)L[[x, y]]
with m = (x, y)L[[x, y]]. Then it is easy to see that (R, m) is a local ring of dimension 2 and
we have a short exact sequence:
0−
→R−
→ L[[x, y]] −
→ L/K −
→ 0.
The long exact sequence of local cohomology gives
Hm1 (R) ∼
= L/K,
Hm2 (R) ∼
= Hm2 (L[[x, y]]).
Moreover, one can check that the Frobenius action on Hm1 (R) is exactly the Frobenius action
on L/K. The Frobenius action on L/K is injective since Lp ∩ K = K p , and the Frobenius
action on Hm2 (L[[x, y]]) is injective because L[[x, y]] is regular. Hence the Frobenius action
on both Hm1 (R) and Hm2 (R) are injective. This proves that R is F -injective.
Write R = A/I for a regular local ring (A, m). One checks that the Frobenius action F :
1
Hm (R) −
→ Hm1 (R) is not surjective up to K-span (and hence not surjective up to R-span
since the residue field of R is K) because L 6= Lp [K] by our choice of K and L (see [EH08,
Example 2.16] for a detailed computation on this). But now [Lyu06, Lemma 2.2] shows that
Hm1 (A/I [p]) −
→ Hm1 (A/I) is not surjective. Therefore we can take S = A/I [p] with Sred = R
that is F -injective, but Hm1 (S) −
→ Hm1 (Sred ) is not surjective.
In view of Remark 3.4 and Example 3.5 and the relation between F -injective and Du Bois
singularities, it is tempting to try to define a more restrictive variant of F -injective singularities for local rings. In particular, if (R, m) is a local ring such that Rred has these more
restrictive F -injective singularities, then it should follow that Hmi (R) −
→ Hmi (Rred ) surjects
for all i.
Theorem 3.6. Suppose that (X, Z) is a reduced pair and that H ⊆ X is a Cartier divisor
such that H does not contain any irreducible components of either X or Z. If (H, Z ∩ H) is
a Du Bois pair, then for all i and all points η ∈ H ⊆ X, the following segment of the long
exact sequence
0 → H^i_η(I_{Z,η} · O_{H,η}) ↪ H^{i+1}_η(I_{Z,η} · O_{X,η}(−H)) ↠ H^{i+1}_η(I_{Z,η}) → 0

is exact for all i. Dually, in the special case that I_Z = O_X, we can also phrase this as saying that

0 → h^{−i}(ω^•_X) → h^{−i}(ω^•_X(H)) → h^{−i+1}(ω^•_H) → 0

is exact for all i.
Proof. Localizing at η, we may assume that X = Spec R, Z = Spec R/I, H = div(x) with
(R, m) a local ring and η = {m}. Moreover, the hypotheses imply that x is a nonzerodivisor
on both R and R/I. It is enough to show that the segment of the long exact sequence
·x
(3.6.1)    0 → H^i_m(I/xI) → H^{i+1}_m(I) --·x--> H^{i+1}_m(I) → 0

induced by 0 → I --·x--> I → I/xI → 0 is exact.
8
Consider S = R/xn R and J = I(R/xn R). The pair (S ′ = R/xR, S ′ /IS ′ ) = (H, Z ∩ H) is
Du Bois by hypothesis (note we do not assume that H is reduced). Thus Lemma 3.3 implies
that
(3.6.2)    H^i_m(I(R/x^nR)) ↠ H^i_m(I(R/xR))

is surjective for every i, n. Since x is a nonzerodivisor on R/I, x^nI = I ∩ x^nR. Thus

I(R/x^nR) = (I + x^nR)/x^nR ≅ I/(I ∩ x^nR) = I/x^nI.

Hence (3.6.2) shows that

H^i_m(I/x^nI) ↠ H^i_m(I/xI)
is surjective for every i, n. The long exact sequence of local cohomology induced by 0 → I/x^{n−1}I --·x--> I/x^nI → I/xI → 0 tells us that

H^i_m(I/x^{n−1}I) --·x--> H^i_m(I/x^nI)

is injective for every i, n. Hence after taking a direct limit we obtain that

φ_i : H^i_m(I/xI) → lim_{→,n} H^i_m(I/x^nI) ≅ H^i_m(lim_{→,n} I/x^nI) ≅ H^i_m(H^1_x(I)) ≅ H^{i+1}_m(I)

is injective for every i. Here the last two isomorphisms come from the fact that x is a
is injective for every i. Here the last two isomorphism come from the fact that x is a
nonzerodivisor on I (since it is a nonzerodivisor on R) and a simple computation using the
local cohomology spectral sequence.
Claim 3.7. The φ_i's are exactly the connection maps in the long exact sequence of local cohomology induced by 0 → I --·x--> I → I/xI → 0:

··· → H^i_m(I) → H^i_m(I/xI) --φ_i--> H^{i+1}_m(I) --·x--> H^{i+1}_m(I) → ··· .
This claim immediately produces (3.6.1) because we proved that φi is injective for every
i. Thus it remains to prove the Claim.
Proof of claim. Observe that by definition, φ_i is the natural map in the long exact sequence of local cohomology

··· → H^i_m(I/xI) --φ_i--> H^i_m(I_x/I) --·x--> H^i_m(I_x/I) → ···

which is induced by 0 → I/xI → I_x/I --·x--> I_x/I → 0 (note that x is a nonzerodivisor on I and H^1_x(I) ≅ I_x/I). However, it is easy to see that the multiplication by x map H^i_m(I_x/I) --·x--> H^i_m(I_x/I) can be identified with the multiplication by x map H^{i+1}_m(I) --·x--> H^{i+1}_m(I) because we have a natural identification H^i_m(I_x/I) ≅ H^i_m(H^1_x(I)) ≅ H^{i+1}_m(I). This finishes the proof of the claim and thus also the local cohomology statement.
The global statement can be checked locally, where it is simply a special case of the local
dual of the local cohomology statement.
The following is the main theorem of the section, it answers a question of Kovács and the
second author [KS16b, Question 5.7] and is a generalization to pairs of a characteristic zero
analog of [HMS14, Corollary A.5].
9
Theorem 3.8. Suppose that (X, Z) is a reduced pair and that H ⊆ X is a Cartier divisor
such that H does not contain any irreducible components of either X or Z. If (H, Z ∩ H) is
a Du Bois pair and IZ |X\H is Sk , then IZ is Sk .
Proof. Suppose IZ is not Sk , choose an irreducible component Y of the non-Sk locus of IZ .
Since IZ |X\H is Sk , we know that Y ⊆ H. Let η be the generic point of Y . Theorem 3.6
tells us that
Hηi (IZ · OX (−H)) ։ Hηi (IZ )
is surjective for every i. Note that if we localize at η and use the same notation as in the
·x
proof of Theorem 3.6, this surjection is the multiplication by x map Hmi (I) −
→ Hmi (I).
However, after we localize at η, I is Sk on the punctured spectrum, Hmi (I) has finite length
for every i < k. In particular, for h ≫ 0, xh annihilates Hmi (I) for every i < k. Therefore for
·x
i < k, the multiplication by x map Hmi (I) −
→ Hmi (I) cannot be surjective unless Hmi (I) = 0,
which means I is Sk as an R-module. But this contradicts our choice of Y (because we pick
Y an irreducible component of the non-Sk locus of IZ ). Therefore IZ is Sk .
Corollary 3.9. Let X be a reduced scheme and let H ⊆ X be a Cartier divisor. If H is Du
Bois and X\H is Cohen-Macaulay, then X is Cohen-Macaulay and hence so is H.
As mentioned in the introduction, these results should be viewed as a generalization of
[KK10, Corollary 1.3, Corollary 7.13] and [KS16b, Theorem 5.5]. In those results, one
considered a flat projective family X −
→ B with Du Bois fibers such that the general fiber is
Sk . They then show that the special fiber is Sk . Let us explain this result in a simple case.
Suppose that B is smooth curve and Z = 0, the special fiber is a Cartier divisor H and
X \ H is Cohen-Macaulay. Then we are trying to show that H is Cohen-Macaulay which is
of course the same as showing that X is Cohen-Macaulay, which is exactly what our result
proves. The point is that we had to make no projectivity hypothesis for our family.
This result also implies [KS16b, Conjecture 7.9] which we state next. We leave out the
definition of a rational pair here (this is a rational pair in the sense of Kollár and Kovács: a
generalization of rational singularities analogous to dlt singularities, see [Kol13]), the interested reader can look it up, or simply assume that D = 0.
Corollary 3.10. Suppose (X, D) is a pair with D a reduced Weil divisor. Further suppose
that H is a Cartier divisor on X not having any components in common with D such that
(H, D ∩ H) is Du Bois and such that (X \ H, D \ H) is a rational pair in the sense of Kollár
and Kovács. Then (X, D) is a rational pair in the sense of Kollár and Kovács.
Proof. We follow the [KS16b, Proof of Theorem 7.1]. It goes through word for word once one
observes that OX (−D) is Cohen-Macaulay which follows immediately from Theorem 3.8.
This corollary is also an analog of [FW89, Proposition 2.13] and [HMS14, Corollary A.4].
4. Applications to local cohomology
In this section we give many applications of Lemma 3.3 to local cohomology, several of
these applications are quite easy to prove. However, they provide strong and surprising
characteristic 0 analogs and generalizations of results in [HR76, SW05, SW07, Var13, DN16]
as well as answer their questions. Moreover, our results can give a generalization of the
classical Kodaira vanishing theorem.
10
4.1. Injectivity of Ext and a generalization of the Kodaira vanishing theorem.
Our first application gives a solution to [EMS00, Question 6.2] on the injectivity of Ext in
characteristic 0, which parallels to its characteristic p > 0 analog [SW07, Theorem 1.3].
Proposition 4.1. Let (R, m) be a Gorenstein local ring essentially of finite type over C,
and let I ⊆ R be an ideal such that R/I has Du Bois singularities. Then the natural map
ExtjR (R/I, R) −
→ HIj (R) is injective for every j.
Proof. By Lemma 3.3 applied to S = R/I t and applying local duality, we know that
ExtjR (R/I, R) −
→ ExtjR (R/I t , R)
is injective for every j and every t > 0. Now taking a direct limit we obtain the desired
injectivity ExtjR (R/I, R) −
→ HIj (R).
Remark 4.2. [SW07, Theorem 1.3] proves the same injectivity result in characteristic p > 0 when R/I is F-pure (and when R is regular). Perhaps it is worth pointing out that Proposition 4.1 fails in general if we replace Du Bois by F-injective: Example 3.5 is a counterexample. Because there we have S = A/I is F-injective but H^1_m(A/I^{[p]}) → H^1_m(A/I) is not surjective. Hence applying local duality shows that

Ext^{dim A−1}_A(A/I, A) → Ext^{dim A−1}_A(A/I^{[p]}, A)

is not injective, so neither is Ext^{dim A−1}_A(A/I, A) → H^{dim A−1}_I(A).
An immediate corollary of Proposition 4.1 is the following, which is a characteristic 0
analog of [DN16, Theorem 7.3] (this also answers [DN16, Question 7.5], which is motivated
by Stillman’s conjecture).
Corollary 4.3. Let R be a regular ring of essentially finite type over C. Let I ⊆ R be an
ideal such that S = R/I has Du Bois singularities. Then pdR S ≤ ν(I) where ν(I) denotes
the minimal number of generators of I.
j
Proof. For every j > ν(I) and every maximal ideal m of R, we have H^j_{IR_m}(R_m) = 0. Therefore by Proposition 4.1, we have Ext^j_{R_m}(S_m, R_m) = 0 for every j > ν(I). Since pd_{R_m} S_m < ∞, we have

pd_{R_m} S_m = sup{ j | Ext^j_{R_m}(S_m, R_m) ≠ 0 } ≤ ν(I).³

Because this is true for every maximal ideal m, we have pd_R S ≤ ν(I).
Next we want to prove a stronger form of Proposition 4.1 in the graded case. We will
first need the following criterion of Du Bois singularities for N-graded rings. This result
and related results should be well-known to experts (or at least well-believed by experts):
for example [Ma15, Theorem 4.4], [BST13, Lemma 2.14], [KS16b, Lemma 2.12] or [GK14].
However all these references only deal with the case that R is the section ring of a projective
variety with respect to a certain ample line bundle, while here we allow arbitrary N-graded
rings which are not even normal. We could not find a reference that handles the generality
that we will need, and hence we write down a careful argument.
3This
follows from applying the functor HomRm (−, Rm ) to a minimal free resolution of Sm and observing
that the matrix defining the maps have elements contained in m. See also [DGI06, 2.4]
11
Proposition 4.4. Let (R, m) be a reduced Noetherian N-graded (R_0 = k)-algebra with m the unique homogeneous maximal ideal. Suppose R_P is Du Bois for all P ≠ m. Then we have

h^i(Ω^0_R) ≅ [H^{i+1}_m(R)]_{>0}

for every i > 0 and

h^0(Ω^0_R)/R ≅ [H^1_m(R)]_{>0}.

In particular, R is Du Bois if and only if [H^i_m(R)]_{>0} = 0 for every i > 0.
Proof. Let R♮ denote the Rees algebra of R with respect to the natural filtration R≥t . That
is, R♮ = R ⊕ R≥1 ⊕ R≥2 ⊕ · · · . Let Y = Proj R♮ . We first claim that Y is Du Bois: Y is
covered by D+ (f ) for homogeneous
elements f ∈ R≥t and t ∈ N.
♮ ∼
If deg f > t, then Rf 0 = Rf is Du Bois. If deg f = t, then Rf♮ 0 ∼
= Rf ≥0 is also Du
Bois4 (see [KSSW09, Lemma 5.4] for an analogous analysis on rational singularities).
Since R and thus R♮ are Noetherian, there exists n such that R≥nt = (R≥n )t for every
t ≥ 1 [Bou98, Chapter III, Section 3, Proposition 3]. Let I = (R≥n ), then we immediately see
that Y ∼
= Proj R ⊕ I ⊕ I 2 ⊕ . . . is the blow up of
Spec R at I. We define the exceptional
2
2
3
∼
divisor to be E = Proj R/I ⊕ I/I ⊕ I /I ⊕ · · · . We next claim that
(4.4.1)
(R/I ⊕ I/I 2 ⊕ I 2 /I 3 ⊕ · · · )red ∼
= R0 ⊕ Rn ⊕ R2n ⊕ · · · .
The point is that, for x ∈ I t /I t+1 = R≥nt /R≥n(t+1) , if x ∈ Rnt+a for some a > 0, then we can
pick b > 0 such that ba > n, we then have
xb ∈ Rbnt+ba ⊆ R≥n(bt+1) .
But this means xb = 0 in I bt /I bt+1 and thus x is nilpotent in R/I ⊕ I/I 2 ⊕ I 2 /I 3 ⊕ · · · . This
proves (4.4.1).
By (4.4.1) we have Ered ∼
= Proj R is Du Bois (because Rf 0
= Proj R0 ⊕ Rn ⊕ R2n ⊕ · · · ∼
is Du Bois for every homogeneous f ∈ R). We consider the following commutative diagram:
E
/
Y
π
π
Spec R/I
/
Spec R
Since Y , Ered and (Spec R/I)red ∼
= Spec k are all Du Bois by the above discussion, the exact
+1
0
0
0
triangle ΩR −
→ Rπ∗ ΩY ⊕ ΩR/I −
→ Rπ∗ Ω0E −→ reduces to
(4.4.2)
+1
Ω0R −
→ Rπ∗ OY ⊕ k −
→ Rπ∗ OEred −→ .
Next we study the map Rπ∗ OY −
→ Rπ∗ OEred using the Čech complex. We pick x1 , . . . , xm ∈
2
I = (R≥n ) in R ⊕ I ⊕ I ⊕ · · · such that
(a) The internal degree, deg xi = n for each xi (in other words, xi ∈ Rn ⊆ R≥n = I);
(b) The images x1 , . . . , xm in (R/I ⊕ I/I 2 ⊕ I 2 /I 3 ⊕ · · · )red = R0 ⊕ Rn ⊕ R2n ⊕ · · · are
algebra generators of R0 ⊕ Rn ⊕ R2n ⊕ · · · over R0 = k.
general, if S is a Z-graded ring, then S≥0 ∼
= S[z]0 where deg z = −1. Hence if S is Du Bois, then S≥0 ,
being a summand of S[z], is also Du Bois [Kov99].
4In
12
Note that conditions (a) and (b) together imply that the radical of (x1 , . . . , xm ) in R ⊕ I ⊕
I 2 ⊕ · · · is the irrelevant ideal I ⊕ I 2 ⊕ · · · . In particular, {D+ (xi )}1≤i≤m forms an affine
open cover of Y . The point is that for any y ∈ I t = R≥tn , y n ∈ I tn = R≥tn2 as an element
in R ⊕ I ⊕ I 2 ⊕ · · · is always contained in the ideal (x1 , . . . , xm ), this is because the internal
degree of y n is divisible by n, so it can be written as a sum of monomials in xi by (b).
The natural map OY → OEred induces a map between the s-th spot of the Čech complexes
of OY and OEred with respect to the affine cover {D+ (xi )}1≤i≤m . The induced map on Čech
complexes can be explicitly described as follows (all the direct sums in the following diagram
are taken over all s-tuples 1 ≤ i1 < · · · < is ≤ m):
L
L
/
OY (D+ (xi1 xi2 · · · xis ))
OEred (D+ (xi1 xi2 · · · xis ))
L
(
y
(xi1 xi2 ···xis )h
∼
=
h > 0, y ∈ I sh = R≥nsh
)
/
∼
=
L
Rxi1 xi2 ···xis ≥0
L
(
y
(xi1 xi2 ···xis )h
∼
=
h > 0, y ∈ Rnsh
)
φ
∼
=
/
L
Rxi1 xi2 ···xis 0
y
y
to
n
(xi1 xi2 · · · xis )
(xi1 xi2 · · · xis )n
where y denotes the image of y in Rnsh . Hence the same map φ on the third line is exactly
“taking the degree 0 part”. Therefore we have
Ri π∗ OY ∼
= H i (Y, OY ) = H i+1(R)
The induced map on the second line takes the element
m
while
≥0
Ri π∗ OEred ∼
= H i (Ered , OEred ) ∼
= Hmi+1 (R) 0
for every i ≥ 1, and the map Ri π∗ OY −
→ Ri π∗ OEred is taking degree 0 part. Therefore taking
cohomology of (4.4.2), we have
for every i ≥ 2.
(4.4.3)
hi (Ω0 ) ∼
= H i+1 (R)
R
>0
m
Moreover, for i = 0, 1, the cohomology of (4.4.2) gives
φ
(4.4.4) 0 −
→ h0 (Ω0R ) −
→ H 0 (Y, OY ) ⊕ k −
→ H 0 (Ered , OEred ) −
→ h1 (Ω0R ) → Hm2 (R) >0 −
→ 0.
A similar Čech complex computation as above shows that
H 0 (Y, OY ) = ker ⊕ Rxi ≥0 −
→ ⊕ Rxi xj ≥0
while H 0 (Ered , OEred ) = ker ⊕ Rxi 0 −
→ ⊕ Rxi xj 0 . Therefore φ is surjective, which
implies
(4.4.5)
h1 (Ω0 ) ∼
= H 2 (R) .
R
m
>0
Taking degree > 0 part of (4.4.4) we get an exact sequence
→ ⊕ Rxi >0 −
→ ⊕ Rxi xj >0 .
0−
→ h0 (Ω0R )>0 −
13
This implies that h0 (Ω0R )>0 ∼
= (Γ(Spec R \ m, OSpec R ))>0 and so
(4.4.6)
h0 (Ω0 )>0 /R>0 ∼
= H 1 (R) .
R
>0
m
∼
Finally we notice that h
= Rsn is the seminormalization of R [Sai00, 5.2]. We know
sn
sn
sn
R0 ⊆ R is reduced. But we can also view R0sn as the quotient Rsn /R>0
, in particular the
sn
prime ideals of R0sn correspond to prime ideals of Rsn that contain R>0
, so they all contract
to m in R. Since seminormlization induces a bijection on spectrum, R0sn has a unique prime
ideal. Thus R0sn , being a reduced Artinian local ring, must be a field. Since seminormalization
also induces isomorphism on residue fields, R0sn = k and thus h0 (Ω0R )0 ∼
= R0sn = R0 . Hence
(4.4.6) tells us that
(4.4.7)
h0 (Ω0 )/R ∼
= H 1 (R) .
0
(Ω0R )
R
m
>0
Now (4.4.3), (4.4.5) and (4.4.7) together finish the proof.
Now we prove our result on injectivity of Ext. Later we will see that this theorem can be
viewed as a generalization of the Kodaira vanishing theorem.
Theorem 4.5. Let (R, m) be a reduced Noetherian N-graded (R_0 = C)-algebra with m the unique homogeneous maximal ideal. Suppose R_P is Du Bois for all P ≠ m. Write R = A/I where A = C[x_1, ..., x_n] is a polynomial ring with deg x_i = d_i > 0 and I is a homogeneous ideal. Then the natural degree-preserving map Ext^j_A(R, A) → H^j_I(A) induces an injection

[Ext^j_A(R, A)]_{≥ −d} ↪ [H^j_I(A)]_{≥ −d}

for every j, where d = Σ_i d_i.
Proof. We have the hypercohomology spectral sequence
0
Hmp (hq (Ω0R )) ⇒ Hp+q
m (ΩR ).
Since R is Du Bois away from V (m), hq (Ω0R ) has finite length when q ≥ 1. Thus we know
Hmp (hq (Ω0R )) = 0 unless p = 0 or q = 0. We also have that Hm0 (hi (Ω0R )) ∼
= hi (Ω0R ) for i ≥ 1.
Hence the above spectral sequence carries the data of a long exact sequence:
0 −
→ Hm1 (h0 (Ω0R )) −
→ H1m (Ω0R ) −
→ h1 (Ω0R ) −
→
(4.5.1)
−
→ Hm2 (h0 (Ω0R )) −
→ H2m (Ω0R ) −
→ h2 (Ω0R ) −
→
···
···
···
→ Him (Ω0R ) −
→ hi (Ω0R ) −
→ ···
−
→ Hmi (h0 (Ω0R )) −
We also have short exact sequence 0 −
→R−
→ h0 (Ω0R ) → Hm1 (R) >0 −
→ 0 by Proposition 4.4.
Therefore the long exact sequence of local cohomology and the observation that Hm1 (R) >0
has finite length tells us that
H 1 (h0 (Ω0 )) ∼
= H 1 (R)/ H 1 (R)
m
R
m
and
m
>0
Hmi (h0 (Ω0R )) ∼
= Hmi (R) for every i ≥ 2.
Now taking the degree ≤ 0 part of (4.5.1) and again using Proposition 4.4 yields:
i
(4.5.2)
Hm (R) ≤0 ∼
= Him (Ω0R ) ≤0 for every i.
14
At this point, note that√by the Matlis dual of Lemma 3.2 (see the proof of Lemma 3.3
applied to S = A/J for J = I, and √
thus Sred = A/I = R), we always have a surjection
0
i
i
Hm (A/J) ։ Hm (ΩR ) for every i and J = I. Taking the degree ≤ 0 part and applying
(4.5.2), we thus get:
i
(4.5.3)
Hm (A/J) ≤0 ։ Hmi (R) ≤0
√
is surjective for every i and J = I. Now taking J = I t and applying graded local duality (we
refer to [BS98] for definitions and standard properties of graded canonical modules, graded
injective hulls and graded local duality, but we emphasize here that the graded canonical
module of A is A(−d)), we have:
ExtjA (R, A(−d)) ≥0 ֒→ ExtjA (A/I t , A(−d)) ≥0
is injective for every j and t. So after taking a direct limit and a degree shift, we have
ExtjA (R, A) ≥−d ֒→ HIj (A) ≥−d
is injective for every j. This finishes the proof.
i
i
Remark 4.6. The dual form of Theorem 4.5 says that Hm (A/J) t ։ Hm (R) t is surjective
√
for every i, every t ≤ 0 and every J = I, see (4.5.3). When R = A/I has isolated
singularity, A is standard graded and t = 0, this was proved in [Var13, Proposition 3.8].
Therefore our Theorem 4.5 greatly generalized this result.
In general, we cannot expect ExtjA (R, A) <−d ֒→ HIj (A) <−d is injective under the hypothesis of Theorem 4.5 (even if R is an isolated singularity). Consider the following example.
Example 4.7 (cf. Example 3.5 in [SW07]). Let R = C[s^4, s^3t, st^3, t^4]. Then we can write R = A/I where A = C[x, y, z, w] with standard grading (i.e., x, y, z, w all have degree one). It is straightforward to check that R is an isolated singularity with

H^1_m(R) = [H^1_m(R)]_{>0} ≠ 0

(in particular depth R = 1). By graded local duality, we have [Ext^3_A(R, A)]_{<−4} ≠ 0. On the other hand, using standard vanishing theorems in [HL90, Theorem 2.9] we know that H^3_I(A) = 0. Therefore the map [Ext^3_A(R, A)]_{<−4} → [H^3_I(A)]_{<−4} is not injective.
An important consequence of Theorem 4.5 (in fact we only need the injectivity in degree
> −d) is the following vanishing result:
Theorem 4.8. Let (R, m) be a Noetherian N-graded (R_0 = C)-algebra with m the unique homogeneous maximal ideal. Suppose R_P is Du Bois for all P ≠ m and H^i_m(R) has finite length for some i. Then [H^i_m(R)]_{<0} = 0.
In particular, if R is equidimensional and is Cohen-Macaulay Du Bois on Spec R − {m}, then [H^i_m(R)]_{<0} = 0 for every i < dim R.
Proof. Let R = A/I where A = C[x1 , . . . , xn ] is a (not necessarily standard
P graded) polynomial ring and I is a homogeneous ideal. Set deg xi = di > 0 and d =
di . The graded
canonical module of A is A(−d). By graded local duality,
Extn−i (R, A)(−d) ∼
= H i (R)∗ .
A
m
15
n−i
Therefore if Hmi (R) −j 6= 0 for some j > 0, then ExtA
(R, A) j−d 6= 0 for some j > 0. By
Theorem 4.5, we have an injection:
n−i
ExtA
(R, A) j−d ֒→ HIn−i (A) j−d .
n−i
Since Hmi (R) has finite length, ExtA
(R, A) also has finite length. Hence the natural degreen−i
n−i
preserving map ExtA (R, A) −
→ HI (A) factors through
n−i
ExtA
(R, A) −
→ Hm0 HIn−i (A) −
→ HIn−i (A).
Taking the degree j − d part, we thus get an injection:
n−i
ExtA
(R, A) j−d → Hm0 HIn−i (A) j−d.
It follows that Hm0 HIn−i (A) j−d 6= 0 for some j > 0. However, Hm0 HIn−i (A) is an Eulerian graded D-module supported only at m. Thus
by [MZ14, Theorem 1.2],5 the socle of
Hm0 HIn−i (A) is concentrated in degree −d so that Hm0 HIn−i (A) >−d = 0, a contradiction.
If R is the section ring of a normal Cohen-Macaulay
and Du Bois projective variety X with
i
respect to an ample line bundle L , then Hm (R) <0 = 0 is exactly the Kodaira vanishing for
X (which is well-known, for example see [Ma15] or [Pat15]). But Theorem 4.8 can handle
more general R, i.e., R need not be a section ring of an ample line bundle. If R is normal,
then any graded ring is the section ring of some Q-divisor [Dem88] and in that case, our
results yield variants and consequences of Kawamata-Viehweg vanishing (also see [Wat81,
Lemma 2.1 and Proposition 2.2]). But for general graded rings we do not know how to view
them as section rings. Thus our results Theorem 4.5 and Theorem 4.8 should be viewed as
generalizations of the Kodaira vanishing theorem for Cohen-Macaulay Du Bois projective
varieties. It would also be natural to try to generalize the results of Proposition 4.4 through
Theorem 4.8 to the context of Du Bois pairs. One particular obstruction is the use of the
Eulerian graded D-module at the end of the proof of Theorem 4.8.
4.2. Set-theoretic Cohen-Macaulayness. Our next application is a characteristic 0 analog of [SW05, Lemma 3.1] on set-theoretic Cohen-Macaulayness. Recall that an ideal I in a regular ring R is set-theoretically Cohen-Macaulay if there exists an ideal J such that $\sqrt{I} = \sqrt{J}$ and R/J is Cohen-Macaulay.
Proposition 4.9. Let (R, m) be a regular local ring essentially of finite type over C, and
let I ⊆ R be an ideal. If R/I is Du Bois but not Cohen-Macaulay, then the ideal I is not
set-theoretically Cohen-Macaulay.
Proof. Suppose $I = \sqrt{J}$ for some J such that R/J is Cohen-Macaulay. Applying Lemma 3.3 to S = R/J, we find that for every i < dim R/I,
$$0 = H^i_\mathfrak{m}(R/J) \twoheadrightarrow H^i_\mathfrak{m}(R/I)$$
is surjective and thus R/I is Cohen-Macaulay, a contradiction.
In the graded characteristic 0 case, we have a stronger criterion for set-theoretic Cohen-Macaulayness.
${}^{5}$[MZ14, Theorem 1.2] assumes $A = \mathbb{C}[x_1, \ldots, x_n]$ has standard grading, i.e., $d_i = 1$ and hence $d = \sum d_i = n$. However, the same proof can be adapted to the general case: one only needs to replace the Euler operator $\sum x_i \partial_i$ by $\sum d_i x_i \partial_i$. The reader is referred to [Put15, Section 2] for a discussion on this.
Corollary 4.10. Let $R = \mathbb{C}[x_1, \ldots, x_n]$ be a polynomial ring with possibly non-standard grading. Let I be a homogeneous ideal of R such that R/I is Du Bois on Spec R − {m} (e.g., R/I has an isolated singularity at {m}).
Suppose $[H^i_\mathfrak{m}(R/I)]_{\leq 0} \neq 0$ for some i < dim R/I (e.g., R/I is not Cohen-Macaulay on Spec R − {m}, or $H^i(X, \mathcal{O}_X) \neq 0$ for X = Proj R/I). Then I is not set-theoretically Cohen-Macaulay.
Proof. Suppose $I = \sqrt{J}$ for some J such that R/J is Cohen-Macaulay. Applying the dual form of Theorem 4.5 (see Remark 4.6), we get that
$$0 = [H^i_\mathfrak{m}(R/J)]_{\leq 0} \twoheadrightarrow [H^i_\mathfrak{m}(R/I)]_{\leq 0}$$
is surjective for every i < dim R/I. This clearly contradicts our hypothesis.
We point out the following example as an application.
Example 4.11. Let k be a field and let Ek ⊆ P2k be a smooth elliptic curve over k. We want
to study the defining ideal of the Segre embedding Ek ×P1k ⊆ P5k . We let k[x0 , . . . , x5 ]/I = A/I
be this homogeneous coordinate ring. It is well known that A/I is not Cohen-Macaulay.
(a) k has characteristic p > 0 and Ek is an ordinary elliptic curve. In this case it is well
known that A/I is F -pure, so [SW05, Lemma 3.1] shows I is not set-theoretically
Cohen-Macaulay.
(b) k has characteristic p > 0 and Ek is supersingular. We want to point out that, at least when k is F-finite, I is still not set-theoretically Cohen-Macaulay. This answers a question in [SW05, Remark 3.4]. Suppose there exists J such that $\sqrt{J} = I$ and A/J is Cohen-Macaulay. Let $e \gg 0$ be such that $I^{[p^e]} \subseteq J$. The composite $A/I \xrightarrow{F^e} A/I^{[p^e]} \twoheadrightarrow A/J$ of the Frobenius map with the natural surjection makes A/J a small Cohen-Macaulay algebra over A/I (note that k, and hence A, is F-finite). However, by a result of Bhatt [Bha14, Example 3.11], A/I does not have any small Cohen-Macaulay algebra, a contradiction.
(c) k has characteristic 0. In this case, it is easy to check using Proposition 4.4 that
A/I is Du Bois. Hence Proposition 4.9 immediately shows I is not set-theoretically
Cohen-Macaulay. This example was originally obtained in [SW05, Theorem 3.3] using
reduction to characteristic p > 0. Thus our Proposition 4.9 can be viewed as a vast
generalization of their result.
It is worth pointing out that in Example 4.11, we know $H^i_\mathfrak{m}(A/J) \twoheadrightarrow H^i_\mathfrak{m}(A/I)$ for every i and every $\sqrt{J} = I$ in characteristic 0, since A/I is Du Bois. In characteristic p > 0, we have $H^i_\mathfrak{m}(A/J) \twoheadrightarrow H^i_\mathfrak{m}(A/I)$ for every i and every $\sqrt{J} = I$ when Ek is ordinary; however it is straightforward to check that $H^2_\mathfrak{m}(A/I^{[p]}) \to H^2_\mathfrak{m}(A/I)$ is not surjective (it is the zero map) when Ek is supersingular. Therefore the surjectivity property proved in Lemma 3.3 does not pass to reduction mod p ≫ 0.
4.3. Koszul cohomology versus local cohomology. Our last application in this section
is a strengthening of the main result of [HR76]. We start by proving a general result which
is characteristic-free. The proof is inspired by [Sch82].
Theorem 4.12. Let R be a Noetherian N-graded k-algebra where k = R0 is an arbitrary field. Let m be the unique homogeneous maximal ideal. If $H^r_\mathfrak{m}(R) = [H^r_\mathfrak{m}(R)]_0$ for every r < n = dim R, then $[H^r(x, R)]_0 \cong H^r_\mathfrak{m}(R)$ for every r < n and every homogeneous system of parameters $x = x_1, \ldots, x_n$, where $H^r(x, R)$ denotes the r-th Koszul cohomology of x. In other words, it is not necessary to take a direct limit when computing the local cohomology.
Proof. We fix $x = x_1, \ldots, x_n$ a homogeneous system of parameters. Let deg $x_i = d_i > 0$. Consider the graded Koszul complex:
$$K_\bullet:\quad 0 \to R(-d_1 - d_2 - \cdots - d_n) \to \cdots \to \oplus R(-d_i) \to R \to 0.$$
After we apply $\operatorname{Hom}_R(-, R)$ we obtain the graded Koszul cocomplex:
$$K^\bullet:\quad 0 \to R \to \oplus R(d_i) \to \cdots \to R(d_1 + d_2 + \cdots + d_n) \to 0.$$
We note that $K^\bullet$ lives in cohomology degrees 0, 1, ..., n.
Let $\omega_R^\bullet$ be the graded normalized dualizing complex of R, thus $\omega_R = h^{-n}(\omega_R^\bullet)$ is the graded canonical module of R. Let $(-)^* = \operatorname{Hom}_R(-, {}^*E)$ where ${}^*E$ is the graded injective hull of k. We have a triangle
$$\omega_R[n] \to \omega_R^\bullet \to \tau_{>-n}\omega_R^\bullet \xrightarrow{+1}.$$
Applying $R\operatorname{Hom}_R(K^\bullet, -)$ we get:
$$R\operatorname{Hom}_R(K^\bullet, \omega_R[n]) \to R\operatorname{Hom}_R(K^\bullet, \omega_R^\bullet) \to R\operatorname{Hom}_R(K^\bullet, \tau_{>-n}\omega_R^\bullet) \xrightarrow{+1}.$$
Applying $\operatorname{Hom}_R(-, {}^*E)$ and using graded local duality, we obtain:
$$R\operatorname{Hom}_R(K^\bullet, \tau_{>-n}\omega_R^\bullet)^* \to K^\bullet \to R\operatorname{Hom}_R(K^\bullet, \omega_R[n])^* \xrightarrow{+1}.$$
Note that $R\operatorname{Hom}_R(K^\bullet, \omega_R[n])^*$ lives in cohomological degrees n, n+1, ..., 2n, hence we obtain a graded isomorphism in the derived category:
$$\tau^{<n}K^\bullet \cong \tau^{<n}\big(R\operatorname{Hom}_R(K^\bullet, \tau_{>-n}\omega_R^\bullet)^*\big).$$
Therefore for every r < n, we have:
$$h^r(K^\bullet) \cong h^r\big(R\operatorname{Hom}_R(K^\bullet, \tau_{>-n}\omega_R^\bullet)^*\big).$$
At this point, notice that $H^r_\mathfrak{m}(R) = [H^r_\mathfrak{m}(R)]_0$ for every r < n implies R is Buchsbaum by [Sch82, Theorem 3.1]. This means $\tau_{>-n}\omega_R^\bullet$ is quasi-isomorphic to a complex of graded k-vector spaces. Moreover, we know that all these graded vector spaces have degree 0 because $H^i_\mathfrak{m}(R) = [H^i_\mathfrak{m}(R)]_0$ for every i < n. In other words, we have:
$$\tau_{>-n}\omega_R^\bullet \cong 0 \to k^{s_{n-1}} \to k^{s_{n-2}} \to \cdots \to k^{s_1} \to k^{s_0} \to 0$$
where the complex on the right hand side has zero differentials, $k^{s_i}$ has internal degree 0 and sits in cohomology degree $-i$, with $s_i = \dim_k [H^i_\mathfrak{m}(R)]_0$.
Recall that $d_i > 0$, hence by keeping track of the internal degrees we see that
$$\big[R\operatorname{Hom}_R(K^\bullet, \tau_{>-n}\omega_R^\bullet)\big]_0 \cong \tau_{>-n}\omega_R^\bullet.$$
Now by graded local duality, we have:
$$[H^r(x, R)]_0 = [h^r(K^\bullet)]_0 \cong \big[h^r\big(R\operatorname{Hom}_R(K^\bullet, \tau_{>-n}\omega_R^\bullet)^*\big)\big]_0 \cong h^r\big((\tau_{>-n}\omega_R^\bullet)^*\big) \cong H^r_\mathfrak{m}(R)$$
for every r < n. This finishes the proof.
Now we can prove the following extension of the main result of [HR76].
Corollary 4.13. Let R be a Noetherian N-graded (R0 = k)-algebra with m the unique homogeneous maximal ideal. Suppose R is equidimensional and Cohen-Macaulay on Spec R − {m}. Assume one of the following:
(a) k has characteristic p > 0 and R is F-injective.
(b) k has characteristic 0 and R is Du Bois.
Then $[H^r(x, R)]_0 \cong H^r_\mathfrak{m}(R)$ for every r < n = dim R and every homogeneous system of parameters $x = x_1, \ldots, x_n$, where $H^r(x, R)$ denotes the r-th Koszul cohomology of x. In other words, it is not necessary to take a direct limit when computing the local cohomology.
Proof. Since R is equidimensional and Cohen-Macaulay on the punctured spectrum, we know that $H^r_\mathfrak{m}(R)$ has finite length for every r < n = dim R. We will show $H^r_\mathfrak{m}(R) = [H^r_\mathfrak{m}(R)]_0$ for every r < n in situation (a) or (b). This will finish the proof by Theorem 4.12.
In situation (a), $H^r_\mathfrak{m}(R) = [H^r_\mathfrak{m}(R)]_0$ is obvious because $H^r_\mathfrak{m}(R)$ has finite length and Frobenius acts injectively on it. In situation (b), notice that $[H^r_\mathfrak{m}(R)]_{<0} = 0$ by Theorem 4.8 while $[H^r_\mathfrak{m}(R)]_{>0} = 0$ by Proposition 4.4, hence $H^r_\mathfrak{m}(R) = [H^r_\mathfrak{m}(R)]_0$.
Remark 4.14. In situation (b) above, if R is normal and standard graded, then $H^r_\mathfrak{m}(R) = [H^r_\mathfrak{m}(R)]_0$ also follows from [Ma15, Theorem 4.5]. However, in the above proof we don't need any normality or standard gradedness hypothesis, thanks to Proposition 4.4 and Theorem 4.8.
Remark 4.15. Corollary 4.13 was proved when R is F-pure and k is perfect in [HR76, Theorem 1.1], and by a reduction to characteristic p > 0 technique it was also proved when R is of
F -pure type [HR76, Theorem 4.8]. Since F -pure certainly implies F -injective and F -injective
type implies Du Bois (see [Sch09]), our theorem gives a generalization of Hochster-Roberts’s
result, and our proof is quite different from that of [HR76].
We end this section by pointing out an example showing that in Theorem 4.12 or Corollary 4.13, it is possible that $H^r(x, R) \neq H^r_\mathfrak{m}(R)$, i.e., we must take the degree 0 piece of the Koszul cohomology. This is a variant of Example 4.11.
Example 4.16. Let $R = \frac{k[x,y,z]}{(x^3+y^3+z^3)} \# \, k[a, b, c]$ be the Segre product of $\frac{k[x,y,z]}{(x^3+y^3+z^3)}$ and $k[a, b, c]$, where the characteristic of k is either 0 or congruent to 1 mod 3. Therefore R is the homogeneous coordinate ring of the Segre embedding of $X = E \times \mathbb{P}^2$ into $\mathbb{P}^8$, where $E = \operatorname{Proj} \frac{k[x,y,z]}{(x^3+y^3+z^3)}$ is an elliptic curve. Notice that dim X = 3 and dim R = 4. Since R has an isolated singularity it is Cohen-Macaulay on the punctured spectrum. It is easy to check that R is Du Bois in characteristic 0, and F-pure (and thus F-injective) in characteristic p > 0 since p ≡ 1 mod 3. So we know that $H^i_\mathfrak{m}(R) = [H^i_\mathfrak{m}(R)]_0$ for every i < 4, where m is the unique homogeneous maximal ideal of R. Now we compute:
$$H^2_\mathfrak{m}(R) = [H^2_\mathfrak{m}(R)]_0 = H^1(X, \mathcal{O}_X) = \oplus_{i+j=1} H^i(E, \mathcal{O}_E) \otimes_k H^j(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}) \cong k,$$
$$H^3_\mathfrak{m}(R) = [H^3_\mathfrak{m}(R)]_0 = H^2(X, \mathcal{O}_X) = \oplus_{i+j=2} H^i(E, \mathcal{O}_E) \otimes_k H^j(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}) = 0.$$
The first line shows that R is not Cohen-Macaulay (actually we have depth R = 2), so for any homogeneous system of parameters x of R, we have $H^3(x, R) \cong H_1(x, R) \neq 0$. Hence $H^3(x, R) \neq H^3_\mathfrak{m}(R)$.
5. Deformation of dense F -injective type
In this section we use results in section 3 to prove that singularities of (dense) F -injective
type in characteristic 0 deform. In fact, we prove a slightly stronger result. Our motivation for studying this question is a recent result of Kovács-Schwede [KS16a] that Du Bois
singularities in characteristic 0 deform.
By the main result of [Sch09], singularities of dense F -injective type are always Du Bois.
It is conjectured [BST13, Conjecture 4.1] that Du Bois singularities should be equivalent to
singularities of dense F -injective type. This conjecture is equivalent to the weak ordinarity
conjecture of Mustaţă-Srinivas [MS11]. If this conjecture is true, then singularities of dense
F -injective type deform because Du Bois singularities deform [KS16a]. However, the weak
ordinarity conjecture is wide open.
Setup 5.1 (Reduction to characteristic p > 0). We recall briefly reduction to characteristic
p > 0. For more details in our setting, see [HH06, Section 2.1] and [Sch09, Section 6].
Suppose (R, m) is essentially of finite type over C (or another field of characteristic
zero, which we will also call C) so that (R, m) is a homomorphic image of TP where
T = C[x1 , . . . , xt ] and P ⊆ T is a prime ideal so that R = (T /J)P . Given a finite collection of finitely generated R-modules Mi (and finitely many maps between them), we may
assume that each $M_i = (M_i')_P$ for a finitely generated T-module $M_i'$ annihilated by J. Suppose $E \to \operatorname{Spec} T/J$ is the reduced preimage of T/J in a log resolution of $(\operatorname{Spec} T, \operatorname{Spec} T/J)$. We also keep track of $E \to \operatorname{Spec} T/J$ in the reduction to characteristic p > 0 process, as well as the modules $h^i(\Omega^0_{T/J}) = R^i\pi_*\mathcal{O}_E$. Assume x is the image of $h(x_1, \ldots, x_t) \in T$ (note that
if x has a problematic denominator, we can replace x by another element that generates the
same ideal and does not have a denominator).
We pick a finitely generated regular Z-algebra A ⊆ C such that the coefficients of the generators of P , the coefficients of h, the coefficients of J, and the coefficients of a presentation
of the Mi′ are contained in A. Form TA = A[x1 , . . . , xt ] and let PA = P ∩ TA , JA = J ∩ TA
and observe that PA ⊗A C = P, JA ⊗A C = J by construction. Note that by generic flatness,
we can replace A by A[b−1 ] and so shrink Spec A if necessary to assume that RA is a flat
A-module and x is a nonzerodivisor on RA . Likewise form (Mi′ )A with the same presentation
matrix of $M_i'$ (and likewise with maps between the modules) and form $E_A \to \operatorname{Spec} T_A/J_A$ so that $(M_i')_A \otimes_A \mathbb{C} = M_i'$ and that $(R^i\pi_*\mathcal{O}_{E_A}) \otimes_A \mathbb{C} = R^i\pi_*\mathcal{O}_E$. Shrinking Spec A yet again
if necessary, we can assume all these modules are flat over A (and that any relevant kernels
and cokernels of the finitely many maps between them are also flat).
We now mod out by a maximal ideal n of A with κ = A/n. In particular, we use RA =
(TA /JA )PA , etc. and Rκ , Tκ , Eκ etc. to denote the corresponding rings, schemes and modules
over A and κ, where κ = A/n for n a maximal ideal in A. Since all relevant kernels and
cokernels of the maps are flat by shrinking Spec A, x is a nonzerodivisor on Rκ for every κ.
We also need a slightly different version of the main result of [Sch09]. In that paper, it was shown that if R is of finite type over a field of characteristic zero, and of dense F-injective type, then R has Du Bois singularities. We need a version of this result in which R is local. The reason is that, in the notation of Theorem 5.3, if R is of finite type and $R_p/x_pR_p$ is F-injective, then for p sufficiently divisible, we obtain that $R_p$ is F-injective in a neighborhood of $V(x_p)$. The problem is that we do not know how to control these neighborhoods as p varies. Thus we need the following preliminary result. In particular, we do not know whether having dense F-injective type is an open property in characteristic zero.
Theorem 5.2. Let (R, m) be a local ring essentially of finite type over C and suppose that
R has dense F -injective type. Then R is Du Bois.
The strategy is the same as in [Sch09], indeed, the proof only differs in how carefully we
keep track of a minimal prime of the non-Du Bois locus.
Sketch of the proof. It is easy to see that R is seminormal, so we need to show that $h^i(\Omega^0_R) = 0$ for all i > 0. We use the notation of Setup 5.1. Let $Q_\mathbb{C} \subseteq P \subseteq T/J$ correspond to a minimal prime of the non-Du Bois locus of (R, m), so that $h^i(\Omega^0_{(T/J)_{Q_\mathbb{C}}}) = h^i(R\pi_*\mathcal{O}_E)_{Q_\mathbb{C}}$ has finite length for i > 0. Since (R, m) has dense F-injective type, it is easy to see that so does $(R_{Q_\mathbb{C}}, Q_\mathbb{C}R_{Q_\mathbb{C}})$, so we may assume that $\mathfrak{m} = Q_\mathbb{C}$. Now using Setup 5.1, reduce to some model so that $(R_\kappa, \mathfrak{m}_\kappa)$ is F-injective. The proof now follows exactly that of [Sch09, Proof 6.1] where we obtain that
$$h^i\big(R(\pi_\kappa)_*\mathcal{O}_{E_\kappa}\big)_{Q_\kappa} \hookrightarrow H^{i+1}_{Q_\kappa}(R_\kappa).$$
But the left side is annihilated by a power of Frobenius by [Sch09, Theorem 7.1] and Frobenius acts injectively on the right by hypothesis. The result follows.
Theorem 5.3. Let (R, m) be a local ring essentially of finite type over C and let x be a nonzerodivisor on R. Suppose R/xR has dense F-injective type. Then for infinitely many p > 0, the Frobenius action $x^{p-1}F$ on $H^i_{\mathfrak{m}_p}(R_p)$ is injective for every i, where $(R_p, \mathfrak{m}_p)$ is the reduction mod p of R. In particular, R has dense F-injective type.
Proof. By Theorem 5.2, R/xR has Du Bois singularities. By Theorem 3.6 (taking Z = ∅), $H^i_\mathfrak{m}(R) \xrightarrow{\cdot x} H^i_\mathfrak{m}(R)$ surjects for all i. By Matlis duality, $\operatorname{Ext}^i_T(R, T) \xrightarrow{\cdot x} \operatorname{Ext}^i_T(R, T)$ injects for all i. Spreading this out to A, and possibly inverting an element of A, we see that $\operatorname{Ext}^i_{T_A}(R_A, T_A) \xrightarrow{\cdot x} \operatorname{Ext}^i_{T_A}(R_A, T_A)$ injects for all i (note there are only finitely many Ext to consider). Inverting another element of A if necessary, we deduce that
$$\operatorname{Ext}^i_{T_\kappa}(R_\kappa, T_\kappa) \xrightarrow{\cdot x} \operatorname{Ext}^i_{T_\kappa}(R_\kappa, T_\kappa)$$
injects for every i and each maximal ideal $\mathfrak{n} \subseteq A$, setting $\kappa = A/\mathfrak{n}$. We abuse notation and let x denote the image of x in $R_\kappa$. Applying Matlis duality again and considering the Frobenius on the local cohomology modules, we have the following collection of short exact sequences for each i:
$$\begin{array}{ccccccccc}
0 & \to & H^i_{\mathfrak{m}_\kappa}(R_\kappa/xR_\kappa) & \to & H^{i+1}_{\mathfrak{m}_\kappa}(R_\kappa) & \xrightarrow{\ \cdot x\ } & H^{i+1}_{\mathfrak{m}_\kappa}(R_\kappa) & \to & 0 \\
 & & \downarrow F & & \downarrow x^{p-1}F & & \downarrow F & & \\
0 & \to & H^i_{\mathfrak{m}_\kappa}(R_\kappa/xR_\kappa) & \to & H^{i+1}_{\mathfrak{m}_\kappa}(R_\kappa) & \xrightarrow{\ \cdot x\ } & H^{i+1}_{\mathfrak{m}_\kappa}(R_\kappa) & \to & 0
\end{array}$$
where p is the characteristic of κ and F denotes the natural Frobenius action on $H^i_{\mathfrak{m}_\kappa}(R_\kappa/xR_\kappa)$ and $H^{i+1}_{\mathfrak{m}_\kappa}(R_\kappa)$.
At this point recall that R/xR has dense F-injective type. It follows that for infinitely many p > 0, if the residue field κ of A has characteristic p > 0 then the natural Frobenius action on $H^i_{\mathfrak{m}_\kappa}(R_\kappa/xR_\kappa)$ is injective for every i. Now chasing the above diagram, if $x^{p-1}F$ is not injective, then we can pick $0 \neq y \in \operatorname{socle}\big(H^{i+1}_{\mathfrak{m}_\kappa}(R_\kappa)\big) \cap \ker(x^{p-1}F)$. Since y is in the socle of $H^{i+1}_{\mathfrak{m}_\kappa}(R_\kappa)$, it maps to zero under multiplication by x. But then $0 \neq y \in H^i_{\mathfrak{m}_\kappa}(R_\kappa/xR_\kappa)$, and chasing the diagram we find that $x^{p-1}F(y) \neq 0$, which is a contradiction.
We have established that for infinitely many p, after we do reduction to p, the Frobenius action $x^{p-1}F$ on $H^{i+1}_{\mathfrak{m}_\kappa}(R_\kappa)$ is injective for every i. This certainly implies the natural Frobenius action F on $H^{i+1}_{\mathfrak{m}_\kappa}(R_\kappa)$ is injective for every i. Hence R has dense F-injective type.
Remark 5.4. It is still unknown whether F -injective singularities in characteristic p > 0
deform (and this has been open since [Fed83]). Our theorem is in support of this conjecture:
it shows this is true “in characteristic p ≫ 0”. For the most recent progress on deformation of
F -injectivity in characteristic p > 0, we refer to [HMS14]. On the other hand, Theorem 5.3
provides evidence for the weak ordinarity conjecture [MS11] because of the relation between
the weak ordinarity conjecture and the conjecture that Du Bois singularities have dense
F -injective type [BST13].
References
[Bha14] B. Bhatt: On the non-existence of small Cohen-Macaulay algebras, J. Algebra 411 (2014), 1–11.
[BST13] B. Bhatt, K. Schwede, and S. Takagi: The weak ordinarity conjecture and F-singularities, Advanced Studies in Pure Mathematics, to appear, arXiv:1307.3763.
[Bou98] N. Bourbaki: Commutative algebra. Chapters 1–7, Elements of Mathematics (Berlin), Springer-Verlag, Berlin, 1998. Translated from the French, reprint of the 1989 English translation. MR1727221 (2001g:13001)
[BS98] M. P. Brodmann and R. Y. Sharp: Local cohomology: an algebraic introduction with geometric applications, Cambridge Studies in Advanced Mathematics, vol. 60, Cambridge University Press, Cambridge, 1998. MR1613627 (99h:13020)
[CHWW11] G. Cortiñas, C. Haesemeyer, M. E. Walker, and C. Weibel: A negative answer to a question of Bass, Proc. Amer. Math. Soc. 139 (2011), no. 4, 1187–1200. 2748413
[dFEM] T. de Fernex, L. Ein, and M. Mustaţă: Vanishing theorems and singularities in birational geometry, unpublished draft of a monograph.
[DN16] A. De Stefani and L. Núñez-Betancourt: F-threshold of graded rings, Nagoya Mathematical Journal (2016), 1–28.
[Del74] P. Deligne: Théorie de Hodge. III, Inst. Hautes Études Sci. Publ. Math. (1974), no. 44, 5–77. MR0498552 (58 #16653b)
[Dem88] M. Demazure: Anneaux gradués normaux, Introduction à la théorie des singularités, II, Travaux en Cours, vol. 37, Hermann, Paris, 1988, pp. 35–68. MR1074589 (91k:14004)
[DB81] P. Du Bois: Complexe de de Rham filtré d'une variété singulière, Bull. Soc. Math. France 109 (1981), no. 1, 41–81. MR613848 (82j:14006)
[DGI06] W. Dwyer, J. P. C. Greenlees, and S. Iyengar: Finiteness in derived categories of local rings, Comment. Math. Helv. 81 (2006), no. 2, 383–432. 2225632
[EMS00] D. Eisenbud, M. Mustaţă, and M. Stillman: Cohomology on toric varieties and local cohomology with monomial supports, J. Symbolic Comput. 29 (2000), no. 4-5, 583–600.
[EH08] F. Enescu and M. Hochster: The Frobenius structure of local cohomology, Algebra Number Theory 2 (2008), no. 7, 721–754. MR2460693 (2009i:13009)
[Fed83] R. Fedder: F-purity and rational singularity, Trans. Amer. Math. Soc. 278 (1983), no. 2, 461–480. MR701505 (84h:13031)
[FW89] R. Fedder and K. Watanabe: A characterization of F-regularity in terms of F-purity, Commutative algebra (Berkeley, CA, 1987), Math. Sci. Res. Inst. Publ., vol. 15, Springer, New York, 1989, pp. 227–245. MR1015520 (91k:13009)
[GK14] P. Graf and S. J. Kovács: Potentially Du Bois spaces, J. Singul. 8 (2014), 117–134. 3395242
[HW02] N. Hara and K.-I. Watanabe: F-regular and F-pure rings vs. log terminal and log canonical singularities, J. Algebraic Geom. 11 (2002), no. 2, 363–392. MR1874118 (2002k:13009)
[HH06] M. Hochster and C. Huneke: Tight closure in equal characteristic zero, a preprint of a manuscript, 2006.
[HR76] M. Hochster and J. L. Roberts: The purity of the Frobenius and local cohomology, Advances in Math. 21 (1976), no. 2, 117–172. MR0417172 (54 #5230)
[HMS14] J. Horiuchi, L. E. Miller, and K. Shimomoto: Deformation of F-injectivity and local cohomology, Indiana Univ. Math. J. 63 (2014), no. 4, 1139–1157. Appendix by Karl Schwede and Anurag K. Singh. 3263925
[HJ14] A. Huber and C. Jörder: Differential forms in the h-topology, Algebr. Geom. 1 (2014), no. 4, 449–478. 3272910
[HL90] C. Huneke and G. Lyubeznik: On the vanishing of local cohomology modules, Invent. Math. 102 (1990), no. 1, 73–93.
[Kol13] J. Kollár: Singularities of the minimal model program, Cambridge Tracts in Mathematics, vol. 200, Cambridge University Press, Cambridge, 2013. With the collaboration of Sándor Kovács. 3057950
[KK10] J. Kollár and S. J. Kovács: Log canonical singularities are Du Bois, J. Amer. Math. Soc. 23 (2010), no. 3, 791–813. 2629988
[KM98] J. Kollár and S. Mori: Birational geometry of algebraic varieties, Cambridge Tracts in Mathematics, vol. 134, Cambridge University Press, Cambridge, 1998. With the collaboration of C. H. Clemens and A. Corti, translated from the 1998 Japanese original. MR1658959 (2000b:14018)
[Kov99] S. J. Kovács: Rational, log canonical, Du Bois singularities: on the conjectures of Kollár and Steenbrink, Compositio Math. 118 (1999), no. 2, 123–133. MR1713307 (2001g:14022)
[Kov11] S. J. Kovács: Du Bois pairs and vanishing theorems, Kyoto J. Math. 51 (2011), no. 1, 47–69. 2784747
[KS11] S. J. Kovács and K. Schwede: Hodge theory meets the minimal model program: a survey of log canonical and Du Bois singularities, Topology of Stratified Spaces (G. Friedman, E. Hunsicker, A. Libgober, and L. Maxim, eds.), Math. Sci. Res. Inst. Publ., vol. 58, Cambridge Univ. Press, Cambridge, 2011, pp. 51–94.
[KS16a] S. J. Kovács and K. Schwede: Du Bois singularities deform, Minimal Models and Extremal Rays, Adv. Stud. Pure Math., vol. 70, Math. Soc. Japan, Tokyo, 2016, pp. 49–66.
[KS16b] S. J. Kovács and K. Schwede: Inversion of adjunction for rational and Du Bois pairs, Algebra Number Theory 10 (2016), no. 5, 969–1000. 3531359
[KSSW09] K. Kurano, E.-i. Sato, A. K. Singh, and K.-i. Watanabe: Multigraded rings, diagonal subalgebras, and rational singularities, J. Algebra 332 (2009), 3248–3267.
[Lee09] B. Lee: Local acyclic fibrations and the de Rham complex, Homology, Homotopy Appl. 11 (2009), no. 1, 115–140. 2506129
[Lyu06] G. Lyubeznik: On the vanishing of local cohomology in characteristic p > 0, Compos. Math. 142 (2006), no. 1, 207–221.
[Ma14] L. Ma: Finiteness properties of local cohomology for F-pure local rings, Int. Math. Res. Not. (2014), no. 20, 5489–5509.
[Ma15] L. Ma: F-injectivity and Buchsbaum singularities, Math. Ann. 362 (2015), no. 1-2, 25–42.
[MZ14] L. Ma and W. Zhang: Eulerian graded D-modules, Math. Res. Lett. 21 (2014), no. 1, 149–167.
[MS11] M. Mustaţă and V. Srinivas: Ordinary varieties and the comparison between multiplier ideals and test ideals, Nagoya Math. J. 204 (2011), 125–157. 2863367
[Pat15] Z. Patakfalvi: Semi-negativity of Hodge bundles associated to Du Bois families, J. Pure Appl. Algebra 219 (2015), no. 12, 5387–5393. 3390027
[Put15] T. J. Puthenpurakal: De Rham cohomology of local cohomology modules: the graded case, Nagoya Math. J. 217 (2015), 1–21.
[Sai00] M. Saito: Mixed Hodge complexes on algebraic varieties, Math. Ann. 316 (2000), no. 2, 283–331. MR1741272 (2002h:14012)
[Sch82] P. Schenzel: Applications of dualizing complexes to Buchsbaum rings, Adv. in Math. 44 (1982), no. 1, 61–77. MR654548 (83j:13011)
[Sch07] K. Schwede: A simple characterization of Du Bois singularities, Compos. Math. 143 (2007), no. 4, 813–828. MR2339829
[Sch09] K. Schwede: F-injective singularities are Du Bois, Amer. J. Math. 131 (2009), no. 2, 445–473. MR2503989
[SW05] A. K. Singh and U. Walther: On the arithmetic rank of certain Segre products, Commutative algebra and algebraic geometry, Contemp. Math., vol. 390, Amer. Math. Soc., Providence, RI, 2005, pp. 147–155.
[SW07] A. K. Singh and U. Walther: Local cohomology and pure morphisms, Illinois J. Math. 51 (2007), no. 1, 287–298.
[Ste81] J. H. M. Steenbrink: Cohomologically insignificant degenerations, Compositio Math. 42 (1980/81), no. 3, 315–320. MR607373 (84g:14011)
[Var13] M. Varbaro: Cohomological and projective dimensions, Compos. Math. 149 (2013), no. 7, 1203–1210. 3078644
[Wat81] K.-i. Watanabe: Some remarks concerning Demazure's construction of normal graded rings, Nagoya Math. J. 83 (1981), 203–221.
Department of Mathematics, University of Utah, Salt Lake City, UT 84112
E-mail address: [email protected]
Department of Mathematics, University of Utah, Salt Lake City, UT 84112
E-mail address: [email protected]
Department of Mathematics College of Humanities and Sciences, Nihon University, Setagayaku, Tokyo 156-8550, Japan
E-mail address: [email protected]
arXiv:1711.05869v1 [stat.ML] 16 Nov 2017
Predictive Independence Testing,
Predictive Conditional Independence Testing,
and Predictive Graphical Modelling
Samuel Burkart∗ and Franz J. Király†
Department of Statistical Science, University College London,
Gower Street, London WC1E 6BT, United Kingdom
November 17, 2017
Abstract
Testing (conditional) independence of multivariate random variables is a task central to statistical inference and modelling in general - though unfortunately one for which to date there
does not exist a practicable workflow. State-of-art workflows suffer from the need for heuristic or
subjective manual choices, high computational complexity, or strong parametric assumptions.
We address these problems by establishing a theoretical link between multivariate/conditional
independence testing, and model comparison in the multivariate predictive modelling aka supervised learning task. This link allows advances in the extensively studied supervised learning workflow to be directly transferred to independence testing workflows - including automated tuning
of machine learning type which addresses the need for a heuristic choice, the ability to quantitatively trade-off computational demand with accuracy, and the modern black-box philosophy for
checking and interfacing.
As a practical implementation of this link between the two workflows, we present a python
package ’pcit’, which implements our novel multivariate and conditional independence tests, interfacing the supervised learning API of the scikit-learn package. Theory and package also allow
for straightforward independence test based learning of graphical model structure.
We empirically show that our proposed predictive independence tests outperform or are on par with current practice, and that the derived graphical model structure learning algorithms asymptotically
recover the ’true’ graph. This paper, and the ’pcit’ package accompanying it, thus provide powerful, scalable, generalizable, and easy-to-use methods for multivariate and conditional independence
testing, as well as for graphical model structure learning.
∗ [email protected]
† [email protected]
Contents

1 Introduction
  1.1 Setting: testing independence
  1.2 Predictive independence testing
  1.3 Graphical model structure learning
  1.4 Principal contributions
  1.5 Paper overview

2 Statistical independence testing
  2.1 About (in)dependence
  2.2 Statistical independence testing
  2.3 The state-of-art in advanced independence testing
  2.4 Issues in current methodology

3 Predictive independence testing
  3.1 Mathematical setting
  3.2 Elicitation by convex losses
  3.3 Predictive uninformedness
  3.4 Statistical dependence equals predictability
  3.5 Conditional independence
  3.6 Testing prediction error against baseline

4 Graphical Models
  4.1 Informal definition of graphical models
  4.2 Types of graphical models
    4.2.1 Bayesian Networks
    4.2.2 Markov Networks
  4.3 Graphical model structure learning
    4.3.1 Score-based methods
    4.3.2 Independence testing based methods

5 Predictive inference algorithms
  5.1 Predictive conditional independence testing
    5.1.1 False-discovery rate control
    5.1.2 Improving the prediction functionals
    5.1.3 Supervised learning for independence testing
  5.2 Predictive structure learning of undirected graphical models

6 pcit package
  6.1 Overview
    6.1.1 Use cases
    6.1.2 Dependencies
  6.2 API description
  6.3 Function signatures
  6.4 API design
    6.4.1 Sklearn interface
    6.4.2 Wrapper for Sklearn estimators
  6.5 Examples

7 Experiments
  7.1 Performance tests
    7.1.1 Performance of conditional independence test
    7.1.2 Performance of structure learning algorithm: Error rates
    7.1.3 Performance of structure learning algorithm: Variance of estimated model
  7.2 Experiments on real data sets
    7.2.1 Sklearn data sets: Boston Housing and Iris
    7.2.2 Key short-term economic indicators (UK)

8 Discussion

Appendices

A Best uninformed predictors: classification
  A.1 Misclassification loss is a probabilistic loss
  A.2 Logarithmic loss is a proper loss
  A.3 Brier loss is a proper loss

B Elicited statistics for regression losses
  B.1 Squared loss elicits the mean
  B.2 Quantile loss elicits the quantile
1. Introduction
1.1. Setting: testing independence
The study of dependence is at the heart of any type of statistical analysis, and independence testing
is an important step in many scientific investigations, be it to determine if two things are related, or
to assess if an intervention had the desired effect, such as:
• When conducting market research, one might be interested in questions such as “Are our advertisement expenditures independent of our profits?” (hopefully not), or the more sophisticated
version “conditional on the state of the market, are advertisement expenditures independent of
our profits?”, which, if found to be true, would mean we are unlikely to increase profits through
an increase in our advertising budget (subject to the usual issue that inferring causality from
data requires an intervention or instrument).
• When collecting data for a medical study on the occurrence of an outcome Y , one might ask
“In the presence of data about attributes A for the subjects, should we still collect data for
attributes B ”. If Y is independent of attributes B given attributes A, additionally collecting
information about attributes B will not improve the knowledge of the state of Y (subject to the
usual issue that this conclusion is valid only for patients sampled in the same way from the same
population).
The difficulty of the independence testing task crucially depends on whether the following two complications are present:
• Multivariate independence testing. This concerns the type of values which the involved
variables (in the second example, attributes A and attributes B) may take: if the
domain of possible values consists either of a single number (continuous variable), or one class
out of many (categorical variable), the hypothesis test is “univariate”, and powerful methodology
exists that deals well with most scenarios, subject to some constraints. Otherwise, we are in the
“multivariate hypothesis testing” setting.
• Conditional independence testing. Whether there are conditioning random variables which
are to be controlled for, in the sense of testing independence conditional on a possible third set of
attributes C. If so, we are in the “conditional hypothesis testing” setting.
For the task which is neither multivariate nor conditional, well-recognized and universally applicable
hypothesis tests (such as the t-test or chi-squared test) are classically known. The multivariate setting
and the conditional setting are less well studied, and are lacking approaches which are general and
universally accepted, due to difficulties in finding a suitable approach which comes with theoretical
guarantees and is free of strong model assumptions. The setting which is both multivariate and
conditional is barely studied. The three major state-of-the-art approaches are density estimation,
copula and kernel based methods. Most instances are constrained to specific cases or rely on subjective
choices that are difficult to validate on real-world data. A more detailed overview of the state-of-art
and background literature is given in Section 2.
1.2. Predictive independence testing
The methodology outlined in this paper will consider multivariate and conditional independence testing from a new angle. The underlying idea for the test is that if two random variables X, Y are
independent, it is impossible to predict Y from X - in fact it will be shown that these two properties
are equivalent (in a certain quantitative sense). The same applies to conditional independence tests:
two random variables X, Y are conditionally independent given Z, if adding X as predictor variable
above Z will not improve the prediction of Y . In both cases, the predictive hypothesis test takes the
form of comparing a good prediction strategy against an optimal baseline strategy via a predictive
loss function. By determining if losses stemming from two predictions are significantly different, one
can then test statistically if a variable adds to the prediction of another variable (potentially in the
presence of a conditioning set), which by the equivalence is a test for (conditional) independence.
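As a minimal illustration of this idea (a sketch of the general principle only, not the pcit implementation described later; the variable names, the choice of random forests, of squared loss, and of a Wilcoxon test are our own illustrative assumptions), one can compare the held-out losses of a predictor that sees X and Z with those of a baseline that sees only Z:

```python
# Sketch of the predictive (conditional) independence testing idea: if X adds
# no predictive value for Y beyond Z, the two loss samples should not differ
# significantly. Model and test choices here are illustrative assumptions.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def predictive_ci_test(X, Y, Z, random_state=0):
    XZ = np.hstack([X, Z])
    XZ_tr, XZ_te, Z_tr, Z_te, Y_tr, Y_te = train_test_split(
        XZ, Z, Y, test_size=0.5, random_state=random_state)

    full = RandomForestRegressor(random_state=random_state).fit(XZ_tr, Y_tr.ravel())
    base = RandomForestRegressor(random_state=random_state).fit(Z_tr, Y_tr.ravel())

    # per-sample squared losses on the same held-out points (paired)
    loss_full = (full.predict(XZ_te) - Y_te.ravel()) ** 2
    loss_base = (base.predict(Z_te) - Y_te.ravel()) ** 2

    # one-sided test: does adding X significantly reduce the loss?
    return wilcoxon(loss_base, loss_full, alternative='greater').pvalue

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 1))
X = Z + 0.1 * rng.normal(size=(500, 1))   # X is a noisy copy of Z
Y = Z + 0.1 * rng.normal(size=(500, 1))   # Y depends on Z only
print(predictive_ci_test(X, Y, Z))        # typically large: no evidence against independence given Z
```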
1.3. Graphical model structure learning
Probabilistic graphical models are a concept that heavily relies on independence statements for learning and inference. Most structure learning algorithms to date are, as a result of the lack of scalable conditional independence tests and additional combinatorial issues, constraint-based, or make heavy assumptions on the underlying distribution of a sample. This paper will leverage the predictive independence test into a new routine to estimate the undirected graph for the distribution underlying a
sample, based on conditional independence testing, allowing it to make only weak assumptions on the
underlying distribution.
1.4. Principal contributions
The new approach to multivariate and conditional independence testing outlined in this paper improves
on current methodology by deriving an algorithm that
• features a principled model selection algorithm for independence testing by linking the field of
independence testing to the field of predictive modelling, thus filling a gap in state-of-the-art
methodology,
• additionally allowing independence testing to directly benefit from the well-understood and
efficiently implemented theory for model selection and parameter tuning in predictive modelling,
• is comparatively fast and scalable on a wide variety of problems and
• deals with the multivariate and conditional independence testing task in a straightforward manner.
Additionally, an algorithm leveraging the newly derived test into a scalable independence testingbased graphical model structure learning algorithm is outlined, which overcomes issues in the field by
offering a test for undirected graph structure learning that
• has polynomial time complexity, O(p2 ) in the number of variables, p,
• is exact and
• offers stringent methodology to control the number of type 1 errors in the estimated graph.
1.5. Paper overview
Section 2 will provide an overview of the most important tasks in and approaches to statistical independence testing and outline the issues in current methodology. Section 3 will then propose and derive
a novel approach to independence testing, the predictive conditional independence test (PCIT). After,
section 4 will introduce the concept of a graphical model and survey the current structure learning
algorithms. Section 5 will then state the relevant algorithms of the Python implementation, which
is outlined in section 6. Section 7 provides performance statistics and examples for the conditional
independence test as well as the graphical model structure learning algorithm. Lastly, section 8
will describe the advantages of using the PCIT for independence testing and outline drawbacks and
directions for further research.
Authors’ contributions
This manuscript is based on SB’s MSc thesis, submitted September 2017 at University College London
and written under supervision of FK, as well as on an unpublished manuscript of FK which relates
predictive model selection to statistical independence. The present manuscript is a substantial reworking of the thesis manuscript, jointly done by SB and FK.
FK provided the ideas for the independence tests (paper) in section 3 and the usage of them for
graphical models, SB and FK jointly conceived the ideas for the graphical model structure learning algorithm. Literature overview is due to SB with helpful pointers by FK. Python package and
experiments are written and conducted by SB, partially based on discussions with FK.
Acknowledgements
We thank Fredrik Hallgren and Harald Oberhauser for helpful discussions.
FH has briefly worked on a thesis about the same topic under the supervision of FK, before switching
to a different topic. While FH did, to our knowledge, not himself make any of the contributions found
in this paper, discussions with him inspired a few of them.
HO pointed FK in generic discussions about loss functions towards some prior results (elicitation of
median, and elicitation as a concept as defined by Gneiting and Raftery [17]) which helped making
some of the proven statements more precise (such as the Q-losses being faithful for the univariate real
case).
2. Statistical independence testing
2.1. About (in)dependence
Statistical (in)dependence is a property of a set of random variables central to statistical inference.
Intuitively, if a set of random variables are statistically independent, knowing the value of some will
not help in inferring the values of any of the others.
Mathematically: let X, Y and Z be random variables taking values in X, Y and Z respectively. As
usual, we denote for A ⊆ X by P(X ∈ A) the probability of X taking a value in A, and for A ⊆ X, C ⊆ Z by P(X ∈ A | Z ∈ C) := P(X ∈ A, Z ∈ C)/P(Z ∈ C) the conditional probability of X taking a
value in A when Z is known/observed to have taken a value in C.
Definition 2.1. X and Y are called marginally independent (of each other) if for all A ⊆ X and
B ⊆ Y (where the below expression is defined) it holds that
P (X ∈ A, Y ∈ B) = P (X ∈ A)P (Y ∈ B).
This formulation allows for X and Y to be defined over sets of random variables that are a mixture of
continuous and discrete, as well as being univariate or multivariate (thus implicitly covering the case
of multiple univariate random variables as well).
Definition 2.2. X and Y are called conditionally independent (of each other) given Z if for all
A ⊆ X, B ⊆ Y, and C ⊆ Z (where the below expression is defined) it holds that
P (X ∈ A, Y ∈ B|Z ∈ C) = P (X ∈ A|Z ∈ C)P (Y ∈ B|Z ∈ C).
For absolutely continuous or discrete X and Y , Definition 2.1 straightforwardly implies that marginal
independence is equivalent to the joint distribution or mass function factorizing, i.e., it equals the
product of the marginal distributions' probability or mass functions. The analogous result also holds in the conditional case.
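As a toy illustration of the factorization criterion (made-up numbers, purely for exposition), one can check a discrete joint probability mass function for factorization directly:

```python
# Toy check of the factorization criterion for two discrete variables.
import numpy as np

# joint pmf of (X, Y) on {0,1} x {0,1,2}; rows index X, columns index Y
joint = np.array([[0.10, 0.20, 0.10],
                  [0.15, 0.30, 0.15]])

p_x = joint.sum(axis=1)           # marginal pmf of X
p_y = joint.sum(axis=0)           # marginal pmf of Y
product = np.outer(p_x, p_y)      # what the joint would be under independence

print(np.allclose(joint, product))  # True: this joint factorizes, so X and Y are independent
```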
We would like to note that the mathematical definition of independence is symmetric, i.e., having the
property from Definition 2.1 is unaffected by interchanging X and Y . In contrast, the intuitive motivation we gave at the beginning however, namely that knowing the value of X does not yield any
additional restriction regarding the value of Y , is non-symmetric. This non-symmetric statement of
statistical independence is commonly phrased mathematically as P (Y ∈ B|X ∈ A) = P (Y ∈ B)
which is directly implied by Definition 2.1 (and that of the conditional).
This non-symmetric characterization of statistical independence is also morally at the heart of the
predictive characterization which we later give in the supervised learning setting: Y cannot be predicted from X, phrased in a quantitative way that connects checking of statistical independence to
estimation of a supervised generalization error difference, which eventually allows for a quantitative
testing of the hypothesis of independence.
2.2. Statistical independence testing
Testing whether two random variables are (conditionally) independent is one of the most common
tasks in practical statistics (perhaps the most common one after summarization) and one of the most
important topics in the theoretical study of statistical inference (perhaps the most important one).
The most frequently found types of quantitative certificates for statistical independence are phrased in
the (frequentist) Neyman-Pearson hypothesis testing paradigm. In terms of the methodological/mathematical idea, the main distinction of independence tests is into parametric (where the data is assumed
to stem from a certain model type and/or distribution) and non-parametric (”distribution-free”) tests.
The hypothesis tested usually takes one of the three following forms:
Marginal independence, X ⊥⊥ Y
Marginal independence of two random variables is the topos of much of classical statistics. Well known classical test statistics for the continuous univariate case are the Pearson correlation coefficient, Kendall's τ and Spearman's ρ. For discrete variables, Pearson's χ-squared statistic can be used. From theoretical results on these statistics' asymptotic or exact distribution, univariate independence tests may be derived. For X being discrete and Y being continuous, t-family tests or Wilcoxon family tests may be used.
Testing with at least one of X or Y being multivariate is much more difficult and less standard; current methodology will be outlined in the next section.
Conditional independence, X ⊥⊥ Y | Z
Testing for conditional independence is an inherently more difficult problem than marginal independence [7]. Hence, few viable options exist to date to test if two sets of variables are conditionally independent given a conditioning set. The most common instances are either based on parametric model assumptions (e.g., linear) or on binning Z and X, then comparing if the distribution of Y conditioned on Z changes when additionally conditioning on X. Strategies for conditional independence testing may demand much larger sample sizes than marginal independence tests, due to explicitly modelling sub-domains of the values which Z takes.
Equality of distribution of two unpaired samples, $X_1 \overset{d}{=} X_2$,
where $\overset{d}{=}$ indicates equality in underlying distribution. An unpaired two-sample test for equality of distribution tests whether two unpaired i.i.d. samples from X1 and X2 are sampled from the same underlying distribution. The connection to independence testing is not obvious but may be established by pairing each draw from X1 or X2 with a draw from a variable Y taking values in {1, 2}, indicating whether the draw was from X1 or X2, i.e., taking value i if the draw was from Xi.
2.3. The state-of-art in advanced independence testing
The task of independence testing (in various settings) has been tackled from many different angles.
As soon as one considers observations that are multivariate, or the conditional task, there is not one
universally agreed-upon method, but many different approaches. The most prominent ideas used for
multivariate and/or conditional independence testing will be presented in this section together with
their advantages and shortcomings.
Density estimation
The classical approach. In case of existence, the joint probability density function contains all the
necessary information about independence structures for a set of random variables in the multivariate
case. While many univariate tests are based on or may be interpreted as being based on some sort of
density estimation, for the multivariate and conditional case, density estimation is a difficult problem.
One example is shown in [12], where density estimation-based information-theoretical measures are
used to conduct multivariate marginal independence tests.
Copulas
An approach that is widely used in finance and risk management. Copulas are multivariate probability
distributions with uniform marginals. To use copulas for independence tests, one transforms the
marginals of a multivariate distribution into uniform distributions, and the resulting copula contains
all information about the independence structure. A simple example using the empirical cdf can be
found in [16]. Copulas have mainly been used for marginal multivariate independence tests, such as in
[34]. The task of testing for conditional independence has been attempted through the use of partial
copulas [6], however strong assumptions (linearity) are made on the type of relationship between the
variables that are to be tested for independence and the conditioning set. Two-sample testing has
largely remained unaddressed, since its applications in the financial sector are less relevant. [33] describes
a two-sample test for the estimated copulas, and hence, the independence structures, but the test does
not extend to a comparison of the joint distributions as a whole.
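As a minimal illustration of the first step of such copula-based procedures (the empirical-cdf transform of the marginals; this sketch is ours and not taken from the cited works), one can rank-transform each margin to an approximately uniform scale and then study dependence on the transformed scale:

```python
# Probability integral transform via empirical cdfs: each transformed margin is
# approximately Uniform(0, 1); all remaining structure is the dependence
# (copula) structure.
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = np.exp(x) + 0.5 * rng.normal(size=1000)   # dependent, non-linearly related

u = rankdata(x) / (len(x) + 1)   # empirical cdf values of x
v = rankdata(y) / (len(y) + 1)   # empirical cdf values of y

# dependence measured on the copula scale (coincides with Spearman's rho of the raw data)
print(spearmanr(u, v).correlation)
```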
Kernel methods
A relatively recent approach using optimization techniques in reproducing kernel Hilbert spaces to
answer independence queries based on various test-statistics. Kernel methods were first used for
marginal multivariate independence testing via a quantitative measure of dependence, the Hilbert-Schmidt Independence Criterion (HSIC), which is defined as the squared Hilbert-Schmidt norm of
the cross-covariance operator (the covariance between two difference spaces), see [19]. [20] further
expands on the theoretical foundations of the HSIC’s distribution under the null, resulting in a hypothesis test for the independence of two variables X and Y using the HSIC criterion. [18] outlines
a non-parametric test for independence, and demonstrates its performance on samples of random
variables with up to three dimensions.
The conditional case is tackled in [37], which derives a test statistic based on conditional cross-covariance operators, for which the asymptotic distribution under the null-hypothesis X ⊥⊥ Y | Z is
derived. They state that the manual choice of kernel can affect type II and, more importantly, type I
errors, especially when the dimensionality of the conditioning set is large. Additionally, they found
that the performance of their method decreases in the dimensions of the conditioning set, constraining
the set of problems for which the test is viable.
As for two-sample testing, [3] derive a test based on the Euclidean inter-point distances between the
two samples. A different approach to the same test statistics with the additional use of characteristic
functions was made by [13]. While the difficulty of density estimation is reduced when using the
empirical distributions, the data requirements tend to be much larger, while additionally imposing a
sometimes taxing computational complexity. Both of these methods can be related to kernel functions, which was picked up in [22], which proposed a kernel-based two-sample test by deriving three
multivariate tests for assessing the null-hypothesis that the distributions p and q generating two samples X ∼ p, Y ∼ q are equal, p = q. The criterion introduced is the “Maximum Mean Discrepancy”
(MMD) between p and q over a function space F.
$$\mathrm{MMD} = \sup_{f \in \mathcal{F}} \left( \mathbb{E}_{X \sim p}[f(X)] - \mathbb{E}_{Y \sim q}[f(Y)] \right)$$
That is, the maximum difference between the expected function values with respect to the function
space (in their case, a Reproducing Kernel Hilbert Space).
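For concreteness, a plug-in estimate of the (biased) empirical MMD² with a Gaussian RBF kernel can be written in a few lines (a sketch under our own choice of kernel and bandwidth, not the exact estimator of [22]):

```python
# Biased empirical MMD^2 estimate with an RBF kernel:
# MMD^2 = mean k(x, x') + mean k(y, y') - 2 * mean k(x, y).
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_biased(x, y, gamma=1.0):
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(0.5, 1.0, size=(200, 2))   # shifted distribution
print(mmd2_biased(x, y))                  # noticeably larger than for equal distributions
```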
Predictive independence testing: precursors
There are a few instances in which independence testing has already been approached from the perspective of predictability: most prominently, Lopez-Paz and Oquab [28] present a two-sample test
(on equality of distribution) based on abstract binary classifiers, e.g., random forests. The presented
rationale is in parts heuristic and specific to the two-sample setting. Their main idea is as follows: if
the null-hypothesis of the two samples being drawn from the same distribution is true, no classifier
assigning to a data point the distribution from it was drawn should fare significantly better than
chance, on an unseen test set.
The work of Lopez-Paz and Oquab [28] may be seen as a special case of our predictive independence
test presented later, for Y taking values in a two-element set, and the loss being the misclassification
loss. It is probably the first instance in which the abstract inability to predict is explicitly related to
independence, albeit somewhat heuristically, and via the detour of two-sample testing for equality in
distribution.
In earlier work, Sriperumbudur et al. [35] already relate the kernel two-sample test to a specific classifier, the Parzen window classifier, evaluated by the misclassification loss. Thus this idea of Sriperumbudur et al. [35] may in turn be seen as a precursor of the later work of Lopez-Paz and Oquab [28]
which abstracts the idea that any classifier could be used to certify for equality of the two samples.
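A minimal version of such a classifier two-sample test (our own sketch of the idea in [28]; the choice of random forests and of a binomial test on held-out accuracy are illustrative assumptions) looks as follows:

```python
# Classifier two-sample test: label which sample a point came from, and test
# whether held-out accuracy exceeds chance (0.5) with a binomial test.
import numpy as np
from scipy.stats import binomtest
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def classifier_two_sample_test(x1, x2, random_state=0):
    data = np.vstack([x1, x2])
    labels = np.concatenate([np.zeros(len(x1)), np.ones(len(x2))])
    d_tr, d_te, l_tr, l_te = train_test_split(
        data, labels, test_size=0.5, stratify=labels, random_state=random_state)
    clf = RandomForestClassifier(random_state=random_state).fit(d_tr, l_tr)
    n_correct = int((clf.predict(d_te) == l_te).sum())
    # under H0 (equal distributions) held-out accuracy should be at chance level 1/2
    return binomtest(n_correct, len(l_te), p=0.5, alternative='greater').pvalue

rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, size=(300, 3))
x2 = rng.normal(0.3, 1.0, size=(300, 3))
print(classifier_two_sample_test(x1, x2))   # small p-value: evidence the distributions differ
```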
2.4. Issues in current methodology
Density estimation
While the probability density function is in some sense optimal for measuring dependence, since
it contains all the information about the random variable, its estimation is a difficult task, which
requires either strong assumptions or large amounts of data (or both). Furthermore, due to the
curse of dimensionality, density estimation for 3 or more dimensions/variables is usually practically
intractable, hence its practical usefulness is often limited to testing pairwise independence rather than
testing for full (mutual) independence.
Copulas
Leaving aside issues arising from practitioners misunderstanding the method (which had a strong contribution to the 2007/2008 financial crisis, but no bearing whatsoever on the validity of the method
when applied correctly), copula-based independence testing is largely a heuristics driven field, requiring many subjective manual choices for estimation and testing. Above all, copula methods require a
user to subjectively choose an appropriate copula from a variety of options, such as for example the
survival copula, the multivariate Gaussian or the multivariate Student’s t-copula [9]. Additionally,
two-sample testing is to date largely unaddressed in the field.
Kernels
As for the copula-based methods, kernels require many subjective and manual choices, such as the
choice of kernel function, and its hyper-parameters. While in a theoretical setting, these choices are
made by using the ground truth (and generating artificial data from it), in practice it is difficult to
tune the parameters and make statements about the confidence in results. Additionally, the cost of
obtaining the test statistic and its asymptotic distribution may be high for many of the proposed approaches. There are attempts at resolving these issues, [36] and [23] outline heuristics and strategies
to minimize the heuristic choices, and [21] and [10] propose more computationally efficient strategies.
While all methods have their merits, and there is vast research on specific problems, providing specific
solutions to the individual challenges, what is missing is an approach that is scalable and powerful
not just for specific cases, but for the general case, that automatically finds good solutions to real
problems, where the ground truth is not known and cannot be used for tuning.
3. Predictive independence testing
This section will explore the relationship between predictability and dependence in a set of variables. It
will be shown that one can use supervised learning methodology to conduct marginal and conditional
independence tests. This distinction between marginal and conditional dependence is not theoretically
necessary (since the marginal case can be achieved by setting the conditioning set to the empty set),
but is made to highlight specific properties of the two approaches. First, equivalence of independence
statements and a specific supervised learning scenario will be shown. After, a routine leveraging this
equivalence to test for conditional independence will be proposed.
3.1. Mathematical setting
The independence tests will be based on model comparison of supervised learning routines.
Supervised learning is the task where, given i.i.d. samples (X1, Y1), ..., (XN, YN) ∼ (X, Y) taking values in X × Y, one aims to find a prediction functional f : X → Y such that f(Xi) well approximates the target/label variable Yi, where "well" is defined with respect to a loss function L which is to be
minimized in expectation.
Definition 3.1. A (point prediction) loss functional is an element of [Y × Y → R], i.e., a function with domain Y × Y and values in R. By convention, the first argument will be considered the proposed/predicted value, the second the true value to compare to. By convention, a lower loss will be considered more beneficial.
A loss functional L : Y × Y → R is called:
(i) convex if L(·, y) is lower bounded and E[L(Y, y)] ≥ L(E[Y], y) for any Y-valued random variable Y and any y ∈ Y, and
(ii) strictly convex if L(·, y) is lower bounded and E[L(Y, y)] > L(E[Y], y) for any non-constant Y-valued random variable Y and any y ∈ Y.
More formally, a good prediction functional possesses a small expected generalization error
$$\varepsilon(f) := \mathbb{E}[L(f(X), Y)].$$
In usual practice and in our setting f is estimated from training data
$$D = \{(X_i, Y_i)\}_{i=1}^{N},$$
hence it is in fact a random object. The generalization error may be estimated from test data
$$T = \{(X_i^*, Y_i^*)\}_{i=1}^{M}, \quad \text{where } (X_1^*, Y_1^*), \ldots, (X_M^*, Y_M^*) \sim (X, Y),$$
which we assume i.i.d. and independent of the training data D and f, as
$$\widehat{\varepsilon}(f) = \frac{1}{M}\sum_{i=1}^{M} L\big(f(X_i^*), Y_i^*\big).$$
Independence of training data D and test data T is required to avoid bias of the generalization error
estimate (“over-fitting”). In the following sections, when estimating the generalization error, it will
be assumed that f is independent of the test data set, and we condition on D, hence treat f as fixed
(i.e., conditional on D).
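As a minimal illustration of this train/test protocol (not part of the original text; the simulated data, the squared loss, and the use of scikit-learn's LinearRegression are illustrative assumptions), the generalization error of a fitted f can be estimated on held-out data as follows:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] + 0.1 * rng.normal(size=500)

# D = training data, T = test data; f is fitted on D only
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
f = LinearRegression().fit(X_train, y_train)

# empirical generalization error: mean of the per-sample squared losses on the test set
losses = (f.predict(X_test) - y_test) ** 2
print(losses.mean())

Averaging per-sample losses on data that f never saw is exactly the estimator ε̂_L(f) above; reusing the training data instead would bias the estimate downwards (over-fitting).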
In the following, we assume that Y is a vector space, and Y ⊆ R^q, which naturally includes the setting of supervised regression. However, it also includes the setting of supervised classification through a special choice of loss:
Remark 3.2. Deterministic classification comprises the binary case of Y = {−1, 1}, or more generally Y being finite. In neither case are additions or expectations defined on Y, as Y is just a discrete set. Supervised classification algorithms in the deterministic case are asked to produce a prediction functional f : X → Y, and are usually evaluated by the misclassification loss or 0/1-loss
L : Y × Y → R : (y, y*) ↦ 1[y ≠ y*],
where 1[y ≠ y*] is the indicator function which evaluates to 0 if y = y* and to 1 otherwise. Hence, Definition 3.1 of convexity does not directly apply to the misclassification loss, as expectations are not defined on Y.
However, by allowing all algorithms to make predictions which are Y-valued random variables (instead of constants), one finds that
E[L(Y, y)] = L′(p_Y, y),   where L′ : (p_Y, y) ↦ 1 − p_Y(y)
and p_Y is the probability mass function of Y. Identifying Y-valued random variables with their probability mass functions, one may replace
(i) Y by the corresponding subset Y′ of R^{#Y−1} which is the set of probability vectors (the so-called probability simplex). For example, Y = {−1, 1} would be replaced by [0, 1].
(ii) the misclassification loss by the probabilistic, convex (but not strictly convex) loss L′ : Y′ × Y′ → R, where the observations in the second Y′ argument are always pmfs describing constant random variables.
A further elementary calculation (see Appendix A.1 for an explicit derivation) shows that L′ is always minimized by making deterministic predictions:
argmin_{p ∈ Y′} E[L′(p, Y)] ∩ [Y → {0, 1}] is non-empty,
i.e., the L′-best classifier may always be chosen to be a deterministic one, i.e., one that always predicts a probability mass function with probabilities 0 or 1.
This exhibits deterministic classification as a special case of probabilistic classification with a special choice of loss function.
3.2. Elicitation by convex losses
Loss functionals are canonically paired with summary statistics of distributions:
Definition 3.3. Let L : Y × Y → R be a convex loss functional. For a Y-valued random variable Y, we define
μ_L([Y]) := argmin_{y ∈ Y} E[L(y, Y)],
where [Y] denotes Y as a full object (i.e., a measurable function), rather than its value.
Following Gneiting and Raftery [17], we term the functional which maps Y-valued distributions to subsets of Y the eliciting functional associated to L. We call μ_L([Y]) the summary of Y elicited by L.
Note that well-definedness, i.e., existence of the minimizer, is ensured by convexity of L (and the implied continuity). If L is strictly convex, there is furthermore a unique minimizer, in which case we will interchangeably consider μ_L to be a functional with target Y.
Well-known examples of elicited summaries are given in the following:
Lemma 3.4. The following relations between losses and elicited statistics of real-valued random variables hold:
(i) the (strictly convex) squared loss L : (y, y*) ↦ (y − y*)² elicits the mean. That is, μ_L([Y]) = E[Y] for any R^n-valued random variable Y.
(ii) the (convex but not strictly convex) absolute loss L : (y, y*) ↦ |y − y*| elicits the median(s). That is, μ_L([Y]) = median[Y] for any R-valued random variable Y.
(iii) the (convex but not strictly convex) quantile loss (or short: Q-loss) L(y, y*) = α · m(y*, y) + (1 − α) · m(y, y*), with m(x, z) = max(x − z, 0), elicits the α-quantile(s). That is, μ_L([Y]) = F_Y^{-1}(α) for any R-valued random variable Y, where F_Y^{-1} : [0, 1] → P(R) is the set-valued inverse c.d.f. of Y (with the convention that the full set of inverse values is returned at jump discontinuities rather than just an extremum).
Proof. (i) After substitution of the definition, the claim is equivalent to the statement that the mean is the minimizer of expected squared distances. A more explicit proof is given in Appendix B.1.
(ii) follows from (iii) by setting α = 1/2.
(iii) This is carried out in Appendix B.2.
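The following small numerical check (not from the original text; the exponential sample and the grid search over constant predictions are illustrative assumptions) makes Lemma 3.4 concrete: the constant minimizing the empirical squared, absolute, and quantile loss approximates the mean, median, and α-quantile, respectively.

import numpy as np

rng = np.random.default_rng(1)
y = rng.exponential(size=10_000)      # a skewed sample
grid = np.linspace(0.0, 5.0, 2001)    # candidate constant predictions
alpha = 0.9

sq = [np.mean((c - y) ** 2) for c in grid]
ab = [np.mean(np.abs(c - y)) for c in grid]
pin = [np.mean(alpha * np.maximum(y - c, 0) + (1 - alpha) * np.maximum(c - y, 0)) for c in grid]

print(grid[np.argmin(sq)], y.mean())                 # ~ mean
print(grid[np.argmin(ab)], np.median(y))             # ~ median
print(grid[np.argmin(pin)], np.quantile(y, alpha))   # ~ 0.9-quantile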
In the supervised setting, the best possible prediction functional can be exactly characterized in terms
of elicitation:
Proposition 3.5. Let L be a (strictly) convex loss. Then, it holds that
argmin_{f ∈ [X → Y]} ε_L(f) = [x ↦ μ_L[Y|X = x]].
That is, the best possible prediction as measured by L is predicting the statistic which L elicits from the conditional random variable Y|X = x.
Proof. The prediction functional ω : x 7→ µL [Y |X = x] is well-defined, hence it suffices to show that
εL (f ) ≥ εL (ω) for any prediction functional f : X → Y.
Now by definition of µL , it holds that
E [L(ω(X), Y )|X] ≤ E [L(f (X), Y )|X] .
Taking total expectations yields the claim.
Intuitively, the best prediction functional, as measured by a convex loss L, always predicts the statistic
elicited by L from the conditional law [Y |X = x].
3.3. Predictive uninformedness
We will now introduce the notion of an uninformed baseline which will act as a point of comparison.
Definition 3.6. A prediction functional f : X → Y is called uninformed if it is a constant functional,
i.e., if f (x) = f (y) for all x, y ∈ X. We write uα for the uninformed prediction functional uα : x 7→ α.
We will show that, for a given loss function, there is one single uninformed baseline that is optimal.
Lemma 3.7. Let L be a (strictly) convex loss, let µ := µL ([Y ]) be the/a statistic elicited by L (see
Definition 3.3). Then, the following quantities are equal:
(i) inf{εL (f ) : f is an uninformed prediction functional}
(ii) εL (uµ )
That is, uµ achieves the lowest possible (L-)loss amongst uninformed prediction functionals and prediction strategies.
Proof. Note that ε(uα ) = E[L(α, Y )]. It follows hence by definition of µ that ε(uµ ) ≤ ε(uα ) for any
(constant) α ∈ Y. I.e., uµ is the best uninformed prediction functional.
Lemma 3.7 motivates the definition of the best uninformed predictor:
Definition 3.8. We call uµ, as defined in Lemma 3.7, the (L-)best uninformed predictor (even though it may not be unique, the choice, where a choice is possible, will not matter in what follows).
We call a prediction functional (L-)better-than-uninformed if its expected generalization loss is strictly smaller than that of the (L-)best uninformed predictor. More formally, a prediction functional f is L-better-than-uninformed if ε_L(f) < ε_L(uµ).
For convenience, we further introduce some mathematical notation for best predictors:
Notation 3.9. Let L be a (strictly) convex loss. We will write:
(i) ω_Y^{(L)} := [x ↦ μ_L[Y]] for the/a (L-)best uninformed predictor as defined in Definition 3.8.
(ii) ω_{Y|X}^{(L)} := [x ↦ μ_L[Y|X = x]] for the/a (L-)best predictor as considered in Proposition 3.5.
ω_Y^{(L)} and ω_{Y|X}^{(L)} are unique when L is strictly convex, as per the discussion after Definition 3.3. When multiple choices are possible for the minimizer, i.e., if L is convex but not strictly convex, an arbitrary choice may be made (not affecting the subsequent discussion). The superscript L may be omitted in situations where the loss is clear from the context.
An important fact which we will use in testing is that if a better-than-uninformed prediction functional
exists, then ωY |X is an example:
Proposition 3.10. Fix a (strictly) convex loss L. The following are equivalent:
(i) ω_{Y|X} is L-better-than-uninformed.
(ii) ω_{Y|X} is not uninformed.
(iii) There is an L-better-than-uninformed prediction functional.
Note that the equivalence of (i) and (ii) is not trivial, since there are prediction functionals which are neither better-than-uninformed nor uninformed.
Proof. The equivalence of (i) and (iii) follows from noting that (i) implies (iii), and that, conversely, if some f is L-better-than-uninformed, then so is ω_{Y|X}, since ε_L(ω_{Y|X}) ≤ ε_L(f) by Proposition 3.5.
(i)⇔(ii): By Proposition 3.5, ε(ω_{Y|X}) ≤ ε(f) for any f, in particular also for any uninformed f. By Lemma 3.7 and the above, the inequality is strict if and only if ω_{Y|X} is better-than-uninformed.
3.4. Statistical dependence equals predictability
We continue with a result that relates, or more precisely equates, better-than-uninformed predictability with statistical dependence. As such, it shows that the choice of constant functionals as a proxy for uninformedness is canonical.
We start with the more intuitive direction of the claimed equivalence:
Proposition 3.11. Let X, Y be random variables taking values in X, Y. If X, Y are statistically independent, then there exists no better-than-uninformed prediction functional for predicting Y from X. More precisely, there is no prediction functional f : X → Y and no convex loss L : Y × Y → R such that f is L-better-than-uninformed.
Proof. Assume X, Y are statistically independent. Let L be a convex loss functional and let f : X → Y be a prediction functional. Then, by convexity of L,
E[L(f(X), Y) | Y] ≥ L(E[f(X) | Y], Y).
Since X is independent of Y (by assumption) and f is not random, it holds that E[f(X) | Y] = E[f(X)], i.e., E[f(X) | Y = y], as a function of y, is constant. Writing ν := E[f(X)], we hence obtain that
L(E[f(X) | Y], Y) = L(ν, Y) = L(u_ν(X), Y).
Taking total expectations, it hence holds by the law of total expectation that
E[L(f(X), Y)] ≥ E[L(u_ν(X), Y)],
meaning that f is not better-than-uninformed w.r.t. L. Since f and L were arbitrary, the statement holds.
Proposition 3.11 states that independence implies unpredictability (as measured by expected loss). A natural question to ask is whether the converse holds, or more generally which converses hold exactly, since the loss functional L in (ii) and (iii) of Proposition 3.10 was arbitrary, and independence in (i) is a statement which remains unchanged when exchanging X and Y, while predictability is not symmetric in this respect. Hence the weakest possible converse would require unpredictability w.r.t. all convex losses and w.r.t. either direction of prediction; however, much stronger converses can be shown.
Before stating the mathematically more abstract general result for the converse direction, we first separately present the three important special cases, namely deterministic classification, probabilistic classification, and regression.
Theorem 1. As in our setting, consider two random variables X, Y , taking values in X and Y.
Assume Y = {−1, 1}. The following are equivalent:
(i) X and Y are statistically independent.
(ii) There exists no better-than-uninformed prediction functional predicting Y from X. More precisely, there is no prediction functional f : X → Y such that f is L-better-than-uninformed for
the misclassification loss L : (p, y) 7→ 1 − p(y).
Regarding the specific loss L, see Remark 3.2 for the identification of a class prediction with a
probability-1 probabilistic prediction.
Proof. By Proposition 3.11, the only direction which remains to be proven is (ii)⇒(i): (i)⇒(ii) follows directly from substituting the specific L into Proposition 3.11.
(ii)⇒(i): We prove this by contraposition: we assume X and Y are statistically dependent and construct a better-than-uninformed prediction functional.
By the equivalence established in Remark 3.2, the best uninformed predictor ω_Y predicts the most probable class, w.l.o.g. 1 ∈ Y, and its expected generalization loss is one minus that class's marginal frequency, ε(ω_Y) = P(Y = −1).
Since X and Y are statistically dependent, there is a measurable X_0 ⊆ X with positive probability measure such that P(Y = 1 | X ∈ X_0) ≥ P(Y = −1 | X ∈ X_0) (the definition of dependence yields ≠; w.l.o.g., replacing X_0 by its complement yields ≥). An elementary computation shows that the prediction functional f : X → Y, x ↦ 2 · 1[x ∈ X_0] − 1 is better-than-uninformed.
The proof of Theorem 1 shows that for a converse, one does not necessarily need to consider predictability of X from Y as well. However, the deterministic binary classification case is somewhat
special, since the misclassification loss is insufficient in the multi-class case, and in general a single
loss function will be unable to certify for independence. In order to formulate these negative results,
we define shorthand terminology for stating correctness of the converse.
Definition 3.12. Fix a (label) domain Y, and let L be a set of convex loss functionals in [Y × Y → R]. We call L faithful for Y if for every statistically dependent pair of random variables X, Y taking values in X, Y, there is an L ∈ L and a prediction functional f : X → Y such that f is L-better-than-uninformed. If L is a one-element set, we call its single element faithful for Y.
If L is faithful for Y, and L is endowed with a measure µ such that no subset of L obtained by removing a set of positive µ-measure is faithful for Y, then we call L (µ-)strictly faithful.
If Y is canonically associated with a prediction task (such as classification for finite Y or class-probability Y, regression for continuous Y), the reference to Y may be replaced by a reference to that task.
In this terminology, Theorem 1 states that the misclassification loss is faithful for the label domain Y = {−1, 1}, or equivalently that the misclassification loss is faithful for deterministic binary classification (and strictly faithful, since any set smaller than a one-element set, as by the counting measure, is empty and by definition not faithful). We can now state some negative and positive results on
deterministic multi-class classification and probabilistic classification:
Proposition 3.13. (i) The misclassification loss L : (p, y) ↦ 1 − p(y) is not faithful for deterministic multi-class classification, i.e., for Y being finite and containing 3 or more elements.
(ii) The log-loss L : (p, y) ↦ − log(p(y)) is (strictly) faithful for probabilistic classification.
(iii) The squared probabilistic classification loss L : (p, y) ↦ (1 − p(y))² is (strictly) faithful for probabilistic classification.
(iv) The Brier loss L : (p, y) ↦ (1 − p(y))² + ∑_{y′ ≠ y} p(y′)² is (strictly) faithful for probabilistic classification.
Proof. (i): It suffices to construct a counterexample for each possible Y, i.e., every finite Y with 3 or more elements. Let X = {−1, 1}, and let y ∈ Y be arbitrary. Define X, Y such that P(Y = y | X = 1) = P(Y = y | X = −1) := 0.9 and P(Y = y′ | X = 1) ≠ P(Y = y′ | X = −1) for some class y′ ≠ y. This choice is possible as Y has 3 or more elements. Then X and Y are dependent, but the best uninformed predictor always predicts y, with expected generalization loss 0.1, and it cannot be outperformed; see e.g. the discussion in Remark 3.2.
(ii)-(iv): For faithfulness, it suffices to show that ε(ω_{Y|X}) = ε(ω_Y) implies statistical independence of X, Y.
Explicit computations in each case (see Appendices A.2 and A.3) show that argmin_p E[L(p, Y)] = p_Y, where p_Y is the probability mass function of Y, implying that ω_Y = [x ↦ [y ↦ P(Y = y)]]. By conditioning the same statement on X, this also implies that argmin_p E[L(p, Y) | X] = [y ↦ P(Y = y | X)], thus argmin_f E[L(f(X), Y)] = [x ↦ [y ↦ P(Y = y | X = x)]]. In particular, ω_{Y|X} = [x ↦ [y ↦ P(Y = y | X = x)]] is the unique minimizer of the expected generalization loss. Thus, ε(ω_{Y|X}) = ε(ω_Y) only if both functionals are identical, i.e., P(Y = y | X = x) = P(Y = y) for all x and y, which is one possible definition of X and Y being statistically independent.
For regression, the usual loss functions are unable to certify for independence anymore:
Proposition 3.14. The following convex loss functions (taken as single-element sets) are not faithful for univariate regression, i.e., for the label domain Y = R:
(i) the squared loss L(y, y*) = (y − y*)²
(ii) the absolute loss L(y, y*) = |y − y*|
(iii) the distance loss L(y, y*) = d(y, y*)², for any metric d : Y × Y → R
Proof. It suffices to construct, for each convex loss functional L, a counterexample where X, Y are statistically dependent, but no prediction functional predicting Y from X is L-better-than-uninformed.
In each case, one may construct two Y-valued random variables Y_1, Y_2 with distinct laws such that μ_L[Y_1] = μ_L[Y_2], for example an arbitrary non-constant Y_1 and Y_2 constant equal to μ_L[Y_1]. Further setting X = {−1, 1} and defining a non-constant X-valued random variable X, together with a Y-valued random variable Y such that Y_i = (Y | X = i) for i ∈ X, yields an example of a statistically dependent pair X, Y of random variables for which the constant prediction of μ_L[Y_1] = μ_L[Y_2] is not only the L-best uninformed prediction functional, but also the L-best prediction functional.
Using the equivalence of (i) and (iii) in Proposition 3.10 proves the claim.
The previously introduced quantile losses form a strictly faithful set of losses for univariate regression:
Theorem 2. The set of Q-losses is strictly faithful for (univariate) regression.
More precisely, the set L = {L_α : α ∈ [0, 1]}, where L_α(y, y*) = α · m(y*, y) + (1 − α) · m(y, y*), with m(x, z) = max(x − z, 0) (see Lemma 3.4 (iii)), endowed with the Lebesgue measure through the canonical identification of L_α with α ∈ [0, 1], is strictly (Lebesgue-measure-)faithful for Y = R.
Proof. For faithfulness, we show that impossibility of L_α-better-than-uninformed prediction of an R-valued random variable Y from an X-valued random variable X implies statistical independence of X and Y.
Thus, assume that there is no α ∈ [0, 1] and no prediction functional f such that f is L_α-better-than-uninformed. By the equivalence of (ii) and (iii) in Proposition 3.10 and negation/contraposition, ω_{Y|X}^{(L_α)} is uninformed, hence constant. By Lemma 3.4 (iii), μ_{L_α}[Y|X = x] is the α-quantile of the conditional random variable Y|X = x, which by the previous statement does not depend on x. Since α was arbitrary, none of the quantiles of Y|X = x depends on x, i.e., the cdfs and hence the laws of all conditionals Y|X = x agree, which implies (by one common definition/characterization of independence) that X and Y are statistically independent.
Strict faithfulness follows by following the above argument through after removing a positive-measure open set U ⊆ [0, 1] from the indices, i.e., removing L_α for α ∈ U from L. As U has positive measure, we may pick u ∈ U and X = {−1, 1} as well as conditional cdfs such that P(Y ≤ u | X = 1) ≠ P(Y ≤ u | X = −1) while P(Y ≤ x | X = 1) = P(Y ≤ x | X = −1) for all x ∉ U. By the above argument, predicting the α-quantile of Y is the L_α-best prediction functional from X for any α ∉ U, and it is furthermore an uninformed prediction strategy, thus L is not faithful after removing L_α, α ∈ U.
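To illustrate why quantile losses can detect dependence that a single mean-eliciting loss misses (a minimal sketch, not from the source; the heteroscedastic data, the 0.9-quantile, and the use of scikit-learn's quantile gradient boosting are illustrative assumptions), consider a target whose spread, but not mean, depends on the feature:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, size=4000)
y = (0.1 + x) * rng.normal(size=4000)    # E[Y | X = x] = 0 for every x

X_tr, X_te, y_tr, y_te = train_test_split(x.reshape(-1, 1), y, test_size=0.5, random_state=0)
alpha = 0.9

def pinball(y_true, y_pred, a=alpha):
    # quantile (pinball) loss with m(x, z) = max(x - z, 0)
    return np.mean(a * np.maximum(y_true - y_pred, 0) + (1 - a) * np.maximum(y_pred - y_true, 0))

# uninformed baseline: constant alpha-quantile of the training labels
baseline = pinball(y_te, np.full_like(y_te, np.quantile(y_tr, alpha)))

# informed predictor: quantile regression via gradient boosting
q = GradientBoostingRegressor(loss="quantile", alpha=alpha, random_state=0).fit(X_tr, y_tr)
informed = pinball(y_te, q.predict(X_te))

print(baseline, informed)    # the informed loss is typically clearly smaller

Here the conditional mean of Y is constant, so the squared loss cannot beat its uninformed baseline, while the 0.9-quantile predictor typically achieves a clearly lower pinball loss than the constant quantile of the training labels.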
In the light of Theorem 2, it may be interesting to ask for a theoretical characterization of a strictly
faithful set of losses for univariate regression (e.g., does it need to be infinite?), or what may be a set
of strictly faithful losses for multivariate regression.
A-priori, it is even unclear whether there is a set of (not necessarily strictly) faithful losses for general
prediction tasks, which the following result answers positively:
Theorem 3. Assume that Y may be identified (including the taking of expectations) with a sub-set
of Rn , for some n, e.g., in multivariate regression, or simultaneous prediction of multiple categorical
and continuous outputs. Then, the set of convex losses is faithful for Y.
Proof. Consider random variables X and Y, taking values in X and Y, which are statistically dependent. By definition of (in)dependence, this is equivalent to there existing X_0 ⊆ X, Y_0 ⊆ Y (measurable with positive probability measure) such that P(Y ∈ Y_0 | X ∈ X_0) ≠ P(Y ∈ Y_0). By taking (or not taking) the complement of X_0 within X, we may assume without loss of generality that P(Y ∈ Y_0 | X ∈ X_0) > P(Y ∈ Y_0).
Define g : Y → R; y ↦ ‖y‖² (where we have used the identification of Y with a subset of R^n). Define
L : Y × Y → R; (y, y*) ↦ g(y) · 1(y* ∈ Y_0) + g(y − α) · 1(y* ∉ Y_0),
where 1(y* ∈ Y_0) is the indicator function for y* ∈ Y_0, and α ∈ Y \ {0} is arbitrary. Define f : X → Y; x ↦ 0 if x ∈ X_0, otherwise x ↦ α. An elementary computation shows that f is L-better-than-uninformed, which proves that the set of convex losses is faithful for Y.
For the general case, it seems interesting to ask what would be a strictly faithful set of losses, how
such sets may be characterized, or whether they even exist (which seems neither obviously nor directly
implied by the existence of a faithful set of losses).
Due to the constructive nature of respective proofs (or, more precisely, the semi-constructivity of
the proof for multi-variate regression in Theorem 3), model comparison procedures suggested by
Theorems 1 and 2 on univariate classification and regression will be used in the testing procedures.
For convenience we briefly repeat the main results used in testing as a corollary:
Corollary 3.15. Consider two random variables X, Y, taking values in X and Y, where Y is a set of finitely supported pmfs (classification) or where Y ⊆ R^q (regression). The following are equivalent:
(i) X and Y are statistically dependent.
(ii) ε_L(ω_{Y|X}^{(L)}) < ε_L(ω_Y^{(L)}) for L the log-loss/Brier-loss (classification), resp. for some convex loss L (regression). I.e., the L-best predictor is L-better-than-uninformed, for some L.
(iii) There exists a prediction functional f : X → Y such that ε_L(f) < ε_L(ω_Y^{(L)}) for L the log-loss/Brier-loss (classification), resp. for some convex loss L (regression). I.e., there exists an L-better-than-uninformed prediction functional, for some L.
Since Corollary 3.15 (i) is the alternative hypothesis in an independence test, we may use a test for the equivalent hypothesis Corollary 3.15 (iii), i.e., comparing the performance of a learnt f with the best uninformed baseline, as an independence test. Any choice of f is sufficient for the methodology outlined in Section 5.1 below. Since the null-hypothesis is that X ⊥⊥ Y, choosing a bad f or failing to detect multivariate dependence (a specific case) only decreases the power of the test, while the type I error remains unaffected.
3.5. Conditional independence
The last section established a theoretical foundation for marginal independence testing; this foundation will now be expanded by adding a framework for testing conditional independence.
The statement one would like to make connects two random variables, X and Y, that are conditionally independent given a third variable, Z, taking values in Z, with the expected generalization loss from predicting Y from Z and from the set {X, Z}.
In slight extension of our setting, we consider prediction functionals in [X × Z → Y], i.e., we separate the features into two parts corresponding to X and the conditioning variable Z.
We generalize the main definitions from Section 3.3 to the conditional case:
Definition 3.16. A prediction functional f : X × Z → Y is called conditionally uninformed if it does
not depend on the first argument, i.e., if f (x, z) = f (y, z) for all x, y ∈ X and z ∈ Z. By notational
convention, a functional g : Z → Y is identified with the conditionally uninformed prediction functional
(x, z) 7→ g(z).
In the text, it will be implicitly understood that conditioning will happen on the second argument.
We define straightforward conditional generalizations of baselines and best predictors:
Definition 3.17. We define the following prediction functionals in [X × Z → Y]:
The best conditionally uninformed prediction ω_{Y|Z}^{(L)} : (x, z) ↦ μ_L([Y|Z = z]).
The best conditional prediction ω_{Y|X,Z}^{(L)} : (x, z) ↦ μ_L([Y|X = x, Z = z]).
It is briefly checked that ω_{Y|Z}^{(L)} and ω_{Y|X,Z}^{(L)} have the properties their names imply:
Lemma 3.18. Let L be a (strictly) convex loss. The following holds:
(i) ε_L(ω_{Y|Z}^{(L)}) = min{ε_L(f) : f is a conditionally uninformed prediction functional}
(ii) ε_L(ω_{Y|X,Z}^{(L)}) = min{ε_L(f) : f ∈ [X × Z → Y]}
Proof. (i) Conditioning on the event Z = z, Lemma 3.7 for the unconditional case implies the stated equality conditionally on that event. Since z was arbitrary, taking expectations over Z yields the unconditional statement.
(ii) This is directly implied by Proposition 3.5, by substituting the joint random variable (X, Z) for the X in Proposition 3.5.
With these definitions, the conditional variant of Theorem 3 can be stated:
Theorem 4. As in our setting, consider three random variables X, Y, Z, taking values in X, Y, Z, where Y is a set of finitely supported pmfs (classification) or where Y ⊆ R^q (regression). The following are equivalent:
(i) X and Y are statistically dependent conditional on Z.
(ii) ε_L(ω_{Y|X,Z}^{(L)}) < ε_L(ω_{Y|Z}^{(L)}) for L the log-loss/Brier-loss (classification), resp. for some convex loss L (regression).
(iii) There exists a prediction functional f : X × Z → Y such that ε_L(f) < ε_L(ω_{Y|Z}^{(L)}) for L the log-loss/Brier-loss (classification), resp. for some convex loss L (regression).
(iv) There exists a prediction functional f : X × Z → Y such that for all conditionally uninformed prediction functionals g : (X ×) Z → Y, one has ε_L(f) < ε_L(g) for L the log-loss/Brier-loss (classification), resp. for some convex loss L (regression).
The set of losses over which existence is required in (ii)-(iv) may be restricted to a set of losses which is faithful for the unconditional setting (such as the quantile losses for Y = R, as per Theorem 2), as in Section 3.4, without changing the fact that the stated equivalence is correct.
Proof. The proof is an analogue of that of Corollary 3.15. It may be obtained by following the whole proof through while in addition conditioning on Z = z, and noting that (i) holds if and only if the conditionals X|Z = z and Y|Z = z are statistically dependent for some z.
Our conditional independence test is based on statement (iv) in Theorem 4. Unlike in the unconditional case, there is no easy way to estimate ω_{Y|Z}^{(L)} directly. Thus, in the algorithm in Section 5.1, the statement will be replaced by a slightly weaker statement, where automatic model selection will determine an f as well as a g, without guarantees that f = ω_{Y|X,Z}^{(L)} or g = ω_{Y|Z}^{(L)} are optimal. This is in line with the usual approach to and caveats of supervised learning: if there were a universal way to estimate ω_{Y|Z}^{(L)}, that would amount to a universally perfect supervised learning strategy.
3.6. Testing prediction error against baseline
Theorems 3 and 4 in the previous sections establish a new basis for (marginal or conditional) independence testing. Namely, the theorems relate testing of (marginal or conditional) dependence between X and Y to testing predictability of Y from X. More precisely, they state the following: if there exists a significantly better-than-(conditionally-)uninformed prediction strategy f, i.e.,
ε_L(f) < ε_L(g),    (1)
where g is a suitable baseline (an approximation of ω_Y^{(L)} or ω_{Y|Z}^{(L)}), then we may conclude by Theorem 3 or 4 that X and Y are not (marginally or conditionally) independent.
Thus the problem of independence testing is reduced to the problem of testing whether there exists a prediction functional f which outperforms the baseline g as measured by some convex loss function L.
We make a few remarks about the strategy and logic of our proposed significance test.
• Neither the proposed functional f nor the baseline g are a-priori known in the usual practical setting, and neither is an L which may make a difference apparent. However, both f and g may be seen as approximations of a best prediction functional which is unknown, thus as instances of supervised learning. Hence, some choice has to be made, in absence of the "perfect learning strategy". A bad choice will only contribute to a loss of power, as long as type I error control is achieved.
• The baseline g approximates ω_Y^{(L)} in the marginal case. For frequently used L, the baseline g predicts constants which are the mean, the median, or other well-studied statistics of a sample from Y, hence g may have favourable asymptotic behaviour.
• Significance tests to compare prediction strategies are studied in [29], which proposes amongst others a paired sample test of prediction residuals.
The values ε_L(f), ε_L(g) may be estimated by their respective empirical estimates, via the usual estimator
ε̂_L(f) := (1/M) ∑_{i=1}^M L_i(f),   where L_i(f) = L(f(X_i^*), Y_i^*),
and similarly ε̂_L(g) for g. Since the test data (X_i^*, Y_i^*) are independent of each other and of f, g, by the central limit theorem one has
√M (ε̂_L(f) − ε_L(f)) →_d N(0, Var[L(f(X), Y)])   as M → ∞,
conditional on f and g being fixed. That is, the empirical mean of the loss residuals for prediction strategy f and loss L is asymptotically normal with mean ε_L(f) and variance Var[L(f(X), Y)]/M.
Instead of directly estimating ε_L(f) and ε_L(g) with confidence intervals and then comparing, one notes (as also in [29]) that the samples of loss residuals L_i(f) and L_i(g) are inherently paired, which eventually leads to a more powerful testing procedure.
We hence consider the difference in the i-th loss residual, R_i := L_i(g) − L_i(f). The R_i are still i.i.d. (conditional on f, g) and also have a normal asymptotic, empirically usually with smaller variance than the loss residuals of either of f, g. An effect size of the difference is obtained with normal confidence intervals, and one may conduct a one-sided paired two-sample test of the null
H_0 : ε_L(g) − ε_L(f) ≤ 0   against   H_A : ε_L(g) − ε_L(f) > 0.
Note that the test is one-sided since we test for f to outperform the baseline g.
To assess this null-hypothesis, two simple tests, one parametric and one non-parametric, are implemented.
• T-test for paired samples [29]: a parametric test assuming that the loss residual differences R_i are normally distributed with mean 0 and standard deviation σ under the null. Under the assumptions of the test and the null hypothesis, the normalized empirical mean t := √M · μ̂ / σ̂, where μ̂ and σ̂ are the sample mean and standard deviation of the R_i, respectively, follows a t-distribution with M − 1 degrees of freedom, where we have used the fact that under H_0 the boundary case E[R] = 0 holds.
• Wilcoxon signed-rank test: if the loss residual differences R_i are not normally distributed, an increase in power can be achieved by testing whether the signed ranks of the differences are symmetric around the median. Since the R_i in question are differences in loss residuals, there is no ex-ante reason to assume normality; in particular, for a "good" non-baseline method one may expect that most are positive, hence the normality assumption may be too strong. Hence the non-parametric Wilcoxon-type test should be a better general approach to compare loss residuals (see pages 38-52 of [11] for the Wilcoxon tests). Both tests are sketched in code below.
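A minimal sketch of this paired comparison (the simulated loss arrays are purely illustrative; in practice they are the per-sample test losses of the fitted model and the baseline), using the corresponding routines from scipy:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
L_f = rng.exponential(scale=0.9, size=200)   # per-sample test losses of the informed model f
L_g = rng.exponential(scale=1.0, size=200)   # per-sample test losses of the baseline g
R = L_g - L_f                                # paired loss residual differences

# one-sided tests of H0: E[R] <= 0 against HA: E[R] > 0
t_stat, p_t = stats.ttest_rel(L_g, L_f, alternative="greater")
w_stat, p_w = stats.wilcoxon(R, alternative="greater")
print(p_t, p_w)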
One subtlety to notice is that instead of testing for the alternative hypothesis "there exists f such that ε_L(f) < ε_L(ω_Y^{(L)})", which certifies for dependence, by the above strategy we are testing for the alternative "there exists f such that for all/the best g it holds that ε_L(f) < ε_L(g)", where g is a fixed estimate of ω_Y^{(L)}. However, g is dependent on the training data and itself uncertain. Thus, for a conservative estimate of the significance level, the uncertainty in g estimating ω_Y^{(L)} needs to be taken into account.
In the unconditional case, ω_Y^{(L)} will be a constant predictor of an elicited statistic of the training labels, such as the mean, median, or one minus the majority class frequency, with known central limit theorems that may be used to adapt the Student or Wilcoxon type tests (e.g., by explicitly increasing the variance in the t-test).
In either case, one may also bootstrap the distribution of g and make use of it as follows: since the training set is independent of the test set, the function g, being a random variable which is a function of the training data, is also independent of the test data. Thus, instead of using a fixed g in prediction, one can ensure a conservative test by using a bootstrap sample of pseudo-inputs for the predictions, i.e., a different sample of g per test data point that is predicted.
Since the exact form of the correction appears to be an unresolved sub-problem of predictive model validation and model selection in general, and since bootstrapping of g is computationally costly, we have implemented the (potentially liberal) "fixed g" variant in the package, which can be easily adapted to potential corrections based on explicit asymptotics.
We will present an algorithmic implementation of this conditional testing routine in Section 5.1, which is then applied to graphical model structure learning in Section 5.2.
4. Graphical Models
This section will briefly review graphical models and graphical model structure learning, which we
will later address using the conditional independence test outlined in Section 3.
4.1. Informal definition of graphical models
A probabilistic graphical model is a graph-based description of properties of a joint probability distribution (possibly but not necessarily a full specification thereof). It usually consists of a directed
or undirected graph, together with a fixed convention how to translate the graph structure into a
full probability distribution, or more generally, into conditional independence statements about a
distribution of not further specified type.
Graphs
A graph G is an ordered pair (V, E) where E ⊆ V × V. The elements of V are interpreted as vertices (nodes) of the graph, and the elements of E are interpreted as edges (links) of the graph (ordered pairs for directed graphs, unordered pairs for undirected graphs). G is usually visualized by drawing all vertices in V and an edge between each pair of vertices in E. In graphical models, the vertex set is identified with a collection of (possibly associated) random variables X = [X1, ..., Xn], and the edges encode some independence or conditional independence information about the components of X. Two popular choices are discussed in Section 4.2 below.
Probabilistic graphical models
Graphical model theory is usually concerned with two main tasks: structure learning (of the graph) and
inference (on a given graph). Inference is concerned with estimating parameters or statistics of the
parametric distribution assuming the independence structure prescribed by a fixed graph, from a finite
sample drawn from X. Structure learning is concerned with inferring the graph from such a sample,
thus inferring the encoded independence relations. For an extended introduction to graphical models,
the reader might refer to [26] or [2].
Graphical model structure learning is usually considered the more difficult of the two tasks, due to the
combinatorial explosion of possible graphs. Manual approaches involve an expert encoding domain
knowledge in presence or absence of certain edges; automated approaches usually conduct inference
based on parametric distributional assumptions combined with selection heuristics [26].
4.2. Types of graphical models
We review two of the most frequently used types of graphical models.
4.2.1. Bayesian Networks
A Bayesian network is a graphical model which states conditional independence assumptions without making parametric distributional assumptions. The conditional independence assumptions are
encoded in a directed acyclic graph over a set of random variables. Acyclicity of the graph implies
that each node Xi has a (potentially empty) set of descendants:
Definition 4.1. A node Xj is a descendant of Xi in the graph G if there is a directed path from Xi to Xj, where a path is any sequence of links that leads from Xi to Xj.
That is, if there is a path following the arrows in G, going from Xi to Xj. Since the graph is acyclic, no cycles exist, and, following the arrows in the graph, the same variable can never be reached twice. Examples of such networks are shown in Figure 1.
Figure 1: Example of expert-based graphical model structure learning (backtracking), adapted from [26]. (a) The quality of an academic reference letter is determined by the grade. (b) The student's intelligence and course difficulty determine the grade. (c) Knowing a student's GPA gives additional information about the state of intelligence.
Bayesian Network graphs arise in a natural way when considering the factorization properties of a probability distribution. Assume we are interested in a probability distribution P over the Difficulty of a course, a student's Intelligence, the Grade a student achieved in a course, and the quality of a reference Letter received from the tutor of the course, P(Difficulty, Intelligence, Grade, Letter). Further assume the following is true:
• The quality of the Letter is solely determined by the Grade a student received in the course. That is, Letter ⊥⊥ {Difficulty, Intelligence} | Grade.
• The Grade of a student depends on the Difficulty of the course and his Intelligence.
• Difficulty and Intelligence are not causally influenced by any other variable in the graph.
This gives a natural way to order the variables for factorization. Difficulty and Intelligence are so-called root nodes, hence their order is irrelevant; however, Grade depends on both of them and Letter depends on Grade, giving the ordering {Letter, Grade, Difficulty, Intelligence}, which will now be denoted as {L, G, D, I}. An ordering {X1, ..., Xn} implies that for all i, j ∈ {1, ..., n} with i < j, Xj cannot be a descendant of Xi. The distribution is then factorized, according to the ordering,
P(L, G, D, I) = P(L|G, D, I) P(G|D, I) P(D|I) P(I),
a standard way to factorize distributions. Working in the independence statements gives
P(L, G, D, I) = P(L|G) P(G|D, I) P(D) P(I),
since L ⊥⊥ {D, I} | G and D ⊥⊥ I. Returning to Figure 1 (b) shows that this factorized distribution exactly matches the arrows in the graph, which start at the parents (the conditioning variables) and lead to the children (the variable that the conditional distribution is over).
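As a small worked illustration (all probability tables below are made-up numbers, not from the source), the factorized joint over binary versions of the four variables can be computed directly from the conditional tables:

import itertools

P_D = {0: 0.6, 1: 0.4}                                        # Difficulty
P_I = {0: 0.7, 1: 0.3}                                        # Intelligence
P_G = {(d, i): {0: 0.8 - 0.5 * i + 0.1 * d,                   # Grade | Difficulty, Intelligence
                1: 0.2 + 0.5 * i - 0.1 * d}
       for d in (0, 1) for i in (0, 1)}
P_L = {g: {0: 0.8 - 0.6 * g, 1: 0.2 + 0.6 * g} for g in (0, 1)}   # Letter | Grade

joint = {}
for l, g, d, i in itertools.product((0, 1), repeat=4):
    joint[(l, g, d, i)] = P_L[g][l] * P_G[(d, i)][g] * P_D[d] * P_I[i]

print(abs(sum(joint.values()) - 1.0) < 1e-12)    # the factorized joint sums to one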
To understand the independence statements encoded in a Bayesian Network, one needs to first distinguish between active and passive trails [26, p. 71]. Let a trail be any path between Xi and Xj on G, and a v-structure a configuration Xi−1 → Xi ← Xi+1, where Xi is a common child of Xi−1 and Xi+1.
Figure 2: Markov Network over the nodes A, B, C, D, E, F, G.
Definition 4.2. A trail between Xi and Xj is active given a conditioning set Z if, for every v-structure Xi−1 → Xi ← Xi+1 along the trail, Xi or one of its descendants is in Z, and no other variables on the trail are in Z.
In Figure 1 (c) this means that the trail from Letter to GPA is active only if conditioned on either the empty set or Difficulty, and the trail between Intelligence and Difficulty is only active if Grade or Letter is in the conditioning set.
Definition 4.3. [2] If there is no active trail in the graph G between the nodes X and Y given Z, then X ⊥⊥ Y | Z in any distribution consistent with the graph G.
4.2.2. Markov Networks
A Markov Network, or Markov Random Field, is a graphical model which encodes conditional independence statements in an undirected graph over a set of random variables. While the Bayesian
Network is defined in terms of probability distributions, the Markov Network is usually specified in
terms of factor products which are unnormalized probability distributions. The scopes of these factors
determine the complexity of the system: if the scope of a factor covers all the variables in the graph,
the graph is unconstrained, whereas any constraint to smaller scopes decreases its complexity.
Figure 2 shows the Markov Network for the (normalized) factor product
P(a, b, c, d, e, f, g) = (1/Z) φ1(a, b, c) φ2(b, c, d) φ3(c, e) φ4(e, f) φ5(f, g) φ6(e, g),
with the normalizing constant (partition function) Z. Small letters denote realizations of the capital-lettered nodes. If P is a probability mass function,
Z = ∑_{a,b,c,d,e,f,g} φ1(a, b, c) φ2(b, c, d) φ3(c, e) φ4(e, f) φ5(f, g) φ6(e, g),
where each factor φ with scope S is a function φ : S → R. If P is a probability density function over continuous variables, Z would be attained by integrating over the support of the variables in the scopes of P.
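For discrete variables, Z is simply a finite sum over all joint states; a minimal brute-force sketch (binary variables and two arbitrary illustrative factors only, to keep the example short) looks as follows:

import itertools
import numpy as np

rng = np.random.default_rng(4)
phi1 = rng.random((2, 2, 2))   # illustrative factor over (a, b, c)
phi2 = rng.random((2, 2))      # illustrative factor over (c, e)

Z = 0.0
for a, b, c, e in itertools.product(range(2), repeat=4):
    Z += phi1[a, b, c] * phi2[c, e]
print(Z)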
Since the links are undirected, independence statements arising from Markov Networks are symmetric. A path between Xi and Xj, i ≠ j, is called active given Z if no node on the path is in Z. To determine whether Xi and Xj are conditionally independent given Z in all distributions consistent with G, one again considers all paths going from Xi to Xj. It holds that Xi ⊥⊥ Xj | Z if there is no path between Xi and Xj that is active given Z.
So, to attain a Markov Network that is consistent with a probability distribution, one has to consider the pairwise conditional independence properties of the variables that the nodes in the graph represent. In general, if in a Markov Network there is no link between variables Xi and Xj, then Xi ⊥⊥ Xj | X \ {Xi, Xj} in any distribution P over X consistent with the graph. These are called the pairwise Markov independencies of the graph [15, ch. 17.2].
The independencies that can be encoded using Bayesian and Markov Networks differ. Bayesian Networks are natural for expressing (hypothesized) causal relationships and distributions where an ordering can be attained and which thus have convenient factorization properties, while Markov Networks are natural for expressing a set of pairwise Markov independencies. Additionally, Bayesian Networks can be transformed into an undirected graph by a (potentially forgetful) process called "moralization" [26, p. 135].
Figure 3: Two DAGs (X → Y and X ← Y) equivalent with respect to independence statements.
4.3. Graphical model structure learning
There are two dominant approaches to structure learning, independence testing based and score-based methods; however, they both suffer to a varying extent from the same underlying problem: combinatorial explosion. Given a probability distribution P (with associated graph G) over a multivariate random variable X = [X1, ..., Xn], for undirected networks, between any pair of variables Xi and Xj there can be a link or no link in G. Since the links are undirected, there are thus n(n−1)/2 potential edges in the graph, and an exponential number of possible graphs. If one wants to test if Xi and Xj are independent given Z, Z ⊂ X, one needs to test if any path between the two is active, leading to a potentially very large number of tests. A graph's complexity can be decreased by upper bounding the number of edges for each node; however, it was shown that, for directed graphs where each node can have d parents, for d > 1, the problem of finding an optimal graph is NP-hard [26, p. 811]. Generally, the space of graphs is reduced by making structural (what type of relationships can be captured in the graph) and parametric (constraints on the distribution) assumptions. An additional problem is posed when there exists more than one optimum of the structure learning method, as a result of the fact that different graphs can be equivalent (and thus not identifiable) in the respective search space. An example of this are the two graphs shown in Figure 3, which arises from the fact that if {X, Y} ∼ P, P can be decomposed either into P(X|Y)P(Y) or P(Y|X)P(X), which are equivalent. There are different ways to approach these problems, some of which are outlined below.
4.3.1. Score-based methods
Score-based methods attempt to find a (usually local) optimum of a score function in the space of
possible graphs. An example is the Kullback-Leibler divergence scoring function for directed graphical models, which can be used to find the optimal Chow-Liu tree graph [2, p. 219]. While this
is a convex algorithm, finding the global optimum, it does so under the heavy constraint that each
node only has one parent at maximum (resulting in a tree-structured graph). For undirected graphs
over discrete variables, gradient descent can be used directly on the log likelihood to find the local
optimum [2, p. 221], however each gradient evaluation requires calculation of the partition function
Z (by summing over all discrete states), which makes this algorithm very expensive. Many of the
algorithms for score-based methods are designed for discrete variables, and not applicable when the
variables are continuous. Performance is usually evaluated by calculating the likelihood score on a
hold-out sample. One other area not mentioned above is Bayesian model averaging (e.g. [14]), which
seeks to improve the above methods by averaging over different model structures. It can be viewed
as an ensembling method for the score-based methods mentioned above. State-of-the-art score based
methods are oftentimes very expensive or make very strong assumptions on the underlying distribution
P , such as tree-structure of the graph [26, ch. 18].
4.3.2. Independence testing based methods
Unlike the score-based models, independence testing based models conduct independence tests between
variables or sets of variables locally and then aggregate the information to produce the graphical model.
These sets of independence tests are then used to infer the structure of the graph in a straightforward
manner, based on the properties of the respective type of graphical model. An example of this is the
PC-Algorithm [2, e.g. p. 214] which is used in attempts at causal discovery. Another algorithm for
recovering the undirected skeleton is given by Algorithm 3.3 of [26].
There are probably two main reasons that this latter independence testing based method is not used
for graphical modelling:
(a) It relies on a conditional independence test where the conditioning is on multiple variables, i.e., all except two. As outlined in Section 2, this is a largely unsolved problem except when there is at most one conditioning variable, i.e., if there are three or fewer variables in total.
(b) It is hard to regulate the bias-variance trade-off, unlike for score-based methods where this may
be achieved through the use of constraints such as an upper bound on the number of parents.
However, these two issues may be addressed through our novel predictive conditional independence
testing framework:
(a) The predictive conditional independence test described in the subsequent Section 5.1, based on
the theoretical insights in Section 3, allows for efficient conditional independence testing with
variables of any dimension, and thus provides a predictive approach to learning graphical models.
(b) The link to the supervised learning workflow allows for direct adoption of well-known strategies for trading off bias and variance and for error estimation from supervised learning, such as regularization and cross-validated tuning, to the graphical modelling workflow.
In addition, our suggested algorithms have further desirable features:
• The intimate link to supervised learning also allows for a controlled trade-off between power of
the algorithm and time complexity. This trade-off is readily available when using the conditional
independence testing routine described in Section 5.1, since the user can choose estimation
methods of varying power/complexity in order to influence the total run time needed by the
algorithm.
• Section 5.1.1 will also introduce a false-discovery-rate control routine to provide a tool that lets a
user control the proportion of false-positives (erroneously found edges in the estimated graphical
model structure), regardless of the size of the graph.
Note on causal inference
Graphical models, and specifically Bayesian Networks, are a natural way to express hypothesized
causality, since the arrows seem to express causation. However, when actually learning graphical
models from data, causality may only be inferred by making strong (and often incorrect) assumptions
on the underlying distribution, or by collecting data in a manner that allows for causal interpretation,
namely in proper experimental set-ups (e.g., following Pearl’s do-calculus, or standard experimental
study design). As generally for graphical model structure learning, all algorithms outlined in this paper
are not to be interpreted as expressing causality, but merely as producing a collection of statements
about association which certify for causality only in combination with the right experimental set-up.
5. Predictive inference algorithms
This section will first introduce the proposed predictive conditional independence testing routine
(PCIT), which is based on the methodology outlined in Section 3, and important subroutines related
to performance and error control of the test. After, an algorithm that leverages the independence test into a graphical model structure learning routine addressing the issues outlined in Section 4.3.2 is presented.
5.1. Predictive conditional independence testing
Algorithm 1 implements the results from Section 3 to test if a set of variables Y is independent of
another set of variables X, given a conditioning set Z (optional, if not provided the algorithm tests
for marginal independence). It will later be used as a subroutine in the graphical model structure
learning, but can also be used in its own right for tasks such as determining if a subset X would add
additional information when trying to predict Y from Z.
Algorithm 1 Predictive conditional independence test (PCIT)
1: Split data into training and test set
2: for all variables y ∈ Y do
3:    on training data:
4:       find optimal functional f for predicting y from Z
5:       find optimal functional g for predicting y from {X, Z}
6:    on test data:
7:       calculate and store p-value for test that generalization loss of g is lower than that of f
8: end for
9: if symmetric test needed then
10:    exchange X and Y, repeat above process
11: end if
12: p_values_adjusted ← Apply FDR control to array of all calculated p-values
13: return p_values_adjusted
When the test is used as a marginal independence test, the optimal prediction functional for f is the functional elicited by the loss function, that is, the mean of y for continuous outputs and the class probabilities of y for the discrete case (Appendix ??). The link to supervised learning further allows/forces the user to distinguish between two cases. Independence statements are symmetric: if X is independent of Y, then Y is independent of X, in both the marginal and conditional setting. The same cannot be said of supervised learning, where adding X to predicting Y from Z might result in a significant improvement, but adding Y does not significantly improve the prediction of X given Z (as can be seen in the asymmetry of OLS). So if a user is interested in a one-sided statement, the algorithm can be run for a single direction, for example to evaluate if a new set of variables improves a prediction method and is thus worth collecting for the whole population. If one is interested in making a general statement about independence, the test is "symmetrized" by exchanging X and Y, thus testing in both directions, and FDR control is applied to the union of the p-values.
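The following is a minimal sketch of this routine for a single continuous target (it is not the pcit package API; the random-forest learners, the squared loss, and the one-directional Wilcoxon comparison are illustrative assumptions):

import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def pcit_single(y, X, Z, random_state=0):
    """p-value for the 'prediction-null' that adding X does not improve predicting y from Z."""
    ZX = np.hstack([Z, X])
    Z_tr, Z_te, ZX_tr, ZX_te, y_tr, y_te = train_test_split(
        Z, ZX, y, test_size=0.5, random_state=random_state)

    f = RandomForestRegressor(random_state=random_state).fit(Z_tr, y_tr)   # baseline: y from Z
    g = RandomForestRegressor(random_state=random_state).fit(ZX_tr, y_tr)  # informed: y from {X, Z}

    L_f = (f.predict(Z_te) - y_te) ** 2    # squared-loss residuals of the baseline
    L_g = (g.predict(ZX_te) - y_te) ** 2   # squared-loss residuals of the informed model
    # one-sided Wilcoxon test: is the generalization loss of g lower than that of f?
    return stats.wilcoxon(L_f - L_g, alternative="greater").pvalue

Symmetrising the test then amounts to calling the same routine with the roles of X and Y exchanged and applying FDR control to all resulting p-values.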
It is important to distinguish between the two types of tests and null-hypotheses in this test. On the one hand (in the symmetric case), for each variable in X and Y it will be assessed whether adding X to the prediction of y ∈ Y from Z results in an improvement (and vice versa). The null-hypothesis of this "prediction-null" is that no improvement is achieved. After all these p-values are collected, a multiple testing adjustment is applied (Section 5.1.1), after which the original null-hypothesis, that X and Y are conditionally independent, is assessed. We reject this "independence-null" if any one of the "prediction-nulls" can be rejected after adjustment. The p-value of the "independence-null" hence reduces to the lowest p-value among all the "prediction-nulls". As such, the null of the independence test is that all the "prediction-nulls" are true. False discovery rate control, the chosen multiple testing adjustment, is appropriate since it controls the family-wise error rate (FWER), the probability of making at least one type I error, if all the null-hypotheses are true [4].

Algorithm 2 The Benjamini-Hochberg-Yekutieli procedure
Require: p-values {p_j}_{j=1}^m, where p_j is the p-value for observing X_j under the null H_0^j
1: Sort the p-values in ascending order, p_(1) ≤ ... ≤ p_(m)
2: Let q = α / (∑_{i=1}^m 1/i) for the chosen confidence level α
3: Find k = max{i : p_(i) ≤ (i/m) · q}
4: Reject H_0^j for j ∈ {1, ..., k}
5.1.1. False-discovery rate control
To account for the multiple testing problem in Algorithm 1, the Benjamini-Hochberg-Yekutieli procedure for false-discovery rate (FDR) control is implemented [5]. In their paper, the authors state that traditional multiple testing adjustments, such as the Bonferroni method, focus on preserving the FWER; that is, they aim to keep the probability of making any one type I error at the chosen confidence level. As a result, such tests are usually very conservative, since in many multiple-testing scenarios the p-values are not independently distributed (under the null), and thus the power of these tests can be significantly reduced.
In their 2001 paper, they propose to control the false discovery rate instead, "[..] the expected proportion of erroneous rejections among all rejections", as an alternative to the FWER. The FDR allows for more errors (in absolute terms) when many null-hypotheses are rejected, and fewer errors when few null-hypotheses are rejected.
Algorithm 2 shows the procedure outlined in [5]. They showed that this procedure always controls
the FDR at a level that is proportional to the fraction of true hypotheses. As hinted at before, while
this algorithm controls the false-discovery rate, in the special case where all null-hypotheses in the
multiple testing task are assumed to be true, it controls the FWER, which then coincides with the
FDR. This is especially useful, since both scenarios occur in the graphical model structure learning
routine described in Section 5.2.
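A minimal sketch of Algorithm 2 (the input p-values are hypothetical; an equivalent routine is available in statsmodels as multipletests with method="fdr_by"):

import numpy as np

def by_reject(pvals, alpha=0.05):
    """Boolean mask of rejected hypotheses under Benjamini-Hochberg-Yekutieli FDR control."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                            # ascending p-values
    q = alpha / np.sum(1.0 / np.arange(1, m + 1))    # BY correction of the level
    thresholds = np.arange(1, m + 1) / m * q         # (i / m) * q
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])             # largest i with p_(i) <= (i/m) q
        reject[order[:k + 1]] = True                 # reject the k smallest p-values
    return reject

print(by_reject([0.001, 0.01, 0.04, 0.2, 0.5]))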
For the choice of an optimal false-discovery rate for an investigation, even more so than in the classical choice of an appropriate type I error level, there is no simple answer for which rate might serve as a good default, and it is highly application dependent. If the goal of the procedure is to gain some insight into the data (without dire consequences of a false discovery), a user might choose an FDR as high as 20%, meaning that, in the case of graphical model structure learning, one in five of the discovered links is wrong on average, which might still be justifiable when trying to gain insight into clustering properties of the data. This paper will still set the default rate to 5%, but deviate willingly from this standard whenever deemed appropriate, as should any user of the test.
5.1.2. Improving the prediction functionals
In practice, when assessing the individual “prediction-nulls” in Algorithm 1, the power of the test
(when holding the loss function constant) depends on the capability of the prediction method to find a suitable functional g that outperforms the baseline f. This means that a general implementation of
the independence test needs to include a routine to automatically determine good prediction functionals for g and f . The implementations in the pcit package presented in Section 6 support two methods
for the automatic selection of an optimal prediction functional. Both methods ensemble over a set
of estimators, which are shown in Table 1. The prediction functionals refer to the estimator names in sklearn1. Some are constrained to specific cases (e.g. BernoulliNB, which only applies when the classification problem is binary).

            Regression                    Classification
Stage 1     ElasticNetCV                  BernoulliNB
            GradientBoostingRegressor     MultinomialNB
            RandomForestRegressor         GaussianNB
            SVR                           SGDClassifier
                                          RandomForestClassifier
                                          SVC
Stage 2     LinearRegression              LogisticRegression

Table 1: Prediction functionals used for Stacking/Multiplexing
Stacking Stacking refers to a two-stage model where, in the first stage, a set of prediction functions is fit on a training set. In the second stage, another prediction function is fit on the outputs of the
first stage. If the prediction function in the second stage is a linear regression, this can be viewed as a
simple weighted average of the outputs in the first stage. In theory, stacking allows the user to fit one
single stacking predictor instead of having to compare many potential prediction functions based on
the model diagnostics, as in the second stage, better methods get more weight (in the expectation).
As an additional effect, improvement in the prediction accuracy through ensembling of predictors can
take place (see e.g. Section 19.5 of [1]). The used stacking regressor and classifier can be found in the
Python package Mlxtend [32].
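As an illustration, the two-stage idea can be written down in a few lines of scikit-learn code. The sketch below is a simplified stand-in for the Mlxtend stacking estimators actually used by the package; for brevity the second stage is fit on in-sample first-stage predictions, whereas out-of-fold predictions would normally be preferable.

import numpy as np
from sklearn.linear_model import ElasticNetCV, LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

class SimpleStackingRegressor:
    # Sketch of a two-stage stacking regressor (illustration only)
    def __init__(self, stage1=None, stage2=None):
        self.stage1 = stage1 or [ElasticNetCV(), RandomForestRegressor(),
                                 GradientBoostingRegressor()]
        self.stage2 = stage2 or LinearRegression()

    def fit(self, X, y):
        # Stage 1: fit every base regressor on the training data
        for est in self.stage1:
            est.fit(X, y)
        # Stage 2: linear regression on the stage-1 outputs, i.e. a learned
        # weighted average of the base predictions
        meta = np.column_stack([est.predict(X) for est in self.stage1])
        self.stage2.fit(meta, y)
        return self

    def predict(self, X):
        meta = np.column_stack([est.predict(X) for est in self.stage1])
        return self.stage2.predict(meta)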
Multiplexing When multiplexing over estimators, the training set is first split into a training and a validation set, a common procedure for finding optimal hyperparameters. Afterwards, the predictors are fit individually on the training set, and each predictor's expected generalization loss is estimated on the validation set. One then chooses the predictor with the lowest estimate of the empirical generalization loss, refits it using the whole training data (including the former validation set), and uses the fitted estimator for prediction.
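A minimal sketch of multiplexing for the regression case, using the squared loss; the candidate list, split ratio and function name are illustrative rather than the package defaults.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import ElasticNetCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

def multiplex(X, y, candidates=None, test_size=0.25, random_state=0):
    # Pick the candidate with the lowest validation loss, then refit it on all data
    candidates = candidates or [ElasticNetCV(), RandomForestRegressor(), SVR()]
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=test_size,
                                                random_state=random_state)
    losses = []
    for est in candidates:
        est.fit(X_tr, y_tr)
        losses.append(mean_squared_error(y_val, est.predict(X_val)))
    best = candidates[int(np.argmin(losses))]
    best.fit(X, y)   # refit the winner on the full data, including the former validation set
    return best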
5.1.3. Supervised learning for independence testing
Algorithm 1 shows how to leverage supervised learning methodology into a conditional independence
test. This has a major advantage over the methods outlined in Section 2, as the supervised prediction
workflow is of utmost interest to many areas of science and business, and, as a result, a lot of resources
are going into the development and improvement of existing methods. By making a link between predictive modelling and independence testing, the power of independence testing will grow with the continuously increasing power of predictive modelling algorithms.
5.2. Predictive structure learning of undirected graphical models
This section outlines a routine to learn the edges of an undirected graph (the skeleton) for a data set by conducting a range of conditional independence tests with the null hypothesis of conditional independence. Section 4.2 outlines the conditional independence statements of a Markov network. In such an undirected graph, if variables Xi and Xj have no direct edge between them, they are conditionally independent given all other variables in the network.
1 Details can be found at http://scikit-learn.org/stable/modules/classes.html
Algorithm 3 Undirected graph structure estimation
1: for any combination Xi, Xj s.t. i ≠ j do
2:     X− ← X \ {Xi, Xj}
3:     p_val[i,j] ← p-value for test Xi ⊥⊥ Xj | X−
4: end for
5: p_val_adj ← Apply FDR control on p_val matrix
6: return p_val_adj
Algorithm 3 describes the skeleton learning algorithm for an input set X = [X1 , ..., Xn ], by considering
all possible pairs of variables in the data set, and testing if they are conditionally independent given
all other variables. The output p_val_adj is a symmetric matrix whose entry (i, j) is the p-value for the hypothesis that, in the underlying distribution, Xi and Xj are independent given all other variables in X, and hence that in the graph G describing it there is no link between the vertices for Xi and Xj. Ultimately, links should be drawn where the adjusted p-values are below a predefined threshold.
There are O(n2 ) function evaluations in the for-loop, where n is the number of variables. Section 7
will provide experimental performance statistics for the algorithm and showcase applications on real
data sets.
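A compact sketch of Algorithm 3 in Python is given below; cond_indep_pvalue stands for any conditional independence test returning a p-value (this role is played by the PCIT in the package) and fdr_adjust for an FDR correction as in Section 5.1.1. Both names are placeholders rather than the package's exact API.

import numpy as np

def learn_skeleton(X, cond_indep_pvalue, fdr_adjust, rate=0.05):
    # Estimate the undirected skeleton over the columns of X
    n_vars = X.shape[1]
    p_vals = np.ones((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            rest = [k for k in range(n_vars) if k not in (i, j)]
            # test X_i independent of X_j given all remaining variables
            p = cond_indep_pvalue(X[:, [i]], X[:, [j]], X[:, rest])
            p_vals[i, j] = p_vals[j, i] = p
    # correct the upper-triangular p-values for multiple testing
    iu = np.triu_indices(n_vars, k=1)
    adjusted = fdr_adjust(p_vals[iu])
    adj = np.zeros((n_vars, n_vars), dtype=bool)
    adj[iu] = adjusted < rate          # draw an edge where the null is rejected
    return adj | adj.T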
6. pcit package
6.1. Overview
The Python² package implementing the findings and algorithms of this paper can be found at https://github.com/SamBurkart/pcit and is distributed under the name pcit in the Python Package Index.
This section will first provide an overview of the structure and important functions of the package. As
this implementation can be thought of as a wrapper for scikit-learn (sklearn) estimators, this section
will then describe the sklearn interface and how the package interacts with the sklearn estimators.
Lastly, simple application-examples are given.
6.1.1. Use cases
The package has two main purposes: independence testing and structure learning. While univariate unconditional independence testing is possible, it is not expected to outperform current methodology on such simple tasks, but rather on the difficult ones, such as:
• Multivariate independence tests, for example in a multivariate test for hedging purposes, to determine which financial products are independent from the ones already in the portfolio.
• Conditional independence tests, for example assessing whether a data set adds information to existing data in a prediction task.
• Graphical model structure learning, for example finding clusters in the data as part of an exploratory data analysis, or thoroughly investigating associations in the data (both in Section 7.2.2).
For these tasks, the PCIT serves as a readily available tool that works without the need for manual
choices or hyperparameter tuning, and scales well in the dimensionality of the data.
6.1.2. Dependencies
The package has the following dependencies:
• Scipy [25], for the calculation of p-values
• Sklearn [31], for its estimators (predictors)
• Mlxtend [32], for the implementation of stacking
6.2. API description
The package introduces three main routines: one for automated prediction (MetaEstimator), one for conditional independence testing (PCIT), and one for undirected graph structure learning (find_neighbours). The following gives an overview; the function signatures can be found in Section 6.3.
MetaEstimator
The MetaEstimator class provides a user with a type-independent predictor that automates model selection for given training data by automatically determining appropriate loss functions and prediction functionals. It is initialized with sensible defaults, which should generally lead to good results, but these can be changed for specific tasks (such as the use of more complex base estimators for more powerful, but also more computationally demanding, predictions). The ensembling methods used for the estimator are described in Section 5.1.2. For regression, the squared loss is used for training and for the calculation of the residuals; for classification, the logistic loss serves as the loss function.
2 https://www.python.org/
PCIT
PCIT implements the conditional independence test in Algorithm 1 to test if two samples stem from
conditionally (or marginally) independent random variables. The MetaEstimator class is used as a
prediction functional, and hence the user can trade off between computational complexity and power
by adjusting the chosen MetaEstimator. That is, if speed is important, the used MetaEstimator should
be a combination of base estimators and ensembling method that is quick in execution, whereas if
computational resources are vast and a more powerful test is needed, the used base-estimators should
be highly tuned.
find_neighbours
find_neighbours implements the structure learning routine of Section 5.2 (Algorithm 3) to learn the undirected skeleton for an input data set X, using the PCIT.
6.3. Function signatures
PCIT
Type: function

Inputs:
  Name        Description (type)                                      Default
  x           Input data set 1 ([n x p] numpy array)
  y           Input data set 2 ([n x q] numpy array)
  z           Conditioning set ([n x r] numpy array)                  None (empty set)
  estimator   Estimator object to use for test (MetaEstimator)        MetaEstimator()
  parametric  Parametric or nonparametric test (bool), Section 3.6    False
  confidence  Confidence level for test (float [0,1])                 0.05
  symmetric   Conducts symmetric test (bool), Section 5.1             True

Outputs:
  p_values_adj     p-values for ”prediction nulls” of each y ∈ Y (list)
  independent      tuple; first value shows if ”independence-null” is rejected (bool),
                   second value is the p-value of the ”independence-null” (float [0,1])
  loss_statistics  RMSE difference for baseline f and alternative g, loss residuals
                   with standard deviation, for each y ∈ Y; only applicable if Y continuous
Note: The variance of the difference is estimated by assuming 0 covariance between the residuals
of baseline f and alternative g, which generally leads to more conservative confidence intervals for
error residuals (due to the irreducible error, prediction residuals for different methods are generally
positively correlated).
MetaEstimator
Type: class

Methods

init
Inputs:
  Name         Description (type)                                        Default
  method       Ensembling method ('stacking', 'multiplexing' or None)    'stacking'
  estimators   Estimators to ensemble over (2-tuple of lists of sklearn  None (default estimators,
               estimators: [regression estim], [classification estim])   Section 5.1.2)
  method_type  Task definition ('regr', 'classif', None)                 None (auto selection)
  cutoff       Cutoff for automatic selection of method_type (integer)   10

get_estim
Inputs:
  y            Dependent variable ([n x 1] numpy array)
Outputs:
  estimators   Appropriate set of estimators (list)

fit
Inputs:
  x            Independent variables ([n x p] numpy array)
  y            Dependent variable ([n x 1] numpy array)
Outputs:
  fitted       Fitted estimator (MetaEstimator)

fit_baseline
Inputs:
  x            Independent variables ([n x p] numpy array)
  y            Dependent variable ([n x 1] numpy array)
Outputs:
  fitted       Fitted uninformed baseline estimator (MetaEstimator)

predict
Requires: MetaEstimator has been fitted
Inputs:
  x            Test set independent variables ([n x p] numpy array)
Outputs:
  predictions  Predictions for test set ([n x 1] numpy array)

get_resid
Inputs:
  x_train      Training set independent variables ([n x p] numpy array)
  y_train      Training set dependent variables ([n x 1] numpy array)
  x_test       Test set independent variables ([n x p] numpy array)
  y_test       Test set dependent variables ([n x 1] numpy array)
  baseline     Should baseline be fitted (boolean)                       False
Outputs:
  resid        Residuals for prediction strategy ([n x 1] numpy array)
find_neighbours
Type: function

Inputs:
  Name        Description (type)                                      Default
  X           Input data set ([n x p] numpy array)
  estimator   Estimator object to use for test (MetaEstimator)        MetaEstimator()
  confidence  False-discovery rate (float [0,1])                      0.05

Outputs:
  skeleton      Matrix of p-values for each independence test ([p x p] numpy array)
  skeleton_adj  Learnt graph after applying FDR control ([p x p] numpy array)
6.4. API design
The API is designed as a wrapper for Scikit-learn (sklearn), a package in the Python programming
language, that aims to provide a user with a consistent, easy-to-use set of tools to analyze data [31].
It is one of the most-used tools in today’s supervised learning community, which is why it is chosen as
the supervised prediction workflow to build on for the predictive conditional independence test. This
section will outline the advantages of basing the test on the sklearn package.
6.4.1. Sklearn interface
Sklearn is built around estimator objects, which implement a consistent set of methods. Estimators
provide a fit and, if applicable, a predict method, in order to be able to fit the estimator to training
data and then predict on a test set. Additionally, sklearn provides standardized approaches to model
selection and hyperparameter-tuning as well as data transformation and ensembling methods (see
[8] for a more thorough discussion). The methods can easily be combined to create more powerful
estimators. Defaults are chosen sensibly so that in most cases, an initial fit of a method to data
requires little manual parameter specification from the user's side. While the highly automated and simplified approach of sklearn lowers the barrier to entry when aiming to generate knowledge from data,
it also comes with a downside. For most statistical applications that exceed fitting and predicting
from a data set, such as inference on the parameters and hypotheses about prediction accuracies,
the relevant subroutines are missing from the API. Relevant statistics can however be attained by
interfacing it with other packages (such as SciPy).
6.4.2. Wrapper for Sklearn estimators
As we saw in section 6.2, the conditional independence test described in Algorithm 1 uses the newly
defined MetaEstimator class to automate determining the optimal prediction functional for a given
task. It does so by ensembling over a set of base estimators from sklearn. These are either chosen
to be the sensible defaults described in Table 1, or can be passed by the user as a tuple of lists
of sklearn base estimators. This is required since regression and classification tasks rely on vastly
different prediction functionals, and thus need to be specified separately. As a general rule, the passed
regressors need to have a fit and a predict method, whereas the classifiers need to have a fit and
a predict proba method. Requirements might be more stringent for certain types of data or certain
estimators, however specifying an unsuitable estimator class will result in an error message as specified
by the respective class, allowing the user to either remove the unsuitable class or proceed with the
sensible defaults. As mentioned before, this gives a user a flexible tool to trade off between power
and computational complexity. If in need of a fast method, one can use an algorithm that runs in
linear time, such as stochastic gradient descent linear regression, whereas if a test with high power is
needed, one can pass hyper-tuned estimators to the function, which take longer to run but generalize better for prediction on unseen data.
6.5. Examples
This section will provide some simple examples of how the code is used. For the following it is assumed
that data sets X, Y and Z, all of the size [number of samples × number of dimensions], are loaded as
numpy arrays, and have matching numbers of samples (sample indices in X, Y and Z correspond to
the same sample). After installing the pcit package, import the relevant objects:
from pcit import MetaEstimator, StructureEstimation, IndependenceTest

Testing if X ⊥⊥ Y | Z, using the default values:

IndependenceTest.PCIT(X, Y, z=Z)

Testing if X ⊥⊥ Y | Z, with a custom MetaEstimator, multiplexing over a manually chosen set of estimators:

from sklearn.linear_model import RidgeCV, LassoCV, SGDClassifier, LogisticRegression

regressors = [RidgeCV(), LassoCV()]
classifiers = [SGDClassifier(), LogisticRegression()]

custom_estim = MetaEstimator.MetaEstimator(method='multiplexing',
                                           estimators=(regressors, classifiers))

IndependenceTest.PCIT(X, Y, z=Z, estimator=custom_estim)

Learning the undirected skeleton of X:

StructureEstimation.find_neighbours(X)
Concrete outputs will be shown in section 7 and can be found on GitHub.
7. Experiments
This section will first evaluate the performance of the proposed algorithms, and then provide some
examples of applications on real world data sets. All performance tests can be found on Github3 .
7.1. Performance tests
This section will report on performance tests for the algorithms derived in Section 5. First the power of the predictive conditional independence routine is benchmarked against current state-of-the-art methodology, then various tests on the undirected graph structure learning algorithm are conducted.
7.1.1. Performance of conditional independence test
In this section the conditional independence routine will be benchmarked against previous research, namely the kernel-based approach for conditional independence testing, which is shown in [37] to be more powerful than other conditional independence testing algorithms (on multivariate Gaussian data). The code used for the kernel test is taken from GitHub⁴. To conduct a test using
data that is drawn from a distribution that more closely resembles real world data, as opposed to the
synthetic (Gaussian) data commonly used for performance tests, the UCI Wine Repository data set
[27] is used as follows:
• The columns ’Alcohol’, ’Malic Acid’ and ’Magnesium’ are randomly permuted (to make them
independent) and will serve as X, Y and noise arrays respectively
• Vector Z is created by individually sampling vectors X′, Y′ and noise′ of size n with replacement from X, Y and the noise vector, and then calculating

  Zi = log(X′i) × exp(Y′i) + u · √(noise′i),   i ∈ {1, ..., n},

  where u is the sign, uniformly drawn from {−1, 1}
This results in a scenario where X ⊥⊥ Y, but X ̸⊥⊥ Y | Z, and the signal to noise ratio is high for small sample sizes. The test will be conducted by increasing the sample size from 100 to 5000, and
calculating the run times and power for both approaches. For each sample size, the PCIT is run 500
times, and the KCIT is run 200 times, since the KCIT is more computationally demanding than the
PCIT. The only exception is n = 5000 for the KCIT, which is run 25 times, since the time complexity
would be too high to draw more samples. Both methods are run for their default values, without
additional manual tuning, and at a confidence level of 5%. Each time, {X′, Y′, Z} is sampled, and
then the conditional independence tests are applied and the run time is recorded. If they reject
independence at a 5% level, the round counts as a success, 1, otherwise 0. The power and run times
are then calculated by averaging over the results, and standard errors for the power are attained by realizing that the standard error of the power for a number of reruns B is the standard error of X/B, where X ∼ Bin(B, θ) and θ is the observed power (the sample mean).
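The benchmarking loop and the standard errors can be sketched as follows; pcit_pvalue is a placeholder for the test under evaluation, and the form of the Z construction (in particular where the square root is applied) follows the description above, which is an assumption about the original setup.

import numpy as np

def estimate_power(pcit_pvalue, X, Y, noise, n=500, B=500, alpha=0.05, seed=0):
    # Estimate the power of a conditional independence test by resampling
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(B):
        Xs = X[rng.integers(0, len(X), size=n)]         # sample with replacement
        Ys = Y[rng.integers(0, len(Y), size=n)]
        eps = noise[rng.integers(0, len(noise), size=n)]
        u = rng.choice([-1.0, 1.0], size=n)             # random sign
        Z = np.log(Xs) * np.exp(Ys) + u * np.sqrt(eps)  # assumed construction of Z
        rejections += pcit_pvalue(Xs, Ys, Z) < alpha    # success = correct rejection
    power = rejections / B
    se = np.sqrt(power * (1 - power) / B)               # SE of a Binomial(B, theta) proportion
    return power, se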
The results are shown in Table 2. The power at the higher end of the sample sizes seems to be similar (it is important to note that the power of 1 for the KCIT for n = 5000 was achieved on 25 resamples only), whereas in the range of 500 to 1000 samples, the proposed predictive conditional independence test shows a significantly higher power. For small n, the KCIT seems to fare significantly better, however both approaches have a very low power, and the PCIT in particular shows power levels below the confidence level, which might indicate a discrepancy between the true and the expected type 1 error. Important to note is the very high computational complexity of the kernel-based approach for a data set of size 5000, with a run time of approximately 80 minutes per test, while the predictive
3 https://github.com/SamBurkart/pcit/tree/master/tests
4 https://github.com/devinjacobson/prediction/tree/master/correlations/samplecode/KCI-test
n                     100       200       500       1000      2000      5000

PCIT   Power          0.020     0.046     0.332     0.672     0.832     0.970
                      (0.006)   (0.009)   (0.021)   (0.021)   (0.017)   (0.007)
       Time (s)       0.32      0.38      0.49      0.624     1.31      4.79

KCIT   Power          0.050     0.085     0.185     0.325     0.8       1
                      (0.015)   (0.019)   (0.027)   (0.033)   (0.028)   (*)
       Time (s)       0.57      1.25      9.8       44        383       4758

Stand. difference     -1.8      -1.78     4.25      8.85      0.97      *

Table 2: Performance statistics for the newly proposed predictive conditional independence test (PCIT) and the kernel based approach (KCIT). The values in brackets show the estimated standard errors. The last row shows the standardized difference between the power estimates, (PCIT − KCIT) divided by its combined standard error.
conditional independence test still has a very low run time of 4.8 seconds. This is to be taken with a
grain of salt, since the tests were run in different languages (PCIT in Python, KCIT in MATLAB),
but it is apparent that the PCIT scales much better than the KCIT, while both converge to a power of 1.
7.1.2. Performance of structure learning algorithm: Error rates
This test will show that the graphical model learning algorithm is capable of recovering the true graph
with high probability as the sample size increases. No comparison with alternative tests is made, as
it would lead to infeasible run times.
The data used in the performance tests is generated as follows:
1. Sample a random positive definite, sparse, symmetric precision matrix M. The size of the entries of this matrix (relative to the diagonal) determines the signal to noise ratio; entries are thus thresholded to 0 if below a certain value.
2. Invert M, M′ = inv(M), and use M′ as the covariance matrix to sample from a multivariate Gaussian distribution. This has the effect that the zero entries in M express zero partial correlations [15, ch. 17.3] between the respective variables (and hence the lack of an edge between the two nodes in the graph). That is, for a multivariate Gaussian random variable X = [X1, ..., Xp], Mi,j = 0 =⇒ Xi ⊥⊥ Xj | X \ {Xi, Xj}.
3. Sample D, a data set of size n, from the multivariate normal P = N(0, M′). The choice of n will allow us to evaluate the algorithm's performance for an increasing signal to noise ratio.
4. The undirected graph G consistent with the probability distribution P is given by M, where edges occur between variables Xi and Xj if Mi,j is non-zero. (A code sketch of this generating process is given after this list.)
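The following sketch illustrates this data-generating process; the sparsity mechanism (thresholding a random symmetric matrix and enforcing diagonal dominance to keep it positive definite) is one simple way to realize step 1, and not necessarily the exact construction used for the reported tests.

import numpy as np

def sample_sparse_gaussian(p=10, n=1000, threshold=0.3, seed=0):
    # Generate (data, true_graph) from a sparse random precision matrix
    rng = np.random.default_rng(seed)
    # Step 1: random symmetric matrix with small entries thresholded to 0;
    # a dominant diagonal makes the matrix positive definite
    A = rng.uniform(-1, 1, size=(p, p))
    A = (A + A.T) / 2
    A[np.abs(A) < threshold] = 0.0
    M = A + np.eye(p) * (np.abs(A).sum(axis=1).max() + 1.0)
    # Step 2: invert the precision matrix to obtain the covariance matrix
    cov = np.linalg.inv(M)
    # Step 3: sample the data set from N(0, M')
    D = rng.multivariate_normal(np.zeros(p), cov, size=n)
    # Step 4: true undirected graph = non-zero off-diagonal entries of M
    G = (M != 0) & ~np.eye(p, dtype=bool)
    return D, G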
Then, Algorithm 3 will be tested by getting an estimate Ĝ of the structure of an undirected graph
G induced by the distribution P from which the data set D was drawn. The performance evaluation
will be guided by three metrics:
• False-discovery rate: The FDR is the fraction of type 1 errors (edges in Ĝ that are not in G) over
the total number of identified edges in the learned Ĝ
• Power: Number of found edges (links in true graph G that were found by the structure learning
algorithm) over the total number of links in G
• Time: Run time needed to estimate Ĝ

                 FDR      Time (sec)
No ensembling    3.09%    30
Stacking         3.03%    450
Multiplexing     2.75%    1000

Table 3: False-discovery rates and run times for a data set of size 22000 for all used methods
For the test, the number of random variables is chosen to be 10. This means that each node in the graph has up to 9 neighbours, and the total number of possible undirected links (before inducing
sparsity) is 45. The sparsity parameter is chosen in a way that generally between 10 and 15 of those
links exist in the true graph. The size and sparsity of the graph are chosen to produce estimators of
the metrics with reasonably low variance, but are otherwise arbitrary. The sample sizes range from
approximately 400 to 20000, increasing in steps of 10%. For each sample size, 10 tests are run to
decrease the variance in the estimators. The test is conducted for conditional independence testing
using stacking and multiplexing, as well as without using any ensembling method, which, since the
data is continuous, results in the usage of Elastic Net regularized regression.
If the algorithms work as expected, the FDR should be at or below 5%. High power levels indicate a better performance of the algorithm with respect to this specific task. At the very least,
the power level is expected to increase in the number of samples, suggesting that asymptotically the
routine will find the correct graph.
Table 3 shows the average FDR and run times for each of the three methods. The average FDR
seems to be similar across all three methods, whereas the computational complexities differ by a
large amount. The no-ensembling PCIT runs very quickly, about 15 times faster than stacking, which itself only takes about half as long as multiplexing. This is the case since multiplexing requires the
calculation of performance measures for each used estimator. Figure 4 shows the power and FDR of
the algorithm for increasing sample size. The FDRs for all 3 methods seem to be slightly higher for small sample sizes than for large sample sizes, but they are generally around or below the desired 5% in expectation (the variances are quite high, as the number of possible reruns is low
due to the computational complexity of the multiplexing method). While it might seem surprising
that stacking and multiplexing are outperformed by the no-ensembling method, one has to remember
that the ensembling is used to choose the optimal prediction functional automatically from a set of
estimators. However, the data is multivariate Gaussian, for which ridge regression is optimal in terms
of generalization performance. While stacking and multiplexing are tasked to find this property in a
large set of estimators, the estimator used in the no ensembling case is Elastic Net regularized linear
regression, a generalization of ridge regression, and hence fares better since there is less variance in
finding the optimal estimator. For all three methods, the power increases roughly logarithmically in
the sample size, implying that for a test that recovers the truth with high probability, a large data
set or a more powerful set of estimators (see Section 7.2.2) might be needed for that specific task.
However, asymptotically, all three tests seem to recover a graph that is close to the truth, in this
specific scenario, unless the power starts to plateau before reaching 1 (which there is no indication
for). Since the power for the no ensembling case is biased by the fact that it uses an optimal prediction
functional, the power curves for stacking and multiplexing provide a better estimate for the performance
on an unseen data set of an unknown distribution family. As graphical model structure learning is an
inherently hard problem (due to issues such as multiple testing and combinatorial explosion, as outlined in Section 4), it is promising that the algorithm finds an increasing fraction of the links while keeping
the ratio of false-discoveries constant.
Figure 4: Power (blue lines) and FDR (red lines) for increasing sample size for all 3 methods, showing
that the performance increases as expected for all 3 methods, when the sample size is increased.
The transparent blue and red areas denote the 90% confidence intervals, the dashed line shows the
expected FDR (0.05)
Figure 5: Frequencies of edge occurrence in the learned structure. Green denotes edges that occur in less than 7% of the models (as advocated by the FDR), blue edges that occur in more than a third of the models, and red everything in between
7.1.3. Performance of structure learning algorithm: Variance of estimated model
As a second performance metric, this section will assess the consistency of the learned structures on
resamples from a data set D. Assuming that all the observations in D are identically distributed, the
structure learning method should arrive at the same conclusions on the resamples, less some variance
in the process. The Auto MPG Data Set⁵, containing various continuous and discrete car performance attributes, from the UCI machine learning repository [27], is used to conduct the test. The data set
contains 398 instances of 8 variables. For the purpose of the experiment, data sets of the same size
will be sampled with replacement from the full data set 100 times. On each resample, a graph is
learned using the stacking estimator on a 10% FDR level. Each subsample contains about two thirds
of the original data points (with some instances repeated). This is a commonly used procedure to estimate the sampling distribution of a statistic, and it will here allow us to assess the variance
in the learned graph structure. Figure 5 shows the results. On average, there are 6 links in the learned
structure, hence the FDR advocates about 0.6 type 1 errors per learned model. The green lines are connections that are found fewer times than expected under the FDR, and the blue lines are connections
that are found in a large fraction of the models (over one third of the resamples). The concerning
links are the ones in between, for which the null of independence is rejected more than occasionally,
but not in a reasonably large fraction of the learned graphs. There are only 2 links in the model for
which this occurs, so overall, the variance across learned graphs seems to be reasonably small and the
learning process is hence consistent.
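The resampling procedure just described can be sketched as follows, with learn_graph standing in for the structure learning routine (for instance find_neighbours with the stacking estimator at a 10% FDR level); the function names are placeholders.

import numpy as np

def edge_frequencies(D, learn_graph, n_resamples=100, seed=0):
    # Bootstrap the structure learner and count how often each edge appears
    rng = np.random.default_rng(seed)
    n, p = D.shape
    counts = np.zeros((p, p))
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)       # resample rows with replacement
        counts += learn_graph(D[idx])          # boolean [p x p] adjacency matrix
    return counts / n_resamples                # fraction of resamples containing each edge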
5 https://archive.ics.uci.edu/ml/datasets/auto+mpg
Figure 6: Learned graphical model structures for the Boston Housing and the Iris data set
7.2. Experiments on real data sets
In this section, the outputs of the graphical model structure learning routine are shown on a selection
of real world data sets. It will outline some possibilities of the user to trade off between power and
computational complexity.
7.2.1. Sklearn data sets: Boston Housing and Iris
Setup: One receives a data set and is interested in initial data exploration. This involves finding
related sets of variables, and variables that lack obvious relationships with other variables.
Boston Housing and Iris are two standard data sets from sklearn. The housing data set contains 506
samples of 14 variables, originally collected to build a model for predicting house prices (descriptions
of the variables can be found online6 ). The Iris data set is a data set for predicting flower species
based on petal and sepal measurements with 150 observations in 5 dimensions.
Estimation will take place using the default stacking estimator. Since we are interested in initial
exploration, and finding interesting groups of variables, a large FDR (20%) was chosen. Note that,
unlike for other (mostly score-based) structure learning algorithms, the outcome of one experiment
(the presence of an edge in the model) does not influence other experiments, and hence, false discoveries do not compromise the estimated structure additionally.
The results are shown in Figure 6 (graphs drawn with NetworkX [24]). For the Boston housing data,
seemingly sensible variable groupings occur. In the top left, the variables related to industrialization
of a neighbourhood are shown, while on the right, demographic attributes form a cloud of related
variables. For the Iris data set, while length and width of both sepal and petal are related, as
expected, it seems that petal size has a higher association with class, and, in fact, width and length
are independent given the class.
Both of these analyses require no parameter tuning (the only non-default chosen for this experiment
was the adjusted confidence level) and take very little time (less than 15 seconds). The implementation
of Algorithm 3 thus provides a quick, ready-to-use tool for initial exploratory data analysis.
7.2.2. Key short-term economic indicators (UK)
Setup: One is interested in finding the relationships within a set of variables and making local
conditional independence statements for sets of variables. The focus is on finding associations that
we are confident about.
The economic indicator data set contains monthly data of key economic indicators between February
6 http://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets
(a) Default Approach
(b) Bagged Support Vector Machines
Figure 7: Learned graphs for the Economic Indicator data set for the default approach (left) and a
more powerful approach using bagged SVMs (right)
1987 and June 2017 for the UK from the OECD statistics database [30]. The economic indicators are export and import figures, stock price and industrial production levels, overnight and 3-month interbank interest rates, money supply (M4) and GDP. This rather small data set, of around 369 instances of 9 variables, will outline the possibility for a user to trade off between computational complexity
and power of the learning algorithm. As for most economic data sets, the signal to noise ratio can
be quite low. Figure 7a shows the structure learned by the default approach for confidence (false
discovery-rate) level of 5%, with a run time of 15 seconds. While the ground truth is not known
(economic theory aside), it is apparent that many links are missed by the fast default approach. This
becomes evident when considering a variable like the GDP, which (by definition) includes exports and
imports, and hence some connection between the variables should exist. If there is need for a more
powerful approach, a more powerful estimator can be chosen. Figure 7b shows the learned structure for
an estimator that resamples the data 50 times and learns 50 different Support Vector Machines with
automatically tuned hyperparameters. The implementation of this is straightforward, since sklearn
provides hyperparameter and bagging wrappers, so an estimator can be tuned easily and passed to the
MetaEstimator. The graph shows the many edges that were found. While it is impossible to judge
the correctness of the graph, it seems that some reasonable groups of variables are found, such as
industrial production, exports and imports (in economic theory, countries seek to balance their trade
deficit), or the grouping of stock prices and the main drivers of inflation, money supply and interest
rates. Additionally, all the variables are connected in some way, which would be expected from a data
set of economic indicators. The computational complexity of this approach was rather high, with a
run time of about 20 minutes, however this shows how easily the user can trade off between power
and complexity,without having a large effect on the false discovery-rate (as shown in Section 7.1).
8. Discussion
This paper has introduced a novel way for multivariate conditional independence testing based on
predictive inference and linked to a supervised learning workflow, which addresses many of the current
issues in independence testing:
• Few subjective choices: By connecting the classical task of independence testing to supervised learning, and its well-understood hyperparameter tuning capabilities, there is no need
for the heuristics and manual tuning choices prevalent in current state-of-the-art conditional
independence tests
• Low computational complexity: By linking independence testing to one of the most-researched fields in data science, predictive modelling, the high power of state-of-the-art prediction methods can be directly benefited from, and any efficiency gains in the latter directly benefit the former.
• High-dimensional problems: By connecting the problem to the task of supervised learning,
the method easily deals with the multivariate case, largely removing any issues concerning
dimensionality of the data
It is important to note that some choices remain necessary, as is the case in statistical methodology in general. Since it is not possible to test an infinite number of loss functions and predictive models, one has to decide on a subset with which to conduct this test. The larger the subset, the higher the data requirements to arrive at a given power level. The outlined methodology differs from the methods reviewed in Section 2 by providing a principled way to choose from a set of subjective choices, and by using a subset of all-star predictive modelling algorithms to ensemble over as a default, yielding a test that is expected to be able to deal with most of the usual scenarios, given a reasonable sample size.
To validate these claims, the test was benchmarked against current state-of-the-art conditional independence tests, and showed a similar or better performance in regions where the power of the tests
exceeded 10%.
Subsequently, the PCIT was applied in a method for learning the structure of an undirected graph
best describing the independence structure of an input data set. The combination of the new conditional independence test and the structure learning algorithm address some of the current issues in
graphical model structure learning:
• Low computational complexity: While current exact algorithms, such as the PC-algorithm,
often require a number of tests that is exponential in the number of variables, the proposed
algorithm runs in quadratic time in the size of the graph. Additionally, a straightforward
power-complexity trade off is provided
• Exact algorithm: Unlike many score-based methods, the algorithm does not impose any strong constraints on the underlying distribution and is not subject to local optima
• False-discovery rate control: FDR control is added to provide a powerful tool to control the
fraction of type 1 errors when the number of variables is increasing
Performance tests showed that the false-discovery rate is controlled as advertised and that the power of the test increases steadily with the number of samples. Additionally, consistency under perturbations in the
data set was demonstrated.
The algorithms have been distributed in the pcit package, providing users with an easy-to-use implementation to test for multivariate and conditional independence, as well as to perform graphical
model structure learning with little need for manual parameter tuning. The implementations are
particularly interesting for users
• assessing the value of collecting additional variables for a prediction task
• in need of a conditional independence test for multivariate data
• performing exploratory data analysis
• looking for a visual way to communicate the structure in their data set
There are a few ways in which the current implementation can be generalized to make for a more
powerful test. The power of the conditional independence test is directly linked to the power of the
supervised learning algorithm to find the optimal prediction functional and the correct loss. Currently, only the necessary minimum of 2 loss functions is implemented, one for regression tasks and
one for classification tasks, but this can easily be generalized by including more losses, and checking
if the baseline can be beaten with statistical significance in any of them. This would also strengthen
the argument when reasoning about variables being independent, when the null hypothesis cannot be
rejected. What is more, the current methodology connects single-output prediction with FDR control to make statements about the relationship between the joint distributions. While this procedure results in a logically sound test, the feasibility of multi-output predictors, predicting several outputs at once, should be assessed, together with appropriate multivariate loss functions.
On the other hand, some further evaluation of the test needs to be conducted before a definitive conclusion can be drawn as to its power. In terms of performance evaluation, the power of the proposed routine was assessed for two specific cases only: for multivariate Gaussian data, and for a simple conditional independence test. To get a better idea of the general feasibility of the algorithm, more scenarios need
to be analyzed and performance tests conducted. Additionally, the power of the test in the context
of alternative graphical model structure learning algorithms should be evaluated.
A. Best uninformed predictors: classification
This appendix collects proofs of a number of elementary computations behind some of the explicit statements about classification losses found in the main text.
A.1. Misclassification loss is a probabilistic loss
In this sub-section, we consider the alternative misclassification loss
L : Y0 × Y → [0, 1]; (p, y) ↦ 1 − p(y),
with Y being a discrete set, and Y0 being the probability simplex of probability mass functions on Y,
as considered in Remark 3.2. In said remark, it is claimed that rounding a probabilistic prediction to
a deterministic one never increases the generalization loss. We first prove this for the unconditional
case which is equivalent to constant predictions:
Lemma A.1. Let Y be an Y-valued random variable with probability mass function pY . There is
always a deterministic minimizer of the expected generalization loss, i.e., there exists a pmf p0 : Y →
{0, 1} such that
p0 = argmin_{p∈Y0} E[L(p, Y)].
Proof. Let p : Y → [0, 1] be any pmf. For its expected generalization loss, one has, by substituting definitions,

    E[L(p, Y)] = Σ_{y∈Y} p(y)(1 − pY(y)) = 1 − Σ_{y∈Y} p(y) pY(y).

Let y0 := argmax_{y∈Y} pY(y) (if there are multiple maximizers, choose one arbitrarily). By construction, pY(y0) ≥ Σ_{y∈Y} p(y) pY(y). Thus, for p0 : y ↦ 1 if y = y0, and 0 otherwise, one has

    E[L(p0, Y)] = 1 − pY(y0) ≤ 1 − Σ_{y∈Y} p(y) pY(y) = E[L(p, Y)].

Since p was arbitrary, this proves the claim.
Note that Lemma A.1 does not exclude that there are other minimizers of the expected generalization loss (in general there will be infinitely many); it only states that a deterministic minimizer can be found.
Proposition A.2. Let X, Y, Z be random variables taking values in X, Y, Z. Then one can make choices which always predict a deterministic class, for the following prediction functionals as considered in Section 3:

(i) the best uninformed predictor ω_Y^(L) : X → Y0
(ii) the best predictor ω_{Y|X}^(L) : X → Y0
(iii) the best conditionally uninformed predictor ω_{Y|Z}^(L) : X × Z → Y0
(iv) the best conditional predictor ω_{Y|X,Z}^(L) : X × Z → Y0

That is, in all four cases, choices may be made where the image is always a deterministic pmf p : Y → {0, 1}.
Proof. (i) follows directly from Lemma A.1, which implies that the function y ↦ p0 is the best constant predictor and hence the best uninformed predictor, where p0 is defined as in the statement (or, more constructively, in the proof) of Lemma A.1; thus ω_Y^(L) : x ↦ p0 is a possible choice.
(ii) follows from noting that Lemma A.1 and its proof remain valid when considering the conditional under X, i.e., defining a conditional

    p0 : X → Y0 ; x ↦ argmin_{p∈Y0} E[L(p, Y) | X = x]

and thus ω_{Y|X}^(L) : x ↦ p0(x).
(iii) and (iv) follow analogously by additionally conditioning on Z in the same way.
A.2. Logarithmic loss is a proper loss
In this sub-section, we consider the logarithmic loss (or cross-entropy loss)
L : Y0 × Y → R+ ; (p, y) ↦ − log p(y),
with Y being a discrete set, and Y0 being the probability simplex of probability mass functions on Y,
as considered in Remark 3.2.
Proposition A.3. The expected generalization log-loss is minimized by the true distribution. I.e., let
Y be a random variable taking values in Y, with probability mass function pY. Then,

    pY = argmin_{p∈Y0} E[L(p, Y)].
Proof. Let p ∈ Y0 be arbitrary. Substituting definitions, it holds that

    E[L(p, Y)] = − Σ_{y∈Y} pY(y) log p(y)
               ≥ − Σ_{y∈Y} pY(y) log pY(y)
               = E[L(pY, Y)],

where the inequality in the middle is Gibbs' inequality.
A.3. Brier loss is a proper loss
In this sub-section, we consider the Brier loss (or squared classification loss)
L : Y0 × Y → R+ ; (p, y) ↦ (1 − p(y))² + Σ_{y′≠y} p(y′)²,
with Y being a discrete set, and Y0 being the probability simplex of probability mass functions on Y,
as considered in Remark 3.2.
Proposition A.4. The expected Brier loss is minimized by the true distribution. I.e., let Y be a random variable taking values in Y, with probability mass function pY. Then,

    pY = argmin_{p∈Y0} E[L(p, Y)].
Proof. Let p ∈ Y0 be arbitrary. By expanding the square, observe that

    L(p, y) = 1 − 2 p(y) + Σ_{y′∈Y} p(y′)².

Substituting definitions, it holds that

    E[L(p, Y)] = Σ_{y∈Y} ( pY(y) − 2 pY(y) p(y) + pY(y) Σ_{y′∈Y} p(y′)² )
               = 1 − 2 Σ_{y∈Y} pY(y) p(y) + Σ_{y∈Y} p(y)²
               = 1 − Σ_{y∈Y} pY(y)² + Σ_{y∈Y} pY(y)² − 2 Σ_{y∈Y} pY(y) p(y) + Σ_{y∈Y} p(y)²
               = 1 − Σ_{y∈Y} pY(y)² + Σ_{y∈Y} (p(y) − pY(y))².

Note that the first two terms (the 1 and the sum over pY(y)²) do not depend on p, while the last term is non-negative and minimized with a value of zero if and only if p = pY.
Since p was arbitrary, this proves the claim.
B. Elicited statistics for regression losses
This appendix collects explicit proofs of the elicited statistics for the squared loss and the Q-loss (the absolute loss being a special case of the latter).
B.1. Squared loss elicits the mean
In this sub-section, we consider the univariate regression case Y = R, and the squared loss
L : Y × Y → R; (y, y ∗ ) 7→ (y − y ∗ )2 .
Proposition B.1. The squared loss elicits the mean. I.e., let Y be a random variable taking values in Y. Then,

    E[Y] = argmin_{y∈Y} E[L(y, Y)].
Proof. Substituting definitions, it holds that

    E[L(y, Y)] = E[(y − Y)²]
               = E[y² − 2yY + Y²]
               = y² − 2yE[Y] + E[Y²]
               = y² − 2yE[Y] + E[Y]² − E[Y]² + E[Y²]
               = (E[Y] − y)² + Var(Y),

which is the well-known bias-variance decomposition.
The first term, (E[Y ] − y)2 , is minimized whenever E[Y ] = y, and the second term, Var(Y ), does
not depend on y. Thus, the sum of both terms is minimized (in y while fixing Y ) for the choice
y = E[Y ].
B.2. Quantile loss elicits the quantile
In this sub-section, we consider the univariate regression case Y = R, and the Q-loss (or quantile loss)
L : Y × Y → R; (y, y∗ ) 7→ α · m(y∗ , y) + (1 − α) · m(y, y∗ ),
with m(x, z) = max(x − z, 0).
Proposition B.2. The Q-loss elicits the α-quantile. I.e., let Y be a random variable taking values in Y with cdf F : R → [0, 1]. Then,

    F_Y^{-1}(α) = argmin_{y∈Y} E[L(y, Y)].
Proof. We first assume that Y is absolutely continuous, i.e., Y has a probability density function p : R → R+ and F is bijective. One then computes

    E_Y[L_α(y∗, Y)] = ∫_{−∞}^{y∗} (1 − α)(y∗ − y) p(y) dy − ∫_{y∗}^{∞} α (y∗ − y) p(y) dy
                    = y∗ (F(y∗) − α) + α E[Y] − ∫_{−∞}^{y∗} y p(y) dy,

    ∂E_Y[L_α(y∗, Y)] / ∂y∗ = y∗ p(y∗) + F(y∗) − α − y∗ p(y∗) = F(y∗) − α,

and setting this first derivative to zero yields F(y∗) = α. Moreover,

    ∂²E_Y[L_α(y∗, Y)] / ∂(y∗)² = p(y∗) ≥ 0.

Hence, the first order condition describes a minimum, attained at the α-quantile of Y, and thus the quantile loss elicits the quantile.
For general Y, note that F always exists, and thus when p appears inside integrals, the integrals are well-defined. The partial derivative may not always be defined, but its sign is the same as the sign of sufficiently small finite differences, so the proof logic carries over to the general case. In case of jump discontinuities of F, any monotone inverse F⁻¹ may be chosen for the statement.
References
[1] C.C. Aggarwal. Data Classification: Algorithms and Applications. Chapman & Hall/CRC, 1st
edition, 2014.
[2] D. Barber. Bayesian reasoning and machine learning. Cambridge University Press, 2012.
[3] L. Baringhaus and C. Franz. On a new multivariate two-sample test. Journal of Multivariate
Analysis, 88:190–206, 2004.
[4] Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: A practical and powerful
approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological),
57:289–300, 1995.
[5] Y. Benjamini and D. Yekutieli. The control of the false discovery rate in multiple testing under
dependency. The Annals of Statistics, 29:1165–1188, 2001.
[6] W. Bergsma. Nonparametric testing of conditional independence by means of the partial copula.
ArXiv e-prints: 1101.4607, 2011.
[7] W.P. Bergsma. Testing conditional independence for continuous random variables. Eurandom,
2004.
[8] L. Buitinck, G. Louppe, M. Blondel, F. Pedregosa, A. Mueller, O. Grisel, V. Niculae, P. Prettenhofer, A. Gramfort, J. Grobler, et al. API design for machine learning software: experiences
from the scikit-learn project. arXiv preprint arXiv:1309.0238, 2013.
[9] U. Cherubini, E. Luciano, and W. Vecchiato. Copula methods in finance. John Wiley & Sons,
2004.
[10] K.P. Chwialkowski, A. Ramdas, D. Sejdinovic, and A. Gretton. Fast two-sample testing with
analytic representations of probability measures. In Advances in Neural Information Processing
Systems 28, pages 1981–1989. Nips, 2015.
[11] G.W. Corder and D.I. Foreman. Nonparametric Statistics for Non-Statisticians: A Step-by-Step
Approach. John Wiley & Sons, Inc., 2009.
[12] A. Dionisio, R. Menezes, and D.A. Mendes. Entropy-based independence test. Nonlinear Dynamics, 44:351–357, 2006.
[13] V.A. Fernández, M.D.J. Gamero, and J.M. García. A test for the two-sample problem based
on empirical characteristic functions. Computational Statistics & Data Analysis, 52:3730–3748,
2008.
[14] T.M. Fragoso and F.L. Neto. Bayesian model averaging: A systematic review and conceptual
classification. arXiv preprint arXiv:1509.08864, 2015.
[15] J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning, volume 1. Springer
series in statistics New York, 2001.
[16] C. Genest and B. Rémillard. Test of independence and randomness based on the empirical copula
process. Test, 13:335–369, 2004.
[17] T. Gneiting and A.E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal
of the American Statistical Association, 102:359–378, 2007.
[18] A. Gretton and L. Györfi. Consistent nonparametric tests of independence. Journal of Machine
Learning Research, 11:1391–1423, 2010.
[19] A. Gretton, O. Bousquet, A. Smola, and B. Scholkopf. Measuring statistical dependence with
hilbert-schmidt norms. In ALT, volume 16, pages 63–78. Springer, 2005.
[20] A. Gretton, K. Fukumizu, C.H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical
test of independence. In Advances in neural information processing systems, pages 585–592, 2008.
[21] A. Gretton, K. Fukumizu, Z. Harchaoui, and B.K. Sriperumbudur. A fast, consistent kernel
two-sample test. Advances in Neural Information Processing Systems 22, pages 673–681, 2009.
[22] A Gretton, K.M. Borgwardt, M.J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test.
Journal of Machine Learning Research, 13:723–773, 2012.
[23] A. Gretton, D. Sejdinovic, H. Strathmann, S. Balakrishnan, M. Pontil, K. Fukumizu, and B.K.
Sriperumbudur. Optimal kernel choice for large-scale two-sample tests. In Advances in Neural
Information Processing Systems 25, pages 1205–1213. Nips, 2012.
[24] A.A. Hagberg, D.A. Schult, and P.J. Swart. Exploring network structure, dynamics, and function
using NetworkX. In Proceedings of the 7th Python in Science Conference (SciPy2008), pages 11–
15, 2008.
[25] E. Jones, T. Oliphant, and P. Peterson. SciPy: Open source scientific tools for Python, 2001.
URL http://www.scipy.org/.
[26] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT press,
2009.
[27] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[28] D. Lopez-Paz and M. Oquab. Revisiting classifier two-sample tests. arXiv preprint arXiv:1610.06545, 2016.
[29] C. Nadeau and Y. Bengio. Inference for the generalization error. Machine Learning, 52:239–281,
2003.
[30] OECD. OECD statistics: Key short-term economic indicators. http://stats.oecd.org/, 2017.
Accessed: 2017-08-06.
[31] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning
Research, 12:2825–2830, 2011.
[32] S. Raschka. Mlxtend, 2016. URL http://dx.doi.org/10.5281/zenodo.594432.
[33] B. Rémillard and O. Scaillet. Testing for equality between two copulas. Journal of Multivariate
Analysis, 100:377–386, 2009.
[34] B. Schweizer and E. F. Wolff. On nonparametric measures of dependence for random variables.
The Annals of Statistics, 9:879–885, 1981.
[35] B.K. Sriperumbudur, K. Fukumizu, A. Gretton, G.R.G. Lanckriet, and B. Schölkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In
NIPS, pages 1750–1758, 2009.
[36] W. Zaremba, A. Gretton, and M. Blaschko. B-test: A non-parametric, low variance kernel two-sample test. Advances in Neural Information Processing Systems 26, pages 755–763, 2013.
[37] K. Zhang, J. Peters, D. Janzing, and B. Schölkopf. Kernel-based conditional independence test
and application in causal discovery. arXiv preprint arXiv:1202.3775, 2012.
Learning Hybrid Algorithms for Vehicle Routing Problems
Yves Caseau1, Glenn Silverstein2 , François Laburthe1
1 BOUYGUES e-Lab., 1 av. E. Freyssinet, 78061 St. Quentin en Yvelines cedex, FRANCE
  ycs;[email protected]
2 Telcordia Technologies, 445 South Street, Morristown, NJ, 07960, USA
  [email protected]
Abstract
This paper presents a generic technique for improving hybrid algorithms through the discovery and tuning of meta-heuristics. The idea is to represent a family of
“push/pull” heuristics that are based upon inserting and removing tasks in a current
solution, with an algebra. We then let a learning algorithm search for the best possible
algebraic term, which represents a hybrid algorithm for a given set of problems and an
optimization criterion. In a previous paper, we described this algebra in detail and
provided a set of preliminary results demonstrating the utility of this approach, using
vehicle routing with time windows (VRPTW) as a domain example. In this paper we
expand upon our results providing a more robust experimental framework and learning
algorithms, and report on some new results using the standard Solomon benchmarks. In
particular, we show that our learning algorithm is able to achieve results similar to the
best-published algorithms using only a fraction of the CPU time. We also show that the
automatic tuning of the best hybrid combination of such techniques yields a better
solution than hand tuning, with considerably less effort.
1. Introduction
Recent years have seen a rise in the use of hybrid algorithms in many fields such as
scheduling and routing, as well as generic techniques that seem to prove useful in many
different domains (e.g., Limited Discrepancy Search (LDS) [HG95] and Large Neighborhood
Search (LNS) [Sha98]). Hybrid Algorithms are combinatorial optimization algorithms that
incorporate different types of techniques to produce higher quality solutions. Although
hybrid algorithms and approaches have achieved many interesting results, they are not a
panacea yet as they generally require a large amount of tuning and are often not robust
enough: i.e., a combination that works well for a given data set does poorly on another one.
In addition, the application to the “real world” of an algorithm that works well on academic
benchmarks is often a challenging task.
The field of Vehicle Routing is an interesting example. Many real-world applications rely
primarily on insertion algorithms, which are known to be poor heuristics, but have two major
advantages: they are incremental by nature and they can easily support the addition of
domain-dependent side constraints, which can be utilized by a constraint solver to produce a
high-quality insertion [CL99]. We can abstract the routing aspect and suppose that we are
solving a multi-resource scheduling problem, and that we know how to incrementally insert a
new task into a current solution/schedule, or how to remove one of the current tasks. A
simple approach is to derive a greedy heuristic (insert all the tasks, using a relevant order); a
simple optimization loop is then to try 2-opt moves where pairs of tasks (one in and one out)
are swapped. The goal of our work is to propose a method to (1) build more sophisticated
hybrid strategies that use the same two push and pull operations; (2) automatically tune the
hybrid combination of meta-heuristics. In [CSL99] we described such a framework for
discovering and tuning hybrid algorithms for optimization problems such as vehicle routing
based on a library of problem independent meta-methods. In this paper, we expand upon the
results in [CSL99], providing a more robust experimentation framework and experimentation
methodology with more emphasis on the automated learning and tuning of new terms.
This paper is organized as follows. Section 2 presents a generic framework that we propose
for some optimization problems, based on the separation between domain-specific low-level
methods, for which constraint solving is ideally suited, and generic meta-heuristics. We then
recall the definition of a Vehicle Routing Problem and show what type of generic meta-heuristics may be applied. Section 3 provides an overview of the algebra of terms
representing the hybrid methods. Section 4 describes a learning algorithm for inventing and
tuning terms representing problem solutions. Section 5 describes an experimentation
framework, which is aimed at discovering the relative importance of various learning
techniques (mutation, crossover, and invention) and how they, along with experiment
parameters such as the pool size and number of iterations, affect the convergence rate and
stability. Sections 6 and 7 provide the results of various experiments along with the
conclusions.
2. Meta-Heuristics and Vehicle Routing
2.1 A Framework for Problem Solving
The principle of this work is to build a framework that produces efficient problem solving
algorithms for a large class of problems at reduced development cost, while using state-of-the-art meta-heuristics. The main idea is the separation between two parts, as explained in Figure 1: one contains a domain-dependent implementation of two simple operations (push and pull), and the other contains a set of meta-heuristics and a combination engine that are far more sophisticated but totally generic. Obviously, the first postulate is that many problems may actually fit this scheme. This postulate is based on our experience with a large number of industrial problems, where we have found ourselves re-using “the same set of tricks” with surprisingly few differences. Here is a list of such problems:
(1) Vehicle Routing Problems. As we shall later see, such problems come in all kinds of flavors, depending on the objective function (what to minimize) and the side-constraints (on the trucks, on the clients, etc.). The key method is the insertion of a new task into a route, which is precisely the resolution of a small TSP with side constraints.
(2) Assignment Problems. We have worked on different types of assignment problems,
such as broadcast airspace optimization or workflow resource optimization that may
be seen as assigning tasks to constrained resources. The unit operations are the
insertion of one task into one resource and its removal.
(3) Workforce Scheduling. Here the goal is to construct a set of schedules, one for each
operator, so that a given workload, such as a call center distribution, is handled
optimally. The algorithm that has produced the best overall results and that is being
used today in our software product is based on decomposing the workload into small
units that are assigned to workers. The unit “push” operation is the resolution of a
small “one-machine” scheduling problem, which can be very tricky because of labor
regulations. Constraint solving is a technique of choice for solving such problems.
(4) Frequency Allocation. We participated in the 2001 ROADEF challenge [Roa01] and
applied this decomposition to a frequency allocation problem. Here the unit operation
is the insertion of a new path into the frequency plan. This problem is solved using a
constraint-based approach similar to jobshop scheduling algorithms [CL95].
For all these problems, we have followed a similar route: first, build a greedy approach that is
quickly extended into a branch-and-bound scheme, which must be limited because of the size
of the problem. Then local-optimization strategies are added, using a swap (push/pull)
approach, which are extended towards large neighborhood search methods.
[Figure 1 shows the framework: an automated combination of generic meta-heuristics (limited branching/search, removing a fragment and re-building it, ejection methods that force a task into the solution, selection heuristics) built on top of the problem-dependent push and pull operations, which handle the side constraints.]
Figure 1: A Framework for Meta-Heuristics
The problem that we want to address with this paper is twofold:
1. How can we build a library of meta-heuristics that is totally problem-independent, so that
any insertion algorithm based on a constraint solver can be plugged in?
2. How can we achieve the necessary tuning to produce at a low cost a robust solution for
each different configuration?
The first question is motivated by the fact that the meta-heuristic aspect (especially with local
optimization) is the part of the software that is most difficult to maintain when new
constraints are added. There is a tremendous value in confining the domain-dependent part to
where constraint-programming techniques can be used. The second question is drawn from
our practical experience: the speed of the hardware, the runtime constraints, the objective
functions (total travel, number of routes, priority respect,...) all have a strong influence when
designing a hybrid algorithm. For instance, it is actually a difficult problem to re-tune a
hybrid algorithm when faster hardware is installed.
2.2 Application to the Vehicle Routing Problem
2.2.1 Vehicle Routing Problems
A vehicle routing problem is defined by a set of tasks (or nodes) i and a distance matrix
(d[i,j]). Additionally, each task may be given a duration (in which case the matrix d denotes travel times) and a load, which represents some weight to be picked up in a capacitated VRP. The goal is to find a set of routes that start from a given node (called the depot) and return to it, so that each task is visited exactly once and so that each route obeys some constraints on its maximal length or maximum load (the sum of the weights of the visited tasks). A VRP
is an optimization problem, where the objective is to minimize the sum of the lengths of the
routes and/or the number of routes [Lap92].
We report experiments with the Solomon benchmarks [Sol87], which are both relatively small
(100 customers) and simple (truck capacity and time-windows). Real-world routing problems
include a lot of side constraints (not every truck can go everywhere at anytime, drivers have
breaks and meals, some tasks have higher priorities, etc.). Because of this additional
complexity, the most commonly used algorithm is an insertion algorithm, one of the simplest
algorithms for solving a VRP.
Let us briefly describe the VRPTW problem that is proposed in the Solomon benchmarks
more formally.
- We have N = 100 customers, each defined by a load c_i, a duration d_i and a time window [a_i, b_i].
- d(i,j) is the distance (in time) to go from the location of customer i to the location of customer j (all trucks travel at the same speed). For convenience, 0 represents the initial location of the depot where the trucks start their routes.
- A solution is given by a route/time assignment: r_i is the truck that will service customer i and t_i is the time at which this customer will be serviced. We define prev(i) = j if j is the customer that precedes i in the route r_i, and prev(i) = 0 if i is the first customer in r_i. We also define last(r) as the last customer of route r.
- The set of constraints that must be satisfied is the following:
  o ∀i, t_i ∈ [a_i, b_i] ∧ t_i ≥ d(0,i)
  o ∀i,j, r_i = r_j ⇒ (t_j − t_i ≥ d_i + d(i,j)) ∨ (t_i − t_j ≥ d_j + d(j,i))
  o ∀r, Σ_{i | r_i = r} c_i ≤ C
The goal is to minimize E = max_{i ≤ N} r_i (the total number of routes) and D = Σ_{r ≤ K} length(r), with length(r) = Σ_{i | r_i = r} d(prev(i), i) + d(last(r), 0).
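To make these definitions concrete, here is a minimal Python sketch (illustrative only; the paper's implementation relies on a constraint solver) of a feasibility check and of the length computation for a single route. Customer, is_feasible and route_length are hypothetical names; customers[0] is assumed to be a placeholder entry for the depot.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Customer:
    load: float                    # c_i
    duration: float                # d_i
    window: Tuple[float, float]    # [a_i, b_i]

def is_feasible(route: List[int], customers: List[Customer],
                dist: Callable[[int, int], float], capacity: float) -> bool:
    # route is the ordered list of customer indices served by one truck; 0 is the depot
    t, load, prev = 0.0, 0.0, 0
    for i in route:
        c = customers[i]
        service = customers[prev].duration if prev else 0.0
        t = max(c.window[0], t + service + dist(prev, i))   # earliest feasible service time
        if t > c.window[1]:                                 # t_i must lie within [a_i, b_i]
            return False
        load += c.load
        if load > capacity:                                 # total load must not exceed C
            return False
        prev = i
    return True

def route_length(route: List[int], dist: Callable[[int, int], float]) -> float:
    # length(r) = sum of d(prev(i), i) plus the trip back to the depot
    prev, total = 0, 0.0
    for i in route:
        total += dist(prev, i)
        prev = i
    return total + dist(prev, 0)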
There are two reasons for using the Solomon benchmarks instead of other larger problems.
First, they are the only problems for which there are many published results. Larger problems, including our own combined benchmarks (with up to 1000 nodes [CL99]), have not received as
much attention yet and we will see that it is important to have a good understanding of
competitive methods (i.e., quality vs. run-time trade-off) to evaluate how well our learning
approach is doing. Second, we have found using large routing problems from the
telecommunication maintenance industry that these benchmarks are fairly representative:
techniques that produced improvements on the Solomon benchmarks actually showed similar
improvement on larger problems, provided that their run-time complexity was not prohibitive.
Thus, our goal in this paper is to focus on the efficient resolution of Solomon problems, with
algorithms that could later scale up to larger problems (which explains our interest in finding
good solutions within a short span of time, from a few seconds to a minute).
2.2.2 Insertion and Incremental Local Optimization
Let us first describe an insertion-based greedy algorithm. The tasks (i.e., customers) to be
visited are placed in a stack, that may be sorted statically (once) or dynamically, and a set of
empty routes is created. For each task, a set of candidate routes is selected and the feasibility
of the insertion is evaluated. The task is inserted into the best route found during this
evaluation. This loop is run until all tasks have been inserted. Notice that an important
parameter is the valuation of the feasible route. A common and effective strategy is to pick
the route for which the increase in length due to the insertion is minimal.
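As an illustration, a minimal Python sketch of this greedy loop is given below; try_insert is a hypothetical stand-in for the constraint-checking "push" operation (it returns the increase in route length when the insertion is feasible, and None otherwise), so all domain knowledge stays outside the loop.

def greedy_insert(tasks, routes, try_insert):
    # tasks: the stack of customers to insert, possibly pre-sorted
    # routes: the (initially empty) set of routes
    unassigned = []
    for task in tasks:
        best_route, best_cost = None, None
        for route in routes:                               # candidate routes
            cost = try_insert(route, task, commit=False)   # feasibility + cost evaluation
            if cost is not None and (best_cost is None or cost < best_cost):
                best_route, best_cost = route, cost
        if best_route is not None:
            try_insert(best_route, task, commit=True)      # perform the cheapest insertion
        else:
            unassigned.append(task)                        # no feasible route was found
    return unassigned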
The key component is the node insertion procedure (i.e., the push operation of Figure 1),
since it must check all the side constraints. CP techniques can be used either for the full resolution of the one-vehicle problem, which is the resolution of a small TSP with side constraints [CL97][RGP99], or to supplement a simple insertion heuristic by doing all the
side-constraint checking. We have shown in [CL99] that using a CP solver for the node
insertion increases the quality of the global algorithm, whether this global algorithm is a
simple greedy insertion algorithm or a more complex tree search algorithm.
The first hybridization that we had proposed in [CL99] is the introduction of incremental local optimization (ILO). This is a very powerful technique, since we have shown that it is much
more efficient than applying local optimization as a post-treatment, and that it scales very
well to large problems (many thousands of nodes). The interest of ILO is that it is defined
with primitive operations for which the constraint propagation can be easily implemented.
Thus, it does not violate the principle of separating the domain-dependent part of the problem
from the optimization heuristics.
Instead of applying the local moves once the first solution is built, the principle of
incremental local optimization is to apply them after each insertion and only for those moves
that involve the new node that got inserted. The idea of incremental local optimization within
an insertion algorithm had already brought good results in [GHL94], [Rus95] and [KB95].
We have applied ILO to large routing problems (up to 5000 nodes) as well as call center
scheduling problems.
Our ILO algorithm uses three moves, which are all 2- or 3-edge exchanges, applied once the insertion is performed. These moves are performed in the neighborhood of
the inserted node, to see if some chains from another route would be better if moved into the
same route. They include a 2-edge exchange for crossing routes (see Figure 2), a 3-edge
exchange for transferring a chain from one route to another and a simpler node transfer move
(a limited version of the chain transfer).
The 2-edge move (i.e., exchange between (x,y) and (i,i’)) is defined as follows. To perform
the exchange, we start a branch where we perform the edge substitution by linking i to y and
i’ to x. We then compute the new length of the route r and check side constraints if any apply.
If the move is illegal we backtrack to the previous state. Otherwise, we perform a route
optimization on the two modified routes (we apply 3-opt moves within the route r). We also
recursively continue looking for 2-opt moves and we apply the greedy 3-opt optimization to
r’ that will be defined later.
[Figure 2 illustrates the two inter-route moves on routes r and r': the 2-edge exchange between (x,y) and (i,i') that crosses two routes ("exchange, 2-opt / 2 routes"), and the 3-edge exchange that transfers a chain (y → y') from route r' to route r ("transfer, 3-opt / 2 routes").]
Figure 2: edge exchange moves used for ILO
The second move transfers a chain (y → y’) from a route r’ to a route r right after the node i.
We use the same implementation technique and create a branch where we perform the 3-edge
exchange by linking i to y, y’ to i’ and the predecessor x of y to the successor z of y’. We then
optimize the augmented route r (assuming the new route does not violate any constraints) and
check that the sum of the lengths of the two resulting routes has decreased. If this is not the
case, we backtrack; otherwise, we apply the same optimization procedures to r' as in the preceding routine.
We also use a more limited version of the transfer routine that we can apply to a whole route (as opposed to the neighborhood of a new node i). A "greedy optimization" looks for nodes outside the route r that are close to one node of the route and that could be inserted with a gain in the total length. The algorithm is similar to the previous one, except that we do not look for a chain to transfer, but simply a node.
In the rest of the paper, we will call INSERT(i) the insertion heuristic obtained by greedily applying the node insertion procedure and a given level of ILO depending on the value i:
i = 0 ⇔ no ILO
i = 1 ⇔ perform only 2-opt moves (exchange)
i = 2 ⇔ perform 2- and 3-opt moves (exchange and transfer)
i = 3 ⇔ perform 2- and 3-opt moves, plus greedy optimization
i = 4 ⇔ similar, but in addition, when the insertion fails we try to reconstruct the route by inserting the new node first.
2.3 Meta-Heuristics for Insertion Algorithms
In the rest of this section we present a set of well-known meta-heuristics that have in common
the property that they only rely on inserting and removing nodes from routes. Thus, if we can
use a node insertion procedure that checks all domain-dependent side-constraints, we can
apply these meta-heuristics freely. It is important to note that not all meta-heuristics have this
property (e.g., splitting and recombining routes does not), but the ones that do make a large
subset and we will show that these heuristics, together with ILO, yield powerful hybrid
combinations.
2.3.1 Limited Discrepancy Search
Limited Discrepancy Search is an efficient technique that has been used for many different
problems. In this paper, we use the term LDS loosely to describe the following idea:
transform a greedy heuristic into a search algorithm by branching only in a few (i.e., limited
number) cases when the heuristic is not “sure” about the best insertion. A classical complete
search (i.e., trying recursively all insertions for all nodes) is impossible because of the size of
the problem and a truncated search (i.e., limited number of backtracks) yields poor
improvements. The beauty of LDS is to focus the “power of branching” to those nodes for
which the heuristic decision is the least compelling. Here the choice heuristic is to pick the
feasible route for which the increase in travel is minimal. Applying the idea of LDS, we
branch when two routes have very similar “insertion costs” and pick the obvious choice when
one route clearly dominates the others. There are two parameters in our LDS scheme: the
maximum number of branching points along a path in the search tree and the threshold for
branching. A low threshold will provoke a lot of branching in the earlier part of the search
process, whereas a high threshold will move the branching points further down. These two
parameters control the shape of the search tree and have a definite impact on the quality of
the solutions.
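A minimal Python sketch of this LDS scheme is given below; try_insert, undo_insert, snapshot and solution_cost are hypothetical stand-ins for the domain-dependent operations, and only the second-best route is ever considered as an alternative branch.

def lds_insert(tasks, routes, try_insert, undo_insert, snapshot, solution_cost,
               max_discrepancies, threshold):
    best = {"cost": float("inf"), "solution": None}

    def recurse(index, discrepancies):
        if index == len(tasks):                      # all tasks inserted: record the solution
            cost = solution_cost(routes)
            if cost < best["cost"]:
                best["cost"], best["solution"] = cost, snapshot(routes)
            return
        task = tasks[index]
        feasible = []
        for route in routes:
            cost = try_insert(route, task, commit=False)
            if cost is not None:
                feasible.append((cost, route))
        if not feasible:
            return                                   # dead end on this branch
        feasible.sort(key=lambda pair: pair[0])
        candidates = feasible[:1]
        # branch only when the second-best route is almost as good as the best one
        # and the discrepancy budget has not been exhausted
        if (discrepancies < max_discrepancies and len(feasible) > 1
                and feasible[1][0] - feasible[0][0] < threshold):
            candidates = feasible[:2]
        for rank, (_, route) in enumerate(candidates):
            try_insert(route, task, commit=True)
            recurse(index + 1, discrepancies + (1 if rank > 0 else 0))
            undo_insert(route, task)                 # backtrack

    recurse(0, 0)
    return best["solution"]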
2.3.2 Ejection Chains and Trees
The search for ejection chains is a technique that was proposed a few years ago for Tabu
search approaches [RR96]. An ejection link is an edge between a and b that represents the
fact that a can be inserted in the route that contains b if b is removed. An ejection chain is a
chain of ejection edges where the last node is free, which means that it can be inserted freely
in a route that does not intersect the ejection chain, without removing any other node. Each
time an ejection chain is found, we can compute its cost, which is the difference in total
length once all the substitutions have been performed (which also implies the insertion of the
root node).
The implementation is based on a breadth-first search algorithm that explores the set of
chains starting from a root x. We use a marker for each node n to recall the cost of the
cheapest ejection chain that was found from x to n, and a reverse pointer to the parent in the
chain. The search for ejection chains was found to be an efficient technique in [CL98] to
minimize the number of routes by calling it each time no feasible insertion was found during
the greedy insertion. However, it is problem-dependent since it only works well when nodes
are of similar importance (as in the Solomon benchmarks). When nodes have different
processing times and characteristics, one must move to ejection trees.
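Before moving to ejection trees, here is a minimal Python sketch of the breadth-first chain search just described; ejection_links and free_insertion_cost are hypothetical stand-ins for the route-level operations, and the condition that the final insertion must not intersect the chain is ignored for brevity.

from collections import deque

def best_ejection_chain(root, ejection_links, free_insertion_cost):
    # ejection_links(a) yields pairs (b, cost): a can replace b in b's route at this cost
    # free_insertion_cost(b) is the cost of inserting b without any removal, or None
    best_cost = {root: 0.0}              # cheapest known chain cost from the root to each node
    parent = {root: None}                # reverse pointers used to rebuild the chain
    best_chain, best_value = None, None
    queue = deque([root])
    while queue:
        a = queue.popleft()
        end_cost = free_insertion_cost(a)
        if end_cost is not None:         # a is "free": the chain can stop here
            value = best_cost[a] + end_cost
            if best_value is None or value < best_value:
                chain, n = [], a
                while n is not None:
                    chain.append(n)
                    n = parent[n]
                best_chain, best_value = list(reversed(chain)), value
        for b, cost in ejection_links(a):
            if b not in best_cost or best_cost[a] + cost < best_cost[b]:
                best_cost[b] = best_cost[a] + cost
                parent[b] = a
                queue.append(b)
    return best_chain, best_value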
An ejection tree is similar to an ejection chain but we allow multiple edges from one node a
to b1, .. , bn to represent the fact that the “forced insertion” of a into a route r causes the
ejection of b1,..,bn. For one node a and a route r, there are usually multiple subsets of such
{b1, .., bn} so we use a heuristic to find a set as small as possible. An ejection tree is then a
tree of root a such that all leaves are free nodes that can be inserted into different routes that
all have an empty intersection with the tree. There are many more ejection trees than there are
chains, and the search for ejection trees with lowest possible cost must be controlled with
topological parameters (maximum depth, maximum width, etc.). The use of ejection trees is
very similar to the use of ejection chains, i.e. we can use it to insert a node that has no
feasible route, or as a meta-heuristic by removing and re-inserting nodes.
The implementation is more complex because we cannot recursively enumerate all trees using a marking algorithm. Thus we build a search tree using a stack of ejected nodes. When we start, the stack contains the root node; then each step can be described as follows:
• Pick a node n from the stack.
• For all routes r into which a forced insertion is possible, create a branch of the search tree. In this branch, perform the insertion (n into r) and produce the subset of nodes that got "ejected". To keep that set as small as possible, every free node is immediately re-inserted. The set of ejected nodes is then placed onto the stack.
• If the stack is empty, we register the value of the current tree (each node in the search tree corresponds to an ejection tree).
To make this algorithm work, it is necessary to put an upper bound on the depth of the tree
and on the branching factor. We only select no more than k routes for the forced insertion, by
filtering the k best routes once all possible routes have been tried. Furthermore, we use a LDS
scheme: each time we use a route that was not the best route found (the valuation is simply
the weight of the ejected set), we count one discrepancy. The LDS approach simply means to
cut all trees that would require more than D (a fixed parameter) discrepancies.
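A minimal Python sketch of this bounded search is given below; the state object (with its forced_insert, undo, cost and snapshot operations) and the bounds dictionary are hypothetical stand-ins for the constraint-based implementation.

def ejection_tree_search(stack, state, bounds, best, depth=0, discrepancies=0):
    # stack: nodes still to be forced into a route; bounds: max_depth, max_branching,
    # max_discrepancies; best: a dictionary {"value": ..., "solution": ...}
    if not stack:                                    # a complete ejection tree was found
        value = state.cost()
        if best["value"] is None or value < best["value"]:
            best["value"], best["solution"] = value, state.snapshot()
        return
    if depth >= bounds["max_depth"]:
        return
    node, rest = stack[0], stack[1:]
    options = []                                     # (weight of the ejected set, route, ejected)
    for route in state.routes():
        ejected = state.forced_insert(node, route, commit=False)
        if ejected is not None:
            options.append((len(ejected), route, ejected))
    options.sort(key=lambda o: o[0])
    for rank, (_, route, ejected) in enumerate(options[:bounds["max_branching"]]):
        extra = 1 if rank > 0 else 0                 # LDS: a non-best route costs a discrepancy
        if discrepancies + extra > bounds["max_discrepancies"]:
            break
        state.forced_insert(node, route, commit=True)
        ejection_tree_search(rest + list(ejected), state, bounds, best,
                             depth + 1, discrepancies + extra)
        state.undo()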
2.3.3 Large Neighborhood Search
Large Neighborhood Search (LNS) is the name given by Shaw [Sha98] to the application of
shuffling [CL95] to routing. The principle is to forget (remove) a fragment of the current
solution and to rebuild it using a limited search algorithm. For jobshop scheduling, we have
developed a large variety of heuristics to determine the fragment that is forgotten and we use
them in rotation until a fix-point is reached. We have used a truncated search to re-build the
solution, using the strength of branch-and-bound algorithms developed for jobshop
scheduling. In his paper, Shaw introduced a heuristic randomized criterion for computing the
“forgotten” set and proposed to use LDS to re-build the solution. Since he obtained excellent
results with this approach, we have implemented the same heuristic to select the set of n (an
integer parameter) nodes that are removed from the current solution. His procedure is based on a relatedness criterion and a pseudo-random selection of successive "neighbors". A
parameter is used to vary the heuristic from deterministic to totally random. We have
extended this heuristic so that nodes that are already without a route are picked first (when
they exist).
The implementation of LNS is then straightforward: select a set of k nodes using Shaw’s
procedure and then remove them from the current solution. These nodes are then re-inserted
using a LDS insertion algorithm. There are, therefore, four parameters needed to describe this
algorithm: two for LDS (number of discrepancies and threshold), the randomness parameter
and the number of nodes to be reinserted. As we shall later see, the procedure for reconstructing the solution could be anything, which opens many possible combinations.
Notice that the heuristic for selecting the fragment to remove is clearly problem-dependent.
For VRP, we use Shaw’s technique, for frequency allocation, we had to come up with a fairly
complex new method that computes the number of constraint violations for all tasks that
could not be inserted.
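A minimal Python sketch of one LNS step is given below; the solution object, relatedness, remove_node and rebuild are hypothetical stand-ins for the domain-dependent operations, and the pseudo-random selection only mimics the spirit of Shaw's relatedness-based rule.

import random

def lns_step(solution, n, randomness, relatedness, remove_node, rebuild, cost):
    removed = [random.choice(solution.assigned_nodes())]          # seed of the removed set
    while len(removed) < n:
        seed = random.choice(removed)
        ranked = sorted(set(solution.assigned_nodes()) - set(removed),
                        key=lambda x: relatedness(seed, x), reverse=True)
        if not ranked:
            break
        # the exponent controls how strongly the choice is biased towards the most related node
        index = int(len(ranked) * (random.random() ** randomness))
        removed.append(ranked[min(index, len(ranked) - 1)])
    before = cost(solution)
    for node in removed:
        remove_node(solution, node)        # the "pull" operation
    rebuild(solution, removed)             # e.g., an LDS insertion over the removed nodes
    return cost(solution) < before         # did this step improve the current solution?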
3. An Algebra of Hybrid Algorithms
3.1 Representing the Combination of Meta-Heuristics
We represent hybrid algorithms obtained through the composition of meta-heuristics with
algebraic formulas (terms). As for any term algebra, the grammar of all possible terms is
derived from a fixed set of operators, each of them representing one of the
resolution/optimization techniques that we presented in the previous section. There are two
kinds of terms in the grammar: <Build> terms represent algorithms that create a solution
(starting from an empty set of routes) and <Optimize> terms for algorithms that improve a
solution (we replace a set of routes with another one). A typical algorithm is, therefore, the
composition of one <Build> term and many <Optimize> terms.
More precisely, we may say that a build algorithm has no parameter and returns a solution
object that represents a set of routes. Each route is defined as a linked list of nodes. An
optimize algorithm has one input parameter, which is a current solution and returns another
set of routes. In addition, a global object is used to represent the optimization context, which
tells which valuation function should be used depending on the optimization criterion.
<Build> ::= INSERT(i) | <LDS> | DO(<Build>, <Optimize>) | FORALL(<LDS>, <Optimize>)
<Optimize> ::= CHAIN(n,m) | TREE(n,m,k) | LNS(n,h,<Build>) | LOOP(n,<Optimize>) | THEN(<Optimize>, …, <Optimize>)
<LDS> ::= LDS(i,n,l)
Figure 3: A grammar for hybrid algorithms
The definition of the elementary operators is straightforward:
• INSERT(i) builds a solution by applying a greedy insertion approach and a
varying level of ILO according to the parameter i (cf. Section 2.2, 0 means no ILO
and 4 means full ILO)
• LDS(i,n,l) builds a solution by applying a limited discrepancy search on top of the
INSERT(i) greedy heuristic. The parameter n represents the maximum number of
discrepancies (number of branching points for one solution) and l represents the
threshold. A LDS term can also be used as a generator of different solutions when
it is used in a FORALL.
• FORALL(t1, t2) produces all the solutions that can be built with t1, which is necessarily an LDS term (as opposed to only the best solution), and applies the post-optimization step t2 to each of them. The result is the best solution that was found.
• CHAIN(n,m) is a post-optimization step that selects n nodes using the heuristic represented by m, successively removes them (one at a time), and tries to re-insert them using an ejection chain. We did not come up with any significant selection heuristic, so we mostly use the one presented in Section 2.3.2.
• TREE(n,m,k) is similar but uses an ejection tree strategy for the post-optimization.
The extra-parameter represents the number of discrepancies for the LDS search (of
the ejection tree).
• LNS(n,h,t) applies Large Neighborhood Search as a post-optimization step. We
select n nodes using Shaw's heuristic with the h randomness parameter and we
rebuild the solution using the algorithm represented by t, which must be a <Build>.
Notice that we do not restrict ourselves to a simple LDS term.
• DO(t1,t2) simply applies t1 to build a solution and t2 to post-optimize it
• THEN(t1,t2) is the composition of two optimization algorithms t1 and t2.
• LOOP(n,t) repeats n times the optimization algorithm t. This is used with an optimization algorithm whose repetition will incrementally improve the value of the current solution.
Here are some examples of algebraic terms.
LDS(3,3,100) represents a LDS search using the 3rd level of ILO (every move is tried) but with the regular insertion procedure, trying 2^3 solutions (3 choice points when the difference between the two best routes is less than 100) and returning the best.
DO(INSERT(2),CHAIN(80,2)) is an algorithm obtained by combining a regular greedy heuristic with the 2nd level of ILO with a post-optimization phase of 80 removals/re-insertions through an ejection chain.
FORALL(LDS(0,4,100),LOOP(3,TREE(5,2))) is an algorithm that performs a LDS search with no ILO and 2^4 branching points and then applies 3 times an ejection tree post-optimization step to each intermediate solution.
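As an illustration (the paper's implementation is written in CLAIRE), the algebra can be represented as a small hierarchy of term classes, so that the examples above become ordinary data structures that an interpreter or a learning operator can traverse. The Python classes below are hypothetical; note that a third argument of TREE (the LDS discrepancy bound) is assumed where the example writes TREE(5,2).

from dataclasses import dataclass
from typing import Tuple

class Term: pass
class Build(Term): pass          # algorithms that create a solution
class Optimize(Term): pass       # algorithms that improve a solution

@dataclass
class INSERT(Build):
    i: int
@dataclass
class LDS(Build):
    i: int
    n: int
    l: int
@dataclass
class DO(Build):
    build: Build
    opt: Optimize
@dataclass
class FORALL(Build):
    lds: LDS
    opt: Optimize
@dataclass
class CHAIN(Optimize):
    n: int
    m: int
@dataclass
class TREE(Optimize):
    n: int
    m: int
    k: int
@dataclass
class LNS(Optimize):
    n: int
    h: int
    build: Build
@dataclass
class LOOP(Optimize):
    n: int
    opt: Optimize
@dataclass
class THEN(Optimize):
    opts: Tuple[Optimize, ...]

# The examples above, written as terms:
t1 = LDS(3, 3, 100)
t2 = DO(INSERT(2), CHAIN(80, 2))
t3 = FORALL(LDS(0, 4, 100), LOOP(3, TREE(5, 2, 2)))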
3.2 Evaluation
To evaluate the algorithms represented by the terms, we have defined a small interpreter to
apply the algorithm represented by the operator (a <Build> or an <Optimize>) to the current
problem (and solution for an <Optimize>). The metric for complexity that we use is the
number of calls to the insertion procedure. This is a reasonable metric since the CPU time is
roughly linear in the number of insertions and has the advantage that it is machine
independent and is easier to predict based on the structure of the term. To evaluate the quality
of a term, we run it on a set of test files and average the results. The generic objective
function is defined as the sum of the total lengths plus a penalty for the excess in the number
of routes over a pre-defined objective. In the rest of the paper, we report the number of
insertions and the average value of the objective function. When it is relevant (in order to
compare with other approaches) we will translate them into CPU time (in seconds) and (number of routes, travel) pairs.
In [CSL99] we used the algebra to test a number of hand-generated terms, in order to evaluate the contribution of ILO and of the four other search meta-heuristics, comparing algorithms both with and without each of the meta-heuristics. The results are summarized in
Table 1, which shows different examples of algebraic terms that represent hybrid
combinations of meta-heuristics. For instance, the first two terms (INSERT(3) and
INSERT(0)) are a simple comparison of the basic greedy insertion algorithm with and
without ILO. In a previous paper [CL99] we had demonstrated that applying 2- and 3-opt
moves (with a hill-climbing strategy) as a post-processing step was both much slower and
less effective than ILO. Here we tried a different set of post-optimization techniques, using
two different objective functions: the number of routes and the total travel time. In this
section, the result is the average for the 12 R1* Solomon benchmarks.
We measure the run-time of each algorithm by counting the number of calls to the insertion
sub-procedure. The ratio with CPU time is not really constant but this measure is independent
of the machine and makes for easier comparisons. For instance, on a Pentium III at 500 MHz, 1000 insertions (first term) translate into 0.08 s of CPU time and 100K insertions (4th term) translate into 8 s.
In order to experiment with different optimization goals, we have used the following
objective function:
f = (1 / |test_cases|) × Σ_{t ∈ test_cases} [ E × 1000000 × min(1, max(0, E − E_opt,t)) + D + Σ_{i ≤ N} d_i ]
Notice that we use E_opt,t to represent the optimal number of routes that is known for test case t. If we use E_opt,t = 25, we simply optimize the total travel time for the solution. The constant term (the sum of the durations) comes from using this formula to evaluate a partial solution. In the following table, we will use this function to compare different algorithms, but also report
the average number of routes when appropriate. Columns 2 and 3 correspond to minimizing
the number of trucks, whereas columns 4 and 5 correspond to minimizing the total travel time.
Term | Value (trucks objective) | # of insertions | Value (travel objective) | # of insertions
INSERT(0) | 14692316 | 1000 | 25293 | 1000
INSERT(3) | 13939595 | 1151 | 22703 | 1167
LDS(3,8,50) | 49789 (12.83) | 162K | 22314 | 182K
DO(LDS(4,3,100),CHAIN(80,2)) | 52652 (12.91) | 100K | 22200 | 100K
DO(LDS(3,3,100),LOOP(30,LNS(10,4,LDS(4,4,1000)))) | 41269 (12.66) | 99K | 22182 | 104K
FORALL(LDS(3,2,100),CHAIN(20,2)) | 54466 | 101K | 22190 | 100K
DO(INSERT(3),CHAIN(90,2)) | 10855525 | 97K | 22219 | 98K
DO(LDS(3,2,100),LOOP(2,TREE(40,2))) | 43564 | 90K | 22393 | 63K
DO(LDS(3,2,100),LOOP(6,CHAIN(25,2))) | 56010 | 101K | 22154 | 108K
SuccLNS = DO(LDS(3,0,100), THEN(LOOP(50,LNS(4,4,LDS(3,3,1000))), LOOP(40,LNS(6,4,LDS(3,3,1000))), LOOP(30,LNS(8,4,LDS(3,3,1000))), LOOP(20,LNS(10,4,LDS(3,3,1000))), LOOP(10,LNS(12,4,LDS(3,3,1000))))) | 40181 | 26K | 22066 | 50K
Table 1. A few hand-generated terms
In these preliminary experiments, LNS (coupled with LDS) dominated as the technique of choice, and ejection tree optimization works well on larger problems. Additional experiments in [CSL99] showed that the combination of these techniques with ILO worked better than LNS alone. We can also notice here that the different heuristics are more or less suited to different objective functions, which will become more obvious in the next section. We use the name "SuccLNS" to represent the last (large) term, which is the best from this table and was generated as a (simplified) representation of the strategy proposed in [Sha98], with the additional benefit of ILO.
4. A Learning Algorithm for discovering New Terms
4.1 Tools for learning
The primary tools for learning are the invention of new terms, the mutation of existing terms, and the crossing of existing terms with each other. Throughout learning, a pool of the best n terms is maintained (the pool size n is a fixed parameter of the experiment), from which new terms are created and to which they are added. The invention of new terms is defined by structural
induction from the grammar definition. The choice among the different subclasses (e.g. what
to pick when we need an <Optimize>) and the values for their parameters are made using a
random number generator and a pre-determined distribution. The result is that we can create
terms with an arbitrary complexity (there are no boundaries on the level of recursion, but the
invention algorithm terminates with probability 1). One of the key experimental parameters,
which are used to guide invention, is a bound on the complexity for the term (i.e., the
complexity goal). The complexity of a term can be estimated from its structure and only those
terms that satisfy the complexity goal will be allowed to participate in the pool of terms.
Complexity is an estimate of the number of insertions that will be made when running one of
the hybrid algorithms. For some terms, this number can be computed exactly, for others it is
difficult to evaluate precisely. However, since we use complexity as a guide when inventing a
new term, a crude estimate is usually enough. Here is the definition that we used.
• complexity(INSERT(i)) = 1000
• complexity(LDS(i,n,l)) = (if (i = 4) 6000 else 1000) × 2^n
• complexity(FORALL(t1,t2)) = complexity(t1) + complexity(t2)
• complexity(CHAIN(n,m)) = 1500 × n
• complexity(TREE(n,m,k)) = 600 × n × 2^k
• complexity(LNS(n,h,t)) = (n × complexity(t)) / 100
• complexity(DO(t1,t2)) = complexity(t1) + complexity(t2)
• complexity(THEN(t1,t2)) = complexity(t1) + complexity(t2)
• complexity(LOOP(n,t)) = n × complexity(t)
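A minimal Python sketch of this estimate, by structural induction over term objects such as those sketched in Section 3.1 (any representation exposing the same fields would do):

def complexity(t) -> float:
    name = type(t).__name__
    if name == "INSERT":
        return 1000
    if name == "LDS":
        return (6000 if t.i == 4 else 1000) * 2 ** t.n
    if name in ("DO", "FORALL"):
        first = t.build if hasattr(t, "build") else t.lds
        return complexity(first) + complexity(t.opt)
    if name == "CHAIN":
        return 1500 * t.n
    if name == "TREE":
        return 600 * t.n * 2 ** t.k
    if name == "LNS":
        return t.n * complexity(t.build) / 100
    if name == "THEN":
        return sum(complexity(o) for o in t.opts)
    if name == "LOOP":
        return t.n * complexity(t.opt)
    raise ValueError("unknown term: %r" % (t,))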
Mutation is also defined by structural induction according to two parameters. The first
parameter tells if we want a shallow modification, an average modification or a deep
modification. In the first case, the structure of the term does not change and the mutation only
changes the leaf constants that are involved in the term (integers). Moreover, only small
changes for these parameters are supported. In the second case, the type (class) does not
change, but large changes in the parameters are allowed and, with a given probability, some sub-terms can be replaced by terms of other classes. In the last case, a complete
substitution with a different term is allowed (with a given probability). The second parameter
gives the actual performance of the term, as measured in the experiment. The mutation
algorithm tries to adjust the term in such a way that the (real) complexity of the new term is
as close as possible to the global objective. This compensates for the imprecision of the complexity estimate quite effectively. The definition of the mutation operator is a key component of this approach. If mutation is too timid, the algorithm quickly gets stuck in local optima. The improvements shown in this paper compared to our earlier work [CSL99] are
largely due to a better tuning of the mutation operator. For instance, although we guide the
mutation according to the observed complexity (trying to raise or lower it), we randomly
decide (10% of the time) to ignore this guiding indication.
The complete description of the mutation operator is too long to be given in this paper, but
the following is the CLAIRE [CL96] method that we use to mutate an AND term. The two
parameters are respectively a Boolean that tells if the current term x is larger or smaller than
the complexity goal and an integer (i) which is the previously mentioned mutation level. The
AND object to which mutation is applied (x = AND(t1,t2)) has two slots, x.optim = t1 and
x.post = t2. We can notice that the default strategy is simply to recursively apply mutation to
t1 and t2, but we randomly select more aggressive strategies either to simplify the term or to
get a more complex one.
mutate(x:AND, s:boolean, i:integer) : Term
  -> let y := random(100 / i), y2 := random(100) in
       (if ((s & y2 > 20) | y > 90)
           (if (y < 10)          // try to get a more complex term
                THEN(x, invent(Optimizer))
            else if (y < 20) THEN(optim(x), LOOP(randomIn(3,10), post(x)))
            else if (y < 30) THEN(LOOP(randomIn(3,10), optim(x)), post(x))
            else                 // recursive application of mutation
                THEN(mutate(optim(x),s,i), mutate(post(x),s,i)))
        else                     // try to get a simpler term
           (if ((i = 3 & y < 50) | (i > 1 & y < 10)) optim(x)
            else if ((i = 3) | (i > 1 & y < 20)) post(x)
            else THEN(mutate(optim(x),s,i), mutate(post(x),s,i))))
Finally, crossover is also defined structurally and is similar to mutation. A crossover method is defined for crossing integers, for crossing each of the term classes, and for recursively crossing their components. A term can be crossed directly with another term or with one of its sub-components. The idea of crossover is to find an average, middle point between two values. For an integer, this is straightforward. For two terms from different classes, we pick a "middle" class based on the hierarchical representation of the algebra tree. When this makes sense, we use the sub-terms that are available to fill the newly generated class. For two terms from the same class, we apply the crossover operator recursively.
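A minimal Python sketch of this operator over dataclass-based terms is given below; the "middle class" rule for terms of different classes is replaced here by a crude random choice, which is an assumption of the sketch rather than the paper's heuristic.

import dataclasses, random

def cross_int(a: int, b: int) -> int:
    return (a + b) // 2                        # middle point between the two values

def cross_terms(x, y):
    if isinstance(x, int) and isinstance(y, int):
        return cross_int(x, y)
    if type(x) is type(y) and dataclasses.is_dataclass(x):
        # same class: recursively cross each component
        fields = {f.name: cross_terms(getattr(x, f.name), getattr(y, f.name))
                  for f in dataclasses.fields(x)}
        return type(x)(**fields)
    # different classes (or composite tuples of sub-terms): crude fallback
    return random.choice([x, y])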
The influence of genetic algorithms [Gol89][Ree93] in our learning loop is obvious, since we
use a pool of terms to which we apply crossover and Darwinian selection. However,
crossover is mixed with mutation, which is the equivalent of parallel randomized hill-climbing. "Hill-climbing" reflects the fact that we select the best terms at each iteration (cf. next section). "Randomized" indicates that the neighborhood of a term is very complex and we simply pick one neighbor at random. "Parallel" comes from the use of a pool to explore different paths simultaneously. The use of a mutation level index shows that we use three
neighborhood structures at the same time. We shall see in Section 5 that using a pure genetic
algorithm framework turned out to be more difficult to tune, which is why we are also
applying a hybrid approach at the learning level.
4.2 The Learning Loop
The learning process is performed as a series of iterations and works with a pool of terms (i.e., algorithms) of size M. During each iteration, terms are evaluated and the K best ones are selected, then mutated and crossed in different ways. Experimental parameters govern how many terms will be generated via mutation, crossover, and invention, along with how many of the very best ones from the previous generation will be kept (K). We shall see in Section 5.2
the relative influence of these different techniques. After completing all experiments, the best
terms are rated by running tests using different data sets and averaging over all runs. This
second step of evaluation, described as “more thorough” in Figure 3, is explained in the
following sub-section. The goal is to identify the best term from the pool in a more robust
(stable) manner.
The best term is always kept in the pool. For each of the K “excellent” terms, we apply
different levels of mutation (1,2 and 3), we then cross them to produce (K * (K – 1) / 2) new
terms and finally we inject a few invented terms (M – 1 – K * 3 – (K * (K – 1) / 2)). The new
pool replaces the old one and this process (the “learning loop”) is iterated I times.
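A minimal Python sketch of one such iteration is given below; evaluate, mutate, cross and invent are hypothetical stand-ins for the operators described in Section 4.1.

def learning_iteration(pool, M, K, evaluate, mutate, cross, invent):
    ranked = sorted(pool, key=evaluate)           # lower objective value is better
    excellent = ranked[:K]
    new_pool = [excellent[0]]                     # the best term is always kept
    for term in excellent:
        for level in (1, 2, 3):                   # shallow, average and deep mutation
            new_pool.append(mutate(term, level))
    for i in range(K):
        for j in range(i + 1, K):                 # K * (K - 1) / 2 crossovers
            new_pool.append(cross(excellent[i], excellent[j]))
    while len(new_pool) < M:                      # complete the pool with inventions
        new_pool.append(invent())
    return new_pool[:M]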
We have tried to increase the efficiency of the search by taking the distance to the best term
as a parameter for choice for the other “excellent” terms, but this was not successful. The use
of this distance to increase the diversity in the pool subset resulted in a lower ability to finely
explore the local neighborhood. The use of a tabu list is another direction that we are
currently exploring, but the difficulty is to define the semantics of this list in a world of
potentially infinite terms.
The number of iterations I is one of a number of parameters that need to be set for a series of experiments. Additional parameters include the pool size M and a set of parameters governing the breakdown of the pool (i.e., how many terms are generated via invention, mutation, and crossover, and how many are kept from the previous generation). In Section 5, we describe a set
of experiments aimed at determining “good” values for each of these parameters.
To avoid getting stuck in a local minimum, we do not run the learning loop for a very large number of iterations. When we started studying the stability of the results and the influence of the number of iterations, we noticed that the improvements obtained after 20 iterations are not really significant, while the possibility of getting stuck in a local minimum of poor quality is still high. Thus we decided to divide the set of N iterations of the previous learning loop as follows (we pick N = 50 to illustrate our point).
- We compute the number R of rounds with R = N / 12 (e.g., 50 / 12 = 4).
- For each round, we run the learning loop for 10 iterations, starting from a pool of randomly generated terms. We keep the best term of each round. (Note: if N is less than 24, only a single round is executed.)
- We create a new pool with these R best terms, completed with random inventions, and apply the learning loop for the remaining number of iterations (N - R * 10).
The following figure summarizes the structure of the learning algorithm.
[Figure 3 shows the learning algorithm: R rounds of 10 iterations each are run on an input pool of M terms built by randomized invention; each iteration evaluates the M terms, picks the K best, evaluates these K terms more thoroughly to keep the best one, and refills the pool through mutation, crossover and invention. The R best results seed the last round of N - 10R iterations, whose best term is the final result.]
Figure 3. The Learning Algorithm
This figure makes it clear that the algorithm is more complex than the preliminary version
that was presented in [CSL99]. The main reason for this complexity is the increase in
robustness. We shall see in Section 6.1 that this more sophisticated algorithm produces
better terms, but the most important improvement is the stability and statistical significance
of the results that will be shown in Section 5. In particular, we discuss the importance of the
training set in Section 5.4. We have used our approach in two types of situations: a machine
learning paradigm where the algorithm is trained on a data set and tested on a different data
set, and also a data mining paradigm where the algorithm is trained on the same set onto
which it is applied. Section 5 will also deal with the tuning of the learning loop parameters (N,
R, K, …).
4.3 Randomization and Further Tuning
Randomized algorithms such as LNS have a built-in instability, in the sense that different
runs will produce different results for the same algorithm. Thus it is difficult to compare
terms using one run only, even though the results are already averaged using a data set. A
better approach is to perform multiple runs for each term, but this is expensive and slows the
learning process quite a bit. The compromise that we make is that we first sort the whole pool
of terms using a single run to select the subset of "excellent" terms, and then run multiple
experiments on these selected terms to sort them more precisely. This is important to increase
the probability of actually selecting the best term of the pool.
Another issue is the minimization of the standard deviation associated with a randomized
algorithm. Once an algorithm (i.e., a term) incorporates a randomized technique, its
performance is not simply measured by the value of one run or the average of multiple runs,
but also by the standard deviation of the values during these multiple runs. In order to ensure
that we minimize this deviation as well as the average value, we enrich the semantics of the evaluation of the sub-pool of "excellent" terms. When we perform multiple runs on these selected terms, the final value associated with each term is the average of the worst value and
the average value. Thus, we discriminate against terms that have a high deviation.
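A minimal Python sketch of this re-evaluation (run_term is a hypothetical stand-in for one evaluation of a term over the data set):

def robust_score(term, run_term, runs=10):
    values = [run_term(term) for _ in range(runs)]
    worst = max(values)                    # the objective is minimized, so the worst run is the max
    mean = sum(values) / len(values)
    return (worst + mean) / 2              # penalizes terms with a high deviation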
The balance between minimizing the average and minimizing the standard deviation is problem-dependent, and the definition of a unique quality indicator only makes sense in a given industrial setting. For some problems such as on-line optimization, the standard deviation is
important since a poor solution translates into customer non-satisfaction. For some batch
problems, the goal is purely to save money and minimizing the average value is quite
sufficient. Thus, we report three numbers for each algorithm:
- The average value E(v), which represents the average of the value of the term produced by the algorithm over multiple learning experiments. If we define v as the valuation of the term t produced at the end of the learning loop, according to the definition given in Section 3.2, we have:
  E(v) = (1/L) × Σ_{i ≤ L} v_i
  assuming that we ran the learning process L times and obtained the values v_1, …, v_L.
- The standard deviation σ(v), which represents the standard deviation of the previously defined value v over the multiple learning runs:
  σ(v) = sqrt( (1/L) × Σ_{i ≤ L} v_i² − E(v)² )
- Last, we also measure the average standard deviation E(σ), since we recall that the terms produced by our learning algorithm represent randomized algorithms and that their value v_i is simply an average. More precisely, to produce a value for a term, we make 10 runs of the algorithm on each test file of the data set and we record both the average value v_i and the standard deviation σ_i. We may then define E(σ) to represent the average of this standard deviation, when multiple learning experiments are made:
  E(σ) = (1/L) × Σ_{i ≤ L} σ_i
Obviously, reporting a standard deviation falls short of characterizing a distribution.
However, our experience is that standard deviation is a reasonable first-order indicator for all
practical purposes. For instance, the difficulties that we experienced with the early approach
of [CSL99], which we report in the next section, did translate into a high standard deviation
and the reduction of E(σ) was indeed a symptom of the removal of the problem.
The three techniques that we use for creating new terms (invention, mutation and crossover)
have the potential to develop terms that are overly complex. The complexity bound that
guides each of these tools can prevent the creation of terms that are too expensive to evaluate.
However, it is still possible to generate terms, which meet the complexity bound, but which
are overly long and complex. Large terms are undesirable for two reasons: they are difficult
to read and they tend to get out of hand quickly (large terms crossed with large terms
generally produce even larger terms). For this reason, we have introduced a “diet” function,
which takes a bound on the physical size (i.e., the number of sub-terms and components) of the term. This function is quite simple: if the term is too large (too many sub-terms), the sub-terms are truncated at an arbitrary distance from the root (of the tree that describes the term).
5. Learning Characteristics
5.1 Convergence Rate and Stability
The algorithms defined by the terms have a number of randomness issues. As noted above,
algorithms such as LNS are randomized algorithms and hence, testing a term over a single
data set could yield different results. Also, the process of learning itself is randomized. To
deal with these issues, we must be careful to average each of our experiments over multiple
runs, recording the standard deviation over all runs, as explained in the previous section.
The learning process itself runs over a number of iterations. Showing that there is in fact a
“convergence” (i.e., the stability and quality of the results should improve as the number of
iterations grows) and determining the number of iterations that achieves the best tradeoff between processing time and producing the best results are two key issues. To determine this we have run a series of experiments on the same problem, ranging from 10 iterations to 50 iterations
of the learning loop. For each experiment we report the three measures: E(v), σ(v), and E(σ),
as explained earlier. We ran these experiments twice, using two different complexity goals of
respectively 50K (columns 2 to 4) and 200K insertions (columns 5 to 7). The goal here is to
minimize the number of trucks. The results are described in Table 2:
Goal          | 50K  |      |      | 200K |      |
              | E(v) | σ(v) | E(σ) | E(v) | σ(v) | E(σ)
10 iterations | 39706 | 5842 | 1092 | 34600 | 4679 | 1193
20 iterations | 34753 | 5433 | 775  | 31303 | 4040 | 740
30 iterations | 35020 | 4663 | 1426 | 31207 | 2090 | 755
40 iterations | 34347 | 4623 | 1142 | 30838 | 3698 | 866
50 iterations | 32173 | 2069 | 1217 | 29869 | 1883 | 862
Table 2. Influence of the number of iterations
A number of observations can be made from these results. We see that the quality of the
terms is good, since they are clearly better than those that were hand-generated. This point
will be further developed in the next section. We also see that the standard deviation of the
generated algorithm is fairly small, but tends to increase when the number of iterations rises.
This is due to the fact that better terms are making a heavier use of the randomized LNS. Last,
we see that the standard deviation of the average value is quite high, which means that our
learning algorithm is not very robust, even though it is doing a good job at finding terms.
It is interesting to notice that this aspect was even worse with the simpler version presented in
[CSL99], where we did not partition the set of iterations into rounds. In that case, while the
average value goes down with a higher number of iterations, the worst-case value is pretty
much constant, which translates into a standard deviation that increases with the number of
iterations.
In practice, the remedy to this instability is to run the algorithm with a much larger number of
iterations, or to take the best result of many separate runs, which is what we shall do in the
last section where we attempt to invent higher-quality terms.
5.2 Relative contribution of Mutation, crossover, and invention
The next set of experiments compares four different settings of our Learning Loop, each of which emphasizes more heavily one of the term generation techniques. All settings use the same pool
size, the difference comes from the way the pool is re-combined during each iteration:
- al2 is our default setting, the pool size is 16 terms, out of which 3 “excellent terms” are
selected, producing 9 mutated terms and 3 products of crossover, completed by 3 newly
invented terms.
- am2 is a setting that uses a sub-pool of 4 “excellent” terms, yielding 12 mutated terms, but
only one crossover (the two best terms) and 2 inventions.
- ag2 is a setting that uses the crossover more extensively, with only one mutation (of the best
term), but 10 crossovers produced from the 5 “excellent” terms.
- ai2 is a setting that uses 6 mutated terms and 1 crossover, leaving room for 8 invented
terms.
The next table reports our results using the 50K goal for term complexity and the same set of test data. We report results when the number of iterations is respectively set to 20 (one large round) and 40 (three rounds).
# of iterations | 20   |      |      | 40   |      |
                | E(v) | σ(v) | E(σ) | E(v) | σ(v) | E(σ)
al2 | 34753 | 5433 | 775  | 31303 | 4040 | 740
am2 | 34996 | 3778 | 1091 | 33832 | 2363 | 355
ag2 | 40488 | 5627 | 1060 | 36974 | 4557 | 1281
ai2 | 37209 | 4698 | 566  | 32341 | 2581 | 374
Table 3. Comparing Mutation, Crossover and Invention
We can see that the best approach is to use a combination of techniques (the al* family), which is what we have selected for the rest of the experiments.
These results also suggest that mutation is the strongest technique, which means that randomized hill-climbing is better suited to our learning problem than a regular genetic algorithm approach. This result is probably due to the inadequacy of our crossover operator. It should be noticed that when we introduced crossover, it brought a significant improvement before we refined our mutation operator as explained in Section 4.1 (i.e., at that time, al* was much better than am*). We also notice that invention works pretty well, given time (ai2 is a blend of mutation and invention). The conclusion is that the randomization of the "evolution" technique is very important, due to the size of the search space. A logical step, which will be explored in the future, is to use a more randomized crossover.
5.3 Number of Iterations versus Pool Size
There are two simple ways to improve the quality of the learning algorithm: we can increase
the number of iterations, or we can increase the size of the pool. The next set of experiments
compares 4 settings, which are roughly equivalent in terms of complexity but use different pool size vs. number-of-iterations compromises:
- al3, which is our regular setting (cf. al2) with 30 iterations (whereas al2 uses 20 iterations).
- ap0, with a smaller pool size (10), where only the two best terms are picked to be mutated and crossed. The number of iterations is raised to 50.
- ap1, with a pool size of 35, with 6 best terms chosen for mutation and 5 for crossovers. The number of iterations is reduced to 15.
- ap2, with a pool size of 24, with 5 best terms chosen for mutation and 4 for crossovers. The number of iterations is set to 20.
These experiments are made with two complexity objectives, respectively 50 000 and 200
000 insertions.
Goal | 50K  |      |      | 200K |      |
     | E(v) | σ(v) | E(σ) | E(v) | σ(v) | E(σ)
al3  | 35020 | 4663 | 1426 | 31207 | 2090 | 755
ap0  | 34745 | 3545 | 829  | 31735 | 1912 | 189
ap1  | 35101 | 4920 | 751  | 30205 | 1905 | 556
ap2  | 37122 | 4910 | 979  | 31625 | 2401 | 468
Table 4. Pool Size versus Number of Iterations
These results show that a compromise must indeed be found, since neither using a very large pool nor using a small one seems a good idea. Our experience is that a size around 20 seems the best trade-off, but this is based on our global bound on the number of iterations, which is itself limited by the total CPU time. Many of these experiments represent more than a full day of CPU time. When faster machines are available, one should re-examine the issue with a number of iterations in the hundreds.
5.4 Importance of training set
Finally, we need to study the importance of the training set. It is a well-known fact for
researchers who develop hybrid algorithms for combinatorial optimization that a large set of
benchmarks is necessary to judge the value of a new idea. For instance, in the world of jobshop scheduling, the set of ORB benchmarks is interesting because techniques that
significantly improve the resolution of one problem often degrade the resolution of others.
The same lesson applies here: if the data set is too small, the learning algorithm will discover
“tricks” of little value since they do not apply generally. To measure the importance of this
phenomenon, we have run the following experiments, using the same corpus of the 12 R1*
Solomon benchmarks:
• rc1: train on 12 data samples, and measure on the same 12 data samples
• rc2: train on the first 6, but measure on the whole set
• rc3: train on the 12 data samples, but measure only on the last 6 (reference point)
• rc4: train on the first 6, measure on the last 6
• rc5: train on last 6 and measure on the last 6.
We use our “al2” setting for the learning algorithm, with a complexity goal of 50K insertions.
    | E(v)  | σ(v) | E(σ)
rc1 | 37054 | 4154 | 1136
rc2 | 34753 | 5433 | 983
rc3 | 35543 | 3362 | 1636
rc4 | 34753 | 5433 | 1418
rc5 | 29933 | 4053 | 2015
Table 5a. Impact of the training set (I)
These results can be appreciated in different manners. On the one hand, it is clear that a large
training set is better than a smaller one, but this is more a robustness issue, as shown by the
degradation of the standard deviation with a smaller training set. Surprisingly, the average
value is actually better with a smaller subset. We also made experiments with a training set of
size 2 or 3, and the results were really not good, from which we concluded that there is a
minimal size around 5. On the other hand, the sensitivity to the training set may be a
desirable feature, and the degradation from learning with 6 vs. 12 data samples is not very
large. This is even true in the worst case (rc4) where the term is trained from different
samples than those that are used to test the result. The only result that is significantly
different is rc5, which corresponds to the case where we allow the algorithm to discover
“tricks” that will only work for a few problems. The gain is significant, but the algorithm is
less robust. The other four results are quite similar.
The following table shows the results obtained with training and testing respectively on the
R1*, C1* and RC1* data sets from the Solomon benchmarks. We added a fourth line with a
term that represents our "hand-tuned" hybrid algorithm, which we use as a reference point to
judge the quality of the invented terms.
evaluation →    | R1*   | C1*   | RC1*
train on R1*    | 41555 | 98267 | 65369
train on C1*    | 61238 | 98266 | 79128
train on RC1*   | 42504 | 98266 | 66167
hand-tuned term | 50442 | 98297 | 78611
Table 5b. Impact of the training set (II)
If we put the differences into perspective using the reference algorithm, we see that the learning algorithm is not overly sensitive to the training set. With the exception of the C1* data set, which contains very specific problems with clusters, the algorithms found by training on R1* and RC1* each do well on the other data set. In the experiments reported
previously in this section, we have used the same set for training and evaluating. For
industrial problems, for which we have hundreds of sample, we can afford to use separate
sets for training and evaluation, but the results confirm that the differences between a proper
machine learning setting (different sets) and a “data mining” setting (same set) are not
significant.
Our conclusion is that we find the learning algorithm to be reasonably robust with respect to
the training set, and the ability to exploit specific features of the problem is precisely what
makes this approach interesting for an industrial application. For instance, a routing
algorithm for garbage pick-up may work on problems that have a specific type of graph, and
for which some meta-heuristic combinations are well suited, while they are less relevant for
the general case. The goal of this work is precisely to take this specificity into account.
6. Learning Experiments
6.1 Hand-crafted vs. discovered terms
The first experiment that we can make is to check whether the terms that are produced by the
learning algorithm are indeed better than those we could produce directly using the algebra as a
component box. Here we try to build a term with a complexity of approximately 50Ki (which
translates into a run-time of 2s on a PIII-500MHz), which minimizes the total travel time.
The data set is the whole R1* set. In the following table, we compare:
- four terms that were hand-generated in an honest effort to produce a good contender.
- the term SuccLNS that we have shown in Table 1 to be the best among our introduction
examples, which tries to emulate the LNS strategy of [Sha98].
- the term (Ti1) that we presented in [CSL99], that is the output of our first version of the
learning algorithm.
- the term (Ti2) that is obtained with the new learning algorithm with the al2 settings and
40 iterations.
Term | Objective | Run-time (i)
LDS(3,5,0) | 22669 | 1.3Ki
DO(LDS(3,3,100), LOOP(8,LNS(10,4,LDS(4,4,1000)))) | 22080 | 99Ki
DO(LDS(3,2,100), TREE(20,2)) | 22454 | 23Ki
FORALL(LDS(3,2,100), CHAIN(8,2)) | 22310 | 59Ki
SuccLNS (cf. Table 1) | 21934 | 59Ki
Ti1: FORALL(LDS+(0,2,2936,CHAIN(1,1)), LOOP(48,LOOP(4, LNS(3,16,LDS(3,1,287))))) | 21951 (invented [CSL99]) | 57Ki
Ti2: DO(INSERT(0), THEN(LOOP(22,THEN(LOOP(26,LNS(6,22,LDS(3,1,196))), LNS(5,23,LDS(2,1,207)))), LNS(4,26,LDS(1,1,209)))) | 21880 (invented!) | 57Ki
Table 6. Inventing new terms (travel minimization)
These results are quite good, since not only is the newly invented term clearly better than the hand-generated examples, but it is even better than the complex SuccLNS term, which is the implementation with our algebra of a strategy found in [Sha98], itself the result of careful tuning.
The term shown in the table is the best term found in 5 runs. The average value for term Ti2 was 21980, which is still much better than the hand-generated terms. It is also interesting to note that the results of the algorithms have been improved significantly by the tuning of the mutation operator and the introduction of a new control strategy for the Learning Loop, as explained in Section 4.
6.2 Changing the Objective function
We now report a different set of experiments where we have changed the objective function.
The goal is now to minimize the number of trucks (and then to minimize travel for a given
number of trucks). We compare the same set of terms, together with two new terms that are
invented using this different objective function:
- the term (Ti3) that we presented in [CSL99], that is the output of our first version of the
learning algorithm.
-
the term (Ti4) that is obtained with the new learning algorithm with the al2 settings and
40 iterations.
Term | Objective | Run-time (i)
LDS(3,5,0) | 68872 | 1.3Ki
DO(LDS(3,3,100), LOOP(8,LNS(10,4,LDS(4,4,1000)))) | 47000 | 99Ki
DO(LDS(3,2,100), TREE(20,2)) | 47098 | 23Ki
FORALL(LDS(3,2,100), CHAIN(8,2)) | 54837 | 59Ki
SuccLNS (cf. Table 1) | 40603 | 59Ki
Ti2 (cf. Table 6) | 525000 | 59Ki
Ti3: DO(LDS(2,5,145), THEN(CHAIN(5,2),TREE(9,2,2))) | 36531 (invented [CSL99]) | 57Ki
Ti4: DO(LDS(2,4,178), THEN(LOOP(3,LOOP(81,LNS(7,2,LDS(2,1,534)))), LNS(3,7,LDS(2,1,534))), LOOP(3,LOOP(83,LNS(4,3,LDS(2,1,534)))), LNS(3,25,LDS(0,2,983))))) | 28006 (invented!) | 57Ki
Table 7. Inventing new terms (truck minimization)
These results are even more interesting, since the newly invented term is much better than the other ones, showing a very significant improvement since [CSL99]. The fact that the strategy used to build SuccLNS was proposed to minimize the number of trucks makes the difference between the terms quite surprising. The average number of routes corresponding to Ti4 is 12.33, which is excellent, as we shall discuss in the next section. The term Ti4 is the best found in a set of 5 experiments and is not totally representative, since the average value over these runs was 32600, which is still much better than the hand-generated terms.
It is interesting to notice that the term Ti3 is not good either as far as travel optimization is concerned (the objective function strongly amplifies any decrease in the quality of the solution, because the value of an ideal solution is subtracted from it). The specialization of the term according to the objective function is thus shown to be an important feature. This is even more so in an industrial application context, where the objective function also includes the satisfaction of soft constraints such as operator preferences. These constraints are business-dependent and they change rapidly over the years.
6.3 Changing the complexity goal
Here we report the results found with different complexity goals, ranging from 50K to 1200K
insertions. These experiments were made on a PIII-800MHz machine.
Complexity goal         50K      200K     600K     1200K
Value                   28006    29732    26000    24459
Equivalent # of trucks  12.33    12.44    12.26    12.25
Complexity              56K      211K     460K     1200K
Equivalent run-time     2s       10s      20s      40s

Table 8. Inventing new terms
These results are impressive, especially given the limited amount of CPU time. In a previous paper, using the best combination of LNS and ILO and restricting ourselves to 5 minutes of CPU time, we obtained an average number of routes of 12.31 on the R1* set of Solomon benchmarks, whereas [TGB95] and [Sha98] respectively got 12.64 and 12.45, using 10 minutes for [Sha98] or 30 minutes for [TGB95] (the adjustment in CPU time is due to the difference in hardware). Here we obtain a much better solution (12.25 routes on R1*) in less than 1 minute of CPU time. This is even better than what was previously obtained when the available CPU time was increased by a factor of 3: 12.39 [Sha98] and 12.33 [TGB95]. This clearly shows that the algorithms found by our learning method produce state-of-the-art results.
The improvement with respect to [CSL99] also holds on the RC1* set, since the term obtained with complexity 600K has an average number of routes of 11.875 in 30s of CPU time, whereas our best strategy in [CSL99] obtained 12.0 routes in 5 minutes and whereas [TGB95] and [Sha98] respectively got 12.08 and 12.05.
6.4 Future Directions
This paper has shown that this learning method produces precise tuning for hybrid algorithms applied to medium-sized problems and short to medium run-times (seconds or minutes). Depending on the parameters and the size of the data set, the ratio between the learning loop and the final algorithm run-times varies from 1000 to 10000, with a practical value of at least 10000 to get stable and meaningful results. For larger problems for which a longer run-time is expected, this is a limiting factor. For instance, we use a hybrid algorithm for the ROADEF challenge [Roa01] that can indeed be described with our algebra. However, the target run-time proposed in the challenge is one hour, which translates into more than a year for the learning process. Needless to say, we could not apply this approach and used manual tuning, which produced very competitive results [Roa01].
Thus, we either need to find a way to train on smaller data sets and running times, or to shorten the training cycle. The first direction is difficult: we found that going from medium-sized VRPs to larger ones seemed to preserve the relative ordering of hybrid algorithms, but this is not true of the frequency assignment problems given in the ROADEF challenge. On the other hand, using techniques found for smaller problems is still a good idea, even though parameters need to be re-tuned. Thus we plan to focus on “warm start” loops, trying to cut the number of cycles by feeding the loop with terms that are better than random inventions.
Our second direction for future work is the extension of the algebra. There are two operators that are natural to add in our framework. First, we may add a MIN(n,t) operator that applies a build-or-optimize term t n times and selects the best iteration. This is similar to LOOP(n,t), except that LOOP applies t recursively to the current solution, whereas MIN applies it n times “in parallel”. The other extension is the introduction of a simple form of tabu search [Glo86] as one of the meta-heuristics in the “tool box”, which could be used instead of LNS. The interest of a tabu search is to explore a large number of simple moves, as opposed to LNS, which explores a small number of complex moves. In our framework, the natural tabu exploration is to apply push and pull as unit moves.
7. Conclusion
We presented an approach for learning hybrid algorithms that can be expressed as a
combination of meta-heuristics. Although these experiments were only applied to the field of
Vehicle Routing with Time Windows, hybrid algorithms have shown very strong results in
many fields of practical combinatorial optimization. Thus, we believe that the technique that
we propose here has a wide range of applicability. We showed how to represent a hybrid
program with a term from an algebra and presented a learning algorithm that discovers good
algorithms that can be expressed with this algebra. We showed that this learning algorithm is
stable and robust, and produces terms that are better than those that a user would produce
through manual tuning.
The idea of learning new optimization algorithms is not new. One of the most interesting approaches is the work of S. Minton [Min97], which has inspired our own work. However, we differ through the use of a rich program algebra, which is itself inspired by the work on SALSA [LC98], and which enables us to go much further in the scope of the invention. The result is that we can create terms that truly represent state-of-the-art algorithms in a very competitive field. The invention in [Min97] is mostly restricted to the choice of a few heuristics and control parameters in a program template. Although it was already shown that learning does a better job than hand tuning in such a case, we think that the application of learning to a hybrid program algebra is a major contribution to this field. The major contribution of this paper is to provide a robust and stable algorithm, which happens to be significantly better than our preliminary work in [CSL99].
This work on automated learning of hybrid algorithms is significant because we found that the combination of heuristics is a difficult task. This may not be a problem for solving well-established hard problems such as benchmarks, but it makes using hybrid algorithms for industrial applications less appealing. The hardware on which these algorithms run is changed every 2 or 3 years for machines that are twice as fast; the objective function of the optimization task changes each time a new soft constraint is added, which happens on a continuous basis. For many businesses, the size of the problems is also evolving rapidly as they need to service more customers. All these observations mean that the optimization algorithm needs to be maintained regularly by experienced developers, or it will rapidly fall quite far behind the state-of-the-art (which is what happens most of the time, according to our own industrial experience). Our goal is to incorporate our learning algorithm into optimization applications as a self-adjusting feature.
To achieve this ambitious goal, more work is still needed. First, we need to demonstrate the true applicability of this work to other domains. We are planning to evaluate our approach
on another (large) routing problem and on a complex bin-packing problem. We also need to
continue working on the learning algorithm to make sure that it is not wasting its computing
time and that similar or even better terms could not be found with a different approach.
Another promising future direction is the use of a library of terms and/or relational clichés.
By collecting the best terms from learning and using them as a library of terms and/or
deriving a set of term abstractions like the clichés of [SP91], it will be possible to bootstrap
the learning algorithm with a tool box of things that have worked in the past.
Acknowledgments
The authors are thankful to Jon Kettenring for pointing out the importance of a statistical
analysis on the learning process. Although this task is far from complete, many insights have
been gained through the set of systematic experiments that are presented here.
The authors are equally thankful to the anonymous reviewers who provided numerous
thought-provoking remarks and helped us to enrich considerably the content of this paper.
References
[CL95] Y. Caseau, F. Laburthe. Disjunctive Scheduling with Task Intervals. LIENS
Technical Report 95-25, École Normale Supérieure, France, 1995
[CL96] Y. Caseau, F. Laburthe. Introduction to the Claire Programming Language. LIENS
Technical Report 96-15, Ecole Normale Supérieure, 1996.
[CL97] Y. Caseau, F. Laburthe. Solving small TSPs with Constraints. Proc. of the 14th
International Conference on Logic Programming, The MIT Press, 1997.
[CL99] Y. Caseau, F. Laburthe. Heuristics for Large Constrained Vehicle Routing Problems,
Journal of Heuristics 5 (3), Kluwer, Oct. 1999.
[CSL99] Y. Caseau, G. Silverstein, F. Laburthe. A Meta-Heuristic Factory for Vehicle
Routing Problems, Proc. of the 5th Int. Conference on Principles and Practice of Constraint
Programming CP’99, LNCS 1713, Springer, 1999.
[Glo86] F. Glover. Future paths for integer programming and links to artificial intelligence.
Computers and Operations Research, vol. 5, p. 533-549, 1986.
[Gol89] D.E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning.
Addison-Wesley, 1989.
[GHL94] M. Gendreau, A. Hertz, G. Laporte. A Tabu Search Heuristic for the Vehicle
Routing Problem, Management Science, vol. 40, p. 1276-1290, 1994.
[HG95] W. Harvey, M. Ginsberg. Limited Discrepancy Search, Proceedings of the 14th
IJCAI, p. 607-615, Morgan Kaufmann, 1995.
[KB95] G. Kontoravdis, J. Bard. A GRASP for the Vehicle Routing Problem with Time
Windows, ORSA Journal on Computing, vol. 7, N. 1, 1995.
[Lap92] G. Laporte. The Vehicle Routing Problem: an overview of Exact and Approximate
Algorithms, European Journal of Operational Research 59, p. 345-358, 1992.
[LC98] F. Laburthe, Y. Caseau. SALSA: A Language for Search Algorithms, Proc. of
Constraint Programming’98, M.Maher, J.-F. Puget eds., Springer, LNCS 1520, p.310-324,
1998.
[Min97] S. Minton. Configurable Solvers: Tailoring General Methods to Specific
Applications, Proc. of Constraint Programming, G. Smolka ed., Springer, LNCS 1330, p. 372-374, 1997.
[Ree93] C. Reeves (ed.). Modern Heuristic Techniques for Combinatorial Problems. Halsted
Press, Wiley, 1993.
[RR96] C. Rego, C. Roucairol. A Parallel Tabu Search Algorithm Using Ejection Chains for
the Vehicle Routing Problem, in Meta-Heuristics: Theory and Applications, Kluwer, 1996.
[RGP99] L.-M. Rousseau, M. Gendreau, G. Pesant. Using Constraint-Based Operators with
Variable Neighborhood Search to solve the Vehicle Routing Problem with Time Windows,
CP-AI-OR’99 workshop, Ferrara, February 1999.
[Roa01] The 2001 ROADEF Challenge. http://www.roadef.org.
[Rus95] R. Russell. Hybrid Heuristics for the Vehicle Routing Problem with Time Windows,
Transportation Science, vol. 29, n. 2, may 1995.
[SP91] G. Silverstein, M. Pazzani. Relational clichés: Constraining constructive induction
during relational learning, Machine Learning Proceedings of the Eighth International
Workshop (ML91), p. 203-207, Morgan Kaufmann 1991.
[Sol87] M. Solomon. Algorithms for the Vehicle Routing and Scheduling Problems with
Time Window Constraints, Operations Research, vol. 35, n. 2, 1987.
[Sha98] P. Shaw. Using Constraint Programming and Local Search Methods to Solve Vehicle
Routing Problems, Principles and Practice of Constraint Programming, proceedings of CP’98,
LNCS 1520, Springer, 1998.
[TGB95] E. Taillard, P. Badeau, M. Gendreau, F. Guertain, J.-Y. Rousseau. A New
Neighborhood Structure for the Vehicle Routing Problem with Time Windows, technical
report CRT-95-66, Université de Montréal, 1995.
Packrat Parsing:
Simple, Powerful, Lazy, Linear Time
Functional Pearl
Bryan Ford
arXiv:cs/0603077v1 [] 18 Mar 2006
Massachusetts Institute of Technology
Cambridge, MA
[email protected]
Abstract
Packrat parsing is a novel technique for implementing parsers in a
lazy functional programming language. A packrat parser provides
the power and flexibility of top-down parsing with backtracking and
unlimited lookahead, but nevertheless guarantees linear parse time.
Any language defined by an LL(k) or LR(k) grammar can be recognized by a packrat parser, in addition to many languages that
conventional linear-time algorithms do not support. This additional
power simplifies the handling of common syntactic idioms such as
the widespread but troublesome longest-match rule, enables the use
of sophisticated disambiguation strategies such as syntactic and semantic predicates, provides better grammar composition properties,
and allows lexical analysis to be integrated seamlessly into parsing.
Yet despite its power, packrat parsing shares the same simplicity
and elegance as recursive descent parsing; in fact converting a backtracking recursive descent parser into a linear-time packrat parser
often involves only a fairly straightforward structural change. This
paper describes packrat parsing informally with emphasis on its use
in practical applications, and explores its advantages and disadvantages with respect to the more conventional alternatives.

Categories and Subject Descriptors
D.3.4 [Programming Languages]: Processors—Parsing; D.1.1 [Programming Techniques]: Applicative (Functional) Programming; F.4.2 [Mathematical Logic and Formal Languages]: Grammars and Other Rewriting Systems—Parsing

General Terms
Languages, Algorithms, Design, Performance

Keywords
Haskell, memoization, top-down parsing, backtracking, lexical analysis, scannerless parsing, parser combinators

1 Introduction

There are many ways to implement a parser in a functional programming language. The simplest and most direct approach is
top-down or recursive descent parsing, in which the components
of a language grammar are translated more-or-less directly into a
set of mutually recursive functions. Top-down parsers can in turn
be divided into two categories. Predictive parsers attempt to predict what type of language construct to expect at a given point by
“looking ahead” a limited number of symbols in the input stream.
Backtracking parsers instead make decisions speculatively by trying different alternatives in succession: if one alternative fails to
match, then the parser “backtracks” to the original input position
and tries another. Predictive parsers are fast and guarantee linear-time parsing, while backtracking parsers are both conceptually simpler and more powerful but can exhibit exponential runtime.
This paper presents a top-down parsing strategy that sidesteps the choice between prediction and backtracking. Packrat parsing provides the simplicity, elegance, and generality of the backtracking model, but eliminates the risk of super-linear parse time, by saving all intermediate parsing results as they are computed and ensuring that no result is evaluated more than once. The theoretical foundations of this algorithm were worked out in the 1970s [3, 4], but the linear-time version was apparently never put in practice due to the limited memory sizes of computers at that time. However, on modern machines the storage cost of this algorithm is reasonable for many applications. Furthermore, this specialized form of memoization can be implemented very elegantly and efficiently in modern lazy functional programming languages, requiring no hash tables or other explicit lookup structures. This marriage of a classic but neglected linear-time parsing algorithm with modern functional programming is the primary technical contribution of this paper.
Packrat parsing is unusually powerful despite its linear time guarantee. A packrat parser can easily be constructed for any language
described by an LL(k) or LR(k) grammar, as well as for many languages that require unlimited lookahead and therefore are not LR.
This flexibility eliminates many of the troublesome restrictions imposed by parser generators of the YACC lineage. Packrat parsers
are also much simpler to construct than bottom-up LR parsers, making it practical to build them by hand. This paper explores the
manual construction approach, although automatic construction of
packrat parsers is a promising direction for future work.
Additive  ← Multitive ‘+’ Additive | Multitive
Multitive ← Primary ‘*’ Multitive | Primary
Primary   ← ‘(’ Additive ‘)’ | Decimal
Decimal   ← ‘0’ | . . . | ‘9’

Figure 1. Grammar for a trivial language

A packrat parser can directly and efficiently implement common disambiguation rules such as longest-match, followed-by, and not-followed-by, which are difficult to express unambiguously in a context-free grammar or implement in conventional linear-time parsers. For example, recognizing identifiers or numbers during
lexical analysis, parsing if-then-else statements in C-like languages, and handling do, let, and lambda expressions in Haskell
inherently involve longest-match disambiguation. Packrat parsers
are also more easily and naturally composable than LR parsers,
making them a more suitable substrate for dynamic or extensible
syntax [1]. Finally, both lexical and hierarchical analysis can be
seamlessly integrated into a single unified packrat parser, and lexical and hierarchical language features can even be blended together,
so as to handle string literals with embedded expressions or literate
comments with structured document markup, for example.
The main disadvantage of packrat parsing is its space consumption.
Although its asymptotic worst-case bound is the same as those of
conventional algorithms—linear in the size of the input—its space
utilization is directly proportional to input size rather than maximum recursion depth, which may differ by orders of magnitude.
However, for many applications such as modern optimizing compilers, the storage cost of a packrat parser is likely to be no greater than
the cost of subsequent processing stages. This cost may therefore
be a reasonable tradeoff for the power and flexibility of linear-time
parsing with unlimited lookahead.
The rest of this paper explores packrat parsing with the aim of providing a pragmatic sense of how to implement it and when it is
useful. Basic familiarity with context-free grammars and top-down
parsing is assumed. For brevity and clarity of presentation, only
small excerpts of example code are included in the text. However,
all of the examples described in this paper are available, as complete
and working Haskell code, at:
http://pdos.lcs.mit.edu/~baford/packrat/icfp02
The paper is organized as follows. Section 2 introduces packrat
parsing and describes how it works, using conventional recursive
descent parsing as a starting point. Section 3 presents useful extensions to the basic algorithm, such as support for left recursion,
lexical analysis, and monadic parsing. Section 4 explores in more
detail the recognition power of packrat parsers in comparison with
conventional linear-time parsers. Section 5 discusses the three main
practical limitations of packrat parsing: determinism, statelessness,
and space consumption. Section 6 presents some experimental results to demonstrate the practicality of packrat parsing for real languages. Section 7 discusses related work, Section 8 points out directions for future exploration, and Section 9 concludes.
2 Building a Parser
Packrat parsing is essentially a top-down parsing strategy, and as
such packrat parsers are closely related to recursive descent parsers.
For this reason, we will first build a recursive descent parser for a
trivial language and then convert it into a packrat parser.
2.1 Recursive Descent Parsing
Consider the standard approach for constructing a recursive descent
parser for a grammar such as the trivial arithmetic expression lan-
guage shown in Figure 1. We define four functions, one for each
of the nonterminals on the left-hand sides of the rules. Each function takes the string to be parsed, attempts to recognize some
prefix of the input string as a derivation of the corresponding nonterminal, and returns either a “success” or “failure” result. On success,
the function returns the remainder of the input string immediately
following the part that was recognized, along with some semantic
value computed from the recognized part. Each function can recursively call itself and the other functions in order to recognize the
nonterminals appearing on the right-hand sides of its corresponding
grammar rules.
To implement this parser in Haskell, we first need a type describing
the result of a parsing function:
data Result v = Parsed v String
| NoParse
In order to make this type generic for different parse functions producing different kinds of semantic values, the Result type takes a
type parameter v representing the type of the associated semantic
value. A success result is built with the Parsed constructor and
contains a semantic value (of type v) and the remainder of the input
text (of type String). A failure result is represented by the simple value NoParse. In this particular parser, each of the four parse
functions takes a String and produces a Result with a semantic
value of type Int:
pAdditive  :: String -> Result Int
pMultitive :: String -> Result Int
pPrimary   :: String -> Result Int
pDecimal   :: String -> Result Int
The definitions of these functions have the following general structure, directly reflecting the mutual recursion expressed by the grammar in Figure 1:
pAdditive  s = ... (calls itself and pMultitive) ...
pMultitive s = ... (calls itself and pPrimary) ...
pPrimary   s = ... (calls pAdditive and pDecimal) ...
pDecimal   s = ...
For example, the pAdditive function can be coded as follows, using only primitive Haskell pattern matching constructs:
-- Parse an additive-precedence expression
pAdditive :: String -> Result Int
pAdditive s = alt1 where

    -- Additive <- Multitive '+' Additive
    alt1 = case pMultitive s of
             Parsed vleft s' ->
               case s' of
                 ('+':s'') ->
                   case pAdditive s'' of
                     Parsed vright s''' ->
                       Parsed (vleft + vright) s'''
                     _ -> alt2
                 _ -> alt2
             _ -> alt2

    -- Additive <- Multitive
    alt2 = case pMultitive s of
             Parsed v s' -> Parsed v s'
             NoParse -> NoParse
To compute the result of pAdditive, we first compute the value of
alt1, representing the first alternative for this grammar rule. This
alternative in turn calls pMultitive to recognize a multiplicativeprecedence expression. If pMultitive succeeds, it returns the semantic value vleft of that expression and the remaining input s’
following the recognized portion of input. We then check for a
‘+’ operator at position s’, which if successful produces the string
s’’ representing the remaining input after the ‘+’ operator. Finally,
we recursively call pAdditive itself to recognize another additiveprecedence expression at position s’’, which if successful yields
the right-hand-side result vright and the final remainder string
s’’’. If all three of these matches were successful, then we return as the result of the initial call to pAdditive the semantic value
of the addition, vleft + vright, along with the final remainder
string s’’’. If any of these matches failed, we fall back on alt2,
the second alternative, which merely attempts to recognize a single
multiplicative-precedence expression at the original input position
s and returns that result verbatim, whether success or failure.
The other three parsing functions are constructed similarly, in direct
correspondence with the grammar. Of course, there are easier and
more concise ways to write these parsing functions, using an appropriate library of helper functions or combinators. These techniques
will be discussed later in Section 3.3, but for clarity we will stick to
simple pattern matching for now.
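To give a concrete sense of the pattern, here is one way two of the remaining functions might look. This is only a sketch of ours, consistent with the grammar in Figure 1 and the Result type above; the text itself leaves these functions to the reader.

-- Sketch (not from the text): pDecimal recognizes a single digit;
-- pPrimary tries the parenthesized alternative first and falls back
-- on a decimal digit.
pDecimal :: String -> Result Int
pDecimal (c:s') | c >= '0' && c <= '9' = Parsed (fromEnum c - fromEnum '0') s'
pDecimal _                             = NoParse

pPrimary :: String -> Result Int
pPrimary s = alt1 where

    -- Primary <- '(' Additive ')'
    alt1 = case s of
             ('(':s') ->
               case pAdditive s' of
                 Parsed v s'' ->
                   case s'' of
                     (')':s''') -> Parsed v s'''
                     _ -> alt2
                 _ -> alt2
             _ -> alt2

    -- Primary <- Decimal
    alt2 = pDecimal s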
2.2 Backtracking Versus Prediction
The parser developed above is a backtracking parser. If alt1 in
the pAdditive function fails, for example, then the parser effectively “backtracks” to the original input position, starting over with
the original input string s in the second alternative alt2, regardless of whether the first alternative failed to match during its first,
second, or third stage. Notice that if the input s consists of only
a single multiplicative expression, then the pMultitive function
will be called twice on the same string: once in the first alternative,
which will fail while trying to match a nonexistent ‘+’ operator,
and then again while successfully applying the second alternative.
This backtracking and redundant evaluation of parsing functions
can lead to parse times that grow exponentially with the size of the
input, and this is the principal reason why a “naive” backtracking
strategy such as the one above is never used in realistic parsers for
inputs of substantial size.
The standard strategy for making top-down parsers practical is to
design them so that they can “predict” which of several alternative rules to apply before actually making any recursive calls. In
this way it can be guaranteed that parse functions are never called
redundantly and that any input can be parsed in linear time. For example, although the grammar in Figure 1 is not directly suitable for
a predictive parser, it can be converted into an LL(1) grammar, suitable for prediction with one lookahead token, by “left-factoring”
the Additive and Multitive nonterminals as follows:
Additive        ← Multitive AdditiveSuffix
AdditiveSuffix  ← ‘+’ Additive | ε
Multitive       ← Primary MultitiveSuffix
MultitiveSuffix ← ‘*’ Multitive | ε
Now the decision between the two alternatives for AdditiveSuffix
can be made before making any recursive calls simply by checking whether the next input character is a ‘+’. However, because the
prediction mechanism only has “raw” input tokens (characters in
this case) to work with, and must itself operate in constant time,
the class of grammars that can be parsed predictively is very restrictive. Care must also be taken to keep the prediction mechanism
consistent with the grammar, which can be difficult to do manually and highly sensitive to global properties of the language. For
example, the prediction mechanism for MultitiveSuffix would have
to be adjusted if a higher-precedence exponentiation operator ‘**’
was added to the language; otherwise the exponentiation operator
would falsely trigger the predictor for multiplication expressions
and cause the parser to fail on valid input.
Some top-down parsers use prediction for most decisions but fall
back on full backtracking when more flexibility is needed. This
strategy often yields a good combination of flexibility and performance in practice, but it still suffers the additional complexity of
prediction, and it requires the parser designer to be intimately aware
of where prediction can be used and when backtracking is required.
2.3 Tabular Top-Down Parsing
As pointed out by Birman and Ullman [4], a backtracking top-down
parser of the kind presented in Section 2.1 can be made to operate in
linear time without the added complexity or constraints of prediction. The basic reason the backtracking parser can take super-linear
time is because of redundant calls to the same parse function on the
same input substring, and these redundant calls can be eliminated
through memoization.
Each parse function in the example is dependent only on its single parameter, the input string. Whenever a parse function makes
a recursive call to itself or to another parse function, it always supplies either the same input string it was given (e.g., for the call by
pAdditive to pMultitive), or a suffix of the original input string
(e.g., for the recursive call by pAdditive to itself after matching a
‘+’ operator). If the input string is of length n, then there are only
n + 1 distinct suffixes that might be used in these recursive calls,
counting the original input string itself and the empty string. Since
there are only four parse functions, there are at most 4(n + 1) distinct intermediate results that the parsing process might require.
We can avoid computing any of these intermediate results multiple
times by storing them in a table. The table has one row for each of
the four parse functions and one column for each distinct position
in the input string. We fill the table with the results of each parse
function for each input position, starting at the right end of the input
string and working towards the left, column by column. Within
each column, we start from the bottommost cell and work upwards.
By the time we compute the result for a given cell, the results of all
would-be recursive calls in the corresponding parse function will
already have been computed and recorded elsewhere in the table;
we merely need to look up and use the appropriate results.
Figure 2 illustrates a partially-completed result table for the input string ‘2*(3+4)’. For brevity, Parsed results are indicated as
(v,c), where v is the semantic value and c is the column number at
which the associated remainder suffix begins. Columns are labeled
C1, C2, and so on, to avoid confusion with the integer semantic values. NoParse results are indicated with an X in the cell. The next
cell to be filled is the one for pPrimary at column C3, indicated
with a circled question mark.

[Figure 2. Matrix of parsing results for the string ‘2*(3+4)’: one row for each parse function (pAdditive, pMultitive, pPrimary, pDecimal) plus the raw input characters, and one column (C1–C8) for each input position; filled cells hold (value, remainder-column) pairs or X for NoParse.]
The rule for Primary expressions has two alternatives: a parenthesized Additive expression or a Decimal digit. If we try the alternatives in the order expressed in the grammar, pPrimary will first
check for a parenthesized Additive expression. To do so, pPrimary
first attempts to match an opening ‘(’ in column C3, which succeeds and yields as its remainder string the input suffix starting at
column C4, namely ‘3+4)’. In the simple recursive-descent parser
pPrimary would now recursively call pAdditive on this remainder string. However, because we have the table we can simply look
up the result for pAdditive at column C4 in the table, which is
(7,C7). This entry indicates a semantic value of 7—the result of
the addition expression ‘3+4’—and a remainder suffix of ‘)’ starting in column C7. Since this match is a success, pPrimary finally
attempts to match the closing parenthesis at position C7, which succeeds and yields the empty string C8 as the remainder. The result
entered for pPrimary at column C3 is thus (7,C8).
Although for a long input string and a complex grammar this result table may be large, it only grows linearly with the size of the
input assuming the grammar has a fixed number of nonterminals.
Furthermore, as long as the grammar uses only the standard operators of Backus-Naur Form [2], only a fixed number of previously-recorded cells in the matrix need to be accessed in order to compute
each new result. Therefore, assuming table lookup occurs in constant time, the parsing process as a whole completes in linear time.
Due to the “forward pointers” embedded in the results table, the
computation of a given result may examine cells that are widely
spaced in the matrix. For example, computing the result for
pPrimary at C3 above made use of results from columns C3, C4,
and C7. This ability to skip ahead arbitrary distances while making
parsing decisions is the source of the algorithm’s unlimited lookahead capability, and this capability makes the algorithm more powerful than linear-time predictive parsers or LR parsers.
2.4 Packrat Parsing
An obvious practical problem with the tabular right-to-left parsing
algorithm above is that it computes many results that are never
needed. An additional inconvenience is that we must carefully
determine the order in which the results for a particular column
are computed, so that parsing functions such as pAdditive and
pMultitive that depend on other results from the same column
will work correctly.
Packrat parsing is essentially a lazy version of the tabular algorithm
that solves both of these problems. A packrat parser computes results only as they are needed, in the same order as the original recursive descent parser would. However, once a result is computed
for the first time, it is stored for future use by subsequent calls.
A non-strict functional programming language such as Haskell provides an ideal implementation platform for a packrat parser. In fact,
packrat parsing in Haskell is particularly efficient because it does
not require arrays or any other explicit lookup structures other than
the language’s ordinary algebraic data types.
First we will need a new type to represent a single column of the
parsing result matrix, which we will call Derivs (“derivations”).
This type is merely a tuple with one component for each nonterminal in the grammar. Each component’s type is the result type of
the corresponding parse function. The Derivs type also contains
one additional component, which we will call dvChar, to represent
“raw” characters of the input string as if they were themselves the
results of some parsing function. The Derivs type for our example
parser can be conveniently declared in Haskell as follows:
data Derivs = Derivs {
    dvAdditive  :: Result Int,
    dvMultitive :: Result Int,
    dvPrimary   :: Result Int,
    dvDecimal   :: Result Int,
    dvChar      :: Result Char}
This Haskell syntax declares the type Derivs to have a single constructor, also named Derivs, with five components of the specified
types. The declaration also automatically creates a corresponding
data-accessor function for each component: dvAdditive can be
used as a function of type Derivs → Result Int, which extracts
the first component of a Derivs tuple, and so on.
Next we modify the Result type so that the “remainder” component of a success result is not a plain String, but is instead an
instance of Derivs:
data Result v = Parsed v Derivs
| NoParse
The Derivs and Result types are now mutually recursive: the success results in one Derivs instance act as links to other Derivs
instances. These result values in fact provide the only linkage we
need between different columns in the matrix of parsing results.
Now we modify the original recursive-descent parsing functions so
that each takes a Derivs instead of a String as its parameter:
pAdditive  :: Derivs -> Result Int
pMultitive :: Derivs -> Result Int
pPrimary   :: Derivs -> Result Int
pDecimal   :: Derivs -> Result Int
Wherever one of the original parse functions examined input characters directly, the new parse function instead refers to the dvChar
component of the Derivs object. Wherever one of the original
functions made a recursive call to itself or another parse function, in
order to match a nonterminal in the grammar, the new parse function instead uses the Derivs accessor function corresponding to that nonterminal. Sequences of terminals and nonterminals
are matched by following chains of success results through multiple
Derivs instances. For example, the new pAdditive function uses
the dvMultitive, dvChar, and dvAdditive accessors as follows,
without making any direct recursive calls:
-- Parse an additive-precedence expression
pAdditive :: Derivs -> Result Int
pAdditive d = alt1 where

    -- Additive <- Multitive '+' Additive
    alt1 = case dvMultitive d of
             Parsed vleft d' ->
               case dvChar d' of
                 Parsed '+' d'' ->
                   case dvAdditive d'' of
                     Parsed vright d''' ->
                       Parsed (vleft + vright) d'''
                     _ -> alt2
                 _ -> alt2
             _ -> alt2

    -- Additive <- Multitive
    alt2 = dvMultitive d
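As a further illustration of the conversion (our sketch, not shown in the text), pDecimal simply reads the dvChar component instead of pattern matching on a String:

pDecimal :: Derivs -> Result Int
pDecimal d = case dvChar d of
               Parsed c d' | c >= '0' && c <= '9' ->
                 Parsed (fromEnum c - fromEnum '0') d'
               _ -> NoParse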
Finally, we create a special “top-level” function, parse, to produce
instances of the Derivs type and “tie up” the recursion between all
of the individual parsing functions:
-- Create a result matrix for an input string
parse :: String -> Derivs
parse s = d where
    d    = Derivs add mult prim dec chr
    add  = pAdditive d
    mult = pMultitive d
    prim = pPrimary d
    dec  = pDecimal d
    chr  = case s of
             (c:s') -> Parsed c (parse s')
             []     -> NoParse
The “magic” of the packrat parser is in this doubly-recursive function. The first level of recursion is produced by the parse function’s
reference to itself within the case statement. This relatively conventional form of recursion is used to iterate over the input string
one character at a time, producing one Derivs instance for each
input position. The final Derivs instance, representing the empty
string, is assigned a dvChar result of NoParse, which effectively
terminates the list of columns in the result matrix.
The second level of recursion is via the symbol d. This identifier
names the Derivs instance to be constructed and returned by the
parse function, but it is also the parameter to each of the individual
parsing functions. These parsing functions, in turn, produce the rest
of the components forming this very Derivs object.
This form of data recursion of course works only in a non-strict language, which allows some components of an object to be accessed
before other parts of the same object are available. For example,
in any Derivs instance created by the above function, the dvChar
component can be accessed before any of the other components of
the tuple are available. Attempting to access the dvDecimal component of this tuple will cause pDecimal to be invoked, which in
turn uses the dvChar component but does not require any of the
other “higher-level” components. Accessing the dvPrimary component will similarly invoke pPrimary, which may access dvChar
and dvAdditive. Although in the latter case pPrimary is accessing
a “higher-level” component, doing so does not create a cyclic dependency in this case because it only ever invokes dvAdditive on
a different Derivs object from the one it was called with: namely
the one for the position following the opening parenthesis. Every
component of every Derivs object produced by parse can be lazily
evaluated in this fashion.
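As a usage sketch (ours, not part of the text's code), the parser can be driven by looking up the desired nonterminal in the Derivs instance for the start of the input, and checking via dvChar that all input was consumed:

eval :: String -> Maybe Int
eval s = case dvAdditive (parse s) of
           Parsed v d | NoParse <- dvChar d -> Just v   -- all input consumed
           _                                -> Nothing

-- For example, eval "2*(3+4)" yields Just 14.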
Figure 3 illustrates the data structure produced by the parser for the
example input text ‘2*(3+4)’, as it would appear in memory under
a modern functional evaluator after fully reducing every cell. Each
vertical column represents a Derivs instance with its five Result
components. For results of the form ‘Parsed v d’, the semantic value v is shown in the appropriate cell, along with an arrow
representing the “remainder” pointer leading to another Derivs instance in the matrix. In any modern lazy language implementation
that properly preserves sharing relationships during evaluation, the
arrows in the diagram will literally correspond to pointers in the
heap, and a given cell in the structure will never be evaluated twice.
Shaded boxes represent cells that would never be evaluated at all in
the likely case that the dvAdditive result in the leftmost column is the only value ultimately needed by the application.

[Figure 3. Illustration of the Derivs data structure produced by parsing the string ‘2*(3+4)’ — one column of five Result cells (dvAdditive, dvMultitive, dvPrimary, dvDecimal, dvChar) per input position, with arrows linking each success result to the Derivs instance for its remainder.]
This illustration should make it clear why this algorithm can run in
O(n) time under a lazy evaluator for an input string of length n. The
top-level parse function is the only function that creates instances
of the Derivs type, and it always creates exactly n + 1 instances.
The parse functions only access entries in this structure instead of
making direct calls to each other, and each function examines at
most a fixed number of other cells while computing a given result.
Since the lazy evaluator ensures that each cell is evaluated at most
once, the critical memoization property is provided and linear parse
time is guaranteed, even though the order in which these results
are evaluated is likely to be completely different from the tabular,
right-to-left, bottom-to-top algorithm presented earlier.
3 Extending the Algorithm
The previous section provided the basic principles and tools required to create a packrat parser, but building parsers for real applications involves many additional details, some of which are affected
by the packrat parsing paradigm. In this section we will explore
some of the more important practical issues, while incrementally
building on the example packrat parser developed above. We first
examine the annoying but straightforward problem of left recursion.
Next we address the issue of lexical analysis, seamlessly integrating this task into the packrat parser. Finally, we explore the use of
monadic combinators to express packrat parsers more concisely.
3.1 Left Recursion
One limitation packrat parsing shares with other top-down schemes
is that it does not directly support left recursion. For example, suppose we wanted to add a subtraction operator to the above example
and have addition and subtraction be properly left-associative. A
natural approach would be to modify the grammar rules for Additive expressions as follows, and to change the parser accordingly:
Additive ← Additive ‘+’ Multitive
         | Additive ‘-’ Multitive
         | Multitive
In a recursive descent parser for this grammar, the pAdditive function would recursively invoke itself with the same input it was provided, and therefore would get into an infinite recursion cycle. In
a packrat parser for this grammar, pAdditive would attempt to
access the dvAdditive component of its own Derivs tuple—the
same component it is supposed to compute—and thus would create a circular data dependency. In either case the parser fails, although the packrat parser’s failure mode might be viewed as slightly
“friendlier” since modern lazy evaluators often detect circular data
dependencies at run-time but cannot detect infinite recursion.
Fortunately, a left-recursive grammar can always be rewritten
into an equivalent right-recursive one [2], and the desired left-associative semantic behavior is easily reconstructed using higher-order functions as intermediate parser results. For example, to make
Additive expressions left-associative in the example parser, we can
split this rule into two nonterminals, Additive and AdditiveSuffix.
The pAdditive function recognizes a single Multitive expression
followed by an AdditiveSuffix:
pAdditive :: Derivs -> Result Int
pAdditive d = case dvMultitive d of
                Parsed vl d' ->
                  case dvAdditiveSuffix d' of
                    Parsed suf d'' ->
                      Parsed (suf vl) d''
                    _ -> NoParse
                _ -> NoParse
The pAdditiveSuffix function collects infix operators and right-hand-side operands, and builds a semantic value of type ‘Int → Int’,
which takes a left-hand-side operand and produces a result:
pAdditiveSuffix :: Derivs -> Result (Int -> Int)
pAdditiveSuffix d = alt1 where

    -- AdditiveSuffix <- '+' Multitive AdditiveSuffix
    alt1 = case dvChar d of
             Parsed '+' d' ->
               case dvMultitive d' of
                 Parsed vr d'' ->
                   case dvAdditiveSuffix d'' of
                     Parsed suf d''' ->
                       Parsed (\vl -> suf (vl + vr)) d'''
                     _ -> alt2
                 _ -> alt2
             _ -> alt2

    -- AdditiveSuffix <- <empty>
    alt2 = Parsed (\v -> v) d
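The excerpt above handles only ‘+’; a variant that also covers the subtraction alternative from the motivating grammar could look as follows. This is an illustrative sketch of ours, not the text's code; it evaluates an input such as ‘10-3-2’ left-associatively as (10-3)-2.

pAdditiveSuffix :: Derivs -> Result (Int -> Int)
pAdditiveSuffix d = alt1 where

    -- AdditiveSuffix <- '+' Multitive AdditiveSuffix
    alt1 = case dvChar d of
             Parsed '+' d' -> suffix (+) d' alt2
             _ -> alt2

    -- AdditiveSuffix <- '-' Multitive AdditiveSuffix
    alt2 = case dvChar d of
             Parsed '-' d' -> suffix (-) d' alt3
             _ -> alt3

    -- AdditiveSuffix <- <empty>
    alt3 = Parsed (\v -> v) d

    -- Shared tail: parse "Multitive AdditiveSuffix" after an operator,
    -- falling back to the given alternative if either part fails.
    suffix op d' fallback =
        case dvMultitive d' of
          Parsed vr d'' ->
            case dvAdditiveSuffix d'' of
              Parsed suf d''' -> Parsed (\vl -> suf (vl `op` vr)) d'''
              _ -> fallback
          _ -> fallback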
3.2 Integrated Lexical Analysis
Traditional parsing algorithms usually assume that the “raw” input
text has already been partially digested by a separate lexical analyzer into a stream of tokens. The parser then treats these tokens
as atomic units even though each may represent multiple consecutive input characters. This separation is usually necessary because
conventional linear-time parsers can only use primitive terminals in
their lookahead decisions and cannot refer to higher-level nonterminals. This limitation was explained in Section 2.2 for predictive
top-down parsers, but bottom-up LR parsers also depend on a similar token-based lookahead mechanism sharing the same problem.
If a parser can only use atomic tokens in its lookahead decisions,
then parsing becomes much easier if those tokens represent whole
keywords, identifiers, and literals rather than raw characters.
Packrat parsing suffers from no such lookahead limitation, however. Because a packrat parser reflects a true backtracking model,
decisions between alternatives in one parsing function can depend
on complete results produced by other parsing functions. For this
reason, lexical analysis can be integrated seamlessly into a packrat
parser with no special treatment.
To extend the packrat parser example with “real” lexical analysis,
we add some new nonterminals to the Derivs type:
data Derivs = Derivs {
    -- Expressions
    dvAdditive   :: Result Int,
    ...
    -- Lexical tokens
    dvDigits     :: Result (Int, Int),
    dvDigit      :: Result Int,
    dvSymbol     :: Result Char,
    dvWhitespace :: Result (),
    -- Raw input
    dvChar       :: Result Char}
The pWhitespace parse function consumes any whitespace that
may separate lexical tokens:
pWhitespace :: Derivs -> Result ()
pWhitespace d = case dvChar d of
                  Parsed c d' ->
                    if isSpace c
                    then pWhitespace d'
                    else Parsed () d
                  _ -> Parsed () d
In a more complete language, this function might have the task of
eating comments as well. Since the full power of packrat parsing is
available for lexical analysis, comments could have a complex hierarchical structure of their own, such as nesting or markups for literate programming. Since syntax recognition is not broken into a unidirectional pipeline, lexical constructs can even refer “upwards” to
higher-level syntactic elements. For example, a language’s syntax
could allow identifiers or code fragments embedded within comments to be demarked so the parser can find and analyze them as
actual expressions or statements, making intelligent software engineering tools more effective. Similarly, escape sequences in string
literals could contain generic expressions representing static or dynamic substitutions.
The pWhitespace example also illustrates how commonplace
longest-match disambiguation rules can be easily implemented in
a packrat parser, even though they are difficult to express in a
pure context-free grammar. More sophisticated decision and disambiguation strategies are easy to implement as well, including
general syntactic predicates [14], which influence parsing decisions based on syntactic lookahead information without actually
consuming input text. For example, the useful followed-by and not-followed-by rules allow a parsing alternative to be used only if the
text matched by that alternative is (or is not) followed by text matching some other arbitrary nonterminal. Syntactic predicates of this
kind require unlimited lookahead in general and are therefore outside the capabilities of most other linear-time parsing algorithms.
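The parse functions for the numeric tokens are not shown in the text. As an illustration only, and assuming that the (Int, Int) pair in dvDigits holds the accumulated value together with the number of digits consumed, they might be written as:

-- Sketch (ours): a single digit, and a non-empty digit sequence whose
-- result pairs the accumulated value with the digit count.
pDigit :: Derivs -> Result Int
pDigit d = case dvChar d of
             Parsed c d' | c >= '0' && c <= '9' ->
               Parsed (fromEnum c - fromEnum '0') d'
             _ -> NoParse

pDigits :: Derivs -> Result (Int, Int)
pDigits d = case dvDigit d of
              Parsed v d' ->
                case dvDigits d' of
                  Parsed (rest, n) d'' -> Parsed (v * 10 ^ n + rest, n + 1) d''
                  NoParse              -> Parsed (v, 1) d'
              NoParse -> NoParse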
Continuing with the lexical analysis example, the function pSymbol
recognizes “operator tokens” consisting of an operator character
followed by optional whitespace:
-- Parse an operator followed by optional whitespace
pSymbol :: Derivs -> Result Char
pSymbol d = case dvChar d of
              Parsed c d' ->
                if c `elem` "+-*/%()"
                then case dvWhitespace d' of
                       Parsed _ d'' -> Parsed c d''
                       _ -> NoParse
                else NoParse
              _ -> NoParse
Now we modify the higher-level parse functions for expressions to use dvSymbol instead of dvChar to scan for operators and parentheses. For example, pPrimary can be implemented as follows:

-- Parse a primary expression
pPrimary :: Derivs -> Result Int
pPrimary d = alt1 where

    -- Primary <- '(' Additive ')'
    alt1 = case dvSymbol d of
             Parsed '(' d' ->
               case dvAdditive d' of
                 Parsed v d'' ->
                   case dvSymbol d'' of
                     Parsed ')' d''' -> Parsed v d'''
                     _ -> alt2
                 _ -> alt2
             _ -> alt2

    -- Primary <- Decimal
    alt2 = dvDecimal d
This function demonstrates how parsing decisions can depend not
only on the existence of a match at a given position for a nonterminal such as Symbol, but also on the semantic value associated with
that nonterminal. In this case, even though all symbol tokens are
parsed together and treated uniformly by pSymbol, other rules such
as pPrimary can still distinguish between particular symbols. In a
more sophisticated language with multi-character operators, identifiers, and reserved words, the semantic values produced by the
token parsers might be of type String instead of Char, but these
values can be matched in the same way. Such dependencies of syntax on semantic values, known as semantic predicates [14], provide
an extremely powerful and useful capability in practice. As with
syntactic predicates, semantic predicates require unlimited lookahead in general and cannot be implemented by conventional parsing
algorithms without giving up their linear time guarantee.
3.3 Monadic Packrat Parsing
A popular method of constructing parsers in functional languages
such as Haskell is using monadic combinators [11, 13]. Unfortunately, the monadic approach usually comes with a performance
penalty, and with packrat parsing this tradeoff presents a difficult
choice. Implementing a packrat parser as described so far assumes
that the set of nonterminals and their corresponding result types is
known statically, so that they can be bound together in a single fixed
tuple to form the Derivs type. Constructing entire packrat parsers
dynamically from other packrat parsers via combinators would require making the Derivs type a dynamic lookup structure, associating a variable set of nonterminals with corresponding results.
This approach would be much slower and less space-efficient.
A more practical strategy, which provides most of the convenience
of combinators with a less significant performance penalty, is to
use monads to define the individual parsing functions comprising a
packrat parser, while keeping the Derivs type and the “top-level”
recursion statically implemented as described earlier.
Since we would like our combinators to build the parse functions
we need directly, the obvious method would be to make the combinators work with a simple type alias:
type Parser v = Derivs -> Result v

Unfortunately, in order to take advantage of Haskell’s useful do syntax, the combinators must use a type of the special class Monad, and simple aliases cannot be assigned type classes. We must instead wrap the parsing functions with a “real” user-defined type:

newtype Parser v = Parser (Derivs -> Result v)

We can now implement Haskell’s standard sequencing (>>=), result-producing (return), and error-producing combinators:

instance Monad Parser where

    (Parser p1) >>= f2 = Parser pre
        where pre d = post (p1 d)
              post (Parsed v d') = p2 d'
                  where Parser p2 = f2 v
              post (NoParse) = NoParse

    return x = Parser (\d -> Parsed x d)

    fail msg = Parser (\d -> NoParse)

Finally, for parsing we need an alternation combinator:
(<|>) :: Parser v -> Parser v -> Parser v
(Parser p1) <|> (Parser p2) = Parser pre
    where pre d = post d (p1 d)
          post d NoParse = p2 d
          post d r = r
With these combinators in addition to a trivial one to recognize
specific characters, the pAdditive function in the original packrat
parser example can be written as follows:
Parser pAdditive =
    (do vleft <- Parser dvMultitive
        char '+'
        vright <- Parser dvAdditive
        return (vleft + vright))
    <|> (do Parser dvMultitive)
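The “trivial” combinator for recognizing a specific character is not reproduced in the text; one plausible definition (ours, assuming the Parser wrapper above) is:

char :: Char -> Parser Char
char c = Parser f
  where f d = case dvChar d of
                Parsed c' d' | c' == c -> Parsed c d'
                _                      -> NoParse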
It is tempting to build additional combinators for higher-level idioms such as repetition and infix expressions. However, using iterative combinators within packrat parsing functions violates the
assumption that each cell in the result matrix can be computed in
constant time once the results from any other cells it depends on
are available. Iterative combinators effectively create “hidden” recursion whose intermediate results are not memoized in the result
matrix, potentially making the parser run in super-linear time. This
problem is not necessarily serious in practice, as the results in Section 6 will show, but it should be taken into account when using
iterative combinators.
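For example, a conventional repetition combinator written with these primitives (an illustrative sketch, not taken from the library described here) hides exactly this kind of unmemoized recursion:

-- Each nested call re-parses from an intermediate position that has no
-- corresponding cell in the Derivs tuple, so its result is not memoized.
many :: Parser v -> Parser [v]
many p = (do v  <- p
             vs <- many p
             return (v : vs))
         <|> return []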
The on-line examples for this paper include a full-featured monadic
combinator library that can be used to build large packrat parsers
conveniently. This library is substantially inspired by PARSEC [13],
though the packrat parsing combinators are much simpler since they
do not have to implement lexical analysis as a separate phase or
implement the one-token-lookahead prediction mechanism used by
traditional top-down parsers. The full combinator library provides a
variety of “safe” constant-time combinators, as well as a few “dangerous” iterative ones, which are convenient but not necessary to
construct parsers. The combinator library can be used simultaneously by multiple parsers with different Derivs types, and supports
user-friendly error detection and reporting.
4 Comparison with LL and LR Parsing
Whereas the previous sections have served as a tutorial on how to
construct a packrat parser, for the remaining sections we turn to
the issue of when packrat parsing is useful in practice. This section informally explores the language recognition power of packrat
parsing in more depth, and clarifies its relationship to traditional
linear-time algorithms such as LL(k) and LR(k).
Although LR parsing is commonly seen as “more powerful” than
limited-lookahead top-down or LL parsing, the class of languages
these parsers can recognize is the same [3]. As Pepper points
out [17], LR parsing can be viewed simply as LL parsing with the
grammar rewritten so as to eliminate left recursion and to delay all
important parsing decisions as long as possible. The result is that
LR provides more flexibility in the way grammars can be expressed,
but no actual additional recognition power. For this reason, we will
treat LL and LR parsers here as being essentially equivalent.
4.1 Lookahead
The most critical practical difference between packrat parsing and
LL/LR parsing is the lookahead mechanism. A packrat parser’s decisions at any point can be based on all the text up to the end of the
input string. Although the computation of an individual result in the
parsing matrix can only perform a constant number of “basic operations,” these basic operations include following forward pointers
in the parsing matrix, each of which can skip over a large amount
of text at once. Therefore, while LL and LR parsers can only look
ahead a constant number of terminals in the input, packrat parsers
can look ahead a constant number of terminals and nonterminals
in any combination. This ability for parsing decisions to take arbitrary nonterminals into account is what gives packrat parsing its
unlimited lookahead capability.
To illustrate the difference in language recognition power, the following grammar is not LR(k) for any k, but is not a problem for a
packrat parser:
S ← A | B
A ← x A y | x z y
B ← x B y y | x z y y
Once an LR parser has encountered the ‘z’ and the first following
‘y’ in a string in the above language, it must decide immediately
whether to start reducing via nonterminal A or B, but there is no
way for it to make this decision until as many ‘y’s have been encountered as there were ‘x’s on the left-hand side. A packrat parser,
on the other hand, essentially operates in a speculative fashion, producing derivations for nonterminals A and B in parallel while scanning the input. The ultimate decision between A and B is effectively
delayed until the entire input string has been parsed, where the decision is merely a matter of checking which nonterminal has a success
result at that position. Mirroring the above grammar left to right
does not change the situation, making it clear that the difference
is not merely some side-effect of the fact that LR scans the input
left-to-right whereas packrat parsing seems to operate in reverse.
4.2 Grammar Composition
The limitations of LR parsing due to fixed lookahead are frequently
felt when designing parsers for practical languages, and many of
these limitations stem from the fact that LL and LR grammars are
not cleanly composable. For example, the following grammar represents a simple language with expressions and assignment, which
only allows simple identifiers on the left side of an assignment:
S ← R | ID ‘=’ R
R ← A | A EQ A | A NE A
A ← P | P ‘+’ P | P ‘-’ P
P ← ID | ‘(’ R ‘)’
If the symbols ID, EQ, and NE are terminals—i.e., atomic tokens produced by a separate lexical analysis phase—then an LR(1)
parser has no trouble with this grammar. However, if we try to
integrate this tokenization into the parser itself with the following
simple rules, the grammar is no longer LR(1):
ID ← ’a’ | ’a’ ID
EQ ← ’=’ ’=’
NE ← ’!’ ’=’
The problem is that after scanning an identifier, an LR parser must
decide immediately whether it is a primary expression or the left-hand side of an assignment, based only on the immediately following token. But if this token is an ‘=’, the parser has no way
of knowing whether it is an assignment operator or the first half
of an ‘==’ operator. In this particular case the grammar could be
parsed by an LR(2) parser. In practice LR(k) and even LALR(k)
parsers are uncommon for k > 1. Recently developed extensions to
the traditional left-to-right parsing algorithms improve the situation
somewhat [18, 16, 15], but they still cannot provide unrestricted
lookahead capability while maintaining the linear time guarantee.
Even when lexical analysis is separated from parsing, the limitations of LR parsers often surface in other practical situations, frequently as a result of seemingly innocuous changes to an evolving
grammar. For example, suppose we want to add simple array indexing to the language above, so that array indexing operators can
appear on either the left or right side of an assignment. One possible approach is to add a new nonterminal, L, to represent left-side
or “lvalue” expressions, and incorporate the array indexing operator
into both types of expressions as shown below:
S ← R | L ‘=’ R
R ← A | A EQ A | A NE A
A ← P | P ‘+’ P | P ‘-’ P
P ← ID | ‘(’ R ‘)’ | P ‘[’ A ‘]’
L ← ID | ‘(’ L ‘)’ | L ‘[’ A ‘]’
Even if the ID, EQ, and NE symbols are again treated as terminals,
this grammar is not LR(k) for any k, because after the parser sees
an identifier it must immediately decide whether it is part of a P or
L expression, but it has no way of knowing this until any following
array indexing operators have been fully parsed. Again, a packrat parser has no trouble with this grammar because it effectively
evaluates the P and L alternatives “in parallel” and has complete
derivations to work with (or the knowledge of their absence) by the
time the critical decision needs to be made.
In general, grammars for packrat parsers are composable because
the lookahead a packrat parser uses to make decisions between alternatives can take account of arbitrary nonterminals, such as EQ in
the first example or P and L in the second. Because a packrat parser
does not give “primitive” syntactic constructs (terminals) any special significance as an LL or LR parser does, any terminal or fixed
sequence of terminals appearing in a grammar can be substituted
with a nonterminal without “breaking” the parser. This substitution
capability gives packrat parsing greater composition flexibility.
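As an illustration of this substitution capability, the token-level symbols in the example above can themselves be written as ordinary packrat parsing functions over the same Derivs structure. The following sketch reuses the assumed representation of the Section 4.1 sketch (the declarations and field names are again illustrative, not taken from the paper's library), and orders the ID alternatives longest-first so identifiers are matched greedily:

data Result v = Parsed v Derivs | NoParse

data Derivs = Derivs
  { dvID   :: Result String   -- ID, matched greedily ('a' ID before 'a')
  , dvEQ   :: Result ()       -- EQ <- '=' '='
  , dvNE   :: Result ()       -- NE <- '!' '='
  , dvChar :: Result Char
  }

parse :: String -> Derivs
parse s = d
  where
    d   = Derivs (pID d) (pEQ d) (pNE d) chr
    chr = case s of
            (c:cs) -> Parsed c (parse cs)
            []     -> NoParse

pID :: Derivs -> Result String
pID d = case dvChar d of
          Parsed 'a' d1 -> case dvID d1 of
                             Parsed rest d2 -> Parsed ('a' : rest) d2
                             NoParse        -> Parsed "a" d1
          _             -> NoParse

pEQ, pNE :: Derivs -> Result ()
pEQ = pair '=' '='
pNE = pair '!' '='

-- Match two specific characters in sequence.
pair :: Char -> Char -> Derivs -> Result ()
pair a b d = case dvChar d of
               Parsed c1 d1 | c1 == a ->
                 case dvChar d1 of
                   Parsed c2 d2 | c2 == b -> Parsed () d2
                   _                      -> NoParse
               _ -> NoParse

A higher-level rule such as R ← A EQ A can then consult dvEQ at the position following its left operand: the complete ‘==’ derivation, or the knowledge of its absence, is already available there, so the parser never has to commit after seeing a single ‘=’.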
4.3 Recognition Limitations
Given that a packrat parser can recognize a broader class of languages in linear time than either LL(k) or LR(k) algorithms, what
kinds of grammars can’t a packrat parser recognize? Though
the precise theoretical capabilities of the algorithm have not been
thoroughly characterized, the following trivial and unambiguous
context-free grammar provides an example that proves just as troublesome for a packrat parser as for an LL or LR parser:
S ← x S x | x
The problem with this grammar for both kinds of parsers is that,
while scanning a string of ‘x’s—left-to-right in the LR case or right-to-left in the packrat case—the algorithm would somehow have to
“know” in advance where the middle of the string is so that it can
apply the second alternative at that position and then “build outwards” using the first alternative for the rest of the input stream. But
since the stream is completely homogeneous, there is no way for the
parser to find the middle until the entire input has been parsed. This
grammar therefore provides an example, albeit contrived, requiring
a more general, non-linear-time CFG parsing algorithm.
5 Practical Issues and Limitations
Although packrat parsing is powerful and efficient enough for many
applications, there are three main issues that can make it inappropriate in some situations. First, packrat parsing is useful only to
construct deterministic parsers: parsers that can produce at most
one result. Second, a packrat parser depends for its efficiency on
being mostly or completely stateless. Finally, due to its reliance on
memoization, packrat parsing is inherently space-intensive. These
three issues are discussed in this section.
5.1 Deterministic Parsing
An important assumption we have made so far is that each of the
mutually recursive parsing functions from which a packrat parser is
built will deterministically return at most one result. If there are any
ambiguities in the grammar the parser is built from, then the parsing functions must be able to resolve them locally. In the example
parsers developed in this paper, multiple alternatives have always
been implicitly disambiguated by the order in which they are tested:
the first alternative to match successfully is the one used, independent of whether any other alternatives may also match. This behavior is both easy to implement and useful for performing longest-match and other forms of explicit local disambiguation. A parsing
function could even try all of the possible alternatives and produce
a failure result if more than one alternative matches. What parsing
functions in a packrat parser cannot do is return multiple results to
be used in parallel or disambiguated later by some global strategy.
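This ordered, locally committed choice is easy to express directly. The sketch below uses the assumed Result and Derivs types of the earlier sketches and a hypothetical helper name orElse (not from the paper) to show the pattern: the second alternative is consulted only when the first fails at the current position.

data Result v = Parsed v Derivs | NoParse
data Derivs   = Derivs { dvChar :: Result Char }   -- minimal stub for illustration

type Parser v = Derivs -> Result v

-- Ordered choice: commit to the first alternative that succeeds here.
orElse :: Parser v -> Parser v -> Parser v
orElse p q d = case p d of
                 r@(Parsed _ _) -> r
                 NoParse        -> q d

A longest-match policy, or a variant that fails when more than one alternative matches, can be layered on the same Result type; what cannot be expressed is returning several results at once.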
In languages designed for machine consumption, the requirement
that multiple matching alternatives be disambiguated locally is not
much of a problem in practice because ambiguity is usually undesirable in the first place, and localized disambiguation rules are
preferred over global ones because they are easier for humans to
understand. However, for parsing natural languages or other grammars in which global ambiguity is expected, packrat parsing is less
likely to be useful. Although a classic nondeterministic top-down
parser in which the parse functions return lists of results [23, 8]
could be memoized in a similar way, the resulting parser would not
be linear time, and would likely be comparable to existing tabular algorithms for ambiguous context-free grammars [3, 20]. Since
nondeterministic parsing is equivalent in computational complexity
to boolean matrix multiplication [12], a linear-time solution to this
more general problem is unlikely to be found.
5.2 Stateless Parsing
A second limitation of packrat parsing is that it is fundamentally
geared toward stateless parsing. A packrat parser’s memoization
system assumes that the parsing function for each nonterminal depends only on the input string, and not on any other information
accumulated during the parsing process.
Although pure context-free grammars are by definition stateless,
many practical languages require a notion of state while parsing and
thus are not really context-free. For example, C and C++ require
the parser to build a table of type names incrementally as types are
declared, because the parser must be able to distinguish type names
from other identifiers in order to parse subsequent text correctly.
Traditional top-down (LL) and bottom-up (LR) parsers have little
trouble maintaining state while parsing. Since they perform only a
single left-to-right scan of the input and never look ahead more than
one or at most a few tokens, nothing is “lost” when a state change
occurs. A packrat parser, in contrast, depends on statelessness for
the efficiency of its unlimited lookahead capability. Although a
stateful packrat parser can be constructed, the parser must start
building a new result matrix each time the parsing state changes.
For this reason, stateful packrat parsing may be impractical if state
changes occur frequently. For more details on packrat parsing with
state, please refer to my master’s thesis [9].
5.3 Space Consumption
Probably the most striking characteristic of a packrat parser is the
fact that it literally squirrels away everything it has ever computed
about the input text, including the entire input text itself. For this
reason packrat parsing always has storage requirements equal to
some possibly substantial constant multiple of the input size. In
contrast, LL(k), LR(k), and simple backtracking parsers can be designed so that space consumption grows only with the maximum
nesting depth of the syntactic constructs appearing in the input,
which in practice is often orders of magnitude smaller than the total
size of the text. Although LL(k) and LR(k) parsers for any nonregular language still have linear space requirements in the worst
case, this “average-case” difference can be important in practice.
One way to reduce the space requirements of the derivations structure, especially in parsers for grammars with many nonterminals,
is by splitting up the Derivs type into multiple levels. For example, suppose the nonterminals of a language can be grouped into
several broad categories, such as lexical tokens, expressions, statements, and declarations. Then the Derivs tuple itself might have
only four components in addition to dvChar, one for each of these
nonterminal categories. Each of these components is in turn a tuple
containing the results for all of the nonterminals in that category.
For the majority of the Derivs instances, representing character
positions “between tokens,” none of the components representing
the categories of nonterminals will ever be evaluated, and so only
the small top-level object and the unevaluated closures for its components occupy space. Even for Derivs instances corresponding
to the beginning of a token, often the results from only one or two
categories will be needed depending on what kind of language construct is located at that position.
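A sketch of such a two-level structure is shown below; the category and field names (dvLexical, dvExpressions, and so on) are invented for illustration and are not those of the Java parser described in Section 6.

data Result v = Parsed v Derivs | NoParse

-- Top level: one lazily evaluated sub-record per nonterminal category.
data Derivs = Derivs
  { dvLexical     :: LexDerivs
  , dvExpressions :: ExprDerivs
  , dvStatements  :: StmtDerivs
  , dvChar        :: Result Char
  }

data LexDerivs  = LexDerivs  { dvIdentifier :: Result String
                             , dvIntLiteral :: Result Integer }

data ExprDerivs = ExprDerivs { dvAdditive :: Result Integer
                             , dvPrimary  :: Result Integer }

data StmtDerivs = StmtDerivs { dvStatement :: Result () }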
Even with such optimizations a packrat parser can consume many times more working storage than the size of the original input text. For this reason there are some application areas in which packrat parsing is probably not the best choice. For example, for parsing XML streams, which have a fairly simple structure but often encode large amounts of relatively flat, machine-generated data, the power and flexibility of packrat parsing is not needed and its storage cost would not be justified.
On the other hand, for parsing complex modern programming languages in which the source code is usually written by humans and
the top priority is the power and expressiveness of the language, the
space cost of packrat parsing is probably reasonable. Standard programming practice involves breaking up large programs into modules of manageable size that can be independently compiled, and
the main memory sizes of modern machines leave at least three
orders of magnitude in “headroom” for expansion of a typical 10–
100KB source file during parsing. Even when parsing larger source
files, the working set may still be relatively small due to the strong
structural locality properties of realistic languages. Finally, since
the entire derivations structure can be thrown away after parsing is
complete, the parser’s space consumption is likely to be irrelevant
if its result is fed into some other complex computation, such as a
global optimizer, that requires as much space as the packrat parser
used. Section 6 will present evidence that this space consumption
can be reasonable in practice.
6 Performance Results
Although a detailed empirical analysis of packrat parsing is outside
the scope of this paper, it is helpful to have some idea of how a
packrat parser is likely to behave in practice before committing to a
new and unfamiliar parsing paradigm. For this reason, this section
presents a few experimental results with realistic packrat parsers
running on real source files. For more detailed results, please refer
to my master’s thesis [9].
6.1 Space Efficiency
The first set of tests measures the space efficiency of a packrat parser
for the Java1 programming language. I chose Java for this experiment because it has a rich and complex grammar, but nevertheless
adopts a fairly clean syntactic paradigm, not requiring the parser to
keep state about declared types as C and C++ parsers do, or to perform special processing between lexical and hierarchical analysis
as Haskell’s layout scheme requires.
The experiment uses two different versions of this Java parser.
Apart from a trivial preprocessing stage to canonicalize line breaks
and Java’s Unicode escape sequences, lexical analysis for both
parsers is fully integrated as described in Section 3.2. One parser
uses monadic combinators in its lexical analysis functions, while
the other parser relies only on primitive pattern matching. Both
parsers use monadic combinators to construct all higher-level parsing functions. Both parsers also use the technique described in Section 5.3 of splitting the Derivs tuple into two levels, in order to increase modularity and reduce space consumption. The parsers were
compiled with the Glasgow Haskell Compiler2 version 5.04, with
optimization and profiling enabled. GHC’s heap profiling system
was used to measure live heap utilization, which excludes unused
heap space and collectible garbage when samples are taken.
1 Java is a trademark of Sun Microsystems, Inc.
2 http://www.haskell.org/ghc/
Figure 4. Maximum heap size versus input size. [Plot: maximum heap size (MB) against source file size (bytes) for the monadic parser and scanner (avg. 695:1) and the monadic parser with pattern-matching scanner (avg. 301:1).]
The test suite consists of 60 unmodified Java source files from the
Cryptix library3 , chosen because it includes a substantial number
of relatively large Java source files. (Java source files are small on
average because the compilation model encourages programmers to
place each class definition in a separate file.)
Figure 4 shows a plot of each parser’s maximum live heap size
against the size of the input files being parsed. Because some of the
smaller source files were parsed so quickly that garbage collection
never occurred and the heap profiling mechanism did not yield any
samples, the plot includes only 45 data points for the fully monadic
parser, and 31 data points for the hybrid parser using direct pattern matching for lexical analysis. Averaged across the test suite,
the fully monadic parser uses 695 bytes of live heap per byte of
input, while the hybrid parser uses only 301 bytes of heap per input byte. These results are encouraging: although packrat parsing
can consume a substantial amount of space, a typical modern machine with 128MB or more of RAM should have no trouble parsing source files up to 100-200KB. Furthermore, even though both
parsers use some iterative monadic combinators, which can break
the linear time and space guarantee in theory, the space consumption of the parsers nevertheless appears to grow fairly linearly.
The use of monadic combinators clearly has a substantial penalty
in terms of space efficiency. Modifying the parser to use direct
pattern matching alone may yield further improvement, though the
degree is difficult to predict since the cost of lexical analysis often
dominates the rest of the parser. The lexical analysis portion of the
hybrid parser is about twice as long as the equivalent portion of the
monadic parser, suggesting that writing packrat parsers with pattern
matching alone is somewhat more cumbersome but not unreasonable when efficiency is important.
6.2 Parsing Performance
The second experiment measures the absolute execution time of the
two packrat parsers. For this test the parsers were compiled by
GHC 5.04 with optimization but without profiling, and timed on a
1.28GHz AMD Athlon processor running Linux 2.4.17. For this
test I only used the 28 source files in the test suite that were larger
than 10KB, because the smaller files were parsed so quickly that
the Linux time command did not yield adequate precision. Figure 5 shows the resulting execution time plotted against source file
size.
3 http://www.cryptix.org/
Figure 5. Execution time versus input size. [Plot: execution time (seconds) against source file size (bytes) for the monadic parser and scanner (avg. 25.0 KB/s) and the monadic parser with pattern-matching scanner (avg. 49.8 KB/s).]
On these inputs the fully monadic parser averaged 25.0 Kbytes per second with a standard deviation of 8.6 KB/s, while the hybrid
parser averaged 49.8 KB/s with a standard deviation of 16 KB/s.
In order to provide a legitimate performance comparison between
packrat parsing and more traditional linear-time algorithms, I converted a freely available YACC grammar for Java [5] into a grammar for Happy4 , an LR parser generator for Haskell. Unfortunately,
GHC was unable to compile the 230KB Haskell source file resulting from this grammar, even without optimization and on a machine with 1GB of RAM. (This difficulty incidentally lends credibility to the earlier suggestion that, in modern compilers, the temporary storage cost of a packrat parser is likely to be exceeded by
the storage cost of subsequent stages.) Nevertheless, the generated
LR parser worked under the Haskell interpreter Hugs.5 Therefore,
to provide a rough performance comparison, I ran five of the larger
Java sources through the LR and packrat parsers under Hugs using
an 80MB heap. For fairness, I only compared the LR parser against
the slower, fully monadic packrat parser, because the LR parser uses
a monadic lexical analyzer derived from the latter packrat parser.
The lexical analysis performance should therefore be comparable
and only the parsing algorithm is of primary importance.
Under Hugs, the LR parser consistently performs approximately
twice the number of reductions and allocates 55% more total heap
storage. (I could not find a way to profile live heap utilization under
Hugs instead of total allocation.) The difference in real execution
time varied widely however: the LR parser took almost twice as
long on smaller files but performed about the same on the largest
ones. One probable reason for this variance is the effects of garbage
collection. Since a running packrat parser will naturally have a
much higher ratio of live data to garbage than an LR parser over
time, and garbage collection both increases in overhead cost and
decreases in effectiveness (i.e., frees less space) when there is more
live data, garbage collection is likely to penalize a packrat parser
more than an LR parser as the size of the source file increases. Still,
it is encouraging that the packrat parser was able to outperform the
LR parser on all but the largest Java source files.
7 Related Work
This section briefly relates packrat parsing to relevant prior work.
For a more detailed analysis of packrat parsing in comparison with
other algorithms please refer to my master’s thesis [9].
4 http://www.haskell.org/happy
5 http://www.haskell.org/hugs
Birman and Ullman [4] first developed the formal properties of deterministic parsing algorithms with backtracking. This work was
refined by Aho and Ullman [3] and classified as “top-down limited
backtrack parsing,” in reference to the restriction that each parsing
function can produce at most one result and hence backtracking is
localized. They showed this kind of parser, formally known as a
Generalized Top-Down Parsing Language (GTDPL) parser, to be
quite powerful. A GTDPL parser can simulate any push-down automaton and thus recognize any LL or LR language, and it can even
recognize some languages that are not context free. Nevertheless,
all “failures” such as those caused by left recursion can be detected
and eliminated from a GTDPL grammar, ensuring that the algorithm is well-behaved. Birman and Ullman also pointed out the possibility of constructing linear-time GTDPL parsers through tabulation of results, but this linear-time algorithm was apparently never
put into practice, no doubt because main memories were much more
limited at the time and compilers had to operate as streaming “filters” that could run in near-constant space.
Adams [1] recently resurrected GTDPL parsing as a component of
a modular language prototyping framework, after recognizing its
superior composability in comparison with LR algorithms. In addition, many practical top-down parsing libraries and toolkits, including the popular ANTLR [15] and the PARSEC combinator library
for Haskell [13], provide similar limited backtracking capabilities
which the parser designer can invoke selectively in order to overcome the limitations of predictive parsing. However, all of these
parsers implement backtracking in the traditional recursive-descent
fashion without memoization, creating the danger of exponential
worst-case parse time, and thereby making it impractical to rely
on backtracking as a substitute for prediction or to integrate lexical
analysis with parsing.
The only prior known linear-time parsing algorithm that effectively
supports integrated lexical analysis, or “scannerless parsing,” is the
NSLR(1) algorithm originally created by Tai [19] and put into practice for this purpose by Salomon and Cormack [18]. This algorithm
extends the traditional LR class of algorithms by adding limited
support for making lookahead decisions based on nonterminals.
The relative power of packrat parsing with respect to NSLR(1) is
unclear: packrat parsing is less restrictive of rightward lookahead,
but NSLR(1) can also take leftward context into account. In practice, NSLR(1) is probably more space-efficient, but packrat parsing
is simpler and cleaner. Other recent scannerless parsers [22, 21] forsake linear-time deterministic algorithms in favor of more general
but slower ambiguity-tolerant CFG parsing.
8 Future Work
While the results presented here demonstrate the power and practicality of packrat parsing, more experimentation is needed to evaluate its flexibility, performance, and space consumption on a wider
variety of languages. For example, languages that rely extensively
on parser state, such as C and C++, as well as layout-sensitive languages such as ML and Haskell, may prove more difficult for a
packrat parser to handle efficiently.
On the other hand, the syntax of a practical language is usually
designed with a particular parsing technology in mind. For this
reason, an equally compelling question is what new syntax design possibilities are created by the “free” unlimited lookahead and
unrestricted grammar composition capabilities of packrat parsing.
Section 3.2 suggested a few simple extensions that depend on integrated lexical analysis, but packrat parsing may be even more useful
in languages with extensible syntax [7] where grammar composition flexibility is important.
Although packrat parsing is simple enough to implement by hand
in a lazy functional language, there would still be practical benefit in a grammar compiler along the lines of YACC in the C world
or Happy [10] and Mı́mico [6] in the Haskell world. In addition
to the parsing functions themselves, the grammar compiler could
automatically generate the static “derivations” tuple type and the
top-level recursive “tie-up” function, eliminating the problems of
monadic representation discussed in Section 3.3. The compiler
could also reduce iterative notations such as the popular ‘+’ and
‘*’ repetition operators into a low-level grammar that uses only
primitive constant-time operations, preserving the linear parse time
guarantee. Finally, the compiler could rewrite left-recursive rules to
make it easier to express left-associative constructs in the grammar.
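To illustrate the kind of rewriting such a compiler could perform (these are standard transformations, not ones specified in this paper): a repetition rule such as X ← E* can be reduced to the primitive form X ← E X | ε, which uses only constant-time operations per cell, and a directly left-recursive rule such as L ← L a | b can be rewritten as L ← b L′ with L′ ← a L′ | ε, so left-associative constructs remain expressible without violating the linear parse time guarantee.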
One practical area in which packrat parsing may have difficulty and
warrants further study is in parsing interactive streams. For example, the “read-eval-print” loops in language interpreters often expect
the parser to detect at the end of each line whether or not more input
is needed to finish the current statement, and this requirement violates the packrat algorithm’s assumption that the entire input stream
is available up-front. A similar open question is under what conditions packrat parsing may be suitable for parsing infinite streams.
9 Conclusion
Packrat parsing is a simple and elegant method of converting a
backtracking recursive descent parser implemented in a non-strict
functional programming language into a linear-time parser, without
giving up the power of unlimited lookahead. The algorithm relies
for its simplicity on the ability of non-strict functional languages
to express recursive data structures with complex dependencies directly, and it relies on lazy evaluation for its practical efficiency. A
packrat parser can recognize any language that conventional deterministic linear-time algorithms can and many that they can’t, providing better composition properties and allowing lexical analysis
to be integrated with parsing. The primary limitations of the algorithm are that it supports only deterministic parsing and that its storage requirements, though asymptotically linear, are considerable.
Acknowledgments
I wish to thank my advisor Frans Kaashoek, my colleagues Chuck
Blake and Russ Cox, and the anonymous reviewers for many helpful comments and suggestions.
10 References
[1] Stephen Robert Adams. Modular Grammars for Programming Language Prototyping. PhD thesis, University of
Southampton, 1991.
[2] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers:
Principles, Techniques, and Tools. Addison-Wesley, 1986.
[3] Alfred V. Aho and Jeffrey D. Ullman. The Theory of Parsing,
Translation and Compiling - Vol. I: Parsing. Prentice Hall,
Englewood Cliffs, N.J., 1972.
[4] Alexander Birman and Jeffrey D. Ullman. Parsing algorithms
with backtrack. Information and Control, 23(1):1–34, Aug
1973.
[5] Dmitri Bronnikov. Free Yacc-able Java(tm) grammar, 1998.
http://home.inreach.com/bronikov/grammars/java.html.
[6] Carlos Camarão and Lucı́lia Figueiredo. A monadic combinator compiler compiler. In 5th Brazilian Symposium on
Programming Languages, Curitiba – PR – Brazil, May 2001.
Universidade Federal do Paraná.
[7] Luca Cardelli, Florian Matthes, and Martı́n Abadi. Extensible
syntax with lexical scoping. Technical Report 121, Digital
Systems Research Center, 1994.
[8] Jeroen Fokker. Functional parsers. In Advanced Functional
Programming, pages 1–23, 1995.
[9] Bryan Ford. Packrat parsing: a practical linear-time algorithm
with backtracking. Master’s thesis, Massachusetts Institute of
Technology, Sep 2002.
[10] Andy Gill and Simon Marlow. Happy: The parser generator
for Haskell. http://www.haskell.org/happy.
[11] Graham Hutton and Erik Meijer. Monadic parsing in Haskell.
Journal of Functional Programming, 8(4):437–444, Jul 1998.
[12] Lillian Lee. Fast context-free grammar parsing requires fast
boolean matrix multiplication. Journal of the ACM, 2002. To
appear.
[13] Daan Leijen. Parsec, a fast combinator parser. http://www.cs.uu.nl/˜daan.
[14] Terence J. Parr and Russell W. Quong. Adding semantic and
syntactic predicates to LL(k): pred-LL(k). In Computational
Complexity, pages 263–277, 1994.
[15] Terence J. Parr and Russell W. Quong. ANTLR: A predicatedLL(k) parser generator. Software Practice and Experience,
25(7):789–810, 1995.
[16] Terence John Parr. Obtaining practical variants of LL(k) and
LR(k) for k > 1 by splitting the atomic k-tuple. PhD thesis,
Purdue University, Apr 1993.
[17] Peter Pepper. LR parsing = grammar transformation + LL
parsing: Making LR parsing more understandable and more
efficient. Technical Report 99-5, TU Berlin, Apr 1999.
[18] Daniel J. Salomon and Gordon V. Cormack. Scannerless
NSLR(1) parsing of programming languages. In Proceedings
of the ACM SIGPLAN’89 Conference on Programming Language Design and Implementation (PLDI), pages 170–178,
Jul 1989.
[19] Kuo-Chung Tai. Noncanonical SLR(1) grammars. ACM
Transactions on Programming Languages and Systems,
1(2):295–320, Oct 1979.
[20] Masaru Tomita. Efficient parsing for natural language.
Kluwer Academic Publishers, 1985.
[21] M.G.J. van den Brand, J. Scheerder, J.J. Vinju, and E. Visser.
Disambiguation filters for scannerless generalized LR parsers.
In Compiler Construction, 2002.
[22] Eelco Visser. Scannerless generalized-LR parsing. Technical
Report P9707, Programming Research Group, University of
Amsterdam, 1997.
[23] Philip Wadler. How to replace failure by a list of successes:
A method for exception handling, backtracking, and pattern
matching in lazy functional languages. In Functional Programming Languages and Computer Architecture, pages 113–
128, 1985.
miRNA and Gene Expression based Cancer Classification using Self-Learning and Co-Training Approaches
Rania Ibrahim1, Noha A. Yousri2,1, Mohamed A. Ismail1 and Nagwa M. El-Makky1
[email protected], [email protected], [email protected] and [email protected]
1 Computer and Systems Engineering Department, Alexandria University, Alexandria 21544, Egypt.
2 Computer Science and Engineering, Egypt-Japan University of Science & Technology (E-JUST).
Abstract— miRNA and gene expression profiles have been
proved useful for classifying cancer samples. Efficient classifiers
have been recently sought and developed. A number of attempts
to classify cancer samples using miRNA/gene expression profiles
are known in the literature. Recently, semi-supervised learning models have been used in bioinformatics to exploit the huge corpora of publicly available sets. However, using both labeled and unlabeled sets to train sample classifiers has not been previously considered when gene and miRNA expression sets are used. Moreover, there is a motivation to integrate both miRNA and gene expression for semi-supervised cancer classification, as that provides more information on the characteristics of cancer samples.
In this paper, two semi-supervised machine learning
approaches, namely self-learning and co-training, are adapted to
enhance the quality of cancer sample classification. These
approaches exploit the huge public corpuses to enrich the training
data. In self-learning, miRNA and gene based classifiers are
enhanced independently. While in co-training, both miRNA and
gene expression profiles are used simultaneously to provide
different views of cancer samples. To our knowledge, it is the first
attempt to apply these learning approaches to cancer
classification. The approaches were evaluated using breast cancer,
hepatocellular carcinoma (HCC) and lung cancer expression sets.
Results show up to 20% improvement in F1-measure over
Random Forests and SVM classifiers. Co-Training also
outperforms Low Density Separation (LDS) approach by around
25% improvement in F1-measure in breast cancer.
Keywords— miRNA and gene expression analysis; Semi-supervised Approaches; Self-Learning; Co-Training; Cancer sample classifiers
I. INTRODUCTION
MicroRNAs (miRNAs) are short (19–25 nucleotides)
noncoding single-stranded RNA molecules [1], which are
cleaved from 70–100 nucleotide miRNA precursors. miRNAs
regulate gene expression either at the transcriptional or
translational level, based on specific binding to the
complementary sequence in the coding or noncoding region of
mRNA transcripts [1]. Recent research has pointed out the
success of using miRNA and gene expression datasets in cancer
classification; miRNA profiles were used recently to
discriminate malignancies of the breast [2], lung ([2], [3]),
pancreas ([2], [4]) and liver ([5], [6], [7]). Enhancing the
accuracy of cancer classifiers, based on miRNA, gene, or mRNA
expressions, has been targeted in previous work ([8], [9], [10],
[11], [12], [13]). Moreover, different feature selections and
classification methods that efficiently detect the malignancy
status (normal or cancer) of the tissues were previously explored
in [11]. In addition, two classifiers are built [12], one for miRNA
data and another for mRNA data. The main drawbacks of the
approach is that it assumes paired miRNA and mRNA data for
each patient and it uses decision fusion rule to combine the
classifiers decision without enhancing the classifiers
themselves. Random Forests have been used in classifying
cancer in [14], [15] and [16]. Also, SVM has been used in
classifying cancer as in [17] and [18].
The idea of combining both labeled and unlabeled sets using
semi-supervised machine learning approaches has been used to
enhance classifiers in other domains like object detection [22],
word sense disambiguation [23] and subjective noun
identification [24]. Semi-supervised learning also has proved to
be effective in solving several biology problems like protein
classification [25] and prediction of factor-gene interaction [26].
However, in the field of sample classification using gene and
miRNA expression, semi-supervised machine learning
techniques were not considered before. Microarray experiments
are time consuming, expensive and limited, which is why the number of samples in microarray-based studies is usually small [27]. Thus, huge publicly available gene/miRNA expression sets with unlabeled samples are tempting to use for enriching the training data of sample classifiers. Integrating both miRNA and mRNA expression profiles is thought to provide complementary
information [12], as miRNAs regulate gene expression at the
post-transcriptional level. In co-training, both miRNA and gene
expression profiles are used simultaneously to provide different
views of cancer samples. Semi-supervised machine learning
approaches are applied in this paper to discriminate cancer
subtypes. Discriminating cancer subtypes helps in
understanding the evolution of cancer and is used to find
appropriate therapies. For example, angiogenesis inhibitors like
bevacizumab are more effective in treating adenocarcinoma
lung cancer than squamous phenotypes ([19], [20]). Also, breast
cancer has an unpredictable response, and developing effective
therapies remains a major challenge in the clinical management
of breast cancer patients [21]. Moreover, identifying metastasis
hepatocellular carcinoma (HCC) samples is an important task as
metastasis is a complex process that involves multiple
alterations ([39], [40]).
In this paper, two semi-supervised machine learning
approaches, namely self-learning [28] and co-training ([29],
[30]) are used to enhance the classification accuracy of cancer
samples by combining both labeled and unlabeled miRNA and
gene expression profiles. In self-learning, a classifier is initially
constructed using the labeled set, then its accuracy is enhanced
by adding more data from unlabeled sets. Self-learning is used
on one particular source of expression, i.e. either gene or miRNA
expression data. In co-training, two classifiers are trained, each
is specific to a different source of expression data (gene or
miRNA), termed as two views of the data. Based on the two
views, two classifiers are constructed and then used to train each
other. Exchanging information between the two classifiers
requires a mapping from miRNA expression to gene expression
or the opposite. A simple mapping is thus suggested based on
known biological relations between miRNAs and their target
genes.
The aforementioned classification approaches were
evaluated using gene and miRNA expression profiles of three
different cancer types: breast cancer, hepatocellular carcinoma
(HCC) and lung cancer. The results show around 20%
improvement in F1-measure in breast cancer, around 10%
improvement in precision in metastatic HCC cancer and 3%
improvement in F1-measure in squamous lung cancer over the
Random Forests and SVM classifiers. Also, the approaches were
compared to another semi-supervised approach called Low
Density Separation (LDS), which was used to enhance the
classifiers of cancer recurrence in [27]. The results show that co-training outperforms LDS by exploiting the two different views, i.e. the miRNA expression view and the gene expression view.
The paper is organized as follows: Section II discusses related work, Section III describes the proposed approaches in detail, and Section IV shows experimental results. Finally, Section V concludes the paper.
II. RELATED WORK
Using miRNA expression profiles to discriminate cancerous samples from normal ones, and to classify cancer into its subtypes, is an active research area and has been applied to different cancer types such as breast [2], lung ([2], [3]), pancreas ([2], [4]) and liver ([5], [6], [7]). These studies used supervised machine learning techniques such as SVM, Prediction Analysis of Microarrays (PAM) and the compound covariate predictor.
Several attempts to enhance cancer classifiers have been introduced recently ([11], [12], [13]). In [11], a number of feature selection methods, such as Pearson's and Spearman's correlations, Euclidean distance, cosine coefficient, information gain, mutual information and signal-to-noise ratio, are used to enhance cancer classifiers, together with different classification methods, namely k-nearest neighbor methods, multilayer perceptrons, and support vector machines with a linear kernel [11]. That work focused only on improving classifiers based on the miRNA expression profiles of labeled samples and did not use publicly available unlabeled sets; moreover, gene expression profiles were not used to enhance miRNA based cancer sample classifiers.
Enhancing the classification accuracy by building two classifiers, one for miRNA data and another for mRNA data, was explored in [12]. That work first applies relief-F feature selection, then uses a bagged fuzzy KNN classifier, and finally combines the two classifiers using a decision fusion rule. The drawback of the approach is that it assumes the existence of both miRNA and mRNA data for each patient, and it just uses the decision fusion rule to combine the classifiers' decisions without enhancing the classifiers themselves. Another work [13] considered using the discrete function learning (DFL) method on miRNA expression profiles to find the subset of miRNAs that shows a strong distinction of expression levels between normal and tumor tissues, and then uses these miRNAs to build a classifier. That paper did not combine multiple miRNA datasets or use gene expression datasets to enhance the classifier. Semi-supervised machine learning was introduced to classification using expression sets through the LDS approach, which was used in [27] to enhance cancer recurrence classifiers. Semi-supervised machine learning approaches make use of publicly available unlabeled sets to enrich the training data of the classifiers. However, that approach depends only on gene expression, and did not combine both miRNA and gene expression sets.
Other semi-supervised machine learning approaches such as self-learning and co-training were introduced in other domains. The heuristic approach of self-learning (also known as self-training) is one of the oldest approaches in semi-supervised learning and was introduced in [28]. Self-learning has been used in many applications such as object detection [22], word sense disambiguation [23] and subjective noun identification [24]. Co-training is a semi-supervised approach that appeared in [29] and [30] and is also used in applications such as word sense disambiguation [31] and email classification [32].
In this paper, self-learning and co-training approaches are
used. Both approaches use unlabeled sets to enhance classifiers
accuracy. Co-training also enhances the results by combining
both miRNA and gene expression sets. The results show
improvements over Random Forests and SVM classifiers and
LDS approach.
III. SELF-LEARNING AND CO-TRAINING ADAPTATION TO MIRNA/GENE BASED CLASSIFICATION
In self-learning and co-training, the objective is to construct
a classifier to discriminate between different cancer subtypes,
given the following:
The expression vector of a sample i, denoted xi, is
defined as follows:
𝑥𝑖 = {𝑒𝑖1 , 𝑒𝑖2 , … , 𝑒𝑖𝑗 , … , 𝑒𝑖𝑀 }
Where 𝑒𝑖𝑗 is the expression value of the jth
miRNA/gene, and M is the number of
miRNAs/genes.
N is the number of samples.
Two sets are used in both self-learning and co-training, which are defined as follows:
A set of labeled samples L; L = {(xi, yi)}, i = 1, ..., N, where yi is the cancer subtype label.
A set of unlabeled samples U; U = {xi}, i = 1, ..., N.
The size of U is expected to be much larger than that of L (|U| >> |L|), which is expected to help enhance the accuracy of the classifiers by adding more expression vectors to the training data. Increasing the number of unlabeled sets leads to higher enrichment of the training set. Moreover, increasing the overlap between the miRNAs/genes in the labeled and unlabeled sets increases the effect of adding the unlabeled sets.
Fig 1. Self-Learning approach overview.
Self-learning [28] is a semi-supervised machine learning
approach, in which the labeled set L is used to build the initial
classifier and the unlabeled set U is utilized to enhance its
accuracy by adding the unlabeled samples with the highest
classification confidence to the training set, thus resulting in
making the classifier learns based on its own decision. Cotraining ([29], [30]) is also a semi-supervised machine learning
approach, which requires two views of the data. Two classifiers
are constructed separately for each view. Each classifier is then
used to train the other one by classifying unlabeled samples and
train the other view with the samples with highest classification
confidence.
The next sections explain how the two approaches are
adapted to use the unlabeled set U to enhance the baseline
classifier constructed based on L.
A. Self-Learning Adaptation
The steps of adapting the self-learning approach are described as follows:
a) Train an initial cancer subtype classifier using set L.
b) Use the initial classifier to identify the subtype labels of the unlabeled set U.
c) Choose the most confident subset of cancer samples (U'), i.e. samples classified with a confidence greater than a given threshold (α).
d) Append the set of most confident samples to the initial
training dataset to form a new training set (U' ∪ L) for
re-training the classifier.
e) Use the classifier constructed at step d to perform
several iterations over the unlabeled set(s). At each
iteration, re-apply steps b, c and d.
The resulting classifier can then be used to classify new
samples based on their miRNA/gene expression profiles.
The confidence threshold α should be appropriately selected.
Decreasing α can increase the false positive rate. On the other hand, increasing α can restrict the learning process to the highly confident samples, typically the ones that are most similar to the training data, thus losing the benefit of including more training samples in the labeled data. Tuning the parameter α is thus important, since it affects both which samples are added and, in turn, the accuracy of the resulting classifier.
The next section explains the co-training idea and its adaptation in detail.
B. Co-Training Adaptation
In this paper, the co-training approach is adapted to classify
cancer subtypes by training two different classifiers; the first is
based on the gene expression view and the second is based on
the miRNA expression view. Each view captures a different
perspective of the underlying biology of cancer and integrating
them using the co-training pipeline exploits this information
diversity to enhance the classification accuracy. The following
steps describes co-training in details:
a)
Two initial cancer classifiers are separately
constructed; one from the miRNA expression
dataset (LmiRNA) and another one from the gene
expression dataset (Lgene) using manually labeled
cancer subtypes sets.
b) Let the initial classifiers separately classify the
unlabeled cancer miRNA/gene expression datasets
(UmiRNA/ Ugene) into cancer subtypes.
c)
Choose the most confident labeled subtypes
samples (U'miRNA& U'gene) that have classification
scores greater than α.
d) Retrieve miRNA-gene relations using miRanda.
For the classifiers to train each other, miRNA
expression should be mapped to gene expression
and vice versa. miRNAs and their target genes
databases are used to map the datasets. In our case,
miRanda [33] database is used.
e)
Append the mapped miRNA expression sets to the
gene expression training sets and the mapped gene
expression sets to the miRNA expression training
sets and re-train the classifiers.
f)
Use the classifier constructed at step e to perform
several iterations over the unlabeled set(s). At each
iteration, re-apply steps b, c, d and e.
g) In step d, a mapping between the miRNA view and gene view is required. As shown in figure 2, miRNAs and their target genes are related by a many to many relationship; multiple miRNAs target the same gene, and multiple genes are targeted by the same miRNA. For the classifier to exploit the two views, i.e. gene and miRNA sets, a miRNA expression vector is constructed from its target genes' expression vectors. Due to the many to many relationship between miRNAs and genes, it is suggested to use an aggregation of all expression vectors of the target genes to represent the miRNA expression vector. Similarly, a gene expression vector is constructed by aggregating the expression vectors of the miRNAs that target this gene. To map a gene to a miRNA, or the opposite, it is proposed to take the mean expression value of all miRNAs related to a gene, or, in the opposite direction, the mean expression value of all genes related to a miRNA. Experimental results show that taking the mean value of expressions improved the classification accuracy. Part of the future work would be investigating the effect of using other methods as a mapping function.
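For concreteness, letting G(m) denote the set of target genes of miRNA m according to miRanda, and M(g) the set of miRNAs targeting gene g, this mean mapping can be written (in notation introduced only for this formula) as
e'(i, m) = (1 / |G(m)|) Σ_{g ∈ G(m)} e(i, g)   and   e'(i, g) = (1 / |M(g)|) Σ_{m ∈ M(g)} e(i, m),
where e(i, ·) is the measured expression value in sample i and e'(i, ·) is the mapped value appended to the other view's training set.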
After the co-training process, the two classifiers can be
used independently, one on gene expression profile and the
other on miRNA expression profile of cancer samples. The
next section shows the experimental results of both self-learning and co-training approaches.
Fig 2. miRNAs and their target genes are related
by a many to many relationship. The first column
represents miRNAs and the second column
represents target genes ids.
Co-training (LmiRNA, UmiRNA, Lgene, Ugene, α)
Inputs: miRNA expression profile of the labeled set (LmiRNA), miRNA expression profiles of the unlabeled sets (UmiRNA), gene expression profile of the labeled set (Lgene), gene expression profiles of the unlabeled sets (Ugene) and confidence threshold (α)
Outputs: Two classifiers (CmiRNA) and (Cgene) that can separately classify cancer samples.
Begin
Repeat {
1. CmiRNA = TrainClassifier (LmiRNA)
2. Cgene = TrainClassifier (Lgene)
3. L'miRNA = Classify (CmiRNA, UmiRNA)
4. LαmiRNA = ChooseMostConfident (L'miRNA, α)
5. L'gene = Classify (Cgene, Ugene)
6. Lαgene = ChooseMostConfident (L'gene, α)
7. LmiRNA = LmiRNA U ConvertToMiRNAs (Lαgene)
8. Lgene = Lgene U ConvertToGenes (LαmiRNA)
} Until (no improvement in classification accuracy or reaching max iterations)
End
Fig 3. Pseudo code of co-training approach.
IV. EXPERIMENTAL RESULTS
Two core classifiers of self-learning and co-training were
used, which are Random Forests and SVM. RF is a known
classifier ensemble method [34] based on constructing multiple
decision trees. Each decision tree is built on a bootstrap sample
of the training data using a randomly selected subset of features.
For predicting a sample's label, a majority vote based on the
classification obtained from the different decision trees is
calculated. RF has been used in classifying cancer in [14], [15]
and [16]. RF implementation from the Weka repository [35] was
used, and the number of decision trees was set to 10. SVM
implementation was also used from the Weka repository [35].
The approaches were evaluated using three cancer types,
namely breast cancer, hepatocellular carcinoma (HCC) and lung
cancer. miRNA based classifiers were constructed for breast
cancer and HCC sets, while gene based classifiers were
constructed for all 3 sets. In addition, self-learning and co-training were compared against LDS in breast cancer and HCC.
LDS Matlab implementation was downloaded from [41]. Tables
1 and 2 show the size of the training and testing sets for each
cancer type according to its subtypes. All miRNA and gene
expression profiles were downloaded from NCBI [36].
Moreover, table 3 shows sample size and miRNA/gene numbers
in the unlabeled sets.
Table 1. Training and testing sample sizes for breast cancer and HCC subtypes using miRNA expression. (NM = non-metastatic and M = metastatic)
          Breast Cancer (GSE15885)                    HCC (GSE6857)
Subtype   ER+/Her2-  ER-/Her2+  ER-/Her2-  ER+/Her2+  NM    M
Train     8          2          5          1          193   62
Test      7          2          4          0          162   65
Fig 4. Training data size comparison of initial RF classifier against self-learning and co-training on breast cancer using miRNA expression sets. (Class 0 is ER+/Her2-, class 1 is ER-/Her2+, class 2 is ER-/Her2- and class 3 is ER+/Her2+) [Bar chart: number of training samples per class for the initial RF classifier, self-learning, and co-training.]
Table 2. Training and testing sample size for breast cancer, HCC and lung cancer subtypes using gene expression. (NM = non-metastatic, M = metastatic, A = adenocarcinoma and S = squamous)
          Breast Cancer (GSE20713)                    HCC          Lung
Subtype   ER+/Her2-  ER-/Her2+  ER-/Her2-  ER+/Her2+  NM    M      A    S
Train     17         9          14         4          98    118    87   41
Test      17         8          14         4          98    120    86   40
Table 3. Sample size and miRNA/gene numbers of unlabeled sets.
                           Sample Size   miRNA/gene Numbers
Breast Cancer (GSE26659)   94            237 miRNAs
Breast Cancer (GSE35412)   35            378 miRNAs
Breast Cancer (GSE16179)   19            54675 genes
Breast Cancer (GSE22865)   12            54675 genes
Breast Cancer (GSE32161)   6             54675 genes
HCC (GSE10694)             78            121 miRNAs
HCC (GSE15765)             90            42718 genes
Lung Cancer (GSE42127)     176           48803 genes
Table 4. Precision, recall and F1-measure for breast cancer subtypes RF classifiers using miRNA expression and gene expression dataset.
                           miRNAs based classifier   Genes based classifier
                           P     R     F1            P     R     F1
Baseline classifier        28.9  53.9  37.7          40.8  44.2  42.5
Self-learning iteration 1  31.4  53.9  39.7          54.6  58.1  56.3
Self-learning iteration 2  46.8  61.5  53.2          -     -     -
Co-training                56.9  61.5  59.1          44.7  53.5  48.7
Table 5. Precision, recall and F1-measure for breast cancer subtypes SVM classifiers using miRNA expression and gene expression dataset.
miRNAs based classifier
                          P     R     F1
Baseline SVM classifier   24.5  38.5  29.9
LDS                       42.3  30.8  35.6
Self-Learning             31.4  53.8  39.7
Co-Training               62.2  61.5  61.9
A. Breast Cancer
Breast cancer is a heterogeneous disease that has a range
of phenotypically distinct tumor types. This heterogeneity has
an underlying spectrum of molecular alterations and initiating
events that was found clinically through a diversity of disease
presentations and outcomes [21]. Due to the complexity of
this type of tumor, there is a demand for an efficient
classification of breast cancer samples.
For breast cancer, both self-learning and co-training are
used. Self-learning was applied for both miRNA and gene
based classifiers. For sample classification using miRNA
expression dataset, an initial breast cancer subtype labeled
dataset (GSE15885) was used to build an initial cancer
subtype classifier. The initial classifier was then used to
predict the labels of the unlabeled breast cancer subtypes
(GSE26659 and GSE35412). Two iterations were performed
over the two unlabeled datasets. The confident samples, the
ones with classification confidence (α) greater than 0.9 were
added to the training dataset and the subtype classifier was
re-trained. The same operation was repeated for sample
classification using gene expression dataset where the initial
dataset (GSE20713) was used to build an initial classifier
and the unknown subtype breast cancer (GSE16179) was
used to enrich it. Table 4 shows the precision, recall and F1-measure enhancement against the RF classifier. The results
show 12% improvement in F1-measure of breast cancer
subtype classifier using miRNA expression profiles and 6%
improvement in F1-measure of breast cancer subtype
classifier using gene expression profiles. Moreover, table 5
shows the enhancement over SVM and LDS classifiers; only miRNA expression profiles were used in this comparison, as LDS requires a lot of memory and thus could not be used with a large number of genes. The table shows that self-learning achieved 10% improvement in F1-measure over
SVM classifier and 4% improvement in F1-measure over
LDS classifier.
Co-training was evaluated in breast cancer subtypes in
both miRNA expression and gene expression. To enhance
sample classification using miRNA expression, one labeled
miRNA expression dataset (GSE15885) is used. One labeled
gene expression dataset (GSE20713) and three unlabeled
gene expression datasets (GSE16179, GSE22865 and
GSE32161) are mapped into miRNA expression values (as
explained in subsection B of section III). In addition, to
enhance sample classification using gene expression, one
labeled gene expression dataset (GSE20713) is used. One
labeled miRNA expression dataset (GSE15885) and two
unlabeled miRNA expression datasets (GSE26659 and
GSE35412) are mapped into gene expression values and
added to the gene training dataset. Table 4 shows the
significant improvements in F1-measure using co-training over the RF classifier. Increments up to 21% and 8% in F1-measure are observed when using miRNA expression
profiles and gene expression profiles respectively.
Moreover, table 5 shows the enhancement of co-training
over SVM and LDS classifiers, co-training was able to
enhance the F1-measure by around 25% over the LDS
classifier.
To have a closer look at the behavior of the methods, the amount of training data for each class is determined and shown in figure 4. The figure shows that co-training was able to enrich the training data in all 4 classes, which is reflected in the highest improvement in the results, while self-learning was able to enrich the training set in class 0.
B. HCC
Hepatocellular carcinoma (HCC) is a cancer with an extremely poor prognosis that remains one of the
most common and aggressive human malignancies
worldwide ([37], [38]). Metastasis is a complex process that
involves multiple alterations ([39], [40]), that is why
discriminating metastasis and non-metastasis HCC is a
challenging problem.
For HCC, both self-learning and co-training approaches
were evaluated to discriminate between metastatic and non-metastatic HCC. The self-learning steps are applied using
GSE6857 as an initial labeled miRNA expression dataset
and GSE10694 as the unlabeled subtypes HCC samples.
Also, GSE36376 was used as initial labeled gene expression
datasets and GSE15765 as the unlabeled subtypes HCC
samples. For co-training, to enhance sample subtype
classifier using miRNA expression, one labeled miRNA
expression dataset (GSE6857) is used. One labeled gene
expression dataset (GSE36376) and one unlabeled gene
expression datasets (GSE15765) are mapped into miRNA
expression values and added to the miRNA training datasets
and the sample subtype classifiers are re-trained. Also, in
order to enhance the sample classification using gene
expression, one labeled gene expression dataset (GSE36376)
is used. One labeled miRNA expression dataset (GSE6857)
and one unlabeled miRNA expression datasets (GSE10694)
are mapped into gene expression datasets and added to the
gene training dataset.
Table 6 shows detailed results for HCC subtype
classification using the RF core classifier; there is around 10%
improvement in precision of HCC metastasis class using
miRNA expression sets and around 2% in F1-measure using
gene expression sets. Moreover, table 7 shows the
improvement of the techniques over SVM and LDS
classifiers. Co-training achieved 5% enhancement in recall
over SVM classifier and 6% enhancement in F1-measure
over the LDS classifier. The improvement in HCC is smaller than in breast cancer, as the number of unlabeled sets used for breast cancer is larger. Also, the overlap between the miRNAs and genes of the initial set and those of the added sets
is an important factor. In order to understand why
enhancements in breast cancer were more significant, the
number of overlapping miRNAs and genes is calculated.
Tables 8 and 9 show that the higher the overlap between the
miRNAs and genes of the initial set and those of the added
sets, the higher the improvements become.
C. Lung Cancer
Lung cancer is the leading cause of cancer-related death in
both men and women worldwide; it results in over 160,000
deaths annually [8]. Only self-learning using gene
expression dataset was evaluated in lung cancer as no labeled
miRNA expression dataset was found on the web. The aim
of the cancer subtype classifier is to discriminate between
adenocarcinoma and squamous lung cancer subtypes. The
labeled gene expression dataset (GSE41271) was used to
build an initial classifier and the unlabeled gene expression
dataset (GSE42127) was used to enhance it. Table 10 shows
the enhancement achieved by self-learning, which is around
3% improvement in F1-measure of squamous lung cancer
class.
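The per-class and weighted precision, recall and F1 values reported in the tables that follow correspond to the standard definitions; the sketch below (using scikit-learn purely for illustration, with label arrays as assumed inputs) shows how such an evaluation can be produced.

# Per-class and weighted precision/recall/F1, as reported in the result tables.
from sklearn.metrics import precision_recall_fscore_support

def report(y_true, y_pred, labels):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                                  labels=labels, zero_division=0)
    for lab, pi, ri, fi in zip(labels, p, r, f1):
        print(f"class {lab}: P={100*pi:.1f} R={100*ri:.1f} F1={100*fi:.1f}")
    wp, wr, wf1, _ = precision_recall_fscore_support(y_true, y_pred, labels=labels,
                                                     average="weighted", zero_division=0)
    print(f"weighted: P={100*wp:.1f} R={100*wr:.1f} F1={100*wf1:.1f}")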
Table 10. Results of lung cancer RF subtype classifiers using the gene expression dataset.

                            Class A          Class S          Weighted Evaluation
                            P    R    F1     P    R    F1     P    R    F1
Genes initial classifier    83   95   89     83   54   65     83   83   83
Genes self-learning         84   96   90     87   56   68     85   --   --

Table 6. Results of HCC RF subtype classifiers using the miRNA/gene expression datasets.

                            Class NM            Class M           Weighted Evaluation
                            P     R    F1       P    R    F1      P    R    F1
miRNA initial classifier    75    93   83       59   24   34      70   73   72
miRNA self-learning         76    95   84       69   24   36      74   75   --
miRNA co-training           76.6  95   84.9     69   27   39      74   75   75
Genes initial classifier    95    98   96       98   95   97      96   96   96
Genes self-learning         100   96   98       97   100  98      98   98   98
Table 8. The number of overlapping miRNAs and genes between the initial datasets and the added datasets in breast cancer.

                                    GSE15885  GSE26659  GSE35412  GSE20713  GSE16179
miRNAs initial dataset (GSE15885)   336       124       183       157       157
Genes initial dataset (GSE20713)    7         7         7         54676     54676
Table 9. The number of overlapping miRNAs and genes between the initial datasets and the added datasets in HCC.

                                    GSE10694  GSE36376  GSE15765
miRNAs initial dataset (GSE6857)    36        52        52
Genes initial dataset (GSE36376)    --        47323     37282
V. CONCLUSION
In this paper, two semi-supervised machine learning approaches were adapted to classify cancer subtypes based on miRNA and gene expression profiles. They both exploit the expression profiles of unlabeled samples to enrich the training data. The miRNA-gene relation is additionally used to enhance the classification in co-training. Both the self-learning and co-training approaches improved the accuracy compared to Random Forests and SVM as baseline classifiers. The results show up to a 20% improvement in F1-measure in breast cancer, a 10% improvement in precision for metastatic HCC and a 3% improvement in F1-measure for squamous lung cancer. Co-training also outperforms the Low Density Separation (LDS) approach by around a 25% improvement in F1-measure in breast cancer.
Table 7. Results of HCC subtype SVM classifiers using the miRNA expression dataset.

                          P      R      F1
Baseline SVM classifier   66.5   66.1   66.3
LDS                       61.2   66.9   63.9
Self-Learning             62.2   61.5   61.9
Co-Training               67.7   71.4   69.5
REFERENCES
[1]
Y. Katayama, M. Maeda, K. Miyaguchi, S. Nemoto, M. Yasen, S. Tanaka,
H. Mizushima, Y. Fukuoka, S. Arii and H. Tanaka. Identification of
pathogenesis-related microRNAs in hepatocellular carcinoma by
expression profiling. Oncol. Lett. 4(4): 817–823. October 2012.
[2] S. Volinia, G. Calin and C. Liu. A microRNA expression signature of
human solid tumors defines cancer gene targets. Proceedings Natl. Acad.
Sci. USA, 103:2257–2261, 2006.
[3] N. Yanaihara, N. Caplen and E. Bowman. Unique microRNA molecular profiles in lung cancer diagnosis and prognosis. Cancer Cell, 9:189-198, 2006.
[4] EJ. Lee, Y. Gusev and J. Jiang. Expression profiling identifies microRNA
signature in pancreatic cancer. Int. J. Cancer, 120:1046–1054, 2007.
[5] Y. Murakami, T. Yasuda and K. Saigo. Comprehensive analysis of microRNA expression patterns in hepatocellular carcinoma and non-tumorous tissues. Oncogene, 25:2537-2545, 2006.
[6] A. Budhu, H. Jia and M. Forgues. Identification of metastasis-related
microRNAs in hepatocellular carcinoma. Hepatology, 47:897–907, 2008.
[7] H. Varnholt, U. Drebber and F. Schulze. MicroRNA gene expression profile of hepatitis C virus-associated hepatocellular carcinoma. Hepatology, 47:1223-1232, 2008.
[8] J. A. Bishop, H. Benjamin, H. Cholakh, A. Chajut, D. P. Clark and W. H. Westra. Accurate Classification of Non-Small Cell Lung Carcinoma Using a Novel MicroRNA-Based Approach. Clin. Cancer Res., 16(2):610-619, 2010.
[9] N. Rosenfeld, R. Aharonov, E. Meiri, S. Rosenwald, Y. Spector, M. Zepeniuk, H. Benjamin, N. Shabes, S. Tabak, A. Levy, D. Lebanony, Y. Goren, E. Silberschein, N. Targan, A. Ben-Ari, S. Gilad, N. Sion-Vardy, A. Tobar, M. Feinmesser, O. Kharenko, O. Nativ, D. Nass, M. Perelman, A. Yosepovich, B. Shalmon, S. Polak-Charcon, E. Fridman, A. Avniel, I. Bentwich, Z. Bentwich, D. Cohen, A. Chajut and I. Barshack. MicroRNAs accurately identify cancer tissue origin. Nat. Biotech., 26(4):462-469, 2008.
[10] J. Lu, G. Getz, E. A. Miska, E. Alvarez-Saavedra, J. Lamb, D. Peck, A.
Sweet-Cordero, B. L. Ebert, R. H. Mak, A. A. Ferrando, J. R. Downing,
T. Jacks, H. Robert Horvitz and T. R. Golub. MicroRNA expression profiles classify human cancers. Nature, 435:834-838, 2005.
[11] K. Kim and S. Cho. Exploring Features and Classifiers to Classify MicroRNA Expression Profiles of Human Cancer. International Conference on Neural Information Processing (ICONIP), 2(10):234-241, 2010.
[12] Y. Wang and M. H. Dunham. Classifier fusion for poorly-differentiated tumor classification using both messenger RNA and microRNA expression profiles. In Proceedings of the 2006 Computational Systems Bioinformatics Conference (CSB 2006), Stanford, California, 2006.
[13] Y. Zheng and C. Keong Kwoh. Cancer Classification with MicroRNA Expression Patterns Found By An Information Theory Approach. Journal of Computers (JCP), 1(5):30-39, 2006.
[14] O. Okun and H. Priisalu. Random Forest for Gene Expression Based Cancer Classification: Overlooked Issues. Pattern Recognition and Image Analysis, Volume 4478, pp. 483-490, 2007.
[15] M. Klassen, M. Cummings and G. Saldana. Investigation of random forest performance with cancer microarray data. In Proceedings of the ISCA 23rd International Conference on Computers and Their Applications, CATA 2008, April 9-11, 2008, Cancun, Mexico, pp. 64-69, ISCA, 2008.
[16] R. Diaz-Uriarte and S. Alvarez de Andres. Gene selection and classification of microarray data using random forest. BMC Bioinformatics, 7:3, 2006.
[17] H. Chen, B. Yang, J. Liu and D. Liu. MicroRNA Signatures Predict Oestrogen Receptor, Progesterone Receptor and HER2/neu Receptor Status in Breast Cancer. Expert Syst. Appl., 38:9014-9022, 2011.
[18] R. Upstill-Goddard, D. Eccles, S. Ennis, S. Rafiq, W. Tapper, J. Fliege and A. Collins. Support Vector Machine Classifier for Estrogen Receptor Positive and Negative Early-Onset Breast Cancer. PLoS ONE, 8(7):e68606, 2013.
[19] D. Johnson, L. Fehrenbacher and W. Novotny. Randomized phase II trial comparing bevacizumab plus carboplatin and paclitaxel with carboplatin and paclitaxel alone in previously untreated locally advanced or metastatic non-small-cell lung cancer. J. Clin. Oncol., 22:2184-2191, 2004.
[20] M. Cohen, J. Gootenberg, P. Keegan and R. Pazdur. FDA drug approval summary: bevacizumab (Avastin) plus carboplatin and paclitaxel as first-line treatment of advanced/metastatic recurrent nonsquamous non-small cell lung cancer. Oncologist, 12:713-718, 2007.
[21] A. J. Lowery, N. Miller, A. Devaney, R. E. McNeill, P. A. Davoren, C. Lemetre, V. Benes, S. Schmidt, J. Blake, G. Ball and M. J. Kerin. MicroRNA Signatures Predict Oestrogen Receptor, Progesterone Receptor and HER2/neu Receptor Status in Breast Cancer. Breast Cancer Res., 11(3), 2009.
[22] C. Rosenberg, M. Hebert and H. Schneiderman. Semi-Supervised Self-Training of Object Detection Models. 7th IEEE Workshop on Applications of Computer Vision / IEEE Workshop on Motion and Video Computing (WACV/MOTION), pp. 29-36, Breckenridge, CO, USA, 2005.
[23] R. Mihalcea. Co-training and Self-training for Word Sense Disambiguation. In Proceedings of CoNLL, pp. 33-40, Boston, MA, USA, 2004.
[24] E. Riloff, J. Wiebe and T. Wilson. Learning subjective nouns using extraction pattern bootstrapping. Proceedings of the Seventh Conference on Natural Language Learning (CoNLL), pp. 25-32, Edmonton, Canada, 2003.
[25] J. Weston. Semi-supervised protein classification using cluster kernels.
Bioinformatics, 21, 3241-3247, 2005.
[26] J. Ernst. Semi-supervised method for predicting transcription factor-gene interactions in Escherichia coli. PLoS Comput. Biol., 4(3):e1000044, 2008.
[27] M. Shi and B. Zhang. Semi-supervised learning improves gene
expression-based prediction of cancer recurrence. Bioinformatics,
27(21):3017-3023, 2011.
[28] O. Chapelle, B. Schölkopf and A. Zien. Semi-supervised learning.
Cambridge, Mass., MIT Press, 2006.
[29] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. Proceedings of the Workshop on Computational Learning Theory (COLT), pp. 92-100, Wisconsin, USA, 1998.
[30] T. Mitchell. The role of unlabeled data in supervised learning.
Proceedings of the Sixth International Colloquium on Cognitive Science
(ICCS), San Sebastian, Spain, 1999.
[31] R. Mihalcea. Co-training and Self-training for Word Sense
Disambiguation. In Proceedings of the Conference on Natural Language
Learning (CoNLL), pp. 33-40, Boston, USA, May 2004.
[32] S. Kiritchenko and S. Matwin. Email Classification with Co-Training. In Proceedings of the Conference of the Center for Advanced Studies on Collaborative Research (CASCON), pp. 301-312, 2011.
[33] B. John, AJ. Enright, A. Aravin, T. Tuschl, C. Sander and D. Marks.
miRanda application: Human MicroRNA targets. PLoS Biol.
Jul;3(7):e264, 2005.
[34] L. Breiman. Random Forests. Machine Learning, 45(1):5-32, 2001.
[35] http://www.cs.waikato.ac.nz/ml/weka/
[36] http://www.ncbi.nlm.nih.gov/geo
[37] S. Thorgeirsson and J. Grisham. Molecular pathogenesis of human
hepatocellular carcinoma. Nat. Genet. 31: 339–346, 2002.
[38] D. Parkin, F. Bray, J. Ferlay and P. Pisani. Global cancer statistics, 2002.
CA Cancer J. Clin., 55: 74–108, 2005.
[39] K.Yuki, S. Hirohashi, M. Sakamoto, T. Kanai and Y. Shimosato. Growth
and spread of hepatocellular carcinoma. A review of 240 consecutive
autopsy cases. Cancer .66: 2174–2179, 1990.
[40] A. Chambers, A. Groom and I. MacDonald. Dissemination and growth of
cancer cells in metastatic sites. Nat. Rev. Cancer, 2: 563–572, 2002.
[41] http://olivier.chapelle.cc/lds/
[42] X. Zhu. Semi-Supervised Learning Literature Survey. Computer
Sciences, University of Wisconsin-Madison, 2008.
arXiv:1706.08791v1 [] 27 Jun 2017
ON ORTHOGONAL HYPERGEOMETRIC GROUPS OF DEGREE FIVE
JITENDRA BAJPAI AND SANDIP SINGH
Abstract. A computation shows that there are 77 (up to scalar shifts) possible pairs of integer
coefficient polynomials of degree five, having roots of unity as their roots, and satisfying the conditions of Beukers and Heckman [1], so that the Zariski closures of the associated monodromy groups
are either finite or the orthogonal groups of non-degenerate and non-positive quadratic forms. Following the criterion of Beukers and Heckman [1], it is easy to check that only 4 of these pairs
correspond to finite monodromy groups and only 17 pairs correspond to monodromy groups, for
which, the Zariski closure has real rank one.
There remain 56 pairs for which the Zariski closures of the associated monodromy groups have real rank two. It follows from Venkataramana [16] that 11 of these 56 pairs correspond to arithmetic monodromy groups, and the arithmeticity of 2 other cases follows from Singh [11]. In this article, we show that 23 of the remaining 43 rank two cases correspond to arithmetic groups.
1. Introduction
We consider the monodromy action of the fundamental group π1 of P1(C)\{0, 1, ∞} on the solution space of the nFn−1 type hypergeometric differential equation

(1.1)    D(α; β)w = 0

on P1(C), having regular singularities at the points {0, 1, ∞}, where the differential operator D(α; β) is defined as follows:

D(α; β) := (θ + β1 − 1) · · · (θ + βn − 1) − z(θ + α1) · · · (θ + αn),

where θ = z d/dz, and α = (α1, α2, . . . , αn), β = (β1, β2, . . . , βn) ∈ Q^n.
The monodromy groups of the hypergeometric differential equations (cf. Equation (1.1)) are also
called the hypergeometric groups, which are characterized by the following theorem of Levelt ([6];
cf. [1, Theorem 3.5]):
Theorem 1 (Levelt). If α1 , α2 , . . . , αn , β1 , β2 , . . . , βn ∈ Q such that αj − βk ∉ Z, for all j, k =
1, 2, . . . , n, then there exists a basis of the solution space of the differential equation (1.1), with
respect to which, the hypergeometric group is a subgroup of GLn (C) generated by the companion
2010 Mathematics Subject Classification. Primary: 22E40; Secondary: 32S40; 33C80.
Key words and phrases. Hypergeometric group, Monodromy representation, Orthogonal group.
matrices A and B of the polynomials
(1.2)    f(X) = ∏_{j=1}^{n} (X − e^{2πiαj})    and    g(X) = ∏_{j=1}^{n} (X − e^{2πiβj}),

respectively, and the monodromy is defined by g∞ ↦ A, g0 ↦ B⁻¹, g1 ↦ A⁻¹B, where g0, g1, g∞ are, respectively, the loops around 0, 1, ∞, which generate the π1 of P1(C)\{0, 1, ∞} modulo the relation g∞g1g0 = 1.
We now denote the hypergeometric group by Γ(f, g) which is a subgroup of GLn (C) generated by
the companion matrices of the polynomials f, g. We consider the cases where the coefficients of f, g
are integers with f (0) = ±1, g(0) = ±1 (for example, one may take f, g as product of cyclotomic
polynomials); in these cases, Γ(f, g) ⊂ GLn (Z). We also assume that f, g form a primitive pair [1,
Definition 5.1] and do not have any common root.
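For concreteness, the generators of Theorem 1 can be written down directly from f and g. The following sketch (Python/SymPy, purely an illustration; the paper's own computations are done in Maple) uses one standard companion-matrix convention, possibly the transpose of the matrices displayed later in the paper, and takes the polynomials of Case 21 of Table 4 only as an example.

# Companion matrices A, B of f, g; Gamma(f, g) is the group they generate,
# and C = A^{-1} B is the reflection corresponding to g_1.
import sympy as sp

x = sp.symbols('x')

def companion(poly):
    p = sp.Poly(poly, x)
    n = p.degree()
    c = p.all_coeffs()            # [1, a_{n-1}, ..., a_0] for a monic polynomial
    M = sp.zeros(n, n)
    for i in range(1, n):
        M[i, i - 1] = 1           # subdiagonal of ones
    for i in range(n):
        M[i, n - 1] = -c[n - i]   # last column carries minus the coefficients
    return M

f = x**5 - 2*x**4 + x**3 - x**2 + 2*x - 1
g = x**5 + x**4 + 2*x**3 + 2*x**2 + x + 1
A, B = companion(f), companion(g)
C = A.inv() * B
assert (C - sp.eye(5)).rank() == 1   # C fixes a hyperplane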
Beukers and Heckman [1, Theorem 6.5] have completely determined the Zariski closures G of
the hypergeometric groups Γ(f, g) which is briefly summarized as follows:
• If n is even and f(0) = g(0) = 1, then the hypergeometric group Γ(f, g) preserves a non-degenerate integral symplectic form Ω on Z^n and Γ(f, g) ⊂ SpΩ(Z) is Zariski dense, that is, G = SpΩ.
• If Γ(f, g) is infinite and f(0)/g(0) = −1, then Γ(f, g) preserves a non-degenerate integral quadratic form Q on Z^n and Γ(f, g) ⊂ OQ(Z) is Zariski dense, that is, G = OQ.
• It follows from [1, Corollary 4.7] that Γ(f, g) is finite if and only if either α1 < β1 < α2 <
β2 < · · · < αn < βn or β1 < α1 < β2 < α2 < · · · < βn < αn ; and in this case, we say that
the roots of f and g interlace on the unit circle.
We call a hypergeometric group Γ(f, g) arithmetic, if it is of finite index in G(Z); and thin, if it
has infinite index in G(Z) [9].
The following question of Sarnak [9] has drawn the attention of many people: determine the pairs of polynomials f, g for which the associated hypergeometric group Γ(f, g) is arithmetic. There has been some progress in answering this question.
For the symplectic cases: infinitely many arithmetic Γ(f, g) in Spn (for any even n) are given by
Singh and Venkataramana in [13]; some other examples of arithmetic Γ(f, g) in Sp4 are given by
Singh in [10] and [12]; 7 examples of thin Γ(f, g) in Sp4 are given by Brav and Thomas in [2]; and
in [3], Hofmann and van Straten have determined the index of Γ(f, g) in Sp4 (Z) for some of the
arithmetic cases of [10] and [13].
For the orthogonal cases: when the quadratic form Q has signature (n − 1, 1), Fuchs, Meiri and
Sarnak give 7 (infinite) families (depending on n odd and ≥ 5) of thin Γ(f, g) in [5]; when the
quadratic form Q has signature (p, q) with p, q ≥ 2, 11 (infinite) families of arithmetic Γ(f, g) are
given by Venkataramana in [16]; an example of thin Γ(f, g) in O(2, 2) is given by Fuchs in [4];
and 2 examples of arithmetic Γ(f, g) in O(3, 2) are given by Singh in [11], which deals with the
14 orthogonal hypergeometric groups of degree five with a maximally unipotent monodromy, and
these cases were inspired by the 14 symplectic hypergeometric groups associated to Calabi-Yau
threefolds [10].
This article resulted from our effort to first write down all possible pairs f, g (up to scalar shifts)
of degree five integer coefficient polynomials, having roots of unity as their roots, forming a primitive
pair and satisfying the condition f(0) = −1 and g(0) = 1; and then to determine the arithmeticity or thinness of the associated hypergeometric groups Γ(f, g). Note that,
these pairs of the polynomials f, g satisfy the conditions of Beukers and Heckman [1] and hence
the associated hypergeometric group Γ(f, g) is either finite or preserves a non-degenerate integral
quadratic form Q on Z5 , and Γ(f, g) ⊂ OQ (Z) is Zariski dense in the orthogonal group OQ of the
quadratic form Q.
It is clear that there are finitely many pairs of degree five integer coefficient polynomials having
roots of unity as their roots; among these pairs we find that there are 77 pairs (cf. Tables 1, 2,
3, 4, 5, 6, 7) which satisfy the conditions of the last paragraph. Now, we consider the question to
determine the arithmeticity or thinness of the associated orthogonal hypergeometric groups Γ(f, g).
We note that the roots of the polynomials f and g interlace for the 4 pairs f, g of Table 5
and hence the hypergeometric groups associated to these pairs are finite [1, Corollary 4.7]. The
quadratic forms associated to the 17 pairs of Tables 1 and 7 have signature (4, 1), and the thinness
of Γ(f, g) associated to the 7 pairs of Table 1 follows from Fuchs, Meiri, and Sarnak [5]. The
arithmeticity or thinness of Γ(f, g) associated to the other 10 pairs of Table 7 is still unknown.
We also note that the quadratic forms associated to the remaining 56 pairs (cf. Tables 2, 3, 4, 6)
have the signature (3, 2) and the arithmeticity of Γ(f, g) associated to the 11 pairs of Table 2 follows
from Venkataramana [16]; and the arithmeticity of Γ(f, g) associated to the other 2 pairs of Table 3
follows from Singh [11]. Therefore, before this article, the arithmeticity of only 13 hypergeometric groups
Γ(f, g) ⊂ O(3, 2) was known and the arithmeticity or thinness of the remaining 43 hypergeometric
groups Γ(f, g) ⊂ O(3, 2) was unknown.
In this article, we show the arithmeticity of more than half of the remaining 43 hypergeometric
groups Γ(f, g) ⊂ O(3, 2). In fact, we obtain the following theorem:
Theorem 2. The hypergeometric groups associated to the 23 pairs of polynomials in Table 4 are
arithmetic.
Therefore, 23 of the remaining 43 pairs correspond to arithmetic hypergeometric groups; and
the arithmeticity or thinness of the hypergeometric groups associated to the remaining 20 pairs (cf.
Table 6) is still unknown.
Acknowledgements
We thank the Max Planck Institute for Mathematics for the postdoctoral fellowships, and for providing us with very pleasant hospitality. We also thank the Mathematisches Forschungsinstitut Oberwolfach, where we met T. N. Venkataramana, and thank him for many discussions on the subject of this paper. We also thank Wadim Zudilin for discussions at MPI.
2. Tables
In this section, we list all possible (up to scalar shifts) pairs f, g of degree five polynomials, which
are products of cyclotomic polynomials, and for which the pair (f, g) forms a primitive pair [1] and f(0)/g(0) = −1, so that the associated monodromy group Γ(f, g) preserves a quadratic form Q, and Γ(f, g) ⊂ OQ(Z) is a Zariski dense subgroup (except for the 4 cases of Table 5). Note that once
we know the parameters α, β, the polynomials f, g can be determined by using the Equation (1.2).
Table 1. List of the 7 monodromy groups, for which, the associated quadratic form
Q has signature (4, 1), and thinness follows from Fuchs, Meiri, and Sarnak [5].
No.
α
1
0, 0, 0, 13 , 23
3
0, 0, 0, 14 , 34
5
0, 31 , 23 , 16 , 65
7
1
3
7
9
0, 10
, 10
, 10
, 10
β
1 1 3 5 7
2, 8, 8, 8, 8
1 1
3
7
9
2 , 10 , 10 , 10 , 10
1 1 2 3 4
2, 5, 5, 5, 5
1 1 3 5 7
2, 8, 8, 8, 8
No.
α
2
0, 0, 0, 31 , 32
4
0, 0, 0, 61 , 65
6
1
3
7
9
0, 10
, 10
, 10
, 10
β
1 1
5
7 11
2 , 12 , 12 , 12 , 12
1 1
3
7
9
2 , 10 , 10 , 10 , 10
1 1 2 1 3
2, 3, 3, 4, 4
Table 2. List of the 11 monodromy groups, for which, the associated quadratic
form Q has signature (3, 2), and arithmeticity follows from Venkataramana [16].
No.
α
8
0, 0, 0, 13 , 3
10
0, 31 , 13 , 23 , 32
12
0, 31 , 13 , 23 , 32
14
0, 31 , 23 , 14 , 43
16
3
7
9
1
, 10
, 10
, 10
0, 10
18
1
5
7 11
0, 12
, 12
, 12
, 12
β
1 1
3
7
9
2 , 10 , 10 , 10 , 10
2
1 1 2 3 4
2, 5, 5, 5, 5
1 1
3
7
9
2 , 10 , 10 , 10 , 10
1 1 2 3 4
2, 5, 5, 5, 5
1 1 1 3 3
2, 4, 4, 4, 4
1 1
3
7
9
2 , 10 , 10 , 10 , 10
No.
α
9
0, 13 , 31 , 23 , 3
11
0, 13 , 31 , 23 , 23
13
0, 13 , 31 , 23 , 23
15
0, 15 , 52 , 35 , 45
17
5
7 11
1
, 12
, 12
, 12
0, 12
β
1 1 1 3 3
2, 4, 4, 4, 4
2
1 1 1 5 5
2, 6, 6, 6, 6
1 1
5
7 11
2 , 12 , 12 , 12 , 12
1 1 1 3 3
2, 4, 4, 4, 4
1 1 1 3 3
2, 4, 4, 4, 4
Table 3. List of the 2 monodromy groups, for which, the associated quadratic form
Q has signature (3, 2), and arithmeticity follows from Singh [11].
No. 19: α = (0, 0, 0, 0, 0), β = (1/2, 1/4, 1/4, 3/4, 3/4)
No. 20: α = (0, 0, 0, 0, 0), β = (1/2, 1/6, 1/6, 5/6, 5/6)
Table 4. List of the 23 monodromy groups, for which, the associated quadratic
form Q has signature (3, 2), and arithmeticity is shown in Section 3 of this paper.
No.
α
No.
β
2
21
0, 0, 0, 13 , 3
23
0, 0, 0, 13 , 23
25
0, 0, 0, 16 , 56
27
0, 31 , 13 , 23 , 32
29
0, 31 , 23 , 14 , 43
31
0, 31 , 23 , 16 , 65
33
0, 51 , 25 , 35 , 54
35
0, 61 , 16 , 56 , 65
37
0, 61 , 16 , 56 , 65
39
0, 81 , 38 , 58 , 87
41
1
5
7 11
0, 12
, 12
, 12
, 12
43
1
5
7 11
0, 12
, 12
, 12
, 12
3
1 1 1 3
2, 4, 4, 4, 4
1 1 1 5 5
2, 6, 6, 6, 6
1 1 1 3 3
2, 4, 4, 4, 4
1 1 3 5 7
2, 8, 8, 8, 8
0, 0, 0, 31 , 3
24
0, 0, 0, 41 , 43
26
0, 13 , 31 , 23 , 23
28
0, 13 , 32 , 14 , 34
30
0, 13 , 32 , 16 , 56
32
0, 13 , 32 , 16 , 56
34
0, 15 , 52 , 35 , 45
36
0, 16 , 61 , 56 , 56
38
0, 18 , 83 , 58 , 78
40
0, 18 , 83 , 58 , 78
42
1
5
7 11
0, 12
, 12
, 12
, 12
1 1
5
7 11
2 , 12 , 12 , 12 , 12
1 1 1 2 2
2, 3, 3, 3, 3
1 1 1 3 3
2, 4, 4, 4, 4
1 1 3 5 7
2, 8, 8, 8, 8
1 1 1 2 2
2, 3, 3, 3, 3
1 1 2 1 3
2, 3, 3, 4, 4
1 1 2 1 3
2, 3, 3, 4, 4
1 1 3 5 7
2, 8, 8, 8, 8
β
2
22
α
1 1 3 1 5
2, 4, 4, 6, 6
1 1 1 5 5
2, 6, 6, 6, 6
1 1 3 1 5
2, 4, 4, 6, 6
1 1 1 3 3
2, 4, 4, 4, 4
1 1 3 5 7
2, 8, 8, 8, 8
1 1
5
7 11
2 , 12 , 12 , 12 , 12
1 1 2 1 3
2, 3, 3, 4, 4
1 1 1 3 3
2, 4, 4, 4, 4
1 1 2 1 3
2, 3, 3, 4, 4
1 1 1 2 2
2, 3, 3, 3, 3
1 1 2 3 4
2, 5, 5, 5, 5
Table 5. List of the 4 monodromy groups, for which, the associated quadratic form
Q is positive definite (since the roots of the corresponding polynomials interlace on
the unit circle [1, Corollary 4.7]).
No. 44: α = (0, 1/3, 2/3, 1/4, 3/4), β = (1/2, 1/10, 3/10, 7/10, 9/10)
No. 45: α = (0, 1/3, 2/3, 1/6, 5/6), β = (1/2, 1/10, 3/10, 7/10, 9/10)
No. 46: α = (0, 1/5, 2/5, 3/5, 4/5), β = (1/2, 1/10, 3/10, 7/10, 9/10)
No. 47: α = (0, 1/8, 3/8, 5/8, 7/8), β = (1/2, 1/10, 3/10, 7/10, 9/10)
Table 6. List of the 20 monodromy groups, for which, the associated quadratic
form Q has signature (3, 2), and arithmeticity or thinness is unknown.
No.
α
No.
β
1 1 1 1
2, 2, 2, 2, 2
1 1 1 2 2
2, 3, 3, 3, 3
1 1 2 3 4
2, 5, 5, 5, 5
1
48
0, 0, 0, 0, 0
50
0, 0, 0, 0, 0
52
0, 0, 0, 0, 0
54
0, 0, 0, 0, 0
56
0, 0, 0, 0, 0
58
0, 0, 0, 0, 0
60
0, 0, 0, 14 , 34
62
0, 0, 0, 16 , 56
64
0, 0, 0, 16 , 56
66
1
3
7
9
0, 10
, 10
, 10
, 10
1 1 3 5 7
2, 8, 8, 8, 8
53
0, 0, 0, 0, 0
55
0, 0, 0, 0, 0
57
0, 0, 0, 0, 0
59
0, 0, 0, 0, 0
61
0, 0, 0, 41 , 43
63
0, 0, 0, 61 , 65
65
0, 0, 0, 61 , 65
67
1
5
7 11
0, 12
, 12
, 12
, 12
3
7
9
1 1
2 , 10 , 10 , 10 , 10
1 1 1 1 2
2, 2, 2, 3, 3
1 1 1 2 2
2, 3, 3, 3, 3
0, 0, 0, 0, 0
1 1 2 1 3
2, 3, 3, 4, 4
51
1 1 2 1 5
2, 3, 3, 6, 6
1 1 2 1 3
2, 3, 3, 4, 4
1 1 1 1 2
2, 2, 2, 3, 3
0, 0, 0, 0, 0
1 1 1 1 3
2, 2, 2, 4, 4
1 1 1 1 2
2, 2, 2, 3, 3
β
49
α
1 1 1 1 5
2, 2, 2, 6, 6
1 1 3 1 5
2, 4, 4, 6, 6
5
7 11
1 1
2 , 12 , 12 , 12 , 12
1 1 1 2 2
2, 3, 3, 3, 3
1 1 1 2 2
2, 3, 3, 3, 3
1 1 2 3 4
2, 5, 5, 5, 5
1 1 1 2 2
2, 3, 3, 3, 3
Table 7. List of the 10 monodromy groups, for which, the associated quadratic
form Q has signature (4, 1), and arithmeticity or thinness is unknown.
No.
α
68
0, 0, 0, 31 , 3
70
0, 0, 0, 31 , 32
72
0, 0, 0, 41 , 43
74
0, 0, 0, 41 , 43
76
0, 0, 0, 61 , 65
β
2
1 1 1 1
2, 2, 2, 4, 4
3
1 1 2 3 4
2, 5, 5, 5, 5
5
7 11
1 1
2 , 12 , 12 , 12 , 12
1 1 2 3 4
2, 5, 5, 5, 5
1 1
5
7 11
2 , 12 , 12 , 12 , 12
No.
α
69
0, 0, 0, 13 , 3
71
0, 0, 0, 14 , 34
73
0, 0, 0, 14 , 34
75
0, 0, 0, 16 , 56
77
1
3
7
9
0, 10
, 10
, 10
, 10
β
1 1 1 1 5
2, 2, 2, 6, 6
2
1 1 2 1 5
2, 3, 3, 6, 6
1 1 3 5 7
2, 8, 8, 8, 8
1 1 3 5 7
2, 8, 8, 8, 8
1 1 2 3 4
2, 5, 5, 5, 5
3. Proof of Theorem 2
In this section, we show the arithmeticity of all the hypergeometric groups associated to the
pairs of polynomials listed in Table 4, and it proves Theorem 2.
Strategy. We first compute the quadratic forms Q (up to scalar multiplications) preserved by the
hypergeometric groups of Theorem 2, and then show that the real rank of the orthogonal group
SOQ is two, and the Q - rank is either one or two. We then form a basis {ǫ1 , ǫ2 , ǫ3 , ǫ∗2 , ǫ∗1 } of Q5 , which satisfies the following condition: in Q - rank two cases, the matrix form of the quadratic form Q, with respect to the basis {ǫ1 , ǫ2 , ǫ3 , ǫ∗2 , ǫ∗1 }, is anti-diagonal; and in Q - rank one cases, the vectors ǫ1 , ǫ∗1 are Q - isotropic non-orthogonal vectors (that is, Q(ǫ1 , ǫ1 ) = Q(ǫ∗1 , ǫ∗1 ) = 0 and Q(ǫ1 , ǫ∗1 ) ≠ 0), and the vectors ǫ2 , ǫ3 , ǫ∗2 are Q - orthogonal to the vectors ǫ1 , ǫ∗1 .
Let P be the parabolic Q - subgroup of SOQ , which preserves the following flag:
{0} ⊂ Qǫ1 ⊂ Qǫ1 ⊕ Qǫ2 ⊕ Qǫ3 ⊕ Qǫ∗2 ⊂ Qǫ1 ⊕ Qǫ2 ⊕ Qǫ3 ⊕ Qǫ∗2 ⊕ Qǫ∗1
and U be the unipotent radical of P. It can be checked easily that the unipotent radical U is
isomorphic to Q3 (as a group), and in particular, U(Z) is a free abelian group isomorphic to Z3 .
We prove Theorem 2 (except for the cases 36, 39, and 42 of Table 4) by using the following
criterion of Raghunathan [8], and Venkataramana [15] (cf. [16, Theorem 6]): If Γ(f, g) ⊂ OQ (Z) is
Zariski dense and intersecting U(Z) in a finite index subgroup of U(Z), then Γ(f, g) has finite index
in OQ (Z). Note that, the criterion of Venkataramana [15] (for K - rank one) and Raghunathan
[8] (for K - rank two) are for all absolutely almost simple linear algebraic groups defined over a
number field K, and we have stated it in a way that allows us to use it to prove our theorem.
To prove Theorem 2 for the cases 36, 39 and 42 of Table 4 (note that, for all these cases, the corresponding orthogonal groups have Q - rank two), we use the following criterion of Venkataramana
[14, Theorem 3.5]: If Γ(f, g) ⊂ OQ (Z) is a Zariski dense subgroup and intersecting the highest and
second highest root groups of SOQ non-trivially, then Γ(f, g) has finite index in OQ (Z).
For an easy reference to the root system, and the structures of the corresponding unipotent
subgroups of SOQ (of Q - rank two), we refer the reader to Remark 3.2 (cf. [11, Section 2.2]).
Notation. Let f, g be a pair of integer coefficient monic polynomials of degree 5, which have
roots of unity as their roots, do not have any common root, form a primitive pair, and satisfy
the condition: f (0) = −1 and g(0) = 1. Let A, B be the companion matrices of f, g respectively,
and Γ(f, g) be the group generated by A and B. Let C = A−1 B, I be the 5 × 5 identity matrix;
e1 , e2 , . . . , e5 be the standard basis vectors of Q5 over Q; and v be the last column vector of C − I,
that is, v = (C − I)e5 . Let Q be the non-degenerate quadratic form on Q5 , preserved by Γ(f, g).
Note that, Q is unique only up to scalars, and its existence follows from Beukers and Heckman [1].
It is clear that C(v) = −v (since C² = I), and hence v is Q - orthogonal to the vectors e1, e2, e3, e4 (since Q(ei, v) = Q(C(ei), C(v)) = Q(ei, −v) = −Q(ei, v), for i = 1, 2, 3, 4); and Q(v, e5) ≠ 0 (since Q is non-degenerate). We may now assume that Q(v, e5) = 1.
It can be checked easily that the set {v, Av, A2 v, A3 v, A4 v} (similarly {v, Bv, B 2 v, B 3 v, B 4 v})
forms a basis of Q5 (cf. [11, Lemma 2.1]). Therefore, to determine the quadratic form Q on Q5 , it
is enough to compute Q(v, A^j v), for j = 0, 1, 2, 3, 4. Also, since v is Q - orthogonal to the vectors e1, e2, e3, e4, and Q(v, e5) = 1 (say), we get

(3.1)    Q(v, A^j v) = coefficient of e5 in A^j v.
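A small SymPy sketch of this computation (an illustration only, using the companion-matrix convention of the earlier sketch and the normalisation Q(v, e5) = 1 of the text; the Gram matrix is taken in the basis {v, Av, . . . , A⁴v}):

# Gram matrix of the invariant form Q in the basis {v, Av, ..., A^4 v},
# using Equation (3.1): Q(v, A^j v) is the e5-coefficient of A^j v, and
# Q(A^i v, A^j v) = Q(v, A^{|i-j|} v) by A-invariance and symmetry of Q.
import sympy as sp

def gram_matrix(A, B):
    n = A.shape[0]
    v = (A.inv() * B - sp.eye(n))[:, n - 1]      # v = (C - I) e_n
    q = [(A**j * v)[n - 1] for j in range(n)]    # e_n-coefficients of A^j v
    return sp.Matrix(n, n, lambda i, j: q[abs(i - j)])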
We compute the signature of the quadratic form Q by using [1, Theorem 4.5], which says the
following: By renumbering, we may assume that 0 ≤ α1 ≤ α2 ≤ . . . ≤ α5 < 1 and 0 ≤ β1 ≤ β2 ≤
. . . ≤ β5 < 1. Let mj = #{k : βk < αj } for j = 1, 2, 3, 4, 5. Then, the signature (p, q) of the
quadratic form Q is determined by the equation
(3.2)    |p − q| = Σ_{j=1}^{5} (−1)^{j+m_j}.
Now using the above equality with the non-degeneracy of the quadratic form Q (that is, p+q = 5),
one can check easily that all the quadratic forms Q associated to the pairs of polynomials in Tables
4 and 6 have signature (3, 2), that is, the associated orthogonal groups OQ have real rank two.
Therefore, the quadratic forms Q in the cases of Tables 4 and 6, have two linearly independent
isotropic vectors in Q5 (by the Hasse-Minkowski theorem), which are not Q - orthogonal, that is,
the associated orthogonal groups OQ have Q - rank ≥ 1.
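A sketch of the signature computation of Equation (3.2) (plain Python; note that |p − q| together with p + q = 5 determines the signature only up to replacing Q by −Q):

# |p - q| from Equation (3.2).
from fractions import Fraction

def abs_p_minus_q(alpha, beta):
    a = sorted(Fraction(x) % 1 for x in alpha)
    b = sorted(Fraction(x) % 1 for x in beta)
    total = 0
    for j, aj in enumerate(a, start=1):
        m_j = sum(1 for bk in b if bk < aj)   # m_j = #{k : beta_k < alpha_j}
        total += (-1) ** (j + m_j)
    return abs(total)

# Case 21: alpha = (0, 0, 0, '1/3', '2/3'), beta = ('1/2', '1/4', '1/4', '3/4', '3/4')
# gives |p - q| = 1, hence signature (3, 2) since p + q = 5.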
Therefore, we are able to use the criterion of Raghunathan [8] and Venkataramana [15] (cf.
[16, Theorem 6]) to show the arithmeticity of the hypergeometric groups associated to the pairs
of polynomials of Table 4, but could not succeed in doing the same for the hypergeometric groups
associated to the pairs of Table 6.
In the proof of arithmeticity of the hypergeometric groups associated to the pairs of polynomials
of Table 4, we denote by X the change of basis matrix (cf. second paragraph of Section 3), that is,
ǫ1 = X(e1 ),
ǫ2 = X(e2 ),
ǫ3 = X(e3 ),
ǫ∗2 = X(e4 ),
ǫ∗1 = X(e5 ).
We also denote by a, b the matrices A, B respectively, with respect to the new basis, that is,
a = X −1 AX,
b = X −1 BX.
We compute the quadratic forms Q on Q5 by using Equation (3.1), and denote by Q only, the
matrix forms of the quadratic forms Q, with respect to the standard basis {e1 , e2 , e3 , e4 , e5 } of Q5 ,
so that the conditions (note that At , B t denote the transpose of the respective matrices)
At QA = Q,
B t QB = Q
are satisfied.
Let P be the parabolic Q - subgroup of SOQ (defined in the third paragraph of Section 3),
and let U be the unipotent radical of P. Since U(Z) is a free abelian group isomorphic to Z3 ,
to show that Γ(f, g) ∩ U(Z) has finite index in U(Z), it is enough to show that it contains three
linearly independent vectors in Z3 . In the proof below, we denote the three corresponding unipotent
elements by q1 , q2 , and q3 (note that these are the words in a and b which are respectively some
conjugates of A and B (see the last paragraph), and hence (with respect to some basis of Q5 )
q1 , q2 , q3 ∈ Γ(f, g)); and in some of the Q - rank two cases, we are able to show only two unipotent
elements q1 and q2 inside Γ(f, g), which correspond, respectively, to the highest and second highest
roots (Remarks 3.2, 3.3, 3.4), and the arithmeticity of Γ(f, g) follows; thanks to the criterion [14,
Theorem 3.5] of Venkataramana.
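Since U is a vector group, the finite-index condition above amounts to a rank computation on the logarithms of q1, q2, q3. The following SymPy sketch (an illustration of this criterion, not part of the paper's computations) makes the check explicit.

# U is abelian and isomorphic to Q^3, so unipotent elements q1, q2, q3 of U(Z)
# generate a finite-index subgroup iff their (nilpotent) logarithms are
# linearly independent in the Lie algebra of U.
import sympy as sp

def log_unipotent(q):
    n = q.shape[0]
    N = q - sp.eye(n)                      # nilpotent part
    L = sp.zeros(n, n)
    for k in range(1, n):
        L += N**k * sp.Rational((-1) ** (k + 1), k)
    return L

def finite_index_in_U(qs):
    logs = [log_unipotent(q) for q in qs]
    M = sp.Matrix([[entry for entry in L] for L in logs])   # rows = flattened logs
    return M.rank() == len(qs)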
We now note the following remarks:
Remark 3.1. As explained above, to show the arithmeticity of the hypergeometric groups we need
to produce enough unipotent elements in Γ(f, g). Note that in the case of symplectic hypergeometric
groups (when n is even and f (0) = g(0) = 1) the matrix C = A−1 B itself is a non-trivial unipotent
element, and to produce more unipotent elements in Γ(f, g), one can take the conjugates of C
and then the commutators of those conjugates, by keeping in mind the structure of the required
unipotent elements. But in the case of the orthogonal hypergeometric groups, the element C = A⁻¹B is not unipotent (since one of the eigenvalues of C is −1 and all the other eigenvalues are 1), and if we take C² (to get rid of the eigenvalue −1), it becomes the identity element. Therefore, to find a non-trivial unipotent element in Γ(f, g), we take conjugates of C by elements of Γ(f, g), and then examine the commutators of those conjugates. Finding the first non-trivial unipotent element is entirely experiment-based, and we use Maple extensively to find it in some of the cases. Once we find a non-trivial unipotent element, we take its conjugates and then compute the commutators of such conjugates.
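A toy version of such an experimental search (SymPy; a random word search standing in for the Maple experiments, with arbitrary word length and number of tries) could look as follows.

# Search for non-trivial unipotent elements among products of conjugates of
# C = A^{-1} B, in the spirit of the strategy described above.
import random
import sympy as sp

def is_unipotent(M):
    n = M.shape[0]
    return (M - sp.eye(n)) ** n == sp.zeros(n, n)

def random_word(A, B, length=6):
    gens = [A, B, A.inv(), B.inv()]
    W = sp.eye(A.shape[0])
    for _ in range(length):
        W = W * random.choice(gens)
    return W

def search_unipotent(A, B, tries=50):
    n = A.shape[0]
    C = A.inv() * B
    found = []
    for _ in range(tries):
        g, h = random_word(A, B), random_word(A, B)
        u = (g * C * g.inv()) * (h * C * h.inv())   # product of two conjugates of C
        if u != sp.eye(n) and is_unipotent(u):
            found.append(u)
    return found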
Remark 3.2. For the Q - rank two cases, the explicit structures of the unipotent subgroups
corresponding to the roots of the orthogonal group have been given in [11, Section 2.2]. For
pedagogical reasons, we summarize it here.
Note that if we consider the maximal torus

T = { diag(t1, t2, 1, t2⁻¹, t1⁻¹) : ti ∈ Q∗, for all 1 ≤ i ≤ 2 }

inside the orthogonal group, and if ti is the character of T defined by

diag(t1, t2, 1, t2⁻¹, t1⁻¹) ↦ ti,    for i = 1, 2,

then the roots of the orthogonal group are given by Φ = {t1, t2, t1t2, t1t2⁻¹, t1⁻¹, t2⁻¹, t1⁻¹t2⁻¹, t2t1⁻¹}. If we fix a set of simple roots ∆ = {t2, t1t2⁻¹}, then the set of positive roots is Φ⁺ = {t2, t1t2⁻¹, t1, t1t2}, the set of negative roots is Φ⁻ = {t2⁻¹, t2t1⁻¹, t1⁻¹, t1⁻¹t2⁻¹}, and t1t2, t1 are respectively the highest
The unipotent subgroups corresponding to the highest and the second highest roots are respectively given by
1 0 0 x
λ
0
1 0 x 0 − 2λ1 x2
3
0 1 0 0 − λ1 x
0 1 0 0
0
λ2
λ
Ut1 t2 = 0 0 1 0
and Ut1 = 0 0 1 0 − λ13 x : x ∈ Q
0 : x ∈ Q
0 0 0 1
0
0
0 0 0 1
0
0
0
0
1
0
0
0
0
1
where λ1 = Q(ǫ1, ǫ∗1), λ2 = Q(ǫ2, ǫ∗2), and λ3 = Q(ǫ3, ǫ3), and {ǫ1, ǫ2, ǫ3, ǫ∗2, ǫ∗1} is a basis of Q5, with respect to which, the matrix associated to the quadratic form Q has the anti-diagonal form.
Remark 3.3. Since Γ(f, g) is Zariski dense in the orthogonal group OQ (by [1, Theorem 6.5]),
to apply the criterion [14, Theorem 3.5] in the cases of hypergeometric groups of degree five, for
which, the corresponding orthogonal groups OQ have Q - rank two, it is enough to show that
[Ut1 t2 (Z) : Γ(f, g) ∩ Ut1 t2 (Z)]
and
[Ut1 (Z) : Γ(f, g) ∩ Ut1 (Z)]
are finite. Since
[U(Z) : Γ(f, g) ∩ U(Z)] < ∞ ⇒ [Ut1 t2 (Z) : Γ(f, g) ∩ Ut1 t2 (Z)] < ∞
and
[Ut1 (Z) : Γ(f, g) ∩ Ut1 (Z)] < ∞,
the arithmeticity of the hypergeometric groups in Table 4, for which, the associated orthogonal
groups have Q - rank two, also follows from [14, Theorem 3.5].
Remark 3.4. In the three cases 36, 39 and 42, we find the unipotent elements (in Γ(f, g)) corresponding to the highest and the second highest roots, and finding the third unipotent element in these cases does not seem to be as direct as in the other 20 cases of Table 4, so we do not put much effort into computing the third unipotent element (to show that [U(Z) : Γ(f, g) ∩ U(Z)] < ∞). Also, once we find that the group Γ(f, g) has finite index in the integral orthogonal group (using [14, Theorem 3.5]), it follows trivially that Γ(f, g) contains unipotent elements corresponding to each root, and hence [U(Z) : Γ(f, g) ∩ U(Z)] is also finite.
Remark 3.5. In some of the remaining 20 cases (cf. Table 6), for which, the associated quadratic
forms Q have signature (3, 2), we find some non-trivial unipotent elements (but not enough to show
the arithmeticity), and for the other cases, it is not even possible (for us) to find a non-trivial
unipotent element to start with. Thus, we are not able to show the arithmeticity of these groups,
and believe that one may apply the ping-pong argument (similar to Brav and Thomas [2]) to show
the thinness of these groups.
We now give a detailed explanation of the computation used to find the unipotent elements inside the hypergeometric group Γ(f, g) in the first case (Case 21) of Table 4. Following similar computations, we find the unipotent elements for the other cases of Table 4.
3.1. Arithmeticity of Case 21. In this case
α = (0, 0, 0, 1/3, 2/3);    β = (1/2, 1/4, 1/4, 3/4, 3/4);
f(x) = x^5 − 2x^4 + x^3 − x^2 + 2x − 1;    g(x) = x^5 + x^4 + 2x^3 + 2x^2 + x + 1.
Let A and B be the companion matrices of f (X) and g(X) respectively, and let C = A−1 B.
Then
A=
0
0
0
0
1
0
0
0
0
−1
1
0
0
0
−3
1
0
0
0
−2
1
0
0
0
−1
0
1
0
0
−1
0
1
0
0
1
0
1
0
0
−2
0
0
1
0
−3
0
0
1
0
−1
0
0
0
1
1
0
0
0
1
2
0
0
0
0
−1
,
B=
0
0
1
0
−2
0
0
0
1
−1
,
C=
.
Let {e1 , e2 , e3 , e4 , e5 } be the standard basis of Q5 over Q, and let v = (C − I)(e5 ) = −3e1 −
e2 − 3e3 + e4 − 2e5 . Then Bv = 2e1 − e2 + 3e3 + e4 + 3e5 , B 2 v = −3e1 − e2 − 7e3 − 3e4 − 2e5 ,
B 3 v = 2e1 − e2 + 3e3 − 3e4 − e5 , B 4 v = e1 + 3e2 + e3 + 5e4 − 2e5 ; and hence by the Equation (3.1)
it follows that, with respect to the basis {v, Bv, B 2 v, B 3 v, B 4 v}, the matrix of the quadratic form
(up to scalar multiplication) preserved by the hypergeometric group Γ(f, g) is given by
Q=
−2
3
−2
−1
−2
3
−2
3
−2
−1
−2
3
−2
3
−2
−1
−2
3
−2
3
−2
−1
−2
3
−2
.
We now let Z be the change of basis matrix from the basis {v, Bv, B 2 v, B 3 v, B 4 v} to the standard
basis {e1 , e2 , e3 , e4 , e5 }. Then, a computation shows that
Z=
−5/32
−11/32
3/32
5/32
1/2
−1/2
−1/4
1/4
0
7/16
−3/16
−5/16
1/16
−1/16
−5/32
1/4
−1/4
0
0
−1/4
11/32
−3/32
−5/32
5/32
−5/32
and, with respect to the standard basis {e1 , e2 , e3 , e4 , e5 }, the matrix (up to scalar multiplication)
of the quadratic form preserved by the hypergeometric group Γ(f, g) is given by
−32Z t QZ =
5
−5
−3
11
5
−5
5
−5
−3
11
−3
−5
5
−5
−3
11
−3
−5
5
−5
5
11
−3
−5
5
(here we multiply by −32, just to get rid of the denominators) where Z t denotes the transpose of
the matrix Z. Thus, we find that
At (−32Z t QZ)A = (−32Z t QZ) = B t (−32Z t QZ)B.
After computing the isotropic vectors of the quadratic form Q, we change the standard basis
{e1 , e2 , e3 , e4 , e5 } to the basis {ǫ1 , ǫ2 , ǫ3 , ǫ∗2 , ǫ∗1 }, for which, the change of basis matrix is given by
4
X=
0
0
0
−1
4
0
−2
6
2
4
4
−2
2
−2
4
4
−6
10
0
0
0
2
−2
−3
.
With respect to the new basis, the matrices of Q, A and B are X t QX, a = X −1 AX and b =
X −1 BX; which are respectively
0
0
0
0
0
0
0
0
−128
0
64
0
0
−128
0
0
−128
0
0
0
0
2
4
2
−128
0
0 ,
0
0
0
1/2
−1/2
−1
−1
3
0
−3
6
1
−2
3
−2
0
0
0
2
4
2
0
2
2 ,
1/2
−1/2
0
3
0
−1/2
0
−1
−1
3
0
−3
6
1
−2
3
−2
2
−2
3/4
2
2 .
1/2
0
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
w2 = a−1 b4 a,
w1 = ab−1 ab,
then
−1
−1
2
−3
−1
0
−1
2
−4
−1
0
0
1
−4
0
w1 =
w3 =
w3 = w12 w2−1 ,
0
0
0
−1
1
0
0
0
0
−1
1
0
−2
2
4
0
1
0
0
−2
0
0
1
0
−4
0
0
0
1
0
0
0
0
0
1
and the final three unipotent elements are
w2 =
,
,
w4 =
q2 = w32 q1 ,
q1 = w4 w2 ,
w4 = (w3 w1−1 )2 w3 ,
1
2
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
−2
0
0
0
0
1
1
−2
0
−4
−8
0
1
0
0
4
0
0
1
0
0
0
0
0
1
2
0
0
0
0
1
,
,
q3 = w2 ,
which are respectively
1
0
0
0
0
0
0
−4
1
0
0
0
1
0
0
0
1
0
0
0
0
4
0 ,
0
1
1
0
0
0
0
0
−4
0
1
0
0
0
1
0
0
0
1
0
0
0
16
0
−8 ,
0
1
1
0
0
0
0
2
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
0
0 .
−2
1
It is clear from the above computation that the group generated by q1 , q2 , q3 is a subgroup of
Γ(f, g) ∩ U(Z), and has finite index in U(Z), the integer points of the unipotent radical U of the
parabolic subgroup P, which preserves the flag
{0} ⊂ Qǫ1 ⊂ Qǫ1 ⊕ Qǫ2 ⊕ Qǫ3 ⊕ Qǫ∗2 ⊂ Qǫ1 ⊕ Qǫ2 ⊕ Qǫ3 ⊕ Qǫ∗2 ⊕ Qǫ∗1 .
Therefore Γ(f, g)∩U(Z) is of finite index in U(Z). Since Γ(f, g) is also Zariski dense in the orthogonal
group OQ (by Beukers and Heckman [1]), the arithmeticity of Γ(f, g) follows from Venkataramana
[15] (cf. [8], [16, Theorem 6]).
Remark 3.6. We follow the same notation as that of Section 3.1 for the remaining cases, and the
arithmeticity of Γ(f, g) follows by arguments similar to those used in Section 3.1. Also, in the proof of
the arithmeticity of the remaining cases, we denote the commutator of two elements s, t (say) in
Γ(f, g) by [s, t] which is sts−1 t−1 .
3.2. Arithmeticity of Case 22. In this case
α = (0, 0, 0, 1/3, 2/3);    β = (1/2, 1/4, 3/4, 1/6, 5/6);
f(x) = x^5 − 2x^4 + x^3 − x^2 + 2x − 1;    g(x) = x^5 + x^3 + x^2 + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
1
3
1
Q=
−5
−7
3
1
−5
1
3
1
3
1
3
1
3
1
−5
1
3
−7
−5
1 ;
3
1
0
1
0
X =
−1
0
2
−2
−1/2
0
0
0
4
−2
0
0
−2
−3
2
−2
−1/2
1
−1/2
2 ,
−1/2
0
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
0
0
0
0
0
−16
0
−16
0
−16
0
0
0
0
0
8
8
0
0 ,
0
0
1/2
1/4
1
−1
1
−3
4
9/4
1/2
−1
−9/8
−2
3
3/2
2
−4
−5/2
−2
4
7/2
1/2
1/4
1
−1
5/4
−1/8
1/2 ,
−3/2
1
1/2
1
0
5/4
3/2
−2
−11/8
2
−1
1/2
−2
0
−3/2
−2
4
7/2
5/4
−1/8
1/2 .
−3/2
1/2
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
w1 = [a−1 , b], w2 = (aba)2 (bab)−1 , w3 = [w22 , w1 ],
then
w1 =
1
4
−4
−1
2
−1
0
0
1
0
1
0
−8
−16
16
0
1
0
0
−1/2
0
−1
−1
1/2
1/2
0
1
0
0
−8
0
0
1
0
−2
−2
0
1
1
0
0
0
1
0
−4
0
0
0
1
2
0
0
0
1
0
0
0
0
0
1
0
0
0
0
1
,
w2 =
0
0
0
−1
0
−4
0
4
0
−1
,
w3 =
,
and the final three unipotent elements are
q1 = [w22 , w3 ],
q3 = (w1−4 q22 )4 q1 ,
q2 = w3 q1−1 ,
which are respectively
1
0
0
0
0
0
0
−16
1
0
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
−8
0 ,
0
0
1
0
−8
0
1
0
0
0
1
0
0
0
1
0
0
0
3.3. Arithmeticity of Case 23. In this case
1 2
α = 0, 0, 0, ,
;
3 3
−64
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
1
β=
f (x) = x5 − 2x4 + x3 − x2 + 2x − 1;
1
0
0
0
16
0
−4 ,
0
1 1 1 5 5
, , , ,
2 6 6 6 6
0
0
0 .
−32
1
g(x) = x5 − x4 + x3 + x2 − x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
3
3
1
Q=
−3
3
1
−3
3
3
1
3
3
3
1
3
3
−3
1
3
−7
0
−2
0
0
X =
−2
−7
−3
1 ;
3
0
3
0
0
−2
8
−1
0
−8
2
0
8
−4
−2
0
1
1
−3
2 ,
−1
1
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
16
0
−64
0
0
16
0
0
16
0
0
0
0
0
0
0
0
1
0
0
16
0
0 ,
0
0
0
0
0
−1/2
1
−4
1
0
−1
0
0
0
0
−2
0
0
1/2
1
1 ,
1
2
0
1
0
0
0
−1
0
0
1
−4
1
0
−1
0
0
0
0
0
0
−1
1
1
1 .
1
1
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
w1 = a−1 b6 a,
w2 = ab−6 a−1 ,
w5 = [a−1 , b]a−2 ,
w6 = dw1 d,
d = ba−1 ,
w3 = (ab−4 )2 ,
w4 = (a−1 b4 )2 ,
w7 = w6−4 w5 w1 w5−1 w2 ,
then
d=
0
0
0
0
1/2
0
1
0
0
0
0
0
1
0
0
0
0
0
1
0
2
0
0
0
0
,
w1 =
1
4
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
−4
0
0
0
0
1
,
w2 =
1
0
0
0
0
4
1
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
−4
1
,
ON ORTHOGONAL HYPERGEOMETRIC GROUPS OF DEGREE FIVE
w3 =
1
0
0
−1
0
2
1
0
−2
1
0
0
1
0
0
0
0
0
1
0
0
0
0
−2
1
w4 =
,
w6 =
1
2
0
1
−2
0
1
0
0
−1
0
0
1
0
0
0
0
0
1
−2
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
0
0
−8
0
0
1
0
0
8
0
0
1
and the final three unipotent elements are
q1 = w42 w1−1 ,
,
1
,
w7 =
w5 =
0
1
0
0
0
−1
1
0
0
0
−2
0
1
0
0
−8
−4
8
0
−1
4
−8
1
1
4
1
0
0
0
0
0
1
0
0
0
−8
0
1
0
0
0
0
0
1
0
128
0
−32
0
1
4
0
0
1
0
0
0
1
0
0
0
1
0
0
0
q2 = dw7 d,
15
,
,
q3 = w1 ,
which are respectively
1
0
0
0
0
0
0
2
1
0
0
0
1
0
0
0
1
0
0
0
0
−2
0 ,
0
1
1
0
0
0
0
0
−16
0
1
0
0
0
1
0
0
0
1
0
0
0
3.4. Arithmeticity of Case 24. In this case
1 3
;
α = 0, 0, 0, ,
4 4
β=
f (x) = x5 − 3x4 + 4x3 − 4x2 + 3x − 1;
1
1
0
0
0
1 1 1 5 5
, , , ,
2 6 6 6 6
32
0
−4 ,
0
0
0
0
0 .
−4
1
g(x) = x5 − x4 + x3 + x2 − x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
7
Q=
9
7
−7
9
7
9
7
7
9
7
9
−7
7
9
7
−25
−7
7
9
−25
−7
7 ;
9
7
0
−1
0
X =
1
0
2
0
1
−2
0
−1
4
4
3
−6
−4
−2
2
0
1
−2
3
−6 ,
4
−1
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
0
0
0
−32
0
0
−64
0
0
−32
0
0
16
0
0
0
16
0
0 ,
0
0
1
0
−4
1
1/2
0
1
0
−2
0
1
0
−1
0
−2
−4
1
−2
−4
0
2
1
1 ,
1
2
1
−4
−4
−1
1/2
0
1
−1
−2
−1/2
1
0
−1
0
−4
−4
0
−2
−4
0
4
3/2
1 .
2
2
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
c = a−1 b,
d = [c, a],
e = [a−1 , b2 ],
s = (cac)4 ,
w1 = e−1 de−1 d−1 ,
w2 = d[e, c]d−1 ,
16
JITENDRA BAJPAI AND SANDIP SINGH
w3 = w1−2 d−1 w2 sw2−1 d,
w4 = w3−1 sd−4 ,
w7 = d−82 (w4 dw4−1 )2 ,
w5 = [w4 , d],
w6 = a4 w4 dw4−1 d7 a−4
w8 = w62 (w4 dw4−1 d7 )−10 ,
then the final three unipotent elements are
q3 = (w5−2 w7 )288 q1 ,
q2 = w7−240 q3 ,
q1 = [w6 , w8 ],
which are respectively
1
0
0
0
0
0
0
−23040
1
0
0
0
1
0
0
0
1
0
0
0
0
−11520
0 ,
0
1
1
0
0
0
0
0
23040
0
1
0
0
0
1
0
0
0
1
0
0
0
3.5. Arithmeticity of Case 25. In this case
1 5
α = 0, 0, 0, ,
;
6 6
66355200
0
5760 ,
0
1
β=
f (x) = x5 − 4x4 + 7x3 − 7x2 + 4x − 1;
1
0
0
0
46080
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
1 1 1 3 3
, , , ,
2 4 4 4 4
0
0
0 .
23040
1
g(x) = x5 + x4 + 2x3 + 2x2 + x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
37
11
Q = −35
−37
11
−35
−37
37
11
−35
11
37
11
−35
11
37
−37
−35
11
37
37
−37
−35 ;
11
37
4
4
X = 4
4
0
0
−24
−10
2
−24
−9
2
−48
−19
2
−24
−7
2
−24
−11
11
6
18 ,
8
9
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
0
0
0
0
192
0
2304
0
0
192
0
0
−384
0
0
0
−384
0
0 ,
0
0
0
2
0
0
0
−6
66
121/4
−10
108
105/2
−2
23
11
0
0
0
−2
24
12
−47/2
−39
−8 ,
−1
−9
0
2
0
0
0
−1/2
0
0
0
−12
−5/2
0
−1
0
0
0
0
0
0
1
5/4
6
1 .
−1
0
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
w1 = b4 , w2 = a−1 b4 a, w3 = w2 w1−12 , w4 = [a, b4 ],
w5 = (ab)−1 b−2 (ab)−1 b2 ,
w6 = w1−1 w2 w3−1 w4−1 w5−1 ,
then the final three unipotent elements are
q1 = w1 ,
q2 = w5−1 w4−1 w2 w6−1 w2−1 q1−277 ,
q3 = w2−32 q1384 q28 ,
ON ORTHOGONAL HYPERGEOMETRIC GROUPS OF DEGREE FIVE
17
which are respectively
1
0
0
0
0
0
0
1
1
0
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
2
0 ,
0
0
1
0
96
0
1
0
0
0
1
0
0
0
1
0
0
0
1
0
0
0
768
0
16 ,
0
β=
f (x) = x5 + x4 + x3 − x2 − x − 1;
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
1
3.6. Arithmeticity of Case 26. In this case
1 1 2 2
α = 0, , , ,
;
3 3 3 3
64
1 1 3 1 5
, , , ,
2 4 4 6 6
0
0
0 .
128
1
g(x) = x5 + x3 + x2 + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
1
0
Q = −2
1
0
2
−2
1
1
0
−2
0
1
0
−2
0
1
1
−2
0
2
1
−2 ;
0
−2
0
X = −4
−2
1
0
0
4
0
2
0
1
0
4
0
4
0
0
2
−4
−1
−1
1
0 ,
1
3
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
8
0
0
0
0
0
−8
0
−16
0
−8
0
0
0
0
0
8
0
0 ,
0
0
0
−1
0
0
1
−2
−1
−1
2
0
2
−3
−1
0
0
0
4
−4
−1
0
1
−1
2 ,
1
3
0
−1
0
0
0
0
0
−1/2
−1
2
0
0
1
0
0
0
0
2
0
0
−1/2
−1
−1 .
1
0
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
c = a−1 b, e = [a2 , b2 ], u = [a−1 , b2 ],
w = [a−1 , b4 ],
w2 = w1 (t2 a3 )−2 , w3 = w2 (w2−1 a−3 w2 )2 w1−1 ,
then the final three unipotent elements are
q1 = w4−4 q2 ,
t = w−1 ue−1 , w1 = (t2 a3 )−2 t4 (a3 (b4 a−1 )3 )2 ,
w4 = w3−2 w−4 w3 ,
q2 = w6 w4−32 w3−81 ,
w5 = cw1 c,
q 3 = w3 ,
which are respectively
1
0
0
0
0
0
0
16
1
0
0
0
1
0
0
0
1
0
0
0
0
16
0 ,
0
1
1
0
0
0
0
0
−64
0
1
0
0
0
1
0
0
0
1
0
0
0
1024
0
−32 ,
0
1
1
0
0
0
0
8
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
0
0 .
8
1
w6 = w5 w3 w5−1 ,
18
JITENDRA BAJPAI AND SANDIP SINGH
3.7. Arithmeticity of Case 27. In this case
1 1 2 2
;
α = 0, , , ,
3 3 3 3
β=
f (x) = x5 + x4 + x3 − x2 − x − 1;
1 1 3 5 7
, , , ,
2 8 8 8 8
g(x) = x5 + x4 + x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
7
−5
Q = −1
7
−5
−1
7
7
−5
−1
−5
7
−5
−1
−5
7
7
−1
−5
−9
1
2
X = 1
0
−9
7
−1 ;
−5
7
0
0
0
0
0
4
−1
1
4
0
2
4
0
1
0
1
1
0
0 ,
−1
0
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
−4
0
−16
0
0
−4
0
0
−4
0
0
0
0
0
0
0
−1
−1
1
1
−4
0
0 ,
0
0
1
1
0
0
0
0
0
0
1
0
1
4
−1
0
0
1
0
0
0 ,
−1
0
−1
−1
1
1
1
0
0
−1
0
0
0
0
1
0
1
4
−1
−1
0
0
0
0
0 .
−1
0
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
d = (ab−4 )2 ,
w1 = (a2 b4 a−1 d−1 )−2 ,
w5 = w2 w42 ,
w2 = (a−1 ba−4 b5 )2 ,
w6 = w34 w5 w34 w5−1 ,
w3 = [a−1 , b3 ],
w4 = w1 w3−1 w1−1
w7 = [w34 , w5 ],
then the final three unipotent elements are
q1 = [w7 , w5−1 ],
q2 = w7−2 q1 ,
q3 = w6−2 q2 ,
which are respectively
1
0
0
0
0
0
0
−16
1
0
0
0
1
0
0
0
1
0
0
0
0
16
0 ,
0
1
1
0
0
0
0
0
32
0
1
0
0
0
1
0
0
0
1
0
0
0
3.8. Arithmeticity of Case 28. In this case
1 2 1 3
;
α = 0, , , ,
3 3 4 4
f (x) = x5 + x3 − x2 − 1;
−128
0
−8 ,
0
1
β=
1
0
0
0
0
16
0
0
1
0
0
0
1
0
0
0
1
0
0
0
1 1 3 5 7
, , , ,
2 8 8 8 8
g(x) = x5 + x4 + x + 1.
0
0
0 .
−16
1
ON ORTHOGONAL HYPERGEOMETRIC GROUPS OF DEGREE FIVE
19
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
3
−3
Q = −1
5
−5
−3
−1
5
3
−3
−1
−3
3
−3
−1
−3
3
5
−1
−3
0
0
X = 2
2
−5
5
−1 ;
−3
3
0
0
4/3
1
6
4/3
0
6
0
−1
10
−8/9
−1/3
−6
0
1
−2
−1
0 ,
0
−3
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
0
0
0
0
48
0
−64/9
0
0
48
0
0
16
0
0
0
16
0
0 ,
0
0
−1/2
1/6
−3/2
0
−1
−7
−1/9
7/12
4/3
1/27
−7/36
−12
2/3
7/4
−6
−4/3
−1
−8
−2/9
1/6
−11/4
19/36
−17/4 ,
5/3
−1/2
−1/2
1/6
−3/2
0
−1
−1
−1/9
−5/12
1/3
1/27
−1/36
−3
2/3
1/4
−6
−4/3
−1
−8
−2/9
1/6
1/4
1/36
1/4 .
5/3
−1/2
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
c = a−1 b,
e = ab−1 ,
w1 = a2 b4 a−2 ,
w2 = w1−1 b4 w1 ,
w5 = [w2 , w42 ], w6 = [c, w4−1 ],
then the final three unipotent elements are
q1 = w5−9 q2 ,
w3 = a−1 b4 a,
w7 = w6−16 w53 ,
q2 = w824 q3 ,
w4 = w3 b6 w3 ,
w8 = ew6 e,
q3 = w7 w8−8 ,
which are respectively
1
0
0
0
0
0
0
144
1
0
0
0
1
0
0
0
1
0
0
0
0
−48
0 ,
0
1
1
0
0
0
0
0
96
0
1
0
0
0
1
0
0
0
1
0
0
0
10368
0
216 ,
0
1
1
0
0
0
0
−432
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
0
0 .
144
1
3.9. Arithmeticity of Case 29 (Q - rank one case). In this case
1 2 1 3
1 1 5 7 11
α = 0, , , ,
, , , ,
;
β=
3 3 4 4
2 12 12 12 12
f (x) = x5 + x3 − x2 − 1;
g(x) = x5 + x4 − x3 − x2 + x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
1
−3
Q = −1
3
−5
−3
−1
3
1
−3
−1
−3
1
−3
−1
−3
1
3
−1
−3
−5
3
−1 ;
−3
1
−1
0
X = −1
0
0
0
0
0
−1
−2
1
−2
−1
−1
−4
1
−1
1
2
−1
−1
−1
1 ,
2
−1
20
JITENDRA BAJPAI AND SANDIP SINGH
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
0
−24
−12
−12
0
−12
−24
12
0
−12
12
−8
−12
0
0
0
−12
0
0 ,
0
0
−1/2
1/2
0
0
1/2
−3
−3/2
1
1/2
1/6
0
0
−1/3
1/2
3
0
0
2
−1/2
1/2
3/2
−1/2
1 ,
0
−1/2
−1/2
1/2
0
0
−2
1/2
1
1/2
1/6
0
0
−1/3
1/2
−1/2
3
0
0
3
3/2
−1/2
1/2
−1/2
1 .
0
−3/2
A computation shows that the Q - rank of the orthogonal group SOQ is one. If we denote by
c = a−1 b,
d = aba,
e = [a−3 , b−1 ],
w2 = a3 e−1 a−3 dca−3 ,
w1 = cdca−3 ,
w4 = w3 w2−1 , w5 = w3−1 w43 ,
then the final three unipotent elements are
q1 = w18 q2 ,
w6 = [e, w12 ],
q2 = w14 w7 ,
w3 = w1−2 w23 ,
w7 = w53 w6 ,
q3 = w34 q1−1 ,
which are respectively
1
0
0
0
0
0
0
24
1
0
0
0
1
0
0
0
1
0
0
0
216
−18
18 ,
18
1
1
0
0
0
0
0
24
0
1
0
0
0
1
0
0
0
1
0
0
0
3.10. Arithmeticity of Case 30. In this case
1 2 1 5
α = 0, , , ,
;
3 3 6 6
f (x) = x5 − x4 + x3 − x2 + x − 1;
24
−10
2 ,
18
1
β=
1
0
0
0
24
0
0
1
0
0
0
1
0
0
0
0
1
0
0
0
1 1 1 3 3
, , , ,
2 4 4 4 4
24
2
−10 .
−18
1
g(x) = x5 + x4 + 2x3 + 2x2 + x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
1
−7
1
Q=
17
1
−7
1
17
1
−7
1
−7
1
−7
1
−7
1
17
1
−7
1
17
1 ;
−7
1
0
1
0
X =
−1
0
1
0
−1
1
0
−1
2
0
−1
3
−24
4
1
0
−1
2
1
4 ,
0
1
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
0
0
0
0
−24
0
576
0
0
−24
0
0
−24
0
0
0
−24
0
0 ,
0
0
1
−1
0
1
1
−4
24
−3
7
−48
7
1
−7
1
0
0
0
−3
24
−4
0
1
0 ,
0
0
1
−1
0
1
1
−2
24
−5
6
−48
8
1
−7
1
1
0
−1
−3
24
−4
2
0
0 .
1
0
ON ORTHOGONAL HYPERGEOMETRIC GROUPS OF DEGREE FIVE
21
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
d = [a, b−1 ],
e = [b−1 , a2 ],
s = ba2 b−1 ,
r = aba−1 ,
2
w5 = d2 w12 d−1 ,
w2 = [e−1 , t−1 ] , w3 = d−12 w2 , w4 = sw2 s−1 ,
then the final three unipotent elements are
q1 = w3 ,
t = [s, d−1 ],
q2 = [w6 , w2 ]w3−1728 ,
w1 = b−1 a−1 b,
w6 = w4 w52 w4−1 w52 ,
q 3 = w2 ,
which are respectively
1
0
0
0
0
0
0
−12
1
0
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
12
0 ,
0
0
1
0
−3456
0
1
0
0
0
1
0
0
0
1
0
0
0
3.11. Arithmeticity of Case 31. In this case
1 2 1 5
;
α = 0, , , ,
3 3 6 6
248832
0
−144 ,
0
1
β=
f (x) = x5 − x4 + x3 − x2 + x − 1;
1
0
0
0
−12
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
1 1 3 5 7
, , , ,
2 8 8 8 8
0
0
0 .
12
1
g(x) = x5 + x4 + x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
1
5
1
Q=
−7
5
1
−7
1
5
1
5
1
5
1
5
1
−7
1
5
1
1
−7
1 ;
5
1
0
−1
0
X =
1
0
2
0
−1
2
0
−1
2
0
0
−8
−12
−1
2
0
−1
2
1
0 ,
−2
3
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
0
0
0
0
−24
0
144
0
0
−24
0
0
12
0
0
0
12
0
0 ,
0
0
−1
−1/2
0
1
1
10
12
0
2
0
−1
2
5
1
−14
−24
−3
−8
−12
−1
6
2
0 ,
−3
−2
−1
−1/2
0
1
1
6
12
2
1
0
−1/2
2
5
1
−12
−24
−4
−8
−12
−1
0
1/2
0 .
0
−2
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
2
d = [b−1 , a] ,
w1 = ab4 a−1 ,
w2 = [a2 , w1−1 ],
w3 = w2 dw2−1 d,
then the final three unipotent elements are
q1 = w42 q2−1 ,
q2 = w2−1 w4 w2 w4 ,
q3 = (q1 d6 )2 q1−1 ,
w4 = w3 d−2 ,
22
JITENDRA BAJPAI AND SANDIP SINGH
which are respectively
1
0
0
0
0
0
0
24
1
0
0
0
1
0
0
0
1
0
0
0
0
12
0 ,
0
1
1
0
0
0
0
0
−48
0
1
0
0
0
1
0
0
0
1
0
0
0
−96
0
4 ,
0
1
1
0
0
0
48
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
0
0
0 .
24
1
3.12. Arithmeticity of Case 32. In this case
1 1 5 7 11
1 2 1 5
, , , ,
;
β=
α = 0, , , ,
3 3 6 6
2 12 12 12 12
f (x) = x5 − x4 + x3 − x2 + x − 1;
g(x) = x5 + x4 − x3 − x2 + x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
1
2
1
Q=
−1
1
2
1
−1
1
2
1
2
1
2
1
2
1
−1
1
2
1
−1
1 ;
2
1
0
−1
0
X =
1
0
1
0
−1/2
1
0
−1/2
1
0
1/2
−4
−3
0
1
0
−1/2
1
1/2
−1 ,
−3/2
2
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
3
0
0
0
0
0
−3
0
9
0
−3
0
0
0
0
0
3
0
0 ,
0
0
−1/2
−1/2
0
1
1
3
3/2
−1/2
3/2
0
−3/4
1
2
1/2
−7
−6
−1/2
−4
−3
0
15/4
9/4
0 ,
−5/2
−3/2
−1/2
−1/2
0
1
1
1
3/2
1/2
1/2
0
−1/4
1
2
1/2
−5
−6
−3/2
−4
−3
0
−1/4
1/4
0 .
3/2
−3/2
It is clear from the above computation that the Q - rank of the orthogonal group SOQ is two. If
we denote by
d = [b−1 , a], e = ab6 a−1 , w1 = [a2 , e−1 ], w2 = w1 dw1−1 d,
then the final three unipotent elements are
2 −1 2
−1
q2 , q2 = w1−1 w2 d−2 w1 w2 d−2 ,
q1 = q2 dq2 w2
q3 = (q2−1 dq2 )−96 q1 ,
which are respectively
1
0
0
0
0
0
0
−96
1
0
0
0
1
0
0
0
1
0
0
0
0
−96
0 ,
0
1
1
0
0
0
0
0
−48
0
1
0
0
0
1
0
0
0
1
0
0
0
−384
0
16 ,
0
1
1
0
0
0
0
−192
0
0
1
0
0
0
1
0
0
0
1
0
0
0
0
0
0 .
−192
1
ON ORTHOGONAL HYPERGEOMETRIC GROUPS OF DEGREE FIVE
3.13. Arithmeticity of Case 33. In this case
1 2 3 4
α = 0, , , ,
;
5 5 5 5
f (x) = x5 − 1;
β=
1 1 1 2 2
, , , ,
2 3 3 3 3
23
g(x) = x5 + 3x4 + 5x3 + 5x2 + 3x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard
basis {e1 , e2 , e3 , e4 , e5 }, and the change of basis matrix X, are
29
−25
11
Q=
11
−25
11
11
29
−25
11
−25
29
−25
11
−25
29
11
11
−25
−25
−25
11
11 ;
−25
29
−1
−1
1
X =
1
0
−2
2
3
−1
2
4
0
0
0
−1
0
−1
−2
2
0
6
4
1 ,
2
5
and the matrices of Q, A and B, with respect to the new basis, are X t QX, a = X −1 AX and
b = X −1 BX; which are respectively
0
0
0
0
0
0
0
−18
0
0
36
0
0
−18
0
0
18
0
0
0
18
0
0 ,
0
0
0
−3
0
0
0
0
3
−2
4
4
0
−1
1
0
0
1
−1
2
1
−1
2
4
0 ,
1
2
0
−3
0
0
12
−1
−12
3
−8
10
4
2
−3
1
4
−4
1
−3
4
1
−28
19
−5 .
−9
7
It is clear from the above computation that the Q-rank of the orthogonal group SO_Q is two. If we denote by
w1 = [a^{-1}, b^{-6}],   w2 = [a^{-1}, b^3],   w3 = a^{-3} b^{-6} a^3,   w4 = w2^2 w1^{-1} w3 w2^{-2} w1 w3^{-1},   w5 = w4^{-2} w1 w4^{-1} w2^{-2},
then the final three unipotent elements are
q1 = w3,   q2 = w5^3 w4^{10},   q3 = w4,
which are respectively
[the 5 × 5 unipotent matrices q1, q2 and q3]
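Unipotency of elements such as q1 = w3 = a^{-3} b^{-6} a^3 can also be checked directly by machine: since a = X^{-1}AX and b = X^{-1}BX, a word in a and b is unipotent exactly when the same word in A and B is. The following sympy sketch (assuming, as in the standard hypergeometric set-up, that A and B are the companion matrices of f and g for Case 33) verifies that (w3 − I)^5 = 0:

# Illustrative sketch: check that w3 = a^{-3} b^{-6} a^3 from Case 33 is
# unipotent, computed on the conjugate word A^{-3} B^{-6} A^3.
import sympy as sp

def companion(lower):
    # companion matrix of x^5 + c4 x^4 + ... + c1 x + c0, with lower = [c0, ..., c4]
    n = len(lower)
    M = sp.zeros(n, n)
    for i in range(1, n):
        M[i, i - 1] = 1
    for i in range(n):
        M[i, n - 1] = -lower[i]
    return M

A = companion([-1, 0, 0, 0, 0])   # f(x) = x^5 - 1
B = companion([1, 3, 5, 5, 3])    # g(x) = x^5 + 3x^4 + 5x^3 + 5x^2 + 3x + 1

w3 = A**-3 * B**-6 * A**3
print((w3 - sp.eye(5))**5 == sp.zeros(5, 5))   # expect: True, i.e. w3 is unipotent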
3.14. Arithmeticity of Case 34 (Q-rank one case). In this case
α = 0, 1/5, 2/5, 3/5, 4/5;    β = 1/2, 1/3, 2/3, 1/4, 3/4;
f(x) = x^5 − 1;    g(x) = x^5 + 2x^4 + 3x^3 + 3x^2 + 2x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
A computation shows that the Q-rank of the orthogonal group SO_Q is one. If we denote by
c = a^{-1} b,   d = [a, b^{-1}],   e = [a^{-4}, b^{-4}],   w1 = d (e d e^{-1})^2,   w2 = c [a^2, b],   w3 = w2^2 (d^2 w2^2)^{-1} w2^2,   w4 = w1^{-4} w3^3,   w5 = w2 w3^{-5} w2^{-1} w4^{-2},   w6 = d^{98} w5^2,
then the final three unipotent elements are
q1 = w6^{-432} q2^{294},   q2 = w1^{46} w6,   q3 = w4^{432} q2^{30},
which are respectively
[the 5 × 5 unipotent matrices q1, q2 and q3]
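The Q-rank is constrained by the real signature of the invariant form: the Witt index over Q is at most min(p, q) for a form of signature (p, q). The sketch below (again assuming that A and B are the companion matrices of f and g, here for Case 34; the signature it prints is computed numerically, not taken from the paper) combines the linear-algebra computation of Q used in the Case 32 sketch with such a bound.

# Illustrative sketch: compute an invariant symmetric form Q for Case 34
# (up to scale) and bound its Witt index over Q by min(p, q), where (p, q)
# is the real signature, obtained here numerically.
import numpy as np
import sympy as sp

def companion(lower):
    n = len(lower)
    M = sp.zeros(n, n)
    for i in range(1, n):
        M[i, i - 1] = 1
    for i in range(n):
        M[i, n - 1] = -lower[i]
    return M

A = companion([-1, 0, 0, 0, 0])   # f(x) = x^5 - 1
B = companion([1, 2, 3, 3, 2])    # g(x) = x^5 + 2x^4 + 3x^3 + 3x^2 + 2x + 1

qs = sp.symbols('q0:15')
Q = sp.zeros(5, 5)
k = 0
for i in range(5):
    for j in range(i, 5):
        Q[i, j] = Q[j, i] = qs[k]
        k += 1

sol = sp.solve(list(A.T * Q * A - Q) + list(B.T * Q * B - Q), qs, dict=True)[0]
Qsym = Q.subs(sol)
for s in Qsym.free_symbols:        # fix the remaining overall scale
    Qsym = Qsym.subs(s, 1)
eig = np.linalg.eigvalsh(np.array(Qsym.tolist(), dtype=float))
p, q = int((eig > 0).sum()), int((eig < 0).sum())
print(p, q, 'Witt index over Q is at most', min(p, q))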
3.15. Arithmeticity of Case 35 [(f(x), g(x)), (-f(-x), -g(-x)) case]. In this case
α = 0, 1/6, 1/6, 5/6, 5/6;    β = 1/2, 1/3, 1/3, 2/3, 2/3;
f(x) = x^5 − 3x^4 + 5x^3 − 5x^2 + 3x − 1;    g(x) = x^5 + 3x^4 + 5x^3 + 5x^2 + 3x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
It is clear from the above computation that the Q-rank of the orthogonal group SO_Q is two. If we denote by
d = a b a^{-2},   e = a b a,   t = b a^{-1} b,   w1 = [a^{-6}, e],   w2 = a^{-48} w1,   w3 = t b^{-6} t^{-1},   w4 = d w3 d w3^{-1},   w5 = w3 d w3^{-1} d,   w6 = [w5, a^6],   w7 = [w4, a^6],
then the final three unipotent elements are
q1 = w6 w7^{-1} w1^{-10} w2 w1^{-2} w2^3,   q2 = d q1 d,   q3 = w2^{384} q1^{-60},
which are respectively
[the 5 × 5 unipotent matrices q1, q2 and q3]
3.16. Arithmeticity of Case 36. In this case
α = 0, 1/6, 1/6, 5/6, 5/6;    β = 1/2, 1/3, 2/3, 1/4, 3/4;
f(x) = x^5 − 3x^4 + 5x^3 − 5x^2 + 3x − 1;    g(x) = x^5 + 2x^4 + 3x^3 + 3x^2 + 2x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
It is clear from the above computation that the Q-rank of the orthogonal group SO_Q is two. If we denote by
d = b a^6 b^{-1},   w1 = b a^{-1} b a^{-4},   w2 = b^{-1} a^6 b,   w3 = [w2^{-1}, w1],   w4 = d w1 d^{-1} w1,   w5 = w4 w3^{-1},
then the final two unipotent elements, corresponding to the highest and second highest roots, are
q1 = a^6,   q2 = w5^7 q1^{2263},
which are respectively
[the 5 × 5 unipotent matrices q1 and q2]
and the arithmeticity of Γ(f, g) follows from [14, Theorem 3.5] (cf. Remarks 3.2, 3.3, 3.4).
3.17. Arithmeticity of Case 37. In this case
α = 0, 1/6, 1/6, 5/6, 5/6;    β = 1/2, 1/4, 1/4, 3/4, 3/4;
f(x) = x^5 − 3x^4 + 5x^3 − 5x^2 + 3x − 1;    g(x) = x^5 + x^4 + 2x^3 + 2x^2 + x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
It is clear from the above computation that the Q-rank of the orthogonal group SO_Q is two. If we denote by
d = b a^{-6} b^{-1},   e = a^{-1} b^{-4} a,   w1 = (a b^{-1} a)^8,   w2 = w1^2 e,   w3 = w2^{-3} w1^2,   w4 = d w2 d^{-1},
then the final three unipotent elements are
q1 = [w4, w2^{-7}],   q2 = w2^{4032} q1^{30},   q3 = w3^{112} q1^{-1},
which are respectively
[the 5 × 5 unipotent matrices q1, q2 and q3]
3.18. Arithmeticity of Case 38. In this case
α = 0, 1/8, 3/8, 5/8, 7/8;    β = 1/2, 1/3, 1/3, 2/3, 2/3;
f(x) = x^5 − x^4 + x − 1;    g(x) = x^5 + 3x^4 + 5x^3 + 5x^2 + 3x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
It is clear from the above computation that the Q-rank of the orthogonal group SO_Q is two. If we denote by
c = a^{-1} b,   d = c b c,   e = a^{-1} b^6 a,   w1 = [e, a^4],   w2 = w1^{49} e^{-48},   w3 = e d^{-3},   w4 = w3 w1^{-1} w3 w1,
then the final three unipotent elements are
q1 = q2^{-1} w2^6,   q2 = w4^7 e^{-247},   q3 = e,
which are respectively
[the 5 × 5 unipotent matrices q1, q2 and q3]
3.19. Arithmeticity of Case 39. In this case
α = 0, 1/8, 3/8, 5/8, 7/8;    β = 1/2, 1/3, 2/3, 1/4, 3/4;
f(x) = x^5 − x^4 + x − 1;    g(x) = x^5 + 2x^4 + 3x^3 + 3x^2 + 2x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
It is clear from the above computation that the Q-rank of the orthogonal group SO_Q is two. If we denote by
c = a^{-1} b,   d = [a^{-4}, b^3],   e = a b,   w1 = c d c,   w2 = c e,   w3 = w2 w1 w2^{-1},   w4 = w1^2 w3,
then the final two unipotent elements, corresponding to the highest and second highest roots, are
q1 = [w1, w3],   q2 = w4^{-96} q1^{132},
which are respectively
[the 5 × 5 unipotent matrices q1 and q2]
and the arithmeticity of Γ(f, g) follows from [14, Theorem 3.5] (cf. Remarks 3.2, 3.3, 3.4).
3.20. Arithmeticity of Case 40. In this case
α = 0, 1/8, 3/8, 5/8, 7/8;    β = 1/2, 1/4, 1/4, 3/4, 3/4;
f(x) = x^5 − x^4 + x − 1;    g(x) = x^5 + x^4 + 2x^3 + 2x^2 + x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
It is clear from the above computation that the Q-rank of the orthogonal group SO_Q is two. If we denote by
w1 = [a, b^{-1}],   w2 = [a^{-4}, b^3],   w3 = b a^4 b^{-1},   w4 = [w3, w1],   w5 = w2 w1 w2^{-1} w1^{-89},
then the final three unipotent elements are
q1 = w5^3 q2^{-8},   q2 = w5 w4^{-10},   q3 = (w1^{18} q2)^{40} q1,
which are respectively
[the 5 × 5 unipotent matrices q1, q2 and q3]
3.21. Arithmeticity of Case 41 (Q-rank one case). In this case
α = 0, 1/12, 5/12, 7/12, 11/12;    β = 1/2, 1/3, 2/3, 1/4, 3/4;
f(x) = x^5 − x^4 − x^3 + x^2 + x − 1;    g(x) = x^5 + 2x^4 + 3x^3 + 3x^2 + 2x + 1.
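Indeed, one checks directly that these parameters can be read off from the cyclotomic factorizations
f(x) = (x − 1)(x^4 − x^2 + 1),    g(x) = (x + 1)(x^2 + x + 1)(x^2 + 1),
whose roots are exactly e^{2πi α_j} and e^{2πi β_j} for the α_j and β_j listed above.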
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
A computation shows that the Q-rank of the orthogonal group SO_Q is one. If we denote by
c = a^{-1} b,   d = [a^{-1}, b^{-5}],   e = b a b^{-1},   r = [b^{-1}, a],   s = b^{-1} a b,   w1 = [a^{-1}, b^2],   w2 = c [d, w1] c,   w3 = [e, r^{-1}],   w4 = w2 w3 w2,   w5 = [s, r^{-1}],   w6 = w4 w5^{-1} w4 w5^4,   w7 = [w4, r^{-1}] w5^{39},   w8 = [w1, r^{-1}],
then the final three unipotent elements are
q1 = w6^{504} w7^{306},   q2 = w6^{360} w7^{342},   q3 = w8^{62208} q2^{-78},
which are respectively
[the 5 × 5 unipotent matrices q1, q2 and q3]
3.22. Arithmeticity of Case 42. In this case
α = 0, 1/12, 5/12, 7/12, 11/12;    β = 1/2, 1/5, 2/5, 3/5, 4/5;
f(x) = x^5 − x^4 − x^3 + x^2 + x − 1;    g(x) = x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
It is clear from the above computation that the Q-rank of the orthogonal group SO_Q is two. If we denote by
c = a^{-1} b,   d = b^{-4} a^6 b^4,   e = d a^4 d a^{-4},   w1 = [e, a^6],   w2 = [w1, a^3],   w3 = [w1^{-24}, w2],   w4 = w1^{-384} w3,   w5 = w2^{96} w4,   w6 = c w3 c,   w7 = d a^6 d^{-1} a^6,   w8 = w6^5 w7 w3^{-1} w7^{-1},   w9 = w8^{-1} w5 w8,
then the final two unipotent elements, corresponding to the highest and second highest roots, are
q1 = w3,   q2 = w9^{3840} q1^{238886400},
which are respectively
[the 5 × 5 unipotent matrices q1 and q2]
and the arithmeticity of Γ(f, g) follows from [14, Theorem 3.5] (cf. Remarks 3.2, 3.3, 3.4).
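The very large exponents occurring in words such as q2 = w9^{3840} q1^{238886400} are not a computational obstacle: for a unipotent matrix U = I + N of size 5 one has N^5 = 0, so U^k = Σ_{j=0}^{4} C(k, j) N^j for every non-negative integer k, and such powers can be written down directly. A small sympy sketch with a hypothetical unipotent matrix (not one of the matrices of the paper) illustrating this identity:

# Illustrative sketch: powers of a unipotent 5 x 5 matrix via the binomial
# formula U^k = sum_{j=0..4} C(k, j) N^j, where N = U - I is nilpotent.
import sympy as sp
from math import comb

N = sp.Matrix(5, 5, lambda i, j: 1 if j == i + 1 else 0)   # sample nilpotent matrix, N^5 = 0
U = sp.eye(5) + N                                          # a hypothetical unipotent matrix

def unipotent_power(U, k):
    N = U - sp.eye(U.rows)
    return sum((comb(k, j) * N**j for j in range(U.rows)), sp.zeros(U.rows, U.rows))

print(unipotent_power(U, 3840) == U**3840)   # expect: True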
3.23. Arithmeticity of Case 43. In this case
α = 0, 1/12, 5/12, 7/12, 11/12;    β = 1/2, 1/8, 3/8, 5/8, 7/8;
f(x) = x^5 − x^4 − x^3 + x^2 + x − 1;    g(x) = x^5 + x^4 + x + 1.
The matrix (up to scalar multiplication) of the quadratic form Q, with respect to the standard basis {e_1, e_2, e_3, e_4, e_5}, and the change of basis matrix X, are
[the 5 × 5 matrices Q and X]
and the matrices of Q, A and B, with respect to the new basis, are X^t QX, a = X^{-1}AX and b = X^{-1}BX; which are respectively
[the three 5 × 5 matrices X^t QX, a and b]
It is clear from the above computation that the Q-rank of the orthogonal group SO_Q is two. If we denote by
w1 = [a^{-1}, b],   w2 = a^3 b^4 a^{-3},   w3 = (a^3 b^4)^4,   w4 = w2 w1 w2,
then the final three unipotent elements are
q1 = q2^2 w4^{24} w3^{-6},   q2 = w1^{12} w3^{-3},   q3 = (w3^{12} q2)^{-8} q1,
which are respectively
[the 5 × 5 unipotent matrices q1, q2 and q3]
References
[1] F. Beukers, G. Heckman, Monodromy for the hypergeometric function _nF_{n−1}, Invent. Math. 95 (1989), no. 2, 325-354.
[2] C. Brav, H. Thomas, Thin Monodromy in Sp(4), Compositio Math. 150 (2014), no. 3, 333-343.
[3] J. Hofmann, D. van Straten, Some monodromy groups of finite index in Sp_4(Z), J. Aust. Math. Soc. 99 (2015), no. 1, 48-62.
[4] E. Fuchs, The ubiquity of thin groups. Thin groups and superstrong approximation, 73-92, Math. Sci. Res. Inst.
Publ., 61, Cambridge Univ. Press, Cambridge, 2014.
[5] E. Fuchs, C. Meiri, P. Sarnak, Hyperbolic monodromy groups for the hypergeometric equation and Cartan involutions, J. Eur. Math. Soc. (JEMS) 16 (2014), no. 8, 1617-1671.
ON ORTHOGONAL HYPERGEOMETRIC GROUPS OF DEGREE FIVE
33
[6] A. H. M. Levelt, Hypergeometric functions, Doctoral thesis, University of Amsterdam, 1961.
[7] M. S. Raghunathan, Discrete subgroups of Lie groups, Springer-Verlag Berlin Heidelberg New York (1972).
[8] ———–, A note on generators for arithmetic subgroups of algebraic groups, Pacific Journal of Math., 152 (1992),
no. 2, 365-373.
[9] P. Sarnak, Notes on thin matrix groups. Thin groups and superstrong approximation, 343-362, Math. Sci. Res.
Inst. Publ., 61, Cambridge Univ. Press, Cambridge, 2014.
[10] S. Singh, Arithmeticity of four hypergeometric monodromy groups associated to Calabi-Yau threefolds, Int.
Math. Res. Not. (IMRN), 2015 (2015), no. 18, 8874-8889.
[11] ———–, Orthogonal hypergeometric groups with a maximally unipotent monodromy, Exp. Math. 24 (2015),
no. 4, 449-459.
[12] ———–, Arithmeticity of some hypergeometric monodromy groups in Sp(4), J. Algebra 473 (2017), 142-165.
[13] S. Singh, T. N. Venkataramana, Arithmeticity of certain symplectic hypergeometric groups, Duke Math. J. 163
(2014), no. 3, 591-617.
[14] T. N. Venkataramana, Zariski dense subgroups of arithmetic groups, J. Algebra 108 (1987), no. 2, 325-339.
[15] ———–, On system of generators for arithmetic subgroups of higher rank groups, Pacific Journal of Math. 166
(1994), no. 1, 193-212.
[16] ———–, Hypergeometric groups of orthogonal type, J. Eur. Math. Soc. (JEMS) 19 (2017), no. 2, 581-599.
Max Planck Institute for Mathematics, Vivatsgasse 7, 53111 Bonn, Germany
E-mail address: [email protected], [email protected]